ASM Fails to Start After rootupgrade.sh on the First Node

If you’ve ever performed a rolling upgrade from Oracle Grid Infrastructure 12c to 19c in a RAC environment, you may have hit this one: after running rootupgrade.sh on the first node, the ora.asm resource fails to come online, and the upgrade stalls.

Symptoms:

  • rootupgrade.sh completes on node 1 but ora.asm fails to start
  • CRS alert log shows errors like: CRS-2674: Start of 'ora.asm' on '<node1>' failed
  • crsctl stat res ora.asm -t shows OFFLINE on node 1
  • ASM alert log may show mount errors or listener registration failures

Root cause

This issue is documented under Doc ID 2606735.1 and stems from a misalignment between the ASM SPFILE location and the CRS resource configuration during the upgrade transition window. The 19c CRS stack is running on node 1 while the 12c stack is still active on the remaining nodes — creating a brief but critical incompatibility in how ora.asm is registered and started.

Workaround during upgrade:

  1. Before running rootupgrade.sh on node 1, switch the ASM instance to a PFILE — an SPFILE stored inside an ASM diskgroup is a chicken-and-egg problem during the transition, because the diskgroup can't be mounted until the instance is up.
  2. Run the upgrade with the ASM SPFILE temporarily backed up to a local filesystem location.
  3. After the upgrade on all nodes, restore the SPFILE to ASM.
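The SPFILE shuffle in steps 1–3 can be sketched in SQL*Plus on the ASM instance (connect AS SYSASM; the filesystem path and diskgroup name below are illustrative, not from the MOS note):

```sql
-- 0. Confirm where the SPFILE currently lives (usually a diskgroup):
SHOW PARAMETER spfile;

-- Steps 1-2: dump the current settings to a local PFILE before rootupgrade.sh:
CREATE PFILE='/u01/app/grid/asm_pfile_backup.ora' FROM SPFILE;

-- Step 3: after rootupgrade.sh has completed on ALL nodes, move it back into ASM:
CREATE SPFILE='+DATA' FROM PFILE='/u01/app/grid/asm_pfile_backup.ora';
```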

Related bugs:

This issue is closely related to Bug 30265357 and Bug 30452852, which I’ll cover in the next two weeks. If you’re planning a 12c-to-19c Grid Infrastructure upgrade, read all three before you start the maintenance window.

Always test your upgrade procedure in a non-production environment first, and keep My Oracle Support (MOS) Doc ID 2606735.1 bookmarked.

Enhanced JSON Functions in Oracle 23ai

Oracle has had JSON support since 12c, but 23ai significantly extends the JSON function library and makes JSON more deeply integrated with SQL. Here’s a tour of what’s new.

JSON_MERGEPATCH — patching JSON documents:

UPDATE preferences
SET    settings = JSON_MERGEPATCH(
    settings,
    '{"theme": "dark", "notifications": {"email": false}}'
)
WHERE user_id = 42;

JSON_MERGEPATCH follows RFC 7396 — it merges a patch document into an existing JSON value, replacing changed keys and removing keys set to null.
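The null-removal rule is easy to verify with a literal patch (JSON_MERGEPATCH has been available since 19c):

```sql
-- "b": null removes key "b"; "c" is added; "a" is untouched
SELECT JSON_MERGEPATCH('{"a":1,"b":2}', '{"b":null,"c":3}') AS patched
FROM   dual;
-- result: {"a":1,"c":3}
```

One caveat worth knowing: because null means "delete this key," a merge patch cannot set a key to JSON null — use JSON_TRANSFORM for that.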

JSON_TRANSFORM — targeted updates:

UPDATE documents d
SET    d.data = JSON_TRANSFORM(
    d.data,
    SET '$.status'      = 'archived',
    SET '$.archived_at' = SYSTIMESTAMP,
    REMOVE '$.draft_notes'
)
WHERE  d.data.status = 'published';  -- JSON dot notation requires a table alias

JSON_TRANSFORM lets you SET, REMOVE, APPEND, and REPLACE specific paths in a JSON document without rewriting the entire value.
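APPEND and REPLACE aren't shown above; a quick sketch (table and paths hypothetical):

```sql
-- APPEND pushes a value onto the array at the path;
-- REPLACE changes a path only if it already exists (unlike SET, it won't create it)
UPDATE documents d
SET    d.data = JSON_TRANSFORM(
    d.data,
    APPEND  '$.tags'    = 'archived',
    REPLACE '$.version' = 2
)
WHERE  d.doc_id = 7;
```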

IS JSON CHECK constraint with schema:

CREATE TABLE events (
    event_id NUMBER,
    payload  CLOB
        CONSTRAINT payload_is_json CHECK (payload IS JSON)
        CONSTRAINT payload_schema CHECK (
            payload IS JSON VALIDATE '{
                "type": "object",
                "required": ["event_type", "timestamp"],
                "properties": {
                    "event_type": {"type": "string"},
                    "timestamp": {"type": "string", "format": "date-time"}
                }
            }'
        )
);

DBMS_JSON_SCHEMA.IS_VALID: Validate a JSON value against a JSON Schema (draft-07 compatible) at query time.

SELECT DBMS_JSON_SCHEMA.IS_VALID(payload, :my_schema) AS is_valid
FROM   events;

JSON capabilities in Oracle 23ai are now comparable to dedicated document databases, while maintaining full ACID compliance and relational query capabilities. A strong foundation for hybrid workloads.

DEFAULT ON NULL: Smarter Column Defaults in Oracle 23ai

Oracle has had DEFAULT column values for a long time. But there was an annoying gap: if you explicitly INSERT NULL into a column that has a default, Oracle uses NULL — not the default. The default only applies when the column is omitted from the INSERT statement entirely.

DEFAULT ON NULL closes this gap. (Strictly speaking, it arrived in 12c for INSERT; what Oracle 23ai adds is the FOR INSERT AND UPDATE variant that extends the behavior to UPDATE as well.)

Classic DEFAULT behavior (still applies):

CREATE TABLE requests (
    id       NUMBER,
    priority VARCHAR2(10) DEFAULT 'NORMAL'
);

-- Omitting priority: uses default → 'NORMAL'
INSERT INTO requests (id) VALUES (1);

-- Explicitly inserting NULL: stores NULL (default ignored)
INSERT INTO requests (id, priority) VALUES (2, NULL);

With DEFAULT ON NULL:

CREATE TABLE requests (
    id       NUMBER,
    priority VARCHAR2(10) DEFAULT ON NULL 'NORMAL'
);

-- Explicitly inserting NULL: uses default → 'NORMAL'
INSERT INTO requests (id, priority) VALUES (2, NULL);

-- NULL can no longer be stored in this column via INSERT or UPDATE

Adding to an existing column:

ALTER TABLE requests MODIFY priority DEFAULT ON NULL 'NORMAL';
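Oracle 23ai also adds a FOR INSERT AND UPDATE variant: instead of rejecting UPDATE ... SET priority = NULL (a DEFAULT ON NULL column is implicitly NOT NULL), it substitutes the default there too. A sketch of the syntax:

```sql
ALTER TABLE requests MODIFY priority DEFAULT ON NULL FOR INSERT AND UPDATE 'NORMAL';

-- Now this stores 'NORMAL' rather than raising an error:
UPDATE requests SET priority = NULL WHERE id = 2;
```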

Real-world uses:

  • Audit columns like created_by that should always have a value
  • Status columns where NULL is never a valid state
  • Configuration values with meaningful defaults that should survive explicit NULL inserts from ORMs or API frameworks that always pass all columns

This bridges the semantic gap between application-layer “I didn’t provide a value” and “I provided NULL,” which are often treated the same way by ORMs, REST frameworks, and bulk insert tools.

SKEWNESS and KURTOSIS: Statistical Functions Come to Oracle SQL

Oracle 23ai adds four new statistical aggregate functions that give SQL developers direct access to distribution shape analysis without leaving the database.

SKEWNESS_POP and SKEWNESS_SAMP

Skewness measures the asymmetry of a distribution around its mean.

  • Positive skew: long tail on the right (e.g., most employees earn near the average, but a few earn far more)
  • Negative skew: long tail on the left
  • Zero: symmetric distribution

SELECT
    department_id,
    ROUND(SKEWNESS_POP(salary), 4)  AS pop_skewness,
    ROUND(SKEWNESS_SAMP(salary), 4) AS sample_skewness,
    COUNT(*) AS headcount
FROM employees
GROUP BY department_id
ORDER BY ABS(SKEWNESS_SAMP(salary)) DESC;

Use SKEWNESS_POP when you have the full population. Use SKEWNESS_SAMP when working with a sample (applies Bessel’s correction).

KURTOSIS_POP and KURTOSIS_SAMP

Kurtosis measures the “tailedness” of a distribution — whether extreme values are frequent or rare compared to a normal distribution.

SELECT
    product_category,
    ROUND(KURTOSIS_SAMP(order_value), 4) AS kurtosis,
    STDDEV(order_value)                  AS std_deviation,
    AVG(order_value)                     AS mean_value
FROM orders
GROUP BY product_category
HAVING COUNT(*) > 100;  -- meaningful only with sufficient data

High kurtosis means more extreme outliers than expected. This is important for financial risk modeling, anomaly detection, and capacity planning.

Before 23ai: You had to calculate skewness and kurtosis manually using complex expressions involving SUM(POWER(x - mean, 3)) and normalization factors. Or export data to Python/R. Now it’s a single function call in SQL, composable with GROUP BY and window functions.
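For contrast, here is roughly what the manual version of SKEWNESS_POP looked like — the third central moment over the cube of the population standard deviation (a sketch, not tuned for numerical stability):

```sql
-- Pre-23ai manual SKEWNESS_POP equivalent: AVG((x - mean)^3) / stddev_pop^3
SELECT department_id,
       AVG(POWER(salary - avg_sal, 3)) / NULLIF(POWER(MAX(sd_sal), 3), 0) AS pop_skewness
FROM (
    SELECT department_id,
           salary,
           AVG(salary)        OVER (PARTITION BY department_id) AS avg_sal,
           STDDEV_POP(salary) OVER (PARTITION BY department_id) AS sd_sal
    FROM   employees
)
GROUP BY department_id;
```

One function call now replaces the window-function pass and the moment arithmetic.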

Table Aliases in DML: Cleaner UPDATE and DELETE Statements

Oracle 23ai allows table aliases in UPDATE and DELETE statements, making complex DML with subqueries and correlated references significantly more readable.

UPDATE with a table alias:

Before 23ai, an alias on the UPDATE target didn't always carry cleanly into subquery references. Now:

UPDATE employees e
SET    e.salary = e.salary * 1.10
WHERE  e.department_id IN (
    SELECT d.department_id
    FROM   departments d
    WHERE  d.location_code = 'NYC'
);

The alias e is now cleanly scoped across the entire statement.

DELETE with a table alias:

DELETE FROM order_archive oa
WHERE  oa.archived_date < ADD_MONTHS(SYSDATE, -24)
AND    oa.customer_id IN (
    SELECT c.customer_id
    FROM   customers c
    WHERE  c.account_status = 'CLOSED'
);

Combining with direct joins (covered in Feb) — note that Oracle's direct-join DML uses a FROM clause on the UPDATE, not a JOIN keyword:

UPDATE employees e
SET    e.rating = pr.final_rating,
       e.last_review_date = pr.review_date
FROM   performance_reviews pr
WHERE  pr.employee_id = e.employee_id
AND    pr.review_year = 2024
AND    e.status = 'ACTIVE';

While table aliases in DML were partially supported before (especially in correlated subqueries), 23ai formalizes and extends this capability to work cleanly with the new direct JOIN syntax. Together, they make multi-table DML in Oracle as readable as equivalent SELECT queries — which has always been the goal of SQL ergonomics.

GRAPH_TABLE: Navigating Connected Data in SQL

Following up on last month’s SQL/PGQ introduction, let’s go deeper into the GRAPH_TABLE function — the SQL construct that makes graph traversal queries possible in Oracle 23ai.

GRAPH_TABLE appears in the FROM clause and returns a relational result set from a graph pattern match. This means you can use it exactly like any other table or view — filter it with WHERE, join it with other tables, aggregate it with GROUP BY.
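Because the result is ordinary rows, the usual relational operators apply directly. For example, aggregating over matches (using the company_graph from last month's SQL/PGQ introduction):

```sql
-- Count direct reports per manager; GROUP BY works on GRAPH_TABLE output
-- exactly as it does on any table or view
SELECT manager_id, COUNT(*) AS direct_reports
FROM GRAPH_TABLE (
    company_graph
    MATCH (e IS employees) -[IS reports_to]-> (m IS employees)
    COLUMNS (m.employee_id AS manager_id)
)
GROUP BY manager_id
ORDER BY direct_reports DESC;
```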

Detecting fraud rings (connected accounts):

SELECT account_id, connected_account, hops
FROM GRAPH_TABLE (
    transaction_graph
    MATCH (a IS accounts) -[IS transfers]->{2,5} (b IS accounts)
    WHERE a.flagged = 1
    COLUMNS (
        a.account_id,
        b.account_id AS connected_account,
        GRAPH_PATH_LENGTH() AS hops
    )
)
WHERE hops <= 3;

Finding shortest path:

SELECT *
FROM GRAPH_TABLE (
    supply_chain_graph
    MATCH SHORTEST (supplier) -[IS supplies]->+ (product)
    WHERE supplier.country = 'US' AND product.sku = 'SKU-9981'
    COLUMNS (LISTAGG(supplier.name, ' -> ') WITHIN GROUP (GRAPH ORDER) AS supply_path)
);

Combining with relational data:

-- Which customers are connected to known fraudsters, and what's their credit limit?
SELECT g.customer_id, c.credit_limit, g.hops
FROM GRAPH_TABLE (
    fraud_network_graph
    MATCH (seed) -[IS knows]->{1,3} (connected)
    WHERE seed.is_fraudster = TRUE
    COLUMNS (connected.customer_id, GRAPH_PATH_LENGTH() AS hops)
) g
JOIN customers c ON c.customer_id = g.customer_id
WHERE c.credit_limit > 10000;

The power of GRAPH_TABLE is not graph traversal in isolation — it’s the seamless combination of graph patterns with relational operations. This is what sets Oracle’s implementation apart from standalone graph databases.

Oracle Select AI: Natural Language Queries on Your Database

Oracle 23ai introduces Select AI — a feature that lets you query your Oracle database using plain English (or other natural languages), which Oracle translates into SQL using an LLM.

It’s not magic. It’s a bridge between a large language model (OpenAI, Cohere, or others) and your database schema. But it’s designed carefully, with schema awareness and configurable profiles.

Setting up a Select AI profile:

BEGIN
    DBMS_CLOUD_AI.CREATE_PROFILE(
        profile_name   => 'hr_assistant',
        attributes     => '{
            "provider"      : "openai",
            "credential_name": "openai_cred",
            "object_list"   : [
                {"owner": "HR", "name": "EMPLOYEES"},
                {"owner": "HR", "name": "DEPARTMENTS"}
            ]
        }'
    );
END;

Querying in natural language:

Select AI uses the session's active profile, so set it first, then issue SELECT AI with an unquoted prompt:

BEGIN
    DBMS_CLOUD_AI.SET_PROFILE('hr_assistant');
END;

SELECT AI How many employees are in the Sales department;

Oracle translates this to the appropriate SQL and returns the results.

Modes of operation:

-- See the SQL that was generated (transparency mode)
SELECT AI showsql List the top 5 earning employees by department;

-- Explain the results in natural language
SELECT AI narrate What was the headcount change from 2023 to 2024;

Honest caveats

Select AI depends on an external LLM provider (it currently doesn’t use a built-in model). The quality of generated SQL depends heavily on schema naming, documentation, and the complexity of the question. It works best on well-named schemas with reasonable cardinality — it will struggle with highly normalized schemas or ambiguous business terminology.

But as a natural language layer on top of structured data, it’s a legitimate tool for non-SQL users. Executive dashboards, self-service analytics, and chatbot integrations are all viable use cases.

RETURNING INTO Enhancements in Oracle 23ai

The RETURNING INTO clause has been in Oracle for a long time — it lets you capture column values from DML statements (INSERT, UPDATE, DELETE) without doing a separate SELECT, and BULK COLLECT has long made it work for multi-row DML. What Oracle 23ai adds is the OLD and NEW keywords, which capture values before and after the change in a single statement.

Classic single-row RETURNING (has always worked):

DECLARE
    v_new_id NUMBER;
BEGIN
    INSERT INTO orders (customer_id, status)
    VALUES (42, 'PENDING')
    RETURNING order_id INTO v_new_id;

    DBMS_OUTPUT.PUT_LINE('Created order: ' || v_new_id);
END;

New: OLD and NEW qualifiers:

DECLARE
    v_old_status VARCHAR2(20);
    v_new_status VARCHAR2(20);
BEGIN
    UPDATE orders
    SET    status = 'SHIPPED'
    WHERE  order_id = 1001
    RETURNING OLD status, NEW status
    INTO v_old_status, v_new_status;

    DBMS_OUTPUT.PUT_LINE('Changed: ' || v_old_status || ' -> ' || v_new_status);
END;

Bulk collect with multi-row DML:

DECLARE
    TYPE t_ids IS TABLE OF NUMBER;
    v_ids t_ids;
BEGIN
    DELETE FROM temp_orders
    WHERE  processed = 'Y'
    RETURNING order_id BULK COLLECT INTO v_ids;

    DBMS_OUTPUT.PUT_LINE('Deleted ' || v_ids.COUNT || ' orders');
END;

Why this matters: Audit logging, event sourcing, and change tracking patterns all benefit from capturing old/new values at the DML layer. Previously you had to use triggers or issue a SELECT before and after the DML. Now you can do it inline, atomically, with less code.
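The features compose: OLD/NEW qualifiers work together with BULK COLLECT, so a multi-row update can return before/after pairs in one round trip. A sketch, reusing the orders table above with a hypothetical discount column:

```sql
DECLARE
    TYPE t_num_tab IS TABLE OF NUMBER;
    v_ids t_num_tab;
    v_old t_num_tab;
    v_new t_num_tab;
BEGIN
    -- Bump the discount on every pending order and capture each change inline
    UPDATE orders
    SET    discount = discount + 5
    WHERE  status = 'PENDING'
    RETURNING order_id, OLD discount, NEW discount
    BULK COLLECT INTO v_ids, v_old, v_new;

    FOR i IN 1 .. v_ids.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(v_ids(i) || ': ' || v_old(i) || ' -> ' || v_new(i));
    END LOOP;
END;
```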

Oracle AI Vector Search: A New Data Type for the AI Era

The most AI-forward feature in Oracle 23ai is AI Vector Search — a new VECTOR data type and a set of functions for storing, indexing, and querying vector embeddings directly in Oracle Database.

Vector embeddings are numerical representations of unstructured content (text, images, audio) generated by AI models. They’re the foundation of semantic search, recommendation systems, and RAG (Retrieval-Augmented Generation) architectures.

The VECTOR data type:

CREATE TABLE documents (
    doc_id      NUMBER,
    content     CLOB,
    embedding   VECTOR(1536, FLOAT32)  -- 1536 dimensions, OpenAI ada-002 format
);

Inserting a vector:

INSERT INTO documents (doc_id, content, embedding)
VALUES (1, 'Oracle 23ai brings AI features to the database.',
        TO_VECTOR('[0.023, -0.145, 0.871, ...]'));

Semantic similarity search:

SELECT doc_id, content,
       VECTOR_DISTANCE(embedding, :query_vector, COSINE) AS distance
FROM   documents
ORDER BY distance            -- smaller distance = more similar
FETCH FIRST 5 ROWS ONLY;

Creating a vector index for fast ANN search:

CREATE VECTOR INDEX docs_vector_idx
ON documents (embedding)
ORGANIZATION INMEMORY NEIGHBOR GRAPH
DISTANCE COSINE
WITH TARGET ACCURACY 95;

The bigger picture: RAG in the database

The reason this matters is the architecture it enables. Instead of maintaining a separate vector database (Pinecone, Weaviate, Chroma), you can store embeddings alongside the source data in Oracle. Your SQL queries can combine vector similarity search with traditional relational filters:

SELECT content
FROM   documents
WHERE  created_date > DATE '2024-01-01'   -- relational filter
ORDER BY VECTOR_DISTANCE(embedding, :query_vec, COSINE)
FETCH FIRST 10 ROWS ONLY;

No data movement, no synchronization, full ACID guarantees. This is Oracle’s AI play, and it’s well-designed.

SQL/PGQ: Querying Property Graphs with Standard SQL

Property graphs model data as nodes and edges — useful for social networks, supply chains, fraud detection, recommendation engines, and any domain where relationships are as important as the data itself. Oracle has had graph support for years (via Oracle Graph Server), but Oracle 23ai brings graph queries directly into SQL using the new ANSI SQL/PGQ standard.

Creating a property graph:

CREATE PROPERTY GRAPH company_graph
    VERTEX TABLES (employees KEY (employee_id))
    EDGE TABLES   (
        reports_to
            SOURCE KEY (employee_id) REFERENCES employees
            DESTINATION KEY (manager_id) REFERENCES employees
    );

Querying the graph with SQL/PGQ:

SELECT *
FROM GRAPH_TABLE (
    company_graph
    MATCH (e IS employees) -[r IS reports_to]-> (m IS employees)
    WHERE e.department_id = 10
    COLUMNS (e.first_name AS employee, m.first_name AS manager)
);

Finding paths of arbitrary depth:

-- Find all employees up to 3 levels below a given manager.
-- Edges run from employee to manager, so we traverse from sub up to top;
-- {1,3} bounds the hop count (a bare + would mean one or more hops, unbounded).
SELECT *
FROM GRAPH_TABLE (
    company_graph
    MATCH (sub) -[IS reports_to]->{1,3} (top)
    WHERE top.employee_id = 100
    COLUMNS (sub.employee_id, sub.first_name)
);

Why in SQL, not a separate graph query language?

This is the key insight. By embedding graph traversal in SQL, you can combine graph patterns with relational predicates, aggregations, and joins — in a single query. No need to move data to a dedicated graph database.

Oracle is the first major commercial RDBMS to implement the SQL/PGQ standard. This positions it well for use cases in fraud detection, network analysis, and knowledge graphs — all of which are growing rapidly in enterprise settings.