Lock-Free Reservations: High-Concurrency DML Without Conflicts

One of Oracle 23ai’s most interesting concurrency features is Lock-Free Reservations. It solves a classic high-concurrency problem: multiple sessions competing to update the same row — like decrementing inventory counts in a flash sale.

The classic problem:

-- Session 1 and Session 2 both try to decrement inventory
UPDATE products SET stock_qty = stock_qty - 1 WHERE product_id = 42;

With thousands of concurrent orders, Sessions 2 through N all block waiting for Session 1’s lock. This creates a hot row contention bottleneck.

Lock-Free Reservation solution:

-- Define the column as a reservation target
ALTER TABLE products MODIFY (
    stock_qty NUMBER RESERVABLE CONSTRAINT stock_non_negative CHECK (stock_qty >= 0)
);

Once a column is declared RESERVABLE, Oracle handles concurrent updates to it using a reservation mechanism rather than row-level locking. Each session “reserves” a portion of the value without blocking others.

How it works under the hood:

Oracle maintains a reservation journal table alongside the main table. Each concurrent update is recorded there as a pending reservation (a signed delta) instead of taking a row lock. The effective column value is the committed base value plus the sum of all pending reservations. When a session commits, its reservation is folded into the base value; on rollback, it is simply discarded.
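The mechanics can be sketched with a toy model (illustrative only — the class and method names here are hypothetical, not Oracle internals): each session records a signed delta, and a new reservation is accepted only if the base value plus all pending deltas would still satisfy the constraint.

```python
# Toy model of a reservable column (illustrative only -- not Oracle's
# actual implementation). Sessions record signed deltas in a journal;
# the CHECK constraint is tested against the worst case before a new
# reservation is accepted, so no session ever blocks on a row lock.

class ReservableColumn:
    def __init__(self, base, minimum=0):
        self.base = base          # committed value
        self.pending = {}         # session_id -> reserved delta
        self.minimum = minimum    # CHECK (value >= minimum)

    def reserve(self, session_id, delta):
        projected = self.base + sum(self.pending.values()) + delta
        if projected < self.minimum:
            # Reject immediately instead of blocking the session
            raise ValueError("reservation would violate CHECK constraint")
        self.pending[session_id] = self.pending.get(session_id, 0) + delta

    def commit(self, session_id):
        # Fold the session's reservation into the committed base value
        self.base += self.pending.pop(session_id, 0)

    def rollback(self, session_id):
        # A rollback just discards the pending delta
        self.pending.pop(session_id, 0)


stock = ReservableColumn(base=3)
stock.reserve("session_1", -1)   # no lock taken
stock.reserve("session_2", -1)   # proceeds concurrently
stock.commit("session_1")        # base becomes 2
```

The key property: the constraint is checked at reservation time, so a conflicting update fails fast instead of queueing behind a lock.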

The net effect:

  • Multiple sessions can decrement (or increment) stock_qty simultaneously without blocking each other
  • The CHECK constraint (stock_qty >= 0) is still enforced — the sum of all reservations cannot drive the value below zero
  • ACID guarantees are maintained

Ideal use cases: Inventory management, ticket booking, seat reservations, resource quotas — any scenario where many sessions update the same numeric column concurrently.

This is a sophisticated concurrency improvement that eliminates a well-known Oracle scalability bottleneck without requiring application-level workarounds.

SQL Firewall: Protecting Your Database with SQL Whitelisting

Oracle SQL Firewall is one of the most important security additions in Oracle 23ai. It’s a kernel-level feature (not an add-on product) that inspects every SQL statement sent to the database and compares it against a learned whitelist of approved statements.

How it works:

  1. Learning phase: SQL Firewall observes your application’s normal SQL traffic and builds a whitelist of allowed statements, users, and connection paths.
  2. Enforcement phase: Any SQL statement not in the whitelist is logged, blocked, or both.

Setting up SQL Firewall:

-- Enable SQL Firewall
EXEC DBMS_SQL_FIREWALL.ENABLE;

-- Start learning for a specific user
EXEC DBMS_SQL_FIREWALL.START_CAPTURE('APP_USER');

-- (Let the application run normally for the learning period)

-- Stop learning and create the allow-list
EXEC DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');
EXEC DBMS_SQL_FIREWALL.CREATE_ALLOW_LIST('APP_USER');

-- Enable enforcement
EXEC DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(
    username     => 'APP_USER',
    enforce      => DBMS_SQL_FIREWALL.ENFORCE_SQL,
    block        => TRUE
);

What it catches:

  • SQL injection attempts that generate SQL not in the whitelist
  • Unauthorized queries run by rogue application code or DBA accounts
  • Unexpected data exfiltration queries (SELECT * FROM users when the app only does parameterized lookups)
  • Suspicious connection paths (e.g., a user who normally connects from the app server suddenly connecting from a different IP)

Monitoring violations:

SELECT username, sql_text, action_taken, timestamp
FROM   dba_sql_firewall_violations
ORDER BY timestamp DESC;

SQL Firewall is especially valuable for organizations subject to PCI-DSS, HIPAA, or SOX compliance. It provides a last line of defense at the database level — even if the application layer is compromised.

Enhanced JSON Functions in Oracle 23ai

Oracle has had JSON support since 12c, but 23ai significantly extends the JSON function library and makes JSON more deeply integrated with SQL. Here’s a tour of what’s new.

JSON_MERGEPATCH — patching JSON documents:

UPDATE preferences
SET    settings = JSON_MERGEPATCH(
    settings,
    '{"theme": "dark", "notifications": {"email": false}}'
)
WHERE user_id = 42;

JSON_MERGEPATCH follows RFC 7396 — it merges a patch document into an existing JSON value, replacing changed keys and removing keys set to null.
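The RFC 7396 algorithm itself is tiny. A Python sketch of the same merge semantics, for intuition (the database applies this natively; this is not Oracle code):

```python
# Minimal RFC 7396 JSON Merge Patch, mirroring JSON_MERGEPATCH semantics:
# keys in the patch replace keys in the target, a null value removes the
# key, and any non-object patch replaces the target wholesale.

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch                  # non-objects replace outright
    if not isinstance(target, dict):
        target = {}                   # patching a scalar yields an object
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)     # null deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result


settings = {"theme": "light", "notifications": {"email": True, "sms": True}}
patch = {"theme": "dark", "notifications": {"email": False}}
merged = merge_patch(settings, patch)
# merged == {"theme": "dark", "notifications": {"email": False, "sms": True}}
```

Note that the merge recurses into nested objects, which is why "sms" survives the patch above.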

JSON_TRANSFORM — targeted updates:

UPDATE documents d
SET    d.data = JSON_TRANSFORM(
    d.data,
    SET '$.status'      = 'archived',
    SET '$.archived_at' = SYSTIMESTAMP,
    REMOVE '$.draft_notes'
)
WHERE  d.data.status = 'published';

JSON_TRANSFORM lets you SET, REMOVE, APPEND, and REPLACE specific paths in a JSON document without rewriting the entire value.

IS JSON CHECK constraint with schema:

CREATE TABLE events (
    event_id NUMBER,
    payload  CLOB
        CONSTRAINT payload_is_json CHECK (payload IS JSON)
        CONSTRAINT payload_schema CHECK (
            payload IS JSON VALIDATE '{
                "type": "object",
                "required": ["event_type", "timestamp"],
                "properties": {
                    "event_type": {"type": "string"},
                    "timestamp": {"type": "string", "format": "date-time"}
                }
            }'
        )
);

JSON_SCHEMA_VALIDATE: Validate a JSON value against a JSON Schema (draft-07 compatible) at query time.

SELECT JSON_SCHEMA_VALIDATE(payload, :my_schema) AS is_valid
FROM events;

JSON capabilities in Oracle 23ai are now comparable to dedicated document databases, while maintaining full ACID compliance and relational query capabilities. A strong foundation for hybrid workloads.

DEFAULT ON NULL: Smarter Column Defaults in Oracle 23ai

Oracle has had DEFAULT column values for a long time. But there was an annoying gap: if you explicitly INSERT NULL into a column that has a default, Oracle uses NULL — not the default. The default only applies when the column is omitted from the INSERT statement entirely.

DEFAULT ON NULL (available since 12c) closes that gap for inserts, and Oracle 23ai goes further: DEFAULT ON NULL FOR INSERT AND UPDATE applies the default when an UPDATE sets the column to NULL as well.

Classic DEFAULT behavior (still applies):

CREATE TABLE requests (
    id       NUMBER,
    priority VARCHAR2(10) DEFAULT 'NORMAL'
);

-- Omitting priority: uses default → 'NORMAL'
INSERT INTO requests (id) VALUES (1);

-- Explicitly inserting NULL: stores NULL (default ignored)
INSERT INTO requests (id, priority) VALUES (2, NULL);

With DEFAULT ON NULL FOR INSERT AND UPDATE:

CREATE TABLE requests (
    id       NUMBER,
    priority VARCHAR2(10) DEFAULT ON NULL FOR INSERT AND UPDATE 'NORMAL'
);

-- Explicitly inserting NULL: uses default → 'NORMAL'
INSERT INTO requests (id, priority) VALUES (2, NULL);

-- Updating the column to NULL also applies the default → 'NORMAL'
UPDATE requests SET priority = NULL WHERE id = 2;

-- NULL can no longer be stored in this column via INSERT or UPDATE

Adding to an existing column:

ALTER TABLE requests MODIFY priority DEFAULT ON NULL 'NORMAL';

Real-world uses:

  • Audit columns like created_by that should always have a value
  • Status columns where NULL is never a valid state
  • Configuration values with meaningful defaults that should survive explicit NULL inserts from ORMs or API frameworks that always pass all columns

This bridges the semantic gap between application-layer “I didn’t provide a value” and “I provided NULL,” which are often treated the same way by ORMs, REST frameworks, and bulk insert tools.

SKEWNESS and KURTOSIS: Statistical Functions Come to Oracle SQL

Oracle 23ai adds four new statistical aggregate functions that give SQL developers direct access to distribution shape analysis without leaving the database.

SKEWNESS_POP and SKEWNESS_SAMP

Skewness measures the asymmetry of a distribution around its mean.

  • Positive skew: long tail on the right (e.g., most employees earn near the average, but a few earn far more)
  • Negative skew: long tail on the left
  • Zero: symmetric distribution

SELECT
    department_id,
    ROUND(SKEWNESS_POP(salary), 4)  AS pop_skewness,
    ROUND(SKEWNESS_SAMP(salary), 4) AS sample_skewness,
    COUNT(*) AS headcount
FROM employees
GROUP BY department_id
ORDER BY ABS(SKEWNESS_SAMP(salary)) DESC;

Use SKEWNESS_POP when you have the full population. Use SKEWNESS_SAMP when working with a sample (it applies a small-sample bias correction, analogous to Bessel’s correction for variance).

KURTOSIS_POP and KURTOSIS_SAMP

Kurtosis measures the “tailedness” of a distribution — whether extreme values are frequent or rare compared to a normal distribution.

SELECT
    product_category,
    ROUND(KURTOSIS_SAMP(order_value), 4) AS kurtosis,
    STDDEV(order_value)                  AS std_deviation,
    AVG(order_value)                     AS mean_value
FROM orders
GROUP BY product_category
HAVING COUNT(*) > 100;  -- meaningful only with sufficient data

High kurtosis means more extreme outliers than expected. This is important for financial risk modeling, anomaly detection, and capacity planning.

Before 23ai: You had to calculate skewness and kurtosis manually using complex expressions involving SUM(POWER(x - mean, 3)) and normalization factors. Or export data to Python/R. Now it’s a single function call in SQL, composable with GROUP BY and window functions.
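For intuition, here is what those manual expressions compute, sketched in Python. The adjusted Fisher–Pearson formula below is the standard textbook definition of sample skewness; whether it matches SKEWNESS_SAMP bit-for-bit should be verified against the Oracle documentation.

```python
import math

# Classical moment-based skewness, the quantity the pre-23ai manual
# SUM(POWER(x - mean, 3)) expressions were building by hand.

def central_moment(xs, k):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** k for x in xs) / len(xs)

def skewness_pop(xs):
    # Population skewness: g1 = m3 / m2^(3/2)
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5

def skewness_samp(xs):
    # Adjusted Fisher-Pearson sample skewness:
    # G1 = g1 * sqrt(n * (n - 1)) / (n - 2)
    n = len(xs)
    return skewness_pop(xs) * math.sqrt(n * (n - 1)) / (n - 2)

salaries = [30, 32, 31, 33, 30, 95]        # one extreme earner
print(skewness_pop(salaries) > 0)          # True -> right tail, positive skew
```

A perfectly symmetric sample (e.g., 1 through 5) yields zero skewness under either formula.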

Table Aliases in DML: Cleaner UPDATE and DELETE Statements

Oracle 23ai allows table aliases in UPDATE and DELETE statements, making complex DML with subqueries and correlated references significantly more readable.

UPDATE with a table alias:

Aliasing the target table of an UPDATE was only partially supported before 23ai, particularly where the alias had to be referenced from subqueries. Now:

UPDATE employees e
SET    e.salary = e.salary * 1.10
WHERE  e.department_id IN (
    SELECT d.department_id
    FROM   departments d
    WHERE  d.location_code = 'NYC'
);

The alias e is now cleanly scoped across the entire statement.

DELETE with a table alias:

DELETE FROM order_archive oa
WHERE  oa.archived_date < ADD_MONTHS(SYSDATE, -24)
AND    oa.customer_id IN (
    SELECT c.customer_id
    FROM   customers c
    WHERE  c.account_status = 'CLOSED'
);

Combining with direct joins (covered in February) — note that 23ai’s direct-join DML uses a FROM clause rather than a JOIN keyword:

UPDATE employees e
SET    e.rating = pr.final_rating,
       e.last_review_date = pr.review_date
FROM   performance_reviews pr
WHERE  pr.employee_id = e.employee_id
AND    pr.review_year = 2024
AND    e.status = 'ACTIVE';

While table aliases in DML were partially supported before (especially in correlated subqueries), 23ai formalizes and extends this capability to work cleanly with the new direct JOIN syntax. Together, they make multi-table DML in Oracle as readable as equivalent SELECT queries — which has always been the goal of SQL ergonomics.

GRAPH_TABLE: Navigating Connected Data in SQL

Following up on last month’s SQL/PGQ introduction, let’s go deeper into the GRAPH_TABLE function — the SQL construct that makes graph traversal queries possible in Oracle 23ai.

GRAPH_TABLE appears in the FROM clause and returns a relational result set from a graph pattern match. This means you can use it exactly like any other table or view — filter it with WHERE, join it with other tables, aggregate it with GROUP BY.

Detecting fraud rings (connected accounts):

SELECT account_id, connected_account, hops
FROM GRAPH_TABLE (
    transaction_graph
    MATCH (a IS accounts) -[IS transfers]->{2,5} (b IS accounts)
    WHERE a.flagged = 1
    COLUMNS (
        a.account_id,
        b.account_id AS connected_account,
        GRAPH_PATH_LENGTH() AS hops
    )
)
WHERE hops <= 3;
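Conceptually, a bounded quantifier like ->{2,5} matches every node reachable in two to five hops along the named edge type. As a mental model only (Oracle's SQL/PGQ engine does this inside the kernel with very different machinery), it behaves like a bounded breadth-first search:

```python
from collections import deque

# Mental model of a bounded edge quantifier such as -[IS transfers]->{2,5}:
# collect all nodes reachable from a seed in min_hops..max_hops steps.
# Illustrative only -- not how Oracle's graph engine is implemented.

def bounded_reach(edges, seed, min_hops, max_hops):
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)

    results = {}                      # node -> fewest hops at which it matched
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue                  # quantifier upper bound reached
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                if hops + 1 >= min_hops:
                    results[nxt] = hops + 1
                queue.append((nxt, hops + 1))
    return results


transfers = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
print(bounded_reach(transfers, "A", 2, 3))   # {'C': 2, 'D': 3}
```

B (one hop) is below the lower bound and E (four hops) is above the upper bound, so neither matches — exactly the filtering the quantifier expresses declaratively.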

Finding shortest path:

SELECT *
FROM GRAPH_TABLE (
    supply_chain_graph
    MATCH SHORTEST (supplier) -[IS supplies]->+ (product)
    WHERE supplier.country = 'US' AND product.sku = 'SKU-9981'
    COLUMNS (LISTAGG(supplier.name, ' -> ') WITHIN GROUP (GRAPH ORDER) AS supply_path)
);

Combining with relational data:

-- Which customers are connected to known fraudsters, and what's their credit limit?
SELECT g.customer_id, c.credit_limit, g.hops
FROM GRAPH_TABLE (
    fraud_network_graph
    MATCH (seed) -[IS knows]->{1,3} (connected)
    WHERE seed.is_fraudster = TRUE
    COLUMNS (connected.customer_id, GRAPH_PATH_LENGTH() AS hops)
) g
JOIN customers c ON c.customer_id = g.customer_id
WHERE c.credit_limit > 10000;

The power of GRAPH_TABLE is not graph traversal in isolation — it’s the seamless combination of graph patterns with relational operations. This is what sets Oracle’s implementation apart from standalone graph databases.

Oracle Select AI: Natural Language Queries on Your Database

Oracle 23ai introduces Select AI — a feature that lets you query your Oracle database using plain English (or other natural languages), which Oracle translates into SQL using an LLM.

It’s not magic. It’s a bridge between a large language model (OpenAI, Cohere, or others) and your database schema. But it’s designed carefully, with schema awareness and configurable profiles.

Setting up a Select AI profile:

BEGIN
    DBMS_CLOUD_AI.CREATE_PROFILE(
        profile_name   => 'hr_assistant',
        attributes     => '{
            "provider"      : "openai",
            "credential_name": "openai_cred",
            "object_list"   : [
                {"owner": "HR", "name": "EMPLOYEES"},
                {"owner": "HR", "name": "DEPARTMENTS"}
            ]
        }'
    );
END;

Querying in natural language:

-- Make the profile active for the session, then ask in plain English
EXEC DBMS_CLOUD_AI.SET_PROFILE('hr_assistant');

SELECT AI how many employees are in the Sales department;

Oracle translates this to the appropriate SQL and returns the results.

Modes of operation:

-- See the SQL that was generated (transparency mode)
SELECT AI showsql list the top 5 earning employees by department;

-- Explain the results in natural language
SELECT AI narrate what was the headcount change from 2023 to 2024;

Honest caveats

Select AI depends on an external LLM provider (it currently doesn’t use a built-in model). The quality of generated SQL depends heavily on schema naming, documentation, and the complexity of the question. It works best on well-named schemas with reasonable cardinality — it will struggle with highly normalized schemas or ambiguous business terminology.

But as a natural language layer on top of structured data, it’s a legitimate tool for non-SQL users. Executive dashboards, self-service analytics, and chatbot integrations are all viable use cases.

RETURNING INTO Enhancements in Oracle 23ai

The RETURNING INTO clause has been in Oracle for a long time — it lets you capture column values from DML statements (INSERT, UPDATE, DELETE) without a separate SELECT. Oracle 23ai extends it in a meaningful way: the new OLD and NEW keywords capture a column’s value before and after the change in a single statement.

Classic single-row RETURNING (has always worked):

DECLARE
    v_new_id NUMBER;
BEGIN
    INSERT INTO orders (customer_id, status)
    VALUES (42, 'PENDING')
    RETURNING order_id INTO v_new_id;

    DBMS_OUTPUT.PUT_LINE('Created order: ' || v_new_id);
END;

New: OLD and NEW qualifiers:

DECLARE
    v_old_status VARCHAR2(20);
    v_new_status VARCHAR2(20);
BEGIN
    UPDATE orders
    SET    status = 'SHIPPED'
    WHERE  order_id = 1001
    RETURNING OLD status, NEW status
    INTO v_old_status, v_new_status;

    DBMS_OUTPUT.PUT_LINE('Changed: ' || v_old_status || ' -> ' || v_new_status);
END;

Bulk collect with multi-row DML (long supported, and it composes with the new 23ai qualifiers):

DECLARE
    TYPE t_ids IS TABLE OF NUMBER;
    v_ids t_ids;
BEGIN
    DELETE FROM temp_orders
    WHERE  processed = 'Y'
    RETURNING order_id BULK COLLECT INTO v_ids;

    DBMS_OUTPUT.PUT_LINE('Deleted ' || v_ids.COUNT || ' orders');
END;

Why this matters: Audit logging, event sourcing, and change tracking patterns all benefit from capturing old/new values at the DML layer. Previously you had to use triggers or issue a SELECT before and after the DML. Now you can do it inline, atomically, with less code.

Oracle AI Vector Search: A New Data Type for the AI Era

The most AI-forward feature in Oracle 23ai is AI Vector Search — a new VECTOR data type and a set of functions for storing, indexing, and querying vector embeddings directly in Oracle Database.

Vector embeddings are numerical representations of unstructured content (text, images, audio) generated by AI models. They’re the foundation of semantic search, recommendation systems, and RAG (Retrieval-Augmented Generation) architectures.

The VECTOR data type:

CREATE TABLE documents (
    doc_id      NUMBER,
    content     CLOB,
    embedding   VECTOR(1536, FLOAT32)  -- 1536 dimensions, OpenAI ada-002 format
);

Inserting a vector:

INSERT INTO documents (doc_id, content, embedding)
VALUES (1, 'Oracle 23ai brings AI features to the database.',
        TO_VECTOR('[0.023, -0.145, 0.871, ...]'));

Semantic similarity search:

SELECT doc_id, content,
       VECTOR_DISTANCE(embedding, :query_vector, COSINE) AS distance
FROM   documents
ORDER BY distance
FETCH FIRST 5 ROWS ONLY;
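VECTOR_DISTANCE returns a distance, not a similarity — 0 means the vectors point in the same direction — which is why the ORDER BY is ascending. The COSINE metric is simply one minus cosine similarity:

```python
import math

# Cosine distance, the quantity VECTOR_DISTANCE(..., COSINE) computes:
# 1 - (a . b) / (|a| * |b|). Ranges from 0 (same direction) to 2 (opposite).

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (norm_a * norm_b)

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))   # 0.0 -> identical direction
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))   # 1.0 -> orthogonal
```

Because embeddings encode meaning as direction, a smaller cosine distance means the two documents are semantically closer, regardless of vector magnitude.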

Creating a vector index for fast ANN search:

CREATE VECTOR INDEX docs_vector_idx
ON documents (embedding)
ORGANIZATION INMEMORY NEIGHBOR GRAPH
DISTANCE COSINE
WITH TARGET ACCURACY 95;

The bigger picture: RAG in the database

The reason this matters is the architecture it enables. Instead of maintaining a separate vector database (Pinecone, Weaviate, Chroma), you can store embeddings alongside the source data in Oracle. Your SQL queries can combine vector similarity search with traditional relational filters:

SELECT content
FROM   documents
WHERE  created_date > DATE '2024-01-01'   -- relational filter
ORDER BY VECTOR_DISTANCE(embedding, :query_vec, COSINE)
FETCH FIRST 10 ROWS ONLY;

No data movement, no synchronization, full ACID guarantees. This is Oracle’s AI play, and it’s well-designed.