SQL Domains in Practice: Real-World Patterns

We introduced SQL Domains in February. Let’s look at how they work in realistic enterprise scenarios — because the design decisions around domain adoption matter as much as the feature itself.

Building a domain library:

Think of SQL Domains as a shared type system for your database. The first step is identifying recurring patterns:

-- Financial amount
CREATE DOMAIN d_amount_usd AS NUMBER(15,4)
    DEFAULT 0
    CONSTRAINT ck_amount_positive CHECK (d_amount_usd >= 0);

-- Percentage (0-100)
CREATE DOMAIN d_percentage AS NUMBER(5,2)
    CONSTRAINT ck_pct_range CHECK (d_percentage BETWEEN 0 AND 100);

-- Status fields
CREATE DOMAIN d_active_flag AS CHAR(1)
    DEFAULT 'Y'
    CONSTRAINT ck_active_flag CHECK (d_active_flag IN ('Y', 'N'));

-- Email
CREATE DOMAIN d_email AS VARCHAR2(254)
    CONSTRAINT ck_email_fmt CHECK (d_email LIKE '%@%.%');

-- ISO country code
CREATE DOMAIN d_country_code AS CHAR(2)
    CONSTRAINT ck_country_upper CHECK (d_country_code = UPPER(d_country_code));

Using domains across tables:

CREATE TABLE invoices (
    invoice_id      NUMBER PRIMARY KEY,
    amount          d_amount_usd,
    tax_rate        d_percentage,
    customer_email  d_email,
    ship_country    d_country_code,
    is_paid         d_active_flag
);

CREATE TABLE purchase_orders (
    po_id           NUMBER PRIMARY KEY,
    total_value     d_amount_usd,     -- same domain reused
    billing_email   d_email,          -- same domain reused
    origin_country  d_country_code    -- same domain reused
);
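Because every column bound to a domain inherits its constraints, bad data is rejected consistently across all of these tables. A hypothetical violating insert (the error message cites the domain's constraint):

```sql
-- Rejected by d_percentage's ck_pct_range check: tax_rate must be 0-100
INSERT INTO invoices (invoice_id, amount, tax_rate)
VALUES (1, 99.95, 150);
```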

Viewing domain usage across the schema:

SELECT table_name, column_name, domain_name
FROM   user_tab_columns
WHERE  domain_name IS NOT NULL
ORDER BY domain_name, table_name;

Governance tip: Store domain definitions in a versioned migration script, just like table DDL. Treat them as first-class schema objects that need change management. When a domain constraint needs updating, the change propagates automatically to all columns using it.

Blockchain Tables: What’s New in Oracle 23ai

Oracle introduced Blockchain Tables in Oracle 21c — append-only, tamper-evident tables in which rows can never be updated and can be deleted only after a configurable retention period expires. Oracle 23ai adds several enhancements to this feature.

Quick recap of Blockchain Tables:

CREATE BLOCKCHAIN TABLE audit_records (
    record_id     NUMBER,
    event_time    TIMESTAMP,
    user_name     VARCHAR2(100),
    action        VARCHAR2(500)
)
NO DROP UNTIL 365 DAYS IDLE   -- can't drop until no new rows for 365 days
NO DELETE UNTIL 90 DAYS AFTER INSERT  -- rows immutable for 90 days
HASHING USING "SHA2_512" VERSION "v1";

Each row includes a cryptographic hash of the previous row, forming a chain. Any attempt to modify historical data breaks the chain and is detectable.
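Integrity can be verified on demand with the DBMS_BLOCKCHAIN_TABLE package. A minimal sketch, assuming the audit_records table above lives in the current schema (parameter names per the package documentation):

```sql
SET SERVEROUTPUT ON

DECLARE
    l_verified NUMBER;
BEGIN
    -- Recomputes row hashes and checks the chain for the whole table
    DBMS_BLOCKCHAIN_TABLE.VERIFY_ROWS(
        schema_name             => USER,
        table_name              => 'AUDIT_RECORDS',
        number_of_rows_verified => l_verified);
    DBMS_OUTPUT.PUT_LINE('Rows verified: ' || l_verified);
END;
/
```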

Oracle 23ai: Immutable Tables

23ai also enhances Immutable Tables — first shipped alongside blockchain tables in Oracle 21c — a lighter-weight alternative that provides tamper protection without the full cryptographic chaining overhead:

CREATE IMMUTABLE TABLE compliance_log (
    log_id    NUMBER,
    log_time  TIMESTAMP,
    log_msg   VARCHAR2(4000)
)
NO DROP UNTIL 180 DAYS IDLE
NO DELETE UNTIL 60 DAYS AFTER INSERT;

Key difference: Immutable Tables don’t maintain a hash chain — rows can’t be modified or deleted within the retention window, but the tamper detection mechanism is Oracle’s internal row-change tracking rather than cryptographic chaining. Simpler, lower overhead, still compliant for most audit requirements.
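The retention rules are enforced by the engine itself, so ordinary DML simply fails. A sketch of what that looks like against the table above:

```sql
-- Both statements raise errors while the rows are inside
-- the 60-day retention window defined at creation time
DELETE FROM compliance_log WHERE log_id = 1;
UPDATE compliance_log SET log_msg = 'edited' WHERE log_id = 1;
```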

23ai Blockchain enhancements:

  • Counter-signing support for third-party verification
  • Improved DBMS_BLOCKCHAIN_TABLE.VERIFY_ROWS performance
  • Integration with Oracle’s SQL Firewall for combined protection

For regulatory compliance, financial audit trails, and healthcare record retention, these features provide database-level immutability without external blockchain infrastructure.

Oracle 23ai Lightweight Tables: When You Don’t Need Everything

Oracle 23ai introduces guidance and improvements around Lightweight Tables — a design philosophy and set of options for creating tables that omit expensive features when they’re not needed, resulting in lower overhead for specific workloads.

The concept isn’t entirely new, but 23ai formalizes and extends the options:

Heap tables without segment-level features:

CREATE TABLE event_log (
    log_time   TIMESTAMP DEFAULT SYSTIMESTAMP,
    log_level  VARCHAR2(10),
    log_msg    VARCHAR2(4000)
)
SEGMENT CREATION IMMEDIATE
NOCOMPRESS
NOLOGGING    -- skip redo logging (use only if data is recoverable from source)
NOMONITORING;

Deferred segment creation:

CREATE TABLE staging_area (
    id     NUMBER,
    data   VARCHAR2(4000)
) SEGMENT CREATION DEFERRED;
-- No space allocated until the first row is inserted
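The deferral is visible through the standard dictionary views: the table's metadata exists immediately, but no storage appears in user_segments until the first insert.

```sql
-- Immediately after CREATE: metadata yes, storage no
SELECT segment_created FROM user_tables
WHERE  table_name = 'STAGING_AREA';            -- NO

SELECT COUNT(*) FROM user_segments
WHERE  segment_name = 'STAGING_AREA';          -- 0

INSERT INTO staging_area VALUES (1, 'first row');  -- allocates the segment
```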

Index-Organized Tables (IOT) for key-based access:

For tables where you always access by primary key and rarely do full scans:

CREATE TABLE session_cache (
    session_token  VARCHAR2(64) PRIMARY KEY,
    user_id        NUMBER,
    expiry         TIMESTAMP,
    payload        VARCHAR2(4000)
) ORGANIZATION INDEX
  COMPRESS 1;

The primary key is the physical storage key — lookups are O(log n) with no separate index, and the data itself is B-tree ordered.

23ai enhancement: Oracle 23ai improves the optimizer’s ability to choose between heap, IOT, and external table access paths automatically, reducing the manual tuning burden for these decisions.

Recommendation: For staging tables, working sets, cache-like structures, and append-only logs, consider which table features you actually need. Omitting features you don't need costs less than carrying features you'll never use.

Fast Ingest (Memoptimized Rowstore) Enhancements in Oracle 23ai

Oracle’s Memoptimized Rowstore (also called Fast Ingest) was introduced in Oracle 18c for high-speed, append-only insert workloads — particularly IoT, telemetry, and time-series data. Oracle 23ai enhances this feature with better memory management and extended query capabilities over optimized data.

What Fast Ingest does:

Instead of writing each row insert directly to the data file (with full redo logging and block-level locking), Fast Ingest buffers new rows in the Large Pool and flushes them to disk asynchronously. This enables extremely high single-row insert throughput.

Setting up Fast Ingest:

CREATE TABLE sensor_readings (
    sensor_id      NUMBER,
    reading_time   TIMESTAMP,
    temperature    NUMBER(6,2),
    humidity       NUMBER(6,2)
) MEMOPTIMIZE FOR WRITE;
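Creating the table is only half the setup: by default an INSERT still takes the conventional path. The documented way to route a statement through the fast-ingest buffer is the MEMOPTIMIZE_WRITE hint:

```sql
-- Buffered in the large pool and flushed to disk asynchronously.
-- Durability is deferred: rows not yet flushed can be lost on failure,
-- which is the trade-off that buys the ingest throughput.
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
VALUES (42, SYSTIMESTAMP, 21.5, 47.9);
```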

23ai enhancements — MEMOPTIMIZE FOR READ:

The new MEMOPTIMIZE FOR READ option creates a specialized in-memory cache of frequently accessed rows optimized for key-based lookups. Combined with MEMOPTIMIZE FOR WRITE, you get a table that ingests fast AND queries fast:

ALTER TABLE sensor_readings MEMOPTIMIZE FOR READ;

-- Fast key lookup — served from memoptimized buffer
SELECT * FROM sensor_readings
WHERE sensor_id = 42
ORDER BY reading_time DESC
FETCH FIRST 1 ROW ONLY;

Monitoring Fast Ingest:

SELECT ROUND(used_space/1024/1024) AS used_mb,
       ROUND(total_size/1024/1024) AS total_mb,
       num_writes
FROM   v$memoptimize_write_area;

Use cases: Smart meter data ingestion, vehicle telemetry, application performance monitoring, financial tick data. If you’re writing millions of rows per hour to Oracle from IoT or streaming sources, MEMOPTIMIZE FOR WRITE should be in your architecture.

Operational Property Graphs: Real-Time Graph Analytics in Oracle 23ai

We covered SQL/PGQ in April and GRAPH_TABLE in May. This week, let’s look at the operational side: how Oracle 23ai’s property graphs differ from traditional graph database approaches, and why “operational” is the key word.

What makes a graph “operational”?

In most graph database architectures (Neo4j, Amazon Neptune, etc.), graph data is a separate store synchronized from the primary OLTP database. This means:

  • Graph queries reflect a snapshot, not current state
  • Data pipelines add latency between transactions and graph visibility
  • Two systems to maintain, monitor, and backup

Oracle 23ai’s property graphs are defined directly on relational tables. There is no separate graph store. The graph is a logical overlay on the relational data — always current, always transactionally consistent.

-- Create a graph over live operational data
CREATE PROPERTY GRAPH fraud_detection_graph
    VERTEX TABLES (
        accounts   KEY (account_id),
        devices    KEY (device_id)
    )
    EDGE TABLES (
        transactions
            SOURCE KEY (sender_id)      REFERENCES accounts
            DESTINATION KEY (receiver_id) REFERENCES accounts,
        account_logins
            SOURCE KEY (account_id)     REFERENCES accounts
            DESTINATION KEY (device_id) REFERENCES devices
    );

Querying across the live graph:

-- Detect accounts connected to a flagged account within 2 hops
SELECT *
FROM GRAPH_TABLE (
    fraud_detection_graph
    MATCH (flagged) -[]->{1,2} (connected)
    WHERE flagged.is_fraud_flagged = TRUE
    COLUMNS (connected.account_id, connected.balance)
)
WHERE balance > 100000;

Because the graph is defined over live tables, this query reflects every transaction that has occurred up to this moment — no batch sync delay. This is what “operational” means: real-time graph analytics on OLTP data.

Oracle In-Memory Enhancements in 23ai

Oracle’s In-Memory Column Store (IMCS) has been in the database since 12c, but 23ai brings meaningful enhancements to this performance layer. If you use the In-Memory option, these changes deserve attention.

Automatic In-Memory (AIM) improvements:

Oracle 23ai refines the Automatic In-Memory feature — where Oracle automatically decides which segments to populate in the column store based on access patterns.

The 23ai version uses a more sophisticated heat map that accounts for:

  • Query frequency AND recency
  • Join column correlation (prefetching joined tables together)
  • Current available In-Memory area utilization

-- Review AIM's populate/evict decisions via the AIM task views
SELECT task_id, object_owner, object_name, action, state
FROM   dba_inmemory_aimtaskdetails
ORDER BY task_id DESC;

In-Memory Hybrid Scans:

23ai introduces Hybrid Scans — the optimizer can now split a query between the In-Memory column store and on-disk row store within the same scan, when only part of the table fits in memory. This avoids the all-or-nothing limitation of earlier versions.

In-Memory for JSON:

Columnar storage is now available for JSON document columns, allowing analytical queries on JSON fields to benefit from In-Memory performance without projecting out JSON to relational columns first.

ALTER TABLE events INMEMORY;
-- JSON field analytics now use IMCS automatically
-- (JSON dot notation requires a table alias)
SELECT e.data.event_type, COUNT(*)
FROM   events e
GROUP BY e.data.event_type;

Monitoring In-Memory effectiveness:

SELECT segment_name, populate_status, bytes_not_populated,
       ROUND(inmemory_size/1024/1024) AS inmemory_mb
FROM   v$im_segments
ORDER BY inmemory_size DESC;

The IMCS continues to be one of Oracle’s most powerful performance tools. The 23ai enhancements make it smarter and more flexible.

IF [NOT] EXISTS Deep Dive: Advanced Patterns in Oracle 23ai

We covered IF [NOT] EXISTS in January. Let’s go deeper — specifically into how it integrates with migration tooling and advanced scripting patterns.

Idempotent schema migrations:

The dream of every migration tool is an idempotent script — one you can run multiple times safely, producing the same result. Oracle 23ai makes this achievable natively:

-- Safe initial setup script (the clause goes before the object name)
CREATE TABLE IF NOT EXISTS customers ( ... );
CREATE SEQUENCE IF NOT EXISTS seq_customer START WITH 1;
CREATE INDEX IF NOT EXISTS idx_customer_email ON customers(email);
CREATE VIEW IF NOT EXISTS active_customers AS SELECT ...;

-- Safe cleanup
DROP TABLE IF EXISTS temp_migration_log;
DROP INDEX IF EXISTS idx_old_email_format;

Conditional object creation based on environment:

-- Only create the debug table in non-production environments
CREATE TABLE IF NOT EXISTS debug_trace_log (
    trace_id    NUMBER,
    trace_time  TIMESTAMP DEFAULT SYSTIMESTAMP,
    trace_msg   CLOB
);

Combining with PL/SQL for complex logic:

BEGIN
    -- Create the table if new (no-op on re-run)
    EXECUTE IMMEDIATE 'CREATE TABLE IF NOT EXISTS config_params (
        param_name  VARCHAR2(100) PRIMARY KEY,
        param_value VARCHAR2(4000)
    )';

    -- On ALTER, IF EXISTS guards the object being altered, not the
    -- column, so trap ORA-01430 (column already exists) to stay idempotent
    BEGIN
        EXECUTE IMMEDIATE 'ALTER TABLE IF EXISTS config_params
            ADD (last_modified DATE DEFAULT SYSDATE)';
    EXCEPTION
        WHEN OTHERS THEN
            IF SQLCODE != -1430 THEN RAISE; END IF;
    END;
END;
/

Note the asymmetry: CREATE takes IF NOT EXISTS, while DROP and ALTER take IF EXISTS, and in each case the clause guards the object itself. There is no column-level ADD … IF NOT EXISTS, so scripts still need a dictionary check or an exception handler (as above) when adding columns.

For Flyway, Liquibase, or custom migration frameworks targeting Oracle 23ai, these additions dramatically reduce the complexity of migration scripts.

The SQL Features I Suggested (And Oracle Implemented) – Part 5: On Being an Uncredited Contributor – A Reflection

I want to wrap this saga with something honest rather than conclusive.

I posted ideas to the Oracle Community in 2016. Two of them — GROUP BY OTHERS and Oracle Tagging for Objects — bear a striking resemblance to features shipped in Oracle 23ai eight years later: GROUP BY ALL and the ANNOTATIONS clause. The syntax is nearly identical. The motivation is the same. The use cases I described are the exact use cases Oracle used in their documentation.

I sent an email to Oracle in August 2025 asking for credit in the documentation. A small request. Being acknowledged as the person who suggested a feature that’s now documented in Oracle’s official manuals would be meaningful.

I’m not writing this as an accusation. I’m writing it because I think the community deserves to know how the relationship between feedback and implementation works – or doesn’t. Thousands of developers, DBAs, and architects contribute ideas to vendor forums every year. Most go nowhere. A few get implemented. Rarely are contributors thanked.

If your idea becomes a product feature, you may never know. If it does, the credit usually stays internal.

I think that should change. Not through legal mechanisms – these are ideas, not patents – but through culture. It costs nothing to say “this idea came from community feedback, including a 2016 post by Matheus Boesing.” Nothing except the will to do it.

The Oracle community has given Oracle an enormous amount over the years: bug reports, feature requests, presentations, blog posts, conference sessions. A little acknowledgment goes a long way.

The SQL Features I Suggested (And Oracle Implemented) — Part 4: CTAS with Autodrop: A Seed That May Have Grown Into Something Bigger

THE SAGA: “The Features I Suggested, Oracle Implemented — A Personal History”

This is a five-part series about two Oracle SQL features introduced in Oracle Database 23ai that are strikingly similar — in concept, motivation, and syntax — to suggestions I posted in the Oracle Community forums in July 2016, nearly a decade before they were implemented.

Not every idea maps cleanly to a shipped feature. This one is more speculative — and I want to be honest about that.

In 2016, I also posted an idea I called “CTAS with built-in Autodrop”. The concept was a CREATE TABLE AS SELECT that included a lifecycle clause — a way to define, at creation time, that the table should be automatically dropped (or archived) after a certain time or condition.

CREATE TABLE temp_report AS
SELECT * FROM orders WHERE order_date < SYSDATE - 30
AUTODROP AFTER 7 DAYS;

The motivation: temporary working tables created for reports, ETL staging, or analysis are often forgotten. They accumulate. Giving the database a built-in way to manage their lifecycle would reduce clutter and DBA overhead.

In Oracle 12c, Oracle shipped Automatic Data Optimization (ADO) — a feature under the Information Lifecycle Management umbrella that automatically manages data based on policies (heat maps, tiering, compression, archiving). My CTAS Autodrop idea predated ADO’s expanded feature set, though some of ADO’s core concepts were already in the works.

Did my idea seed any part of ADO? Honestly, I don’t know. I said as much in my email to Gerald Venzl:

“It was suggested before the Automatic Data Optimization (ADO) was released, so it could have been something like a seed to all of it. But then I may not be able to claim, exactly.”

That’s fair. ADO is far broader than my suggestion. The connection is more conceptual than syntactic.

What I will say: the Oracle Ideas forum in 2016 was read by product managers. Ideas were evaluated. And this one described a gap that Oracle recognized and eventually addressed — in their own way, on their own timeline.

Link to my original 2016 post: https://forums.oracle.com/ords/apexds/post/ctas-with-built-in-autodrop-5261

The SQL Features I Suggested (And Oracle Implemented) — Part 3: Oracle Annotations: How My “Tagging” Idea from 2016 Became a Core Feature

Also on July 2, 2016 — the same day I posted the GROUP BY OTHERS idea — I posted another suggestion to the Oracle Community: “Oracle Tagging for Objects”.

Here’s what I described:

“It’s more than comments, it consists in add indexable keywords that enable easily locate tables or other objects in database, like Mac OS and other systems allow to tag files.”

I proposed a TAGS keyword for DDL, with this example syntax:

CREATE TABLE example (a NUMBER, b NUMBER) TAGS ('Transactional'[,...]);
CREATE SEQUENCE seq_exemp ON example TAGS ('Transactional');
CREATE INDEX idx_oltp ON example TAGS ('Transactional');

And a way to query tags:

SELECT OBJECT_NAME, OBJECT_TYPE, TAG
FROM DBA_TAGS
WHERE TAG = 'Transactional';

I also proposed that tags could have key-value form:

TAGS ('Transactional', 'John');

Now look at Oracle 23ai’s Annotations clause, introduced in 2024:

CREATE TABLE example (a NUMBER, b NUMBER)
ANNOTATIONS (Transactional, Owner 'John');

SELECT object_name, annotation_name, annotation_value
FROM user_annotations_usage;

The concept: identical. The syntax: nearly identical. The query dictionary view: the same idea. The name changed from TAGS to ANNOTATIONS, the keyword changed from TAGS(...) to ANNOTATIONS(...), but everything else maps directly.

My 2016 idea was posted eight years before the implementation. The Oracle Ideas forum was active. People voted on ideas, product managers read them.

A mention in the documentation would be nice, nothing more. I think that’s fair, even for an idea implemented a decade later. Maybe especially for one implemented a decade later. Don’t you think?

Link to my original 2016 post: https://forums.oracle.com/ords/apexds/post/oracle-tagging-for-objects-9158