UPDATE with Direct Joins in Oracle 23ai

Updating rows in one table based on data from another has always been possible in Oracle, but the syntax was never elegant. You had to use a correlated subquery or an inline view workaround. Oracle 23ai introduces direct joins in UPDATE (and DELETE) statements, making multi-table updates far more readable.

The old way — correlated subquery:

UPDATE employees e
SET    e.department_name = (
           SELECT d.department_name
           FROM   departments d
           WHERE  d.department_id = e.department_id
       )
WHERE EXISTS (
    SELECT 1 FROM departments d
    WHERE  d.department_id = e.department_id
    AND    d.active = 1
);

The new way, a direct join via the FROM clause (Oracle 23ai):

UPDATE employees e
SET    e.department_name = d.department_name
FROM   departments d
WHERE  d.department_id = e.department_id
AND    d.active = 1;

Much cleaner. The intent is immediately clear: join employees to departments, update the column, filter by a condition on the joined table. Note the shape of the syntax: Oracle places the joined table in a FROM clause after SET, rather than attaching a JOIN keyword to the UPDATE target, and the join predicate lives in the WHERE clause alongside any filters. The FROM clause can reference additional tables when the update needs a multi-table join.
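DELETE gets the same treatment in 23ai: the target table can be joined to another table through a FROM clause. A minimal sketch (the "closed" column on departments is hypothetical, for illustration):

-- Sketch: remove employees belonging to closed departments
-- (the closed flag is an assumed column, not from the examples above)
DELETE employees e
FROM   departments d
WHERE  d.department_id = e.department_id
AND    d.closed = 1;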

Multi-column update example:

UPDATE employees e
SET    e.min_salary = sb.min_salary,
       e.max_salary = sb.max_salary
FROM   salary_bands sb
WHERE  sb.grade = e.pay_grade
AND    sb.effective_date = DATE '2025-01-01';

Performance note: The Oracle optimizer can leverage indexes and statistics on the joined table just as it does with regular SELECT joins. In many cases, this will be equivalent or better than the correlated subquery approach.

If you’ve ever looked at UPDATE statements and wished they read more like SELECT statements, Oracle 23ai has finally granted that wish. This is one of those features you didn’t know you were missing until you have it.

FROM VALUES: Table Value Constructors in Oracle 23ai

Oracle 23ai introduces table value constructors — a way to use a VALUES clause directly in a FROM clause, creating an inline result set without needing a real table, a UNION ALL chain, or a PL/SQL collection.

This is especially useful for testing queries, building lookup sets, or constructing small datasets on the fly.

Basic syntax:

SELECT *
FROM   (VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol'))
       t(id, name);

Note that Oracle does not accept the AS keyword before a table alias, so the alias and column list follow the constructor directly.

Practical use case — joining against a small inline dataset:

SELECT e.employee_name, r.region_label
FROM   employees e
JOIN   (VALUES ('EMEA', 'Europe, Middle East & Africa'),
               ('APAC', 'Asia Pacific'),
               ('AMER', 'Americas'))
       r(region_code, region_label)
ON     e.region = r.region_code;

Why not just use DUAL with UNION ALL?

Before 23ai, the standard Oracle workaround was:

SELECT 1 AS id, 'Alice' AS name FROM DUAL
UNION ALL
SELECT 2, 'Bob'   FROM DUAL
UNION ALL
SELECT 3, 'Carol' FROM DUAL;

It works, but it’s verbose and scales poorly. With a VALUES constructor, the syntax is cleaner and the optimizer sees a single inline row set instead of a chain of unioned one-row queries.

Testing aggregate functions inline:

SELECT region, SUM(sales) AS total
FROM   (VALUES ('EMEA', 500), ('EMEA', 300), ('APAC', 800))
       t(region, sales)
GROUP BY region;

This pattern is fantastic for unit-testing SQL logic without needing real data in real tables. Write your query against a known dataset, validate it, then swap in the real table.
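The constructor also works as a row source for DML, which makes seeding small lookup tables pleasant. A sketch, assuming a regions table with matching columns (the table is hypothetical, not from the examples above):

-- Sketch: seed a small lookup table from an inline VALUES constructor
-- (the regions table and its columns are assumed for illustration)
INSERT INTO regions (region_code, region_label)
SELECT *
FROM   (VALUES ('EMEA', 'Europe, Middle East & Africa'),
               ('APAC', 'Asia Pacific'),
               ('AMER', 'Americas'))
       v(region_code, region_label);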

Table value constructors are a mature feature in ANSI SQL and in most other major databases. Their arrival in Oracle 23ai is welcome — and makes a lot of common SQL patterns significantly cleaner.

SELECT Without FROM: Oracle Simplifies Your Queries

Oracle DBAs have a long and complicated relationship with DUAL. This single-row, single-column table has been the answer to a quirky limitation for decades: Oracle SQL required a FROM clause on every SELECT statement, even when you just wanted to evaluate an expression.

SELECT SYSDATE FROM DUAL;
SELECT 1 + 1 FROM DUAL;
SELECT SYS_GUID() FROM DUAL;

In every other major database (PostgreSQL, MySQL, SQL Server, SQLite) you can simply write:

SELECT CURRENT_TIMESTAMP;
SELECT 1 + 1;

With Oracle 23ai, this finally works in Oracle too. The FROM clause is now optional when no table data is needed.

Before (still valid, still works):

SELECT SYSDATE FROM DUAL;
SELECT USER FROM DUAL;
SELECT 100 * 1.08 FROM DUAL;

After (23ai):

SELECT SYSDATE;
SELECT USER;
SELECT 100 * 1.08;

Why does this matter?

Mostly for ergonomics and cross-database compatibility. If you’re writing SQL that needs to run across multiple databases, removing the FROM DUAL dependency makes your code more portable. It also reduces onboarding friction for developers coming from other database backgrounds who find DUAL confusing.

DUAL itself isn’t going anywhere — Oracle has committed to maintaining backward compatibility, and millions of existing queries reference it. But new code no longer needs it for expression evaluation.

This is a small quality-of-life improvement, but it’s the kind that makes you realize Oracle is finally listening more closely to developer ergonomics. Small changes compound. And as we’ll see throughout this series, 23ai has many of them.

IF [NOT] EXISTS: Writing Safer DDL Scripts in Oracle 23ai

Every DBA has seen it: a deployment script fails halfway through because a table already exists, or a DROP command fails because the object isn’t there. The usual workaround was wrapping DDL in PL/SQL exception blocks or using DBMS_METADATA tricks. Not anymore.

Oracle 23ai introduces IF [NOT] EXISTS syntax for DDL statements, bringing Oracle in line with PostgreSQL, MySQL, and SQL Server.

Creating objects safely:

CREATE TABLE IF NOT EXISTS employees (
    id     NUMBER,
    name   VARCHAR2(100)
);

If the table already exists, Oracle simply skips the command — no error, no exception, no rollback. Clean and predictable.

Dropping objects safely:

DROP TABLE IF EXISTS temp_staging;
DROP INDEX IF EXISTS idx_emp_name;
DROP SEQUENCE IF EXISTS seq_invoice;

No more ORA-00942: table or view does not exist blowing up your scripts.

Supported objects include: TABLE, VIEW, INDEX, SEQUENCE, TRIGGER, PROCEDURE, FUNCTION, PACKAGE, TYPE, and more.

Why this matters for real deployments

If you manage database migrations with tools like Flyway or Liquibase, or if you run repeatable deployment scripts across multiple environments (dev, QA, prod), this feature is a genuine time-saver. You can now write idempotent DDL scripts natively in SQL without PL/SQL wrappers.
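Put together, a repeatable migration step can now be plain SQL. A minimal sketch, assuming hypothetical object names, safe to re-run in any environment:

-- Sketch of an idempotent migration step (object names are illustrative)
CREATE TABLE IF NOT EXISTS audit_log (
    id         NUMBER GENERATED ALWAYS AS IDENTITY,
    event_name VARCHAR2(200),
    logged_at  TIMESTAMP DEFAULT SYSTIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_audit_event ON audit_log (event_name);

-- Clean up a leftover from a previous release, if present
DROP TABLE IF EXISTS audit_log_old;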

Before 23ai, a typical safe-drop pattern looked like:

BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE temp_staging';
EXCEPTION
  WHEN OTHERS THEN NULL;
END;
/

Now it’s just:

DROP TABLE IF EXISTS temp_staging;

One line. No PL/SQL block. No hidden WHEN OTHERS THEN NULL swallowing real errors.

This is a small feature with a big impact on daily DBA work. If your team manages schema migrations, start using IF [NOT] EXISTS immediately.

The Boolean Data Type Has Finally Arrived in Oracle SQL

If you’ve worked with PL/SQL, you’ve always had access to BOOLEAN. But in Oracle SQL — in actual table columns, queries, and DML — it simply didn’t exist. You had to emulate it with NUMBER(1), CHAR(1) with check constraints, or VARCHAR2 with values like 'Y'/'N'. Each team had its own convention. None were truly boolean.

Oracle 23ai ends this inconsistency. BOOLEAN is now a native SQL data type.

Defining a boolean column:

CREATE TABLE feature_flags (
    feature_name  VARCHAR2(100),
    is_enabled    BOOLEAN DEFAULT FALSE NOT NULL,
    is_beta       BOOLEAN DEFAULT TRUE
);

Inserting and querying boolean data:

INSERT INTO feature_flags VALUES ('dark_mode', TRUE, FALSE);

SELECT feature_name
FROM   feature_flags
WHERE  is_enabled = TRUE;

Accepted literal values:

Oracle is flexible about what it accepts as boolean input. All of the following work:

  • TRUE / FALSE
  • 'YES' / 'NO'
  • 'ON' / 'OFF'
  • '1' / '0'
  • 1 / 0
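Because the column is genuinely boolean, 23ai also lets it stand alone as a predicate; no = TRUE comparison is needed. A short sketch against the feature_flags table above:

-- A boolean column is itself a valid condition
SELECT feature_name
FROM   feature_flags
WHERE  is_enabled;           -- same result as is_enabled = TRUE

SELECT feature_name
FROM   feature_flags
WHERE  NOT is_beta;          -- direct negation works too

-- Flexible literals on input: 'on' is accepted as TRUE
UPDATE feature_flags
SET    is_enabled = 'on'
WHERE  feature_name = 'dark_mode';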

JSON integration

One of the biggest wins is JSON interoperability. Oracle’s JSON functions now map SQL BOOLEAN columns directly to JSON true/false — no more "is_active": "Y" in your API responses.
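A quick sketch with JSON_OBJECT, reusing the feature_flags table above; the boolean column should serialize as a JSON boolean rather than a string:

-- BOOLEAN columns map to JSON true/false in generated JSON
SELECT JSON_OBJECT(
           'feature' VALUE feature_name,
           'enabled' VALUE is_enabled
       ) AS flag_json
FROM   feature_flags;
-- e.g. {"feature":"dark_mode","enabled":true}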

Migration note

If you have existing columns using NUMBER(1) or CHAR(1) as boolean proxies, you’ll need to handle conversion carefully. Oracle does not auto-cast these to BOOLEAN on column migration. Plan your data migration scripts accordingly.
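A conservative migration sketch for a NUMBER(1) proxy column (the accounts table and is_active column are hypothetical names):

-- Sketch: migrate a NUMBER(1) boolean proxy to a native BOOLEAN
ALTER TABLE accounts ADD (is_active_bool BOOLEAN);

UPDATE accounts
SET    is_active_bool = CASE is_active WHEN 1 THEN TRUE
                                       WHEN 0 THEN FALSE
                        END;

-- After validating the data:
-- ALTER TABLE accounts DROP COLUMN is_active;
-- ALTER TABLE accounts RENAME COLUMN is_active_bool TO is_active;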

The BOOLEAN type in SQL is one of those features that seems minor until you realize how much friction it removes. Cleaner DDL, cleaner application code, cleaner JSON output. A long-overdue improvement.

Oracle 23ai: The SQL Revolution Begins

Oracle has been quietly building one of its most feature-rich database releases in years. Oracle Database 23ai — originally named 23c before Oracle leaned fully into the AI branding — landed in May 2024 and brought over 300 new features to the platform. But if you’re a SQL developer or DBA, the SQL side of this release deserves special attention.

For years, Oracle SQL lagged behind competitors in developer ergonomics. PostgreSQL was shipping quality-of-life improvements. MySQL was adding modern syntax. And Oracle, frankly, was slow to evolve the SQL language itself — even when its own community was asking for changes.

23ai changes that narrative.

In this series, I’ll break down the SQL innovations in Oracle 23ai one by one, every week throughout 2025. Some of these features are pure developer convenience. Others are architectural game-changers. A few of them — and I’ll get to this in September — have a personal story attached.

Here’s a preview of what we’ll cover:

  • Boolean data type — finally a native BOOLEAN in SQL tables
  • IF [NOT] EXISTS for DDL — write safer migration scripts
  • GROUP BY ALL — stop repeating every non-aggregated column
  • Annotations — add metadata to any database object
  • SQL Domains — reusable column type definitions
  • Direct JOINs in UPDATE and DELETE — simplify DML logic
  • SELECT without FROM — a small but elegant change
  • AI Vector Search — a new data type for the AI era
  • SQL/PGQ Property Graphs — graph queries in standard SQL

And much more. Whether you’re running Oracle on-premises or in OCI, these features are worth knowing. Some are already available in Oracle Database Free (the successor to XE), so you can start experimenting right now.

Welcome to 2025. Let’s make it an Oracle SQL year.

OCI Free Trainings!

Hi all,

I noted you guys enjoyed my last post about the MADE Strategy and OCI Certs, especially the part where I mentioned the certifications that are free:

Ok, here are some free trainings for you as well:

Here is an overview of the Infrastructure ones, for example:

Oracle MADE Strategy and Certifications (+3 OCI Certifications for Free!)

Hello all!

As the first post for re-engagement, I’d like to share some quick notes from a recent review of the certifications, picking out the ones that align best with Oracle’s sales and market plans. First question: have you heard about the MADE Strategy?

In this context, some certifications caught my attention after reviewing the updated list, especially from a data perspective.

All of them are versioned by year, so you already have either the 2024 or the 2025 edition:

  • Oracle Autonomous Database Cloud Professional
  • Oracle Database Services Professional
  • Oracle Cloud Data Management Foundations Associate
  • Oracle Cloud Database Migration and Integration Professional
  • Oracle APEX Cloud Developer
  • OCI Architect Professional
  • OCI Multicloud Architect Associate
  • OCI Cloud Operations Professional
  • OCI Generative AI Professional
  • OCI AI Foundations Associate
  • OCI Data Science Professional
  • Oracle Cloud Infrastructure Migration Architect Professional
  • Oracle MySQL HeatWave Implementation Associate
  • Oracle Machine Learning with Autonomous Database Specialist

As a bonus for those who read this far, here are some certifications that are free for now! Enjoy!

We are back!

Hello Everyone!

I just wanted to let you know we are back on track with blogging here!

I left writing behind for a while when I had a child and went through some personal challenges, but things are now sorted out and I’m back on track.

We’ll also be re-engaging with authors and producing new and updated material.

Thanks for being around and welcome back to you too!

Dataguard Broker: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

This seems like a simple message to fix, right?

The parameter differs between the broker configuration and the database parameters, most likely because it was changed directly on the database after the Data Guard Broker configuration was created or the database was added. However, there is an interesting twist in this case.

Let’s check on the error first. On the primary database side of the broker configuration:

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Properties:
DGConnectIdentifier = 'myprodDB'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
WARNING

And checking for the status in the standby database server:

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
Warning: ORA-16809: multiple warnings detected for the database

mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
WARNING (status updated 12 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

Ok, let’s check now for the database parameters perspective on the Primary:

SQL> show parameter convert
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string +DATA/MYDATABASE/DATAFILE/, +
DATADG/myprodDB/DATAFILE
log_file_name_convert string +DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, 
                             +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/
pdb_file_name_convert string

Comparing the settings:

  • LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/'
  • log_file_name_convert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/'

It seems all right!

What is the problem then?

That’s the interesting part. Checking the MOS note “Usage and Limitation of db_file_name_convert and log_file_name_convert (Doc ID 1367014.1)”:

When using the Data Guard Broker the Values for these Parameters are limited to 512 Bytes (Characters) due to the Limit of the corresponding Data Guard Broker Properties ‘DbFileNameConvert’ and ‘LogFileNameConvert’.

That’s new to me! So, possible alternatives are:

  • Use OMF (Oracle Managed Files)
  • Use the same File Structure on both Sites
  • Rename and create Datafiles/RedoLog Files manually
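Before the broker starts complaining, you can measure how close the current settings are to that limit. A simple sanity check (not an official diagnostic):

-- How close are the convert parameters to the 512-byte broker limit?
SELECT name, LENGTH(value) AS value_length
FROM   v$parameter
WHERE  name IN ('db_file_name_convert', 'log_file_name_convert');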

What did I do in my case?

We checked and confirmed with the client that the only locations for the logfiles are DATA and DATA2 (multiplexed). So the fix was easy:

edit database 'myprodDB' set property 'LogFileNameConvert' = "+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/";

Once done:

DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 3 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Database Status:
SUCCESS

DGMGRL> show database mySTDB;

Database - mySTDB

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 325.00 KByte/s
Real Time Query: OFF
Instance(s):
mySTDB

Database Status:
SUCCESS

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Properties:
DGConnectIdentifier = 'myprodDB.'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

Did you know that?
I hope it helps!