I’m Attempting to Break a Guinness World Record – Yeah, it’s true!

Some ideas are bigger than the person who has them. This one has been living in my head for a long time, and now it’s real: Guinness World Records has officially accepted my challenge.

From September 4 to 7, 2026, I will teach an uninterrupted class on Introduction to SQL and Database Administration for more than 80 consecutive hours, live, in hybrid format, from a small city in the interior of Rio Grande do Sul, Brazil.

The current world record for the longest computer programming lesson stands at 60 hours. We’re going after 80h.


More Than a Record. A Statement About Education.

The project is called “A Aula Mais Longa” (The Longest Lesson), and the name is exactly what it sounds like. But it’s not about endurance. It’s not a stunt. It’s a declaration.

We live in a country where education is often seen as secondary, something that happens before the “real” career begins, or something reserved for those who already have access. I want to prove something different: that learning can be extraordinary. That students and teachers can be protagonists. That a city in the interior of Brazil can make world history.

The lesson starts on September 4 and ends on September 7: Brazil’s Independence Day. That’s not a coincidence.


What We’re Aiming For

The primary record we’re chasing is the Longest Computer Programming Lesson, officially accepted by Guinness World Records. The current benchmark is 60 hours. Ours will surpass 80.

But we’re not stopping at one title. During the event, we’ll also attempt to set or break records in categories including:

  • Longest Lesson Attended (current reference: 78h 3min)
  • Longest Continuous Student Attendance
  • Most Users to Take an Online Computer Programming Lesson in 24 Hours (current reference: 112,314 participants)
  • Longest Hybrid Technology Lesson
  • Longest Online Lesson
  • Longest University Lesson

The Scale of the Challenge

This won’t be a quiet classroom experiment. The event is designed for real impact, in real numbers:

  • +10,000 simultaneous participants
  • +1,000 professionals trained live
  • +200,000 students impacted online
  • +1 million people reached
  • 4 days of continuous live streaming
  • 800+ bite-sized learning clips generated from the content

The format is hybrid: in-person attendance in Três de Maio, RS, alongside full online participation. There are currently 50 in-person spots and 100 online spots for those who want to be officially part of the Guinness attempt. We'll gauge public interest and may expand those numbers if demand grows.


The Legacy That Stays

The record ends when the lesson ends. The legacy doesn’t.

Every piece of equipment used during the event will be donated to public education initiatives. The 80 hours of content will be edited into short modules and made freely available. Professionals trained during the event become multipliers, people who carry this knowledge forward into their communities and careers.

This is what I mean when I say it’s about education, not just a record. The Guinness certificate is the headline. The real story is what happens after.


What’s Next

We’re currently in the sponsorship and partnership phase (March–April 2026). Public registration is open!

The final push, with press coverage, technical rehearsals, and pre-event media, happens in July and August. Then, on September 4, we go live. For 80 hours.

If you want to follow this journey, be part of it, or just watch history happen in real time, keep an eye on aulamaislonga.com.br or longestlesson.com and follow @aulamaislonga on Instagram.

The record is just the beginning.


— Prof. Matheus Boesing.

Oracle 26ai ALIAS Clause: Powerful Query Patterns

The ALIAS clause enables some surprisingly elegant query patterns. Let’s look at the cases where it makes the biggest difference.

Multi-step financial calculations:

SELECT
    order_id,
    quantity * unit_price                      AS gross_amount    ALIAS gross_amount,
    gross_amount * tax_rate / 100              AS tax_amount       ALIAS tax_amount,
    gross_amount + tax_amount                  AS total_amount     ALIAS total_amount,
    total_amount * discount_pct / 100          AS discount_amount  ALIAS discount_amount,
    total_amount - discount_amount             AS net_payable
FROM order_lines
WHERE gross_amount > 1000;

Without ALIAS, you’d either repeat the expressions (error-prone) or use a subquery to materialize intermediate results. Now it’s a single, readable SELECT.
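For contrast, that subquery workaround looks roughly like this (a sketch reusing the hypothetical order_lines columns above, trimmed to the first two intermediate steps):

```sql
-- Pre-ALIAS workaround: materialize intermediates in an inline view
SELECT order_id,
       gross_amount,
       gross_amount * tax_rate / 100 AS tax_amount
FROM (
    SELECT order_id,
           quantity * unit_price AS gross_amount,
           tax_rate
    FROM   order_lines
)
WHERE gross_amount > 1000;
```

Every additional derived value means another nesting level or another repeated expression, which is exactly what ALIAS removes.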

Analytics with computed breakpoints:

SELECT
    customer_id,
    SUM(order_value)                          AS lifetime_value   ALIAS lifetime_value,
    CASE WHEN lifetime_value >= 10000 THEN 'PLATINUM'
         WHEN lifetime_value >= 5000  THEN 'GOLD'
         WHEN lifetime_value >= 1000  THEN 'SILVER'
         ELSE 'BRONZE'
    END                                       AS customer_tier    ALIAS customer_tier,
    COUNT(CASE WHEN order_date >= ADD_MONTHS(SYSDATE, -12) THEN 1 END) AS orders_last_year
FROM orders
GROUP BY customer_id
ORDER BY lifetime_value DESC;

ALIAS in WHERE clause:

SELECT
    employee_id,
    salary * (1 + bonus_pct/100)              AS total_comp  ALIAS total_comp,
    TRUNC(total_comp / 1000) * 1000           AS comp_band
FROM employees
WHERE total_comp > 60000    -- references the alias directly
ORDER BY total_comp DESC;

Previously this WHERE clause would require a subquery or repeating the expression. ALIAS makes the intent clear and the query maintainable.
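For comparison, here is the expression-repetition version of the same query (same assumed employees columns), which is what you would write without ALIAS:

```sql
-- Pre-ALIAS equivalent: the computed expression is repeated verbatim
SELECT employee_id,
       salary * (1 + bonus_pct/100)                      AS total_comp,
       TRUNC(salary * (1 + bonus_pct/100) / 1000) * 1000 AS comp_band
FROM   employees
WHERE  salary * (1 + bonus_pct/100) > 60000
ORDER BY total_comp DESC;
```

Three copies of the same formula means three places to update when the bonus calculation changes.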

Oracle SQL Firewall: Production Deployment Guide

SQL Firewall is powerful, but deploying it in production without a proper plan can cause application outages. Here’s a safe, phased deployment guide.

Phase 1: Observation (2-4 weeks)

Enable SQL Firewall in capture mode for all application users. Let it observe the full range of SQL generated by your application — including end-of-month batch jobs, reporting queries, and administrative scripts.

EXEC DBMS_SQL_FIREWALL.ENABLE;
EXEC DBMS_SQL_FIREWALL.CREATE_CAPTURE('APP_USER');
EXEC DBMS_SQL_FIREWALL.CREATE_CAPTURE('REPORT_USER');
EXEC DBMS_SQL_FIREWALL.CREATE_CAPTURE('ETL_USER');
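At the end of the observation window, stop each capture and build the allow-list from what was observed (procedure names per the DBMS_SQL_FIREWALL package; shown for one user):

```sql
-- Close the capture window and generate the allow-list from captured SQL
EXEC DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');
EXEC DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST('APP_USER');
```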

Phase 2: Allow-list review

After the observation period, review what was captured:

SELECT sql_text, capture_count, first_seen, last_seen
FROM   dba_sql_firewall_allowed_sql
WHERE  username = 'APP_USER'
ORDER BY last_seen DESC;

Remove any SQL that shouldn’t be on the allow-list (e.g., ad-hoc queries a developer ran during the capture period):

EXEC DBMS_SQL_FIREWALL.DELETE_ALLOWED_SQL('APP_USER', :sql_id);

Phase 3: Enable in LOG mode (not BLOCK)

EXEC DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST('APP_USER', DBMS_SQL_FIREWALL.ENFORCE_SQL, FALSE);
-- FALSE = log violations but don't block

Monitor DBA_SQL_FIREWALL_VIOLATIONS for 1-2 weeks. Any legitimate application SQL that triggers violations needs to be added to the allow-list.
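A monitoring query along these lines works; the column names here are from memory, so verify them against the view definition in your release:

```sql
-- Review what would have been blocked while still in LOG mode
SELECT sql_text, firewall_action, occurred_at
FROM   dba_sql_firewall_violations
WHERE  username = 'APP_USER'
ORDER BY occurred_at DESC;
```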

Phase 4: Enable BLOCK mode

Only after violations are zero (or only known attack patterns):

EXEC DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST('APP_USER', DBMS_SQL_FIREWALL.ENFORCE_SQL, TRUE);

Ongoing maintenance: When application code changes, update the allow-list before deploying to production. A CI/CD step that runs a short capture against a staging environment and updates the allow-list is ideal.
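Sketched as SQL steps (APPEND_ALLOW_LIST and the FIREWALL_CAPTURE constant are part of DBMS_SQL_FIREWALL; the regression-suite step belongs to your pipeline, not the database):

```sql
-- 1. Open a fresh capture in staging
EXEC DBMS_SQL_FIREWALL.CREATE_CAPTURE('APP_USER');

-- 2. Run the application's regression suite against staging (outside SQL)

-- 3. Close the capture and merge the new statements into the allow-list
EXEC DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');
EXEC DBMS_SQL_FIREWALL.APPEND_ALLOW_LIST('APP_USER', DBMS_SQL_FIREWALL.FIREWALL_CAPTURE);
```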

Oracle 23ai Direct Join DML: Performance Analysis

Direct JOIN syntax in UPDATE and DELETE (covered in February 2025) is cleaner to write, but does it perform better? The answer depends on the query, and understanding the execution plan differences helps you make the right choice.

Checking execution plans for both approaches:

-- Approach 1: Correlated subquery (classic)
EXPLAIN PLAN FOR
UPDATE employees e
SET    e.department_name = (SELECT d.department_name FROM departments d WHERE d.id = e.dept_id)
WHERE EXISTS (SELECT 1 FROM departments d WHERE d.id = e.dept_id AND d.active = 1);

-- Approach 2: Direct JOIN (23ai)
EXPLAIN PLAN FOR
UPDATE employees e
SET    e.department_name = d.department_name
FROM   departments d
WHERE  d.id = e.dept_id
AND    d.active = 1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

What you’ll typically see:

The optimizer often generates the same or equivalent plan for both syntaxes. The direct JOIN form gives the optimizer clearer join semantics, which can lead to better cardinality estimates and more accurate statistics usage.

When direct JOIN is measurably better:

  • When the join column has a usable index and the correlated subquery form was defeating the index due to function wrapping
  • When the number of rows to update is a small fraction of the target table (the JOIN can use NESTED LOOPS more efficiently)
  • When Oracle was previously choosing FILTER operations (which can be slow) for correlated EXISTS subqueries

When they’re equivalent:

For well-written correlated subqueries with proper indexes, the optimizer typically produces identical plans. The direct JOIN benefit is primarily in readability and maintainability, not necessarily raw performance.

General recommendation: Use direct JOIN for clarity. If you observe a performance regression (unlikely but possible on older statistics), compare plans and potentially hint the join type.
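If you do need to steer the plan while investigating, a standard join-method hint works in the DML just as in a query. A sketch against the example tables above, using the FROM-clause form of the direct join:

```sql
-- Force a hash join while comparing plans (remove once resolved)
UPDATE /*+ USE_HASH(d) */ employees e
SET    e.department_name = d.department_name
FROM   departments d
WHERE  d.id = e.dept_id
AND    d.active = 1;
```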

Oracle 23ai Boolean Type: Migration Patterns from NUMBER(1) and CHAR(1)

The BOOLEAN data type is new, clean, and semantically correct. But most Oracle shops have years of NUMBER(1) and CHAR(1) boolean proxies in production schemas. Here’s how to migrate.

Auditing existing boolean-proxy columns:

-- Find NUMBER(1) columns that are likely booleans
SELECT table_name, column_name, data_type, data_length
FROM   user_tab_columns
WHERE  (data_type = 'NUMBER' AND data_precision = 1 AND data_scale = 0)
OR     (data_type = 'CHAR'   AND data_length = 1)
ORDER BY table_name, column_name;

Review results with your team. Not every NUMBER(1) is a boolean — some are small integer codes.
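A quick value profile settles the question per column (table and column here are the hypothetical ones from the migration example):

```sql
-- A true boolean proxy should show only 0/1 (plus possibly NULL)
SELECT is_active, COUNT(*) AS cnt
FROM   products
GROUP  BY is_active;
```

Anything beyond two values (and NULL) is a code column, not a boolean.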

Migration pattern — NUMBER(1) to BOOLEAN:

-- Step 1: Add a new BOOLEAN column
ALTER TABLE products ADD is_active_bool BOOLEAN;

-- Step 2: Populate from existing column
UPDATE products SET is_active_bool = CASE WHEN is_active = 1 THEN TRUE ELSE FALSE END;

-- Step 3: Apply constraints
ALTER TABLE products MODIFY is_active_bool NOT NULL;

-- Step 4: Drop old column (after verifying app code is updated)
ALTER TABLE products DROP COLUMN is_active;

-- Step 5: Rename
ALTER TABLE products RENAME COLUMN is_active_bool TO is_active;

Application considerations:

JDBC drivers and ORMs need to be updated to handle Oracle’s BOOLEAN type. Oracle JDBC 23ai drivers support getBoolean()/setBoolean() natively. Older driver versions map Oracle BOOLEAN to NUMBER or VARCHAR2 — verify your driver version before migrating columns that applications read.

Rollout recommendation: Migrate boolean columns in new tables first (greenfield), then tackle high-traffic legacy tables during planned maintenance windows with connection pool drains.

Building Idempotent Database Scripts with Oracle 23ai IF [NOT] EXISTS

Idempotent database migrations — scripts you can safely run multiple times — are a CI/CD best practice. Oracle 23ai’s IF [NOT] EXISTS syntax makes most of this achievable without PL/SQL wrappers. Here’s a complete migration script template.

Idempotent schema creation script:

-- ============================================================
-- Migration: v2.5.0 - Customer Preferences Schema
-- Safe to run multiple times
-- ============================================================

-- 1. Create table if it doesn't exist
CREATE TABLE IF NOT EXISTS customer_preferences (
    preference_id   NUMBER         GENERATED ALWAYS AS IDENTITY,
    customer_id     NUMBER         NOT NULL,
    preference_key  VARCHAR2(100)  NOT NULL,
    preference_val  VARCHAR2(4000),
    created_at      TIMESTAMP      DEFAULT SYSTIMESTAMP,
    CONSTRAINT pk_cust_pref PRIMARY KEY (preference_id),
    CONSTRAINT uq_cust_pref_key UNIQUE (customer_id, preference_key)
);

-- 2. Add columns if they don't exist
-- Note: IF [NOT] EXISTS covers CREATE and DROP statements, not ALTER,
-- so column additions still need a guard
BEGIN
    EXECUTE IMMEDIATE 'ALTER TABLE customer_preferences ADD (updated_at TIMESTAMP, updated_by VARCHAR2(100))';
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE != -1430 THEN RAISE; END IF;  -- ORA-01430: column already exists
END;
/

-- 3. Create index if needed
CREATE INDEX IF NOT EXISTS idx_cust_pref_customer
    ON customer_preferences (customer_id);

-- 4. Create sequence if it doesn't exist (for legacy patterns)
CREATE SEQUENCE IF NOT EXISTS seq_pref_legacy
    START WITH 1 INCREMENT BY 1 NOCACHE;

-- 5. Create or replace views (always safe, no IF needed)
CREATE OR REPLACE VIEW active_preferences AS
SELECT * FROM customer_preferences
WHERE preference_val IS NOT NULL;

-- End of migration

Why this matters for DevOps:

With Flyway or Liquibase, each migration file should run exactly once. But in some environments (re-running failed migrations, cross-environment synchronization), idempotency provides a safety net. Oracle 23ai’s native IF [NOT] EXISTS removes the need for tool-specific workarounds and makes the intent clear in the script itself.
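The same clause makes the companion rollback script idempotent too. A sketch matching the objects created above:

```sql
-- Rollback for v2.5.0 - also safe to run multiple times
DROP VIEW IF EXISTS active_preferences;
DROP SEQUENCE IF EXISTS seq_pref_legacy;
DROP INDEX IF EXISTS idx_cust_pref_customer;
DROP TABLE IF EXISTS customer_preferences;
```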

Property Graphs: Advanced Analytics Beyond Simple Traversals

Property graphs in Oracle 23ai aren’t just for simple hop-counting. With SQL/PGQ’s full feature set, you can run sophisticated graph analytics directly in SQL.

Centrality analysis — finding the most connected nodes:

-- Degree centrality: how many direct connections does each node have?
SELECT node_id, COUNT(*) AS degree_centrality
FROM GRAPH_TABLE (
    social_graph
    MATCH (n) -[IS follows]-> (m)
    COLUMNS (n.user_id AS node_id)
)
GROUP BY node_id
ORDER BY degree_centrality DESC
FETCH FIRST 10 ROWS ONLY;

Community detection via shared connections:

-- Find users who share more than 5 mutual followers (potential community)
SELECT g.user_a, g.user_b, COUNT(DISTINCT g.shared_user) AS mutual_count
FROM GRAPH_TABLE (
    social_graph
    MATCH (a IS users) <-[IS follows]- (shared IS users) -[IS follows]-> (b IS users)
    WHERE a.user_id <> b.user_id
    COLUMNS (a.user_id AS user_a, b.user_id AS user_b, shared.user_id AS shared_user)
) g
GROUP BY g.user_a, g.user_b
HAVING COUNT(DISTINCT g.shared_user) > 5;

Fraud ring detection — layered hops:

-- Find all accounts reachable within 4 transactions from a known fraud seed
SELECT DISTINCT suspicious.account_id, suspicious.account_name
FROM GRAPH_TABLE (
    fraud_graph
    MATCH (seed IS accounts) -[IS transactions]->{1,4} (suspicious IS accounts)
    WHERE seed.flagged_for_fraud = 1
    AND   suspicious.flagged_for_fraud = 0
    COLUMNS (suspicious.account_id, suspicious.account_name)
)
WHERE account_id NOT IN (SELECT account_id FROM fraud_whitelist);

Combining graph traversal with relational filters and aggregations is what sets Oracle’s SQL/PGQ apart from standalone graph databases — you get the full SQL toolkit alongside the graph engine.

Oracle Select AI in Production: Lessons from Real Deployments

Select AI (natural language to SQL) sounds magical in demos. In production, it requires careful setup to be useful. Here’s what the production experience looks like.

Schema naming is everything:

Select AI uses table and column names to understand your schema. Cryptic names like T_CUST_HDR or AMT_NET_USD_EQUIV confuse the LLM. Before deploying Select AI, audit your schema naming:

-- Bad: obscure naming that defeats NLP
SELECT C_ID, FNM, LNM FROM T_CUST WHERE STAT = 'A';

-- Good: self-documenting naming
SELECT customer_id, first_name, last_name FROM customers WHERE status = 'ACTIVE';

If you can’t rename legacy tables/columns, use synonym layers or annotate them:

-- Add annotations as hints for Select AI
ALTER TABLE T_CUST MODIFY (FNM ANNOTATIONS (ADD UILabel 'First Name'));
ALTER TABLE T_CUST ANNOTATIONS (ADD BusinessName 'Customers');

Profile refinement with examples:

Select AI profiles support example question-SQL pairs that guide the LLM:

BEGIN
    DBMS_CLOUD_AI.ADD_EXAMPLE(
        profile_name => 'hr_assistant',
        question     => 'How many employees are in each department?',
        sql_text     => 'SELECT d.department_name, COUNT(e.employee_id) AS headcount FROM departments d LEFT JOIN employees e ON e.department_id = d.department_id GROUP BY d.department_name ORDER BY headcount DESC'
    );
END;
/

Adding 10-20 well-chosen examples dramatically improves accuracy for domain-specific queries.

Monitor generated SQL:

Always log what Select AI generates before it executes in production:

EXEC DBMS_CLOUD_AI.SET_PROFILE('sales_ai');

SELECT AI showsql Top 5 products by revenue last quarter;

Review and approve generated SQL patterns before exposing Select AI to end users.

JSON Duality Views: Advanced Query Patterns

After a year working with JSON Relational Duality Views, the patterns that make them genuinely powerful in production have become clear. Here are the advanced techniques.

Filtering on nested JSON fields:

-- Find orders where any item's quantity exceeds 10
SELECT data
FROM   order_dv
WHERE  JSON_EXISTS(data, '$.items[*]?(@.qty > 10)');

Combining duality view queries with relational joins:

Duality views can be joined with regular tables in SQL:

SELECT o.data.customer, o.data.status, c.credit_limit
FROM   order_dv o
JOIN   customers c ON c.customer_id = o.data._id.customer_id
WHERE  o.data.status = 'PENDING'
AND    c.credit_limit < 1000;

Using ORDS to expose duality views as REST APIs:

Once a duality view is published via ORDS, you get:

  • GET /api/orders — paginated list of all orders as JSON documents
  • GET /api/orders/42 — single order document
  • PUT /api/orders/42 — full document replacement (with optimistic locking via ETag)
  • POST /api/orders — insert a new order (decomposed into relational tables)

No controller code required. Oracle handles the JSON ↔ relational mapping.

Optimistic locking with ETags:

-- Each duality view document carries a system-generated ETag in its metadata
SELECT o.data, JSON_VALUE(o.data, '$._metadata.etag') AS etag
FROM   order_dv o
WHERE  o.data."_id" = 42;

-- Update only succeeds if the ETag still matches (no concurrent modification)
UPDATE order_dv o
SET    o.data = JSON_MERGEPATCH(o.data, '{"status": "SHIPPED"}')
WHERE  o.data."_id" = 42
AND    JSON_VALUE(o.data, '$._metadata.etag') = :last_known_etag;

This implements optimistic concurrency control — concurrent modifications are rejected rather than silently overwriting each other. A valuable pattern for mobile and REST APIs.

SQL Domains: When to Use Them (And When Not To)

SQL Domains are powerful, but like any abstraction layer, they work better in some situations than others. Here’s a practical guide to the decision.

Use SQL Domains when:

The same semantic type appears in 3+ columns across your schema. Email addresses, phone numbers, status codes, country codes, monetary amounts — these are domain candidates.

The constraint logic is non-trivial. A REGEXP_LIKE email validation or a multi-value IN list is worth centralizing. A simple NUMBER column without constraints is not.
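As a sketch (domain, constraint, and table names here are illustrative), centralizing an email check looks like this:

```sql
-- One REGEXP_LIKE check, written once, reused by every email column
CREATE DOMAIN d_email AS VARCHAR2(320)
    CONSTRAINT d_email_chk CHECK (REGEXP_LIKE(VALUE, '^[^@]+@[^@]+\.[^@]+$'));

CREATE TABLE subscribers (
    subscriber_id NUMBER PRIMARY KEY,
    email         VARCHAR2(320) DOMAIN d_email
);
```

Tightening the regex later means one ALTER on the domain, not a hunt across every table.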

You want schema-level documentation to drive tooling. Domains are queryable — your data catalog, code generators, or validation tools can introspect them.
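For example, the data dictionary exposes domains and their column usage directly (view names as I recall them in 23ai; verify in your release):

```sql
-- List defined domains and where they are applied
SELECT * FROM user_domains;
SELECT * FROM user_domain_cols;
```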

You’re building a new schema. Retrofitting domains is harder than designing with them from the start.

Be cautious when:

The constraint might need to vary per table. Domains enforce the same constraint everywhere they’re used. If one table needs CHECK (VALUE IN ('A','B')) and another needs CHECK (VALUE IN ('A','B','C')), that’s two domains, not one.

You have existing applications that don’t expect domain-level errors. Domain constraint violations raise the same ORA-02290 CHECK constraint error as inline constraints, but the constraint name includes the domain name — applications parsing constraint names may need updating.

You’re mid-migration on a legacy schema. Adding domains to existing columns requires an ALTER TABLE MODIFY that adds the domain. This can be done incrementally, but plan it carefully.
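The incremental step is a per-column ALTER. A sketch (table name illustrative; the statement fails if existing data violates the domain, so clean the data first):

```sql
-- Attach an existing domain to a legacy column
ALTER TABLE legacy_orders MODIFY status DOMAIN d_status_basic;
```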

Creating domain variations correctly:

-- Don't force one domain to cover all cases
CREATE DOMAIN d_status_basic AS VARCHAR2(10)
    CONSTRAINT d_status_basic_chk    CHECK (VALUE IN ('ACTIVE','INACTIVE'));

CREATE DOMAIN d_status_extended AS VARCHAR2(10)
    CONSTRAINT d_status_extended_chk CHECK (VALUE IN ('ACTIVE','INACTIVE','PENDING','ARCHIVED'));

Two focused domains are better than one overly permissive domain or one domain with workaround constraints.