19c+ DBMS_STATS.GATHER_TABLE_STATS Memory Leaking on PGA

Hi all,

So, here is a quick post, but one that wasn’t so easy to track down, though.

It happens that right after a 19c upgrade from 18c, we started facing Memory Leak messages related to the PGA area on a regular basis. After a while following the v$process_memory allocations, it was possible to map it to an ODI routine calling DBMS_STATS.GATHER_TABLE_STATS. The PGA would exceed the limit just a few minutes after the routine started.
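In case you need to chase something similar, below is roughly the kind of query I used against v$process_memory to spot the offenders. A minimal sketch: the 500 MB threshold is an arbitrary example, adjust it to your environment.

SELECT p.spid, s.sid, s.program, pm.category,
       ROUND(pm.allocated/1024/1024) alloc_mb,
       ROUND(pm.used/1024/1024) used_mb
  FROM v$process_memory pm
  JOIN v$process p ON p.pid = pm.pid
  LEFT JOIN v$session s ON s.paddr = p.addr
 WHERE pm.allocated > 500*1024*1024   -- example threshold: 500MB
 ORDER BY pm.allocated DESC;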

Nice, huh? So what is happening?

Long story short: Bug 30846782 : 19C+ FAST/EXCESSIVE PGA GROWTH WHEN USING DBMS_STATS.GATHER_TABLE_STATS.

According to Oracle Docs: No Workaround!

But here is the golden info I want to share after an SR:

EXECUTE IMMEDIATE 'alter session set "_fix_control"=''20424684:OFF''';
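If you want to confirm the fix control actually took effect in the session (and, assuming you prefer it instance-wide, the ALTER SYSTEM variant), here is a quick sketch. As always, touch underscore parameters only with Oracle Support’s blessing:

-- VALUE = 0 means the fix for bug 20424684 is disabled in this session:
SELECT bugno, value, description
  FROM v$session_fix_control
 WHERE bugno = 20424684;

-- Instance-wide alternative:
ALTER SYSTEM SET "_fix_control"='20424684:OFF';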

You are welcome!

Cheers!

OPatch Auto Failing to Apply Datapatch for a Removed Database

Hi all,

So, just a few days ago, while supporting a client, I came across the following case.

A few database creations initially failed with DBCA due to other issues, but it seems DBCA didn’t clear all creation steps after the failure and theoretical rollback.

As a consequence, whenever running DBCA in GUI mode, the client was seeing an old database. When trying to remove it with DBCA, the removal failed, as the database couldn’t be brought up (the creation had failed, remember?). OK, we can live with it, right?

Yes, until we reached the point where OPatchauto failed with the following:

Verifying SQL patch applicability on home /u01/app/oracle/product/19c/db

"/bin/sh -c 'cd /u01/app/oracle/product/19c/db; ORACLE_HOME=/u01/app/oracle/product/19c/db ORACLE_SID=DB1 /u01/app/oracle/product/19c/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

The reason? See the complete log about the failing step:

Executing command as oracle: 
 /bin/sh -c 'cd /u01/app/oracle/product/19c/db;ORACLE_HOME=/u01/app/oracle/product/19c/db ORACLE_SID=DB1 /u01/app/oracle/product/19c/db/OPatch/datapatch -verbose'
2020-09-02 16:26:56,362 INFO  [438] com.oracle.glcm.patch.auto.db.product.executor.PatchingStepExecutor - COMMAND Looks like this: /bin/sh -c 'cd /u01/app/oracle/product/19c/db;ORACLE_HOME=/u01/app/oracle/product/19c/db ORACLE_SID=DB1 /u01/app/oracle/product/19c/db/OPatch/datapatch -verbose'
2020-09-02 16:26:57,662 INFO  [438] com.oracle.glcm.patch.auto.db.product.executor.GISystemCall - Is retry required=false
2020-09-02 16:26:57,662 INFO  [438] com.oracle.glcm.patch.auto.db.product.executor.PatchingStepExecutor - status: 1
2020-09-02 16:26:57,662 INFO  [438] com.oracle.glcm.patch.auto.db.product.executor.PatchingStepExecutor - 
COMMAND EXECUTION FAILURE :
SQL Patching tool version 19.8.0.0.0 Production on Wed Sep  2 16:26:57 2020
Copyright (c) 2012, 2020, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/base/cfgtoollogs/sqlpatch/sqlpatch_25218_2020_09_02_16_26_57/sqlpatch_invocation.log

Connecting to database...
Error: prereq checks failed!
Database connect failed with: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4376
Additional information: 1275019259 (DBD ERROR: OCISessionBegin)

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/base/cfgtoollogs/sqlpatch/sqlpatch_25218_2020_09_02_16_26_57/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Wed Sep  2 16:26:57 2020

Clearly, the database is still in place.
As per MOS (ORA-01078 Can Not Delete Database From Dbca on Linux (Doc ID 1559634.1)) -> See the /etc/oratab!

Thing is, the /etc/oratab doesn’t have the DB1 line. Also, all related files, log directory structures, password file, init file, etc… all wiped out. What else?
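For reference, this is more or less the checklist I ran for the usual leftovers (a sketch; paths and names are examples from this case):

grep -i db1 /etc/oratab                  # oratab entry
ls $ORACLE_HOME/dbs | grep -i db1        # spfile/init/password file leftovers
ls /u01/app/oracle/admin | grep -i db1   # admin/log directory tree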

Here goes what it seems Oracle forgot to tell:

[oracle@PRODB01 dbca]$ srvctl status database -d DB1
Database is not running.
[oracle@PRODB01 dbca]$ srvctl config database -d DB1
Database unique name: DB1
Database name: DB1
Oracle home: /u01/app/oracle/product/19c/db
Oracle user: oracle
Spfile: 
Password file: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services: 
OSDBA group: oinstall
OSOPER group: 
Database instance: DB1
Ohhh, that took me a while to realize: this was a Standalone server, so the database was still registered in the Grid Infrastructure (Oracle Restart). Once understood, the fix is straightforward:
[oracle@PRODB01 dbca]$ srvctl remove database -d DB1
Remove the database DB1? (y/[n]) y
[oracle@PRODB01 dbca]$

I hope this can save you some minutes on MOS, in case you are googling it first.
Or save you in case you gave up MOS already.

Cheers!

19c: Could not execute DBMS_LOGMNR.START_LOGMNR: ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Hi all,
This is to show you why we should never trust documentation 100%, and how releasing new versions yearly can put additional pressure on the documentation and cause errors…

So, I started supporting a new tool for data mining. There were no version restrictions as per their documentation, so I was more than happy about creating a PDB on my brand new 19c CDB, proudly using 19c for this new tool!

What happened?

Could not execute DBMS_LOGMNR.START_LOGMNR: ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Doing my validation against the database:

SQL> execute dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
BEGIN dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + SYS.DBMS_LOGMNR.CONTINUOUS_MINE); END;

*
ERROR at line 1:
ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Why is this happening?

This happens because the CONTINUOUS_MINE feature was deprecated in 12.2 and is desupported/unavailable from 19c onwards.

It seems this log-mining tool’s devs don’t check what changes in new database versions before confirming compatibility, right?

And they haven’t done it for a while, as this was announced as deprecated back in 12.2…

Which makes me ask myself how confident they are about the tool running at the other clients they have… LOL

Anyway, are you facing the same? Here is some reference documentation to answer your boss about:

From: 19.1 Announcement – ORA-44609: CONTINOUS_MINE is Desupported For Use With DBMS_LOGMNR.START_LOGMNR (Doc ID 2456684.1):

  • CONTINUOUS_MINE was deprecated in Oracle Database 12c release 2 (12.2) and, starting with 19.1, is desupported. There is no replacement functionality.
  • The continuous_mine option for the dbms_logmnr.start_logmnr package is desupported in Oracle Database 19c (19.1), and is no longer available.

The real reasoning behind it: nothing seems to be populating V$LOGMNR_LOGS, so the ORA-1291 occurs.

Here is a quick test case for you, from the 19c Oracle Database Utilities guide, section “Mining without specifying the list of redo log files”. (19c – 22. Using LogMiner to Analyze Redo Log Files)

I just put it all together to show you how it does [not] work:

  • Setting it all up:
ALTER DATABASE add SUPPLEMENTAL LOG DATA;
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
EXECUTE DBMS_LOGMNR_D.BUILD (OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
@?/rdbms/admin/dbmslm.sql
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
create user BLABLA identified by BLABLA default tablespace users quota unlimited on users profile default;
grant connect, resource to BLABLA;
  • Doing some stuff to generate logs:
connect BLABLA/BLABLA
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
set time on
CREATE TABLE TEST_NULLS (COLUMNA1 NUMBER(3,0));
ALTER TABLE TEST_NULLS ADD (COLUMNA2 NUMBER(3) DEFAULT 0 NOT NULL);
insert into TEST_NULLS(columna1) values (4);
commit;
select * from TEST_NULLS;
update TEST_NULLS set columna2=221 where columna1=4;
commit;
select * from TEST_NULLS;
  • Gathering info for mining:
connect / as sysdba;
set echo on
set serveroutput on
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
set linesize 254
set pagesize 3000
column name format a40;
SELECT FILENAME name FROM V$LOGMNR_LOGS;
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG;
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
  • Enter the FIRST_TIME retrieved by the previous query:
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>'&1',ENDTIME => SYSDATE,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>'&1',ENDTIME => SYSDATE,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
select FIRST_CHANGE#,NEXT_CHANGE# from V$archived_log;
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
  • Use the CHECKPOINT_CHANGE# as STARTSCN and the CURRENT_SCN as ENDSCN:
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN =>&1,ENDSCN => &2,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN =>&1,ENDSCN => &2,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>SYSDATE, ENDTIME => SYSDATE +5/24,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>SYSDATE ,ENDTIME => SYSDATE +5/24 ,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

Ref: 19c – 22. Using LogMiner to Analyze Redo Log Files

  • You can expect this:
ERROR at line 1:
ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1

Hmmmm… So, the 19c Documentation is not working? Precisely.

As per (Doc ID 2613490.1), this will be fixed in 20.1 documentation.

  • Sections 22.13.2 Examples of Mining without specifying the list of redo log files explicitly and child example topics will be removed.
  • Section 22.4.2. Automatic Redo log Files options will be changed to Specifying Redo Log Files for Data Mining.
  • Section 22.7.4 The next sentence will be removed too.

In summary, every reference to automatic mining in the documentation will be removed, as this feature is not supported in 19.1 and higher.

Ok, but this doesn’t solve my problem. What should I do with the client tool?

So, CONTINUOUS_MINE was the only method of starting LogMiner without first adding logfiles using DBMS_LOGMNR.ADD_LOGFILE.

What can I do to work around it?

  1. Add the logfiles manually with dbms_logmnr.add_logfile, one call per logfile, as in the sketch below.
  2. Remove SYS.DBMS_LOGMNR.CONTINUOUS_MINE from the code.
    1. For this, you’ll need to specify the logfiles explicitly. So I guess some additional coding will be needed on your side…
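A minimal sketch of option 1, assuming the file names below (replace them with your own archived logs):

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/app/oracle/arch/arch_1_100.arc',
    OPTIONS     => DBMS_LOGMNR.NEW);      -- first file starts a new list
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/app/oracle/arch/arch_1_101.arc',
    OPTIONS     => DBMS_LOGMNR.ADDFILE);  -- each further file is appended
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
             + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
SELECT sql_redo FROM v$logmnr_contents WHERE ROWNUM <= 10;
EXEC DBMS_LOGMNR.END_LOGMNR;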

I hope it helps!
Matheus.

ORA-00600: internal error code, arguments: [kghfrempty:ds]

Hi all,
Annoying thing, just because it sounded veeery unusual to me. I would expect a different error.

I have seen this “[kghfrempty:ds]” a couple of times in the past, and if you have seen it too you may have noticed it is almost as generic as the ORA-600 itself. That’s because it usually appears at the top of the stack for KGH (kernel generic heap) errors in general. Some examples:

1.

kghnerror <- kghfrempty <- kghfrempty_ex <- qerhjFreeSpace
<- qerhjFetch <- qersoProcessULS <- qersoFetch <- opifch2 <- kpoal8
<- opiodr <- kpoodrc <- rpiswu2 <- kpoodr <- upirtrc
<- kpurcsc <- kpufch0 <- kpufch <- OCIStmtFetch2 <- qksanExecSql <- qksanAnalyzeSql

2.

kghfrempty <- kghfrempty_partial <- kghgex <- kghfnd <- kghalo <- kghgex <- kghalf <- ktmrProcessCRClone 
<- ktmbRead <- ktmrget_int <- ktmrget <- kdmsTransGet <- kdst_fetch_imc  <- kdsttgr  <- qertbFetch

And so it can be related either to analyze bugs (as per stack 1), In-Memory Column Store – IMC (as per stack 2), view creation, rollback with clusterwide global transactions, trigger creation, function-based indexes, or several other generic situations (there is no pattern, as you can see).

So what to do if you get this message? Well, generically, read all the MOS notes available and try to match your case. If you match, pray there is a bug fix for your current version, or open an SR to either get a backport or start a new investigation.

A few examples of things to check:

  • ALERT: Bug 13605839 ORA-600 [ktbsdp1] ORA-600 [kghfrempty:ds] ORA-600 [kdBlkCheckError]. Corruption in Rollback with Clusterwide Global Transactions in RAC (Doc ID 1527740.1)
  • SQL Analyze Fails with ORA-600[kghfrempty:ds] (Doc ID 2247180.1)
  • Bug 21556276 – dataguide createviewonpath hit ora 600 [kghfrempty:ds] (Doc ID 21556276.8)
  • Bug 20878625 – ORA-600[kghfrempty:ds] with IMC enabled (Doc ID 20878625.8)
  • Bug 19212166 – ORA-600 [KGHFREMPTY:DS] With Cache Line Alignment Enabled (Doc ID 19212166.8)
  • Bug 6797925 – ORA-600 [kghfrempty:ds] During Creation Of Trigger In XML Schema (Doc ID 6797925.8)
  • 11.1: ORA-600 [17147] AND [kghfrempty:ds] On Select Using Functional Indexes (Doc ID 884882.1)
  • ORA-600 [kghfrempty:ds] And ORA-600 [kghrcdepth:ds] While Gathering Statistics (Doc ID 1480132.1)

BUUT I just noticed an undocumented scenario causing this, and it might help you fix it as well.
I noticed I was getting this error for different queries, but always from the same application. So I did a complete review of the application layer and noticed this application was still using the JDBC 7 driver. Happens we had just upgraded from 12.1 to 19c.

SO, after upgrading the JDBC driver, it started to work properly again and no MOS SR was required. Lesson learned: when you receive this [kghfrempty:ds], especially after a migration, also check your application’s JDBC driver version.
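In case it’s useful, a quick way to check which driver version a given ojdbc jar actually is (the jar path is an example; recent ojdbc jars also print their version when run directly with java -jar):

unzip -p /path/to/app/lib/ojdbc7.jar META-INF/MANIFEST.MF | grep -i version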

For the compatibility matrix: Starting With Oracle JDBC Drivers – Installation, Certification, and More! (Doc ID 401934.1)

Hope it helps, cheers!

ORA-65532: cannot alter or drop automatically created indexes

Hello all,
Quickly walking through this, as it was new to me (as for everybody, I guess): so what if I’m not happy with the Oracle algorithm and I want to drop an AUTO index on 19c?

Should be easy, like “drop index XXXX;” right? Wrong. See the struggle:

SQL> drop index GREPORA."SYS_AI_9xu652x5fyu5i";
drop index GREPORA."SYS_AI_9xu652x5fyu5i"
                 *
ERROR at line 1:
ORA-65532: cannot alter or drop automatically created indexes

OK, so let’s have a look at my auto-created indexes. You can see them with the AUTO flag as YES in the query below:

SQL> select owner, index_name, auto, tablespace_name from dba_indexes where auto='YES';
OWNER              INDEX_NAME    AUTO    TABLESPACE_NAME
________ _______________________ _______ __________________
GREPORA  SYS_AI_9xu652x5fyu5i    YES     GREPORA
GREPORA  SYS_AI_few32swe423dw    YES     GREPORA
GREPORA  SYS_AI_94osd824n202f    YES     GREPORA

Ok, so I cannot drop it; let me alter it and set it as unusable then, as it would achieve my intent anyway:

SQL> alter index GREPORA."SYS_AI_9xu652x5fyu5i" unusable;
alter index GREPORA."SYS_AI_9xu652x5fyu5i" unusable
 *
ERROR at line 1:
ORA-65532: cannot alter or drop automatically created indexes

Hmmm, so it means I cannot alter it as well?
Well, kind of, there is something I can do:

SQL> alter index GREPORA."SYS_AI_9xu652x5fyu5i" rebuild tablespace GREPORA2 online;
Index altered.

Noice, thanks for nothing. What has changed?
Well, it means you can use a workaround: rebuild the index into a dedicated tablespace and then drop that tablespace:

SQL> drop tablespace GREPORA2 including contents;
Tablespace dropped.

And the index will be gone:

SQL> select owner,index_name,auto,tablespace_name from dba_indexes where auto='YES';
OWNER              INDEX_NAME    AUTO    TABLESPACE_NAME
________ _______________________ _______ __________________
GREPORA  SYS_AI_few32swe423dw    YES     GREPORA
GREPORA  SYS_AI_94osd824n202f    YES     GREPORA

OK, so that’s bad, right… And the worst part? If you simply drop it like this, Oracle will probably recreate it, based on its algorithm, right?
Gee, thanks for nothing (again)!

This, though, is easier to change: simply set Auto Indexing to report-only mode:

exec dbms_auto_index.configure('AUTO_INDEX_MODE','REPORT');

This is interesting, right?
OK, we have more options: we can even exclude specific schemas from Auto Indexing, choose a default tablespace, define retention periods and a lot of things.
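A few of those knobs for illustration, as a sketch with example schema/tablespace names (check the DBMS_AUTO_INDEX documentation for your version):

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','GREPORA', allow => FALSE);  -- exclude a schema
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','GREPORA_AI');   -- default tablespace for auto indexes
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO','90');           -- days before unused auto indexes are dropped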

As usual, the best reference I could find: https://oracle-base.com/articles/19c/automatic-indexing-19c

Just one thing you might want to consider that is not in Tim Hall’s post: especially if the index is bad for one specific query, you may want to avoid using it only during that specific query. We can do it with a session/system parameter:

“_optimizer_use_auto_indexes”=OFF
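For illustration, at session level, or scoped to a single (hypothetical) statement via the OPT_PARAM hint, assuming your version accepts underscore parameters there (test it first):

ALTER SESSION SET "_optimizer_use_auto_indexes"=OFF;

SELECT /*+ OPT_PARAM('_optimizer_use_auto_indexes' 'OFF') */ COUNT(*)
  FROM grepora_table;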

Well, hope it helps you. Cheers!

Stop Migrating Databases to 12c!

If you are starting an upgrade plan, do it directly to 19c. Why?

Well, first let’s say that 19c is “equal” to 12.2.0.3 in the previous version model, while 18c is 12.2.0.2, following the original 12.2.0.1. So, thinking reasonably, you would already plan for the latest “PSU” of the future version, right? And this is 19c.

But not only this. It’s well known that Oracle’s Terminal Patchset is the Long Term Release for each version. And for the 12.2 family this is 19c.

The image below (from Doc ID 742060.1) should help clarify this:

[Image: Release Schedule of Current Database Releases – Doc ID 742060.1]

In summary, since August 1st 2019:
– 11.2.0.4 is already on Extended Support (20% more expensive unless you have ULA)
– 12.1 family is already on Extended Support (20% more expensive unless you have ULA)

Which only leaves us with the 12.2 family. Now I ask you: why jump to any “PSU” other than the final one?
Also, support on 12.2.0.1 goes only up to Nov 2020 and for 18c (12.2.0.2) only to Jun 2021, while 19c goes up to 2026 with ES/ULA!

I’m not even mentioning the great improvements we have in newer versions (if you are on 11.2, shame on you, you are basically 6 years behind the world, technology-wise).
Myself, I have already migrated several databases at different clients to 19c and it’s running preeeetty fine. So trust it!

Ok, so how to migrate to 19c? Some tips:
– To migrate directly to 19c you need to be on supported terminal patch set (11.2.0.4, 12.1.0.2) or on 12.2 family (12.2.0.1, 18c).
– Get in touch with and use the AutoUpgrade mechanism (you’ll love it!) – AutoUpgrade Tool (Doc ID 2485457.1). See the sketch after this list.
– Check Upgrade Guide: https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/index.html
– Have a look at Mike Dietrich’s blog for news: https://mikedietrichde.com/
– Test it before doing in prod.
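To give you an idea of how simple AutoUpgrade is, here is a minimal sketch (SIDs and paths are examples; Doc ID 2485457.1 has the full option list):

# DB1.cfg - minimal AutoUpgrade config:
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.sid=DB1
upg1.source_home=/u01/app/oracle/product/12.1.0.2/db
upg1.target_home=/u01/app/oracle/product/19c/db

# Dry-run analysis first, then the actual upgrade:
java -jar autoupgrade.jar -config DB1.cfg -mode analyze
java -jar autoupgrade.jar -config DB1.cfg -mode deploy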

Additional Reference:
Release Schedule of Current Database Releases (Doc ID 742060.1)
A very nice and complete post (in Portuguese) on this topic from Ricardo Portilho.

I also like the table below very much:

[Image: release support timeline table]

See you!