ORA-01450 on Index Rebuild

Hello all,
So, I got this:

SQL> ALTER INDEX MATHEUS.INDEX_1 REBUILD online TABLESPACE NEW_TBS;
ALTER INDEX MATHEUS.INDEX_1 REBUILD online TABLESPACE NEW_TBS
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded

So, is this a bug? Yes it is.

From MOS ALTER INDEX REBUILD ONLINE Fails with ORA-01450 (Doc ID 236329.1):
“This is caused by issue Bug:2525767. The online rebuild of the index creates a journal table and index. This internal journal IOT table contains more columns in its index. Their total length is greater than number reported in ORA-01450 error message. This is a feature of online rebuild.
Maximum key length is calculated with respect to the database block size. It means that current value of the initialization parameter db_block_size is not large enough so that the internal journal IOT can be created without errors.”

Want to know more? There is a great article on how exactly this works, by Alex Gorbachev: ORA-01450 During Online Index Rebuild.
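
By the way, to see how close an index key actually is to the limit, you can sum the key column lengths from the dictionary. A rough sketch (owner and index name taken from the example above; it ignores the extra journal-IOT overhead the note describes):

SQL> select index_owner, index_name, sum(column_length) as key_length
from dba_ind_columns
where index_owner='MATHEUS' and index_name='INDEX_1'
group by index_owner, index_name;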

The solution?
As explained, it simply cannot be done online, so rebuild it offline instead:

SQL> ALTER INDEX MATHEUS.INDEX_1 REBUILD TABLESPACE NEW_TBS;

Index altered.

Hope it helps, cheers!

ORA-02429 on Drop Tablespace

Ok, so here is the error:

SQL> DROP TABLESPACE MY_TBS INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE MY_TBS INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-02429: cannot drop index used for enforcement of unique/primary key

Hmm… Sounds pretty clear, right? So how to work around it?

This is documented in MOS Drop Tablespace Failed with ORA-02429: cannot drop index used for enforcement of unique/primary key (Doc ID 1918060.1).

So, FIRST make sure this is not a problem for your application.
In case it is not, here is my step-by-step with some better scripting:

1) Disable the constraints:

SQL> select 'ALTER TABLE '||owner||'.'||table_name||' DISABLE CONSTRAINT '|| constraint_name||';'
from dba_constraints
where (index_owner,index_name) in (select owner,index_name from dba_indexes
where tablespace_name='MY_TBS');

2) In case of PKs, you may need to disable them with the CASCADE clause:

SQL> select 'ALTER TABLE '||owner||'.'||table_name||' DISABLE CONSTRAINT '|| constraint_name||' cascade;'
from dba_constraints
where (index_owner,index_name) in (select owner,index_name from dba_indexes
where tablespace_name='MY_TBS');

3) Drop the tablespace:

SQL> DROP TABLESPACE MY_TBS INCLUDING CONTENTS AND DATAFILES;
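
By the way, if the constraints are still needed afterwards (for example, when you are relocating segments rather than discarding the data), they can be re-enabled with their indexes recreated in a new tablespace. A hedged sketch along the same lines as the scripts above (schema and tablespace names are illustrative):

SQL> select 'ALTER TABLE '||owner||'.'||table_name||' ENABLE CONSTRAINT '|| constraint_name||' USING INDEX TABLESPACE NEW_TBS;'
from dba_constraints
where status='DISABLED'
and constraint_type in ('P','U')
and owner='MY_SCHEMA';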

Hope it helps,
Cheers!

dba_registry_sqlpatch/cdb_registry_sqlpatch Empty after Patch

Hi all!
So, I was checking a new environment and noticed that dba_registry_sqlpatch was empty when it actually shouldn't be:

SQL> select patch_id, patch_uid, version, action, action_time, status, description from dba_registry_sqlpatch;

no rows selected

SQL>

The expected output should be (from another CDB in the same home):

 PATCH_ID  PATCH_UID VERSION		   ACTION	   ACTION_TIME								       STATUS	       DESCRIPTION
---------- ---------- -------------------- --------------- --------------------------------------------------------------------------- --------------- ----------------------------------------------------------------------------------------------------
  24917972   20791781 12.1.0.2		   APPLY	   17-APR-17 11.19.49.103261 AM 					       SUCCESS	       Database PSU 12.1.0.2.170117, Oracle JavaVM Component (JAN2017)
  24732082   20904347 12.1.0.2		   APPLY	   17-APR-17 11.19.49.322985 AM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.170117
  24917972   20791781 12.1.0.2		   ROLLBACK	 29-NOV-17 08.35.57.888426 PM 					       SUCCESS	       Database PSU 12.1.0.2.170117, Oracle JavaVM Component (JAN2017)
  26635845   21564421 12.1.0.2		   APPLY	   29-NOV-17 08.35.57.890421 PM 					       SUCCESS	       Database PSU 12.1.0.2.171017, Oracle JavaVM Component (OCT2017)
  26713565   21602269 12.1.0.2		   APPLY	   29-NOV-17 08.35.57.956378 PM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.171017
  27338041   22036385 12.1.0.2		   APPLY	   12-JUN-18 01.45.24.163558 PM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.180417

The result is basically the same when querying cdb_registry_sqlpatch.

First I found the MOS note dba_registry_sqlpatch or registry$sqlpatch View Is Not Reflecting the Complete Updated Information after Patching (Doc ID 2039738.1).
The problem is that it applies to 12.1 and is caused by a bug in OPatch version 12.1.0.1.6, while the OPatch version here was 12.2.0.1.8:

$ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.8

If this is a match for you, the proposed solution in that note is:

1. Download and use the latest OPatch version 12.1.0.1.8 (Patch 6880880).
2. Take a backup and delete the contents of dba_registry_sqlpatch (through its base table registry$sqlpatch) to remove the invalid entries:

    SQL> delete from registry$sqlpatch;  -- completing the note's truncated step; registry$sqlpatch is the base table
    SQL> commit;

3. Re-run datapatch.

But what was my problem then?
Well, after a while I noticed the MOS note Datapatch may skip the application of SQL payload for certain patches included in a given bundle in a RAC environment (Doc ID 2069046.1).
It includes a PL/SQL validation script, by the way. Have a look in case it's a suspect.

And it was a match for me: it seems the client had problems with opatchauto in the past and had to run 'datapatch -verbose' manually.

The solution? Run 'datapatch -verbose' again for every CDB in the cluster. The registry$sqlpatch table is now reporting the correct patch history for all CDBs.
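
For the record, the re-run can be scripted along these lines. A hedged sketch (SIDs are illustrative; each CDB must be open, with its PDBs, when datapatch runs):

for SID in CDB1 CDB2 CDB3; do
   export ORACLE_SID=$SID
   $ORACLE_HOME/OPatch/datapatch -verbose
done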

Hope it helps!

ORA-600 [kggsmGetString:1]

Hi all!
So, checking on this error, I found the following relevant reference: ORA-600 [kggsmGetString:1] (Doc ID 1541187.1).
Since it happened while a SQL was running, the only match could be Bug 17235420 – ORA-600 [kggsmGetString:1] with adaptive cursor sharing (Doc ID 17235420.8).

The problem?
No detailed information, no workaround, only patches to apply. The only information given is: “ORA-600 [kggsmGetString:1] can occur in 12c if adaptive cursor sharing is used.”

So, checking the SQL plan details, I could see it was using the following adaptive optimizations:

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - statistics feedback used for this statement
   - this is an adaptive plan (rows marked '-' are inactive)
   - 2 Sql Plan Directives used for this statement

MOS note Recommendations for Adaptive Features in Oracle Database 12c Release 1 (Adaptive Features, Adaptive Statistics and 12c SQL Performance) (Doc ID 2312911.1) recommends applying the Oct 2017 BP or the one-off patches that disable adaptive statistics only.

In this case, the easiest solution is to disable both adaptive plans and adaptive statistics, as this release has only one parameter controlling them both – optimizer_adaptive_features, which defaults to TRUE.

And it's an online change:

SQL> show parameter adaptive

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_features	     boolean	 TRUE
optimizer_adaptive_reporting_only    boolean	 FALSE
parallel_adaptive_multi_user	     boolean	 TRUE
SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.

SQL> show parameter adaptive

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_features	     boolean	 FALSE
optimizer_adaptive_reporting_only    boolean	 FALSE
parallel_adaptive_multi_user	     boolean	 TRUE

After that, issue solved: the ORA-600 didn't repeat.
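
As a side note: this single-parameter situation is specific to 12.1. From 12.2 onwards the parameter was split into optimizer_adaptive_plans and optimizer_adaptive_statistics, so there the recommendation from the note above – disabling adaptive statistics only – can be done directly:

SQL> alter system set optimizer_adaptive_statistics=false scope=both;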

Hope it helps!

CRS-10051: CVU found following errors with Clusterware setup : PRCW-1015 : Wallet % does not exist

Hello all!
So, recently I found this error in a CRS alert log from a client environment. Interesting error…

2018-03-26 16:33:53.277 [SRVM(9624)]CRS-10051: CVU found following errors with Clusterware setup : PRCW-1015 : Wallet greporadb does not exist. 
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]PRCW-1015 : Wallet greporadb does not exist.

I also found this error reported along with it:

PRVG-1512 : Failed to retrieve current selection of public and private network classifications

So, it was mapped to known Bug 18234669, as described in CRS-10051: CVU Found Following Errors With Clusterware Setup :PRCW-1015 : Wallet ora603ut does not exist (Doc ID 2008466.1).

Ok, but what to do?

1) In case you have the wallet, you can simply add it to the database:

crsctl add wallet -type CVUDB -name [dbname]

2) In case you haven't, you can simply disable the resource ora.cvu, which is the one performing this check:

> Checking Status

[root@grepora-srv ~]# crsctl stat res ora.cvu -p | grep CHECK_RESULT
CHECK_RESULTS=PRVG-1512 : Failed to retrieve current selection of public and private network classifications,PRCW-1015 : Wallet greporadb does not exist. ,CLSW-9: The cluster wallet to be operated on does not exist. :[1015],PRCW-1015 : Wallet c4prod does not exist. ,CLSW-9: The cluster wallet to be operated on does not exist. :[1015] [... the same PRVG-1512/PRCW-1015/CLSW-9 messages repeat several more times ...]

> Disable CVU:

oracle:grepora-srv: srvctl disable cvu
oracle:grepora-srv:
oracle:grepora-srv: crsctl stat res ora.cvu -p | grep ENABLED
ENABLED=0
oracle:grepora-srv: srvctl status cvu                       
CVU is disabled
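
And in case the wallet gets created or restored later on, the resource can simply be switched back on (same environment as above):

oracle:grepora-srv: srvctl enable cvu
oracle:grepora-srv: srvctl start cvu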

Hope it helps!

ORA-600 [kwqitnmphe:ltbagi]

Hi all,

So, some time ago I started receiving an internal error “ORA-600 [kwqitnmphe:ltbagi]” from a client's database. Everything was up and running fine, but an ORA-600 is always an ORA-600.

Investigating the issue, I found it can be related to several bugs, as per MOS ORA-600 [kwqitnmphe:ltbagi] (Doc ID 1346009.1):

Bug       Fixed                               Description
17831758  12.1.0.2, 12.2.0.1                  ORA-600 [kwqitnmphe:ltbagi] in Qnnn background process
20987661  12.2.0.1                            QMON slave processes reporting ORA-600 [kwqitnmphe:ltbagi]
18591240  11.2.0.4.BP17, 12.1.0.2, 12.2.0.1   ORA-600 [kwqitnmphe:ltbagi] is seen immediately after ORA-1089
18536720  12.1.0.2, 12.2.0.1                  ORA-600 [kwqitnmphe:ltbagi] processing History IOT in AQ
16204151  12.1.0.2, 12.2.0.1                  ORA-600 [kwqitnmphe:ltbagi] when subscriber is dropped pending enqueue/dequeue
12423122  11.2.0.3, 12.1.0.1                  ORA-600 [kwqitnmphe:ltbagi] when scheduler uses AQ

In my situation it was a match to the QMON slave processes issue – the only one not yet resolved on 12.1 (my DB is 12.1; bad luck?), as per MOS Bug 20987661 – QMON slave processes reporting ORA-600 [kwqitnmphe:ltbagi] (Doc ID 20987661.8).

It is fixed in 12.2.0.1. For 12.1, there is a temporary workaround for now:

As sysdba:

  DECLARE
   po dbms_aqadm.aq$_purge_options_t;
  BEGIN
     po.block := FALSE;
     DBMS_AQADM.PURGE_QUEUE_TABLE(
       queue_table     => 'SYS.SYS$SERVICE_METRICS_TAB',
       purge_condition => NULL,
       purge_options   => po);
  END;
  /
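
Since the purge is only a temporary workaround, one option is to repeat it automatically until the fix is applied. A hedged sketch using DBMS_SCHEDULER (job name and schedule are illustrative):

  BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'SYS.PURGE_SVC_METRICS_TAB_JOB',
      job_type        => 'PLSQL_BLOCK',
      job_action      => q'[DECLARE
                              po dbms_aqadm.aq$_purge_options_t;
                            BEGIN
                              po.block := FALSE;
                              DBMS_AQADM.PURGE_QUEUE_TABLE('SYS.SYS$SERVICE_METRICS_TAB', NULL, po);
                            END;]',
      repeat_interval => 'FREQ=DAILY;BYHOUR=3',
      enabled         => TRUE);
  END;
  /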

Hope it helps!
Cheers!

ORA-02158: invalid CREATE INDEX option while using redef_table on Oracle 12cR1

Hey all,
I'm working on a table redefinition project to migrate the existing tables and indexes to new compressed tablespaces. As the customer asked for near-zero downtime on their data, we decided to go with DBMS_REDEFINITION.

Simple, right? Well… I sure hoped so.
I'm preparing a series of posts about it, but before that I would like to share some hands-on issues showing that the magic of the new redef_table is not that great yet, at least on 12cR1/12.1.0.2.
Prior to 12c, when you needed to redefine a table you would use DBMS_REDEFINITION and its 6 steps (sketched right after the list below):

0 – Manually create an interim table, with the same structure as the original table, to receive the data
1 – DBMS_REDEFINITION.can_redef_table
2 – DBMS_REDEFINITION.start_redef_table
3 – DBMS_REDEFINITION.sync_interim_table
4 – DBMS_REDEFINITION.copy_table_dependents
5 – DBMS_REDEFINITION.finish_redef_table
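
A minimal sketch of that flow (owner/table names are illustrative; it assumes TEST1 has a primary key and that the interim table TEST1_INTERIM was pre-created):

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('USER1', 'TEST1', DBMS_REDEFINITION.CONS_USE_PK);
  DBMS_REDEFINITION.START_REDEF_TABLE('USER1', 'TEST1', 'TEST1_INTERIM',
      options_flag => DBMS_REDEFINITION.CONS_USE_PK);
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('USER1', 'TEST1', 'TEST1_INTERIM');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('USER1', 'TEST1', 'TEST1_INTERIM',
      copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      num_errors   => num_errors);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('USER1', 'TEST1', 'TEST1_INTERIM');
END;
/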

And sometimes you would need to use DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT to get around some issues, but if everything went well you would only need the steps above.

There are a few issues with that approach – by that I mean BUGS 🙂 – so you need to watch your back and do good prep work.

In 12c you have a new procedure called DBMS_REDEFINITION.redef_table that bundles all 6 steps into one single procedure call, with its upsides and downsides.
For me, the downside is that we can't monitor the procedure, since nothing is recorded in dba_redefinition_errors anymore.
By working as a single transaction, either everything works or it's all rolled back (or it should be, but I will leave that for another post).

So the only way to know what is being done is to trace the session doing the redefinition. And that is what I needed to do to understand a strange situation.

This is what was happening: I first tried DBMS_REDEFINITION.redef_table and it raised ORA-02158: invalid CREATE INDEX option, yet when I used the 6 steps mentioned above (can_redef_table, start_redef_table, etc.) it worked without issues.

That bugged me, so I traced the session:

SQL> exec dbms_monitor.session_trace_enable(binds=>true,waits=>true);

SQL> BEGIN
       DBMS_REDEFINITION.REDEF_TABLE(
         uname                      => 'USER1',
         tname                      => 'TEST1',
         table_compression_type     => 'COMPRESS FOR OLTP',
         table_part_tablespace      => 'DATA_COMP',
         index_tablespace           => 'DATA_COMP',
         index_key_compression_type => 'COMPRESS ADVANCED LOW',
         lob_compression_type       => 'COMPRESS HIGH',
         lob_tablespace             => 'DATA_COMP',
         lob_store_as               => 'SECUREFILE');
     END;
     /
BEGIN
*
ERROR at line 1:
ORA-02158: invalid CREATE INDEX option
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3385
ORA-06512: at line 2

And this was the create index statement that was in the trace:

PARSE ERROR #140737488293344:len=421 dep=1 uid=0 oct=9 lid=0 tim=3977494319094 err=2158
CREATE INDEX "USER1"."TMP$$_INDEX0" ON "USER1"."REDEF$_16752430_0" ("VISIT_NO")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS COMPRESS ADVANCED LOWNOLOGGING
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "DATA_COMP"

Can you see the issue?
Well, neither could I. A colleague read the trace again and found a silly bug.

Here it is. The create index had this:

COMPRESS ADVANCED LOWNOLOGGING

Instead of this:

COMPRESS ADVANCED LOW NOLOGGING

A silly space was missing and was causing the entire redefinition to fail!

I could not find any reference on MOS, but that was it: a missing space prevented the use of redef_table and cost me some hours.

Hope this saves you some time in your troubleshooting. I will be posting about other strange situations I found using redef_table on Oracle 12cR1.

See you next post!

Elisson Almeida

After Patch: MRP0: Background Media Recovery terminated with error 10485

Ok,
I had that some time ago after applying Patch 27475598 – Oracle JavaVM Component 11.2.0.4.180417 Database PSU.
Why? Well, this patch is not RAC-Rolling Installable and also not Data Guard Standby-First Installable.

This means there is downtime for this patch, no escape.

I had to do the following (skipping all the standard opatch steps – you can see those in the README):

  • Stop DG Replication:
dgmgrl /
show configuration
show database mydg
edit database 'mydg' set state='apply-off';
show database mydg
  • Run postinstall.sql in upgrade mode with only 1 instance on (disable RAC):
cd $ORACLE_HOME/sqlpatch/27475598
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> alter system set cluster_database=false scope=spfile;
SQL> SHUTDOWN
SQL> STARTUP UPGRADE
SQL> @postinstall.sql
SQL> alter system set cluster_database=true scope=spfile;
SQL> SHUTDOWN
SQL> STARTUP

Ok, all good, seems all fine.

But now when starting my DG replication:

dgmgrl /
show configuration
show database mydg
edit database 'mydg' set state='apply-on';
show database mydg

What I see is:

DGMGRL> show database mydg

Database - mydg

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       41 minutes 53 seconds (computed 1 second ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    myprod

  Database Error(s):
    ORA-16766: Redo Apply is stopped

Database Status:
ERROR

DGMGRL>

And on Database Alert Log:

MRP0: Background Media Recovery terminated with error 10485
Errors in file /u01/app/oracle/diag/rdbms/axwest/greporaprod/trace/greporaprod_pr00_42628.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Recovery interrupted!
MRP0: Background Media Recovery process shutdown (greporaprod)

Well, in my case it happened because I use Active Data Guard, with the standby open read only. The solution? Start your standby in MOUNT mode to apply the migration redo!

This is well described as per MOS: MRP process getting terminated with error ORA-10485 (Doc ID 1618485.1).

After it gets in sync, you can simply open it read only again.
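
For reference, the sequence on the standby looks like this (a sketch; database name from the example above, and the exact steps may vary with your broker setup):

SQL> shutdown immediate
SQL> startup mount

DGMGRL> edit database 'mydg' set state='apply-on';

Once the migration redo is applied and the lag is gone, open it read only again and the broker restarts the apply.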

Hope it helps!

Oracle 12.2.0.1 Trace Facility Problems

Hey!
Quick one today… Not sure if you noticed, but 12.2.0.1 has some problems with the trace facility. Some components/processes generate a huge amount of trace files.

MOS notes:
– RMAN BACKUP Command Always Generates Trace File Having “kcbtse_print_kod” Message (Doc ID 2311975.1)
– MMON trace file grows and is flooded with: kmgs_parameter_update_timeout Gen0 messages (Doc ID 2319931.1)

Both have their patches already…

Additionally, Mike Dietrich's blog posts about the above notes:

https://mikedietrichde.com/2018/05/24/rman-backup-generates-traces-in-oracle-12-2-0-1/
https://mikedietrichde.com/2018/05/23/mmon-unconditional-traces-in-oracle-12-2-0-1/
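
Until the patches are in place, a quick way to spot and size the offenders is something like this sketch (the diag path is illustrative and depends on your environment):

cd $ORACLE_BASE/diag/rdbms/<dbname>/<instance>/trace
ls -lhS *.trc | head                      # biggest trace files first
grep -l kcbtse_print_kod *.trc | wc -l    # how many were left by RMAN backups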


GRID upgrade FREEZES – 11g to 12c

Hey guys,
Upgrading is always a critical and delicate operation, but when you get no feedback on the screen it's even harder.

I was working on an upgrade, using the GUI to upgrade the GRID from 11g to 12c. The 11gR2 (11.2.0.4) stack was working without issues, and so was ASM (note this point, we will come back to it later on).

When it was time to run rootupgrade.sh, it just got stuck. No matter what, the GRID upgrade to 12c just FROZE. Checking the logs, the last message was only this:

CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

Looking at the other logs (/u01/app/12.1.0/grid/cfgtoollogs/crsconfig), there were messages related to OCR, saying it could not get the OCR key with CLUUTIL and to try using OCRDUMP. I checked the OCR with ocrdump and ocrcheck – no issues there either. Also, as I said before, the cluster was working without any issues.

As I had no error code or anything that would point to a more specific cause, I went on a broad search on Google and MOS. I saw all kinds of things until I found MOS note Wrong DiscoveryString /dev/*: rootupgrade.sh/root.sh hangs: Check OCR key using ocrdump (Doc ID 1916106.1).

I checked, and my ASM disk discovery string was set to /dev/*, which did not strike me as an issue since, as I mentioned, everything was working… BUT when I changed the discovery string in ASM to /dev/asm-*, the upgrade worked like a charm.
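
For reference, the change itself is a one-liner on the ASM instance. A minimal sketch (the exact pattern depends on your udev/device naming):

SQL> show parameter asm_diskstring
SQL> alter system set asm_diskstring='/dev/asm-*' sid='*' scope=both;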

As a side note, there is also this note with some best practices for upgrading: How to Upgrade to/Downgrade from Grid Infrastructure 12.1 and Known Issues (Doc ID 1579762.1).

Hope this helps and saves you some time in your troubleshooting.

Élisson Almeida