declare
  i number := 0;
begin
  for c1 in (select address, hash_value, sql_id
               from v$sql
              where last_active_time < sysdate - 1/24
                and executions < 2) loop
    begin
      dbms_shared_pool.purge(c1.address || ' ' || to_char(c1.hash_value), 'C');
      i := i + 1;
    exception
      when others then
        dbms_output.put_line(c1.sql_id);
    end;
  end loop;
  dbms_output.put_line('Cursors purged = ' || to_char(i));
end;
/
ORA-00600: internal error code, arguments: [13011], [7907] on SYS.DBMS_AQ_INV
Hi all,
So, a couple of weeks ago I was facing the following in a client environment. It was appearing every 10 minutes in the alert.log:
ORA-00600: internal error code, arguments: [13011], [7907], [12679954], [1], [13461738], [0], [], [], [], [], [], []
ORA-01403: no data found
ORA-06512: at "SYS.DBMS_AQ_INV", line 1248
I could find several references to ORA-600 [13011], always related to some internal corruption. As this is related to Advanced Queueing, I decided to check the Scheduler job table structures:
SQL> analyze table SYS.SCHEDULER$_EVENT_LOG validate structure cascade;

Table analyzed.

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade;
analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade
*
ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure;

Table analyzed.
Aha!
Also, in the generated trace file:
2019-03-01 22:26:37.736
SESSION ID:(39.32751) 2019-03-01 22:26:37.736
CLIENT ID:() 2019-03-01 22:26:37.736
SERVICE NAME:(SYS$USERS) 2019-03-01 22:26:37.736
MODULE NAME:(sqlplus.exe) 2019-03-01 22:26:37.736
CLIENT DRIVER:(SQL*PLUS) 2019-03-01 22:26:37.736
ACTION NAME:() 2019-03-01 22:26:37.736
CONTAINER ID:(1) 2019-03-01 22:26:37.736

Table/Index row count mismatch
table 273184 : index 275017, 1832
Index root = tsn: 1 rdba: 0x00c0128a
So ANALYZE on table SCHEDULER$_JOB_RUN_DETAILS fails with CASCADE but succeeds without CASCADE. This means there is a problem with one of the indexes of this table: it has gone out of sync with the table.
SQL> select 'alter index '||owner||'.'||index_name||' rebuild;' from dba_indexes where table_name='SCHEDULER$_JOB_RUN_DETAILS';

'ALTERINDEX'||OWNER||'.'||INDEX_NAME||'REBUILD;'
--------------------------------------------------------------------------------
alter index SYS.I_SCHEDULER_JOB_RUN_DETAILS rebuild;

SQL> alter index SYS.I_SCHEDULER_JOB_RUN_DETAILS rebuild online;

Index altered.
After this, the error was solved and there was no further recurrence of that ORA-600.
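To confirm the fix, re-running the cascade validation (the same command that failed before) should now complete with “Table analyzed.”, assuming the rebuild finished cleanly:

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade;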
Also note I couldn’t find any document about this on MOS, so this is kind of exclusive for now. 🙂
Cheers!
ORA-600 [kggsmGetString:1]
Hi all!
So, checking on this error, I found the following relevant reference: ORA-600 [kggsmGetString:1] (Doc ID 1541187.1).
Since it happened while a SQL was running, the only match could be Bug 17235420 – ORA-600 [kggsmGetString:1] with adaptive cursor sharing (Doc ID 17235420.8).
The problem?
No detailed information, no workaround, only patches to apply. The only information is “ORA-600 [kggsmGetString:1] can occur in 12c if adaptive cursor sharing is used.”
So, checking the SQL plan details, I could see it was using the following adaptive optimizations:
Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - statistics feedback used for this statement
   - this is an adaptive plan (rows marked '-' are inactive)
   - 2 Sql Plan Directives used for this statement
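For reference, this Note section comes straight from the plan in the cursor cache. A minimal sketch to pull it, assuming you know the SQL_ID (the +ADAPTIVE format modifier of DBMS_XPLAN is a 12c addition):

SQL> select * from table(dbms_xplan.display_cursor(sql_id => '&sql_id', format => 'TYPICAL +ADAPTIVE'));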
MOS note “Recommendations for Adaptive Features in Oracle Database 12c Release 1 (Adaptive Features, Adaptive Statistics and 12c SQL Performance)” (Doc ID 2312911.1) recommends applying the Oct 2017 BP or one-off patches that disable adaptive statistics only.
In this case, the easiest solution is to disable both adaptive plans and adaptive statistics, as this release has only one parameter to control them – optimizer_adaptive_features, which defaults to TRUE.
And it can be changed online:
SQL> show parameter adaptive

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_features          boolean     TRUE
optimizer_adaptive_reporting_only    boolean     FALSE
parallel_adaptive_multi_user         boolean     TRUE

SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.

SQL> show parameter adaptive

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_features          boolean     FALSE
optimizer_adaptive_reporting_only    boolean     FALSE
parallel_adaptive_multi_user         boolean     TRUE
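As a side note: from 12.2 onwards this single parameter is split into optimizer_adaptive_plans (default TRUE) and optimizer_adaptive_statistics (default FALSE), so there you can disable only the adaptive statistics part, which is the piece the MOS note above recommends turning off. A minimal sketch, hedged, for those releases:

SQL> alter system set optimizer_adaptive_statistics=false scope=both;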
After that, the issue was solved and the ORA-600 did not repeat.
Hope it helps!
ORA-600 [kwqitnmphe:ltbagi]
Hi all,
So, some time ago I started receiving an internal error “ORA-600 [kwqitnmphe:ltbagi]” from a client’s database. Everything was up and running fine, but an ORA-600 is always an ORA-600.
Investigating the issue, I found it can be related to several bugs, as per MOS ORA-600 [kwqitnmphe:ltbagi] (Doc ID 1346009.1):
Bug | Fixed | Description |
17831758 | 12.1.0.2, 12.2.0.1 | ORA-600 [kwqitnmphe:ltbagi] in Qnnn background process |
20987661 | 12.2.0.1 | QMON slave processes reporting ORA-600 [kwqitnmphe:ltbagi] |
18591240 | 11.2.0.4.BP17, 12.1.0.2, 12.2.0.1 | ORA-600 [kwqitnmphe:ltbagi] is seen immediately after ORA-1089 |
18536720 | 12.1.0.2, 12.2.0.1 | ORA-600 [kwqitnmphe:ltbagi] processing History IOT in AQ |
16204151 | 12.1.0.2, 12.2.0.1 | ORA-600 [kwqitnmphe:ltbagi] when subscriber is dropped pending enqueue/dequeue |
12423122 | 11.2.0.3, 12.1.0.1 | ORA-600 [kwqitnmphe:ltbagi] when scheduler uses AQ |
In my situation it was a match to the QMON slave processes issue, the only one not yet resolved in 12.1 (my DB is 12.1, bad luck?), as per MOS Bug 20987661 – QMON slave processes reporting ORA-600 [kwqitnmphe:ltbagi] (Doc ID 20987661.8).
For now, it is fixed only in 12.2.0.1. For 12.1 we have a temporary workaround:
As sysdba:
DECLARE
  po dbms_aqadm.aq$_purge_options_t;
BEGIN
  po.block := FALSE;
  DBMS_AQADM.PURGE_QUEUE_TABLE(
    queue_table     => 'SYS.SYS$SERVICE_METRICS_TAB',
    purge_condition => NULL,
    purge_options   => po);
END;
/
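A simple before/after sanity check, assuming the purge targets SYS.SYS$SERVICE_METRICS_TAB as in the block above, is to count the messages sitting in the queue table:

SQL> select count(*) from SYS.SYS$SERVICE_METRICS_TAB;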
Hope it helps!
Cheers!
Oracle 12.2.0.1 Trace Facility Problems
Hey!
Quick one today… Not sure if you noticed, but 12.2.0.1 has some problems with the trace facility. Some components/processes generate a huge amount of trace files.
MOS notes:
– RMAN BACKUP Command Always Generates Trace File Having “kcbtse_print_kod” Message (Doc ID 2311975.1)
– MMON trace file grows and is flooded with: kmgs_parameter_update_timeout Gen0 messages (Doc ID 2319931.1)
Both have their patches already…
Additionally, Mike Dietrich’s blog posts about the above notes:
https://mikedietrichde.com/2018/05/24/rman-backup-generates-traces-in-oracle-12-2-0-1/
https://mikedietrichde.com/2018/05/23/mmon-unconditional-traces-in-oracle-12-2-0-1/
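Until the patches are applied, a hedged stop-gap is to cap how large each trace file can grow. It does not stop the traces from being generated, it only limits their size; the value below is just an example:

SQL> alter system set max_dump_file_size='100M' scope=both;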
ORA-00932 to Export Statistics After 12c Upgrade – Undocumented
Hello All,
I have a client that used to export and import dictionary statistics using a stats table. However, after upgrading the database to 12c, it started to raise the error below when exporting data to a stats table (created in 11g) or even when upgrading the stats table.
SQL> exec dbms_stats.upgrade_stat_table('GREPORA','MYSTATSTABLE');
DECLARE
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
ORA-06512: at "SYS.DBMS_STATS", line 40183
ORA-06512: at line 28

SQL> EXEC DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS (stattab => 'MYSTATSTABLE', statid => 'TEST_MATHEUS', statown => 'GREPORA');
BEGIN DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS (stattab => 'MYSTATSTABLE', statid => 'TEST_MATHEUS', statown => 'GREPORA'); END;
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
ORA-06512: at "SYS.DBMS_STATS", line 37085
ORA-06512: at line 1
I reviewed several references (all very interesting, by the way; I recommend you take a look):
– ORA-20002 on Importing Statistics using DBMS_STATS Procedures (Doc ID 740345.1)
– Datapump Export Fails With Ora-00932 (Doc ID 1300803.1)
– Error “ORA-00932 – Inconsistent Datatypes: Expected BINARY Got NUMBER” When Executing the Primavera Gather-Statistics Scripts on an Oracle 12c Database (Doc ID 2111114.1)
– https://oracle-base.com/articles/12c/concurrent-statistics-collection-12cr1
– http://www.morganslibrary.org/reference/pkgs/dbms_stats.html
– How To Export and Import Dictionary Statistics (Doc ID 1450820.1)
– Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Doc ID 798257.1)
But no reference was found to this specific issue, not even on MOS.
After some tests, I realized that maybe the column order in the stats table changed between the 11g and 12c implementations. Bingo! The same columns and datatypes are in place, but in a different order.
This is not documented on MOS nor on the internet/independent blogs, but it seems the EXPORT% procedures in DBMS_STATS assume the stats table layout in their internal code, without specifying column order.
As we know, a good development practice is to specify column names when performing any operation. Bad on you, Oracle!
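One workaround worth trying, as a hedged sketch assuming the 11g-created stats table (GREPORA.MYSTATSTABLE from the example above) can simply be rebuilt and its current content discarded, is to recreate it under 12c so it picks up the 12c column layout, and then run the export again:

-- Drop the stats table created under 11g and recreate it under 12c
exec dbms_stats.drop_stat_table(ownname => 'GREPORA', stattab => 'MYSTATSTABLE');
exec dbms_stats.create_stat_table(ownname => 'GREPORA', stattab => 'MYSTATSTABLE');

-- Re-run the export against the freshly created table
exec DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS(stattab => 'MYSTATSTABLE', statid => 'TEST_MATHEUS', statown => 'GREPORA');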
Long running jobs AQ$_PLSQL_NTFN_%
Hello all!
Some time ago I had an issue with some jobs running for 64 days. Cool, huh?
All job names were AQ$_PLSQL_NTFN_%. All job activity was related to SQL ID f7zggdz9p7bhk, on the wait event “Streams AQ: waiting for messages in the queue”.
# SQLID f7zggdz9p7bhk:
begin
  SYS.SCHEDULER$_JOB_EVENT_HANDLER(
    context  => :1,
    reginfo  => sys.aq$_reg_info(:2, :3, :4, :5, :6, :7),
    descr    => sys.aq$_descriptor(:8, :9, :10,
                  sys.msg_prop_t(:11, :12, :13, :14, :15, :16, :17, :18,
                    sys.aq$_agent(:19, :20, :21), :22, :23),
                  sys.aq$_ntfn_descriptor(:24), :25, :26),
    payload  => :27,
    payloadl => :28);
end;
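To spot these long runners, a minimal sketch using the standard scheduler views (hedged; adjust the filter to your naming):

SQL> select owner, job_name, session_id, elapsed_time
       from dba_scheduler_running_jobs
      where job_name like 'AQ$_PLSQL_NTFN%'
      order by elapsed_time desc;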
Reviewing on MOS, I found a match to Bug 20528052 – Many AQ$_PLSQL_NTFN jobs executed affecting database performance (Doc ID 20528052.8).
The root cause is that procedure SYS.SCHEDULER$_JOB_EVENT_HANDLER keeps the AQ PL/SQL notification callbacks, associated with scheduler job email notifications, waiting for messages in "SYS"."SCHEDULER$_EVENT_QUEUE" that no longer exist.
From MOS, this can be easily solved with the fix for Bug 16623661 – No email notification is sent by scheduler, which is included in 12.1.0.1 (base); the bug affects versions 11.2.0.2 and later.
Another option from MOS, and what I did, is to drop these jobs (see the sketch below). The problem is that these jobs are automatically recreated and hit the same problem again.
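For reference, a hedged sketch of that clean-up, stopping and dropping whatever AQ$_PLSQL_NTFN_% jobs are currently running (remember they get recreated, so this only buys time):

begin
  for j in (select owner, job_name
              from dba_scheduler_running_jobs
             where job_name like 'AQ$_PLSQL_NTFN%') loop
    -- force-stop and then drop each notification job
    dbms_scheduler.stop_job(job_name => j.owner || '.' || j.job_name, force => true);
    dbms_scheduler.drop_job(job_name => j.owner || '.' || j.job_name, force => true);
  end loop;
end;
/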
So, apply the fix or upgrade your database. 😉
Some related references:
Bug 20528052 – Many AQ$_PLSQL_NTFN jobs executed affecting database performance (Doc ID 20528052.8)
AQ$_PLSQL_NTFN Scheduler Jobs Executed in large numbers affecting Database performance (Doc ID 2001165.1)
Bug 21665897: FIX FOR BUG 14712567 CAUSES LARGE NUMBERS OF AQ$_PLSQL_NTFN_% TO BE SPAWNED
Hope it helps!
Cheers!
Logs Exploding: ORA-00904: “UTL_RAW”.”CAST_FROM_NUMBER”: invalid identifier
Hello all!
Last week I suddenly had several database connections being denied due to “ORA-09925: Unable to create audit trail file”. This happened just a few hours after a maintenance on this database.
When investigating, I realized there were TOO MANY traces like $DBNAME_q00%.trc, using several GBs.
As you know, those traces are from QMON (Queue Monitor) Background Process, usually related to Oracle Streams Advanced Queueing.
Oh! I also noticed that the database was started with srvctl, as usually recommended.
After some research I found MOS Note: ORA-00904 Reported by QMON When Starting Database With SRVCTL (Doc ID 1950142.1).
This situation seems to be a match to Bug 18680601/19995869. Per the Doc:
– As a solution, Oracle recommends applying the merge of patches 19995869 and 18628216.
– As an immediate action, the workaround is to shut down the database and start it with sqlplus instead of srvctl (see the sketch below).
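A minimal sketch of that workaround, assuming the database name GREPORADB seen in the trace further down; adjust it to your environment:

$ srvctl stop database -d GREPORADB
$ sqlplus / as sysdba
SQL> startup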
Check one of my trace files below to see if it matches your situation.
Hope it helps.
Cheers!
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORACLE_HOME = /app/oracle/product/11.2.0.4
System name:    Linux
Node name:      racgrepora1
Release:        2.6.32-642.15.1.el6.x86_64
Version:        #1 SMP Mon Feb 20 02:26:38 EST 2017
Machine:        x86_64
Instance name: GREPORADB
Redo thread mounted by this instance: 1
Oracle process number: 77
Unix process pid: 36422, image: oracle@racgrepora1 (Q001)

*** 2017-08-01 05:05:51.352
*** SESSION ID:(646.64779) 2017-08-01 05:05:51.352
*** CLIENT ID:() 2017-08-01 05:05:51.352
*** SERVICE NAME:(SYS$BACKGROUND) 2017-08-01 05:05:51.352
*** MODULE NAME:(Streams) 2017-08-01 05:05:51.352
*** ACTION NAME:(QMON Slave) 2017-08-01 05:05:51.352

KSV 12801 error in slave process
*** 2017-08-01 05:05:51.352
ORA-12801: error signaled in parallel query server PZ99, instance racwt22:JTS012 (2)
ORA-00904: "UTL_RAW"."CAST_FROM_NUMBER": invalid identifier
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-12801: error signaled in parallel query server PZ99, instance racwt22:JTS012 (2)
ORA-00904: "UTL_RAW"."CAST_FROM_NUMBER": invalid identifier
ORA-00600: [qkswcWithQbcRefdByMain4]
Hello all,
Recently I found this in a client’s 12c database when trying to create a Materialized View:
ORA-00600: internal error code, arguments: [qkswcWithQbcRefdByMain4]
A perfect match to MOS ORA-00600 [qkswcWithQbcRefdByMain4] when Create MV “WITH” clause (Doc ID 2232872.1).
The root cause is documented on BUG 22867413 – ORA-600 CALLING DBMS_ADVISOR.TUNE_MVIEW.
The given solution is to apply Patch 22867413.
After applying patch, issue solved. 🙂
ORA-07445: exception encountered: core dump [nstimexp()+45] [SIGSEGV] [ADDR:0x58] [PC:0x7F42ABB] [Address not mapped to object] []
Hello all,
I had faced some occurrences of this error in an 11.2.0.1 database recently.
ORA-07445: exception encountered: core dump [nstimexp()+45] [SIGSEGV] [ADDR:0x58] [PC:0x7F42ABB] [Address not mapped to object] []
After some investigation I found a match to Bug 3934729.
This issue originally matched Bug 6918493, which is a reintroduction of Bug 2752985, but that one is fixed in 11.2.0.1.
However, upon upgrading to 11.2.0.1, it hits Bug 3934729, which is fixed in 11.2.0.2.
Recommended actions are:
– Upgrade the database to 11.2.0.2 or higher (best solution, but may require more effort to validate the upgrade).
– Apply Patch 3934729: RANDOM ORA-07445 CORE DUMPS FROM DATABASE AND ORA-3113 FROM APPLICATION
– Set sqlnet.expire_time=0 (workaround)
– Ignore error.
After some research I decided to apply the workaround, based on the recommended usage of sqlnet.expire_time (next week’s post is about this parameter :)).
This might also be the root cause for the ORA-03135: connection lost contact; the current value of this parameter in the environment was 1, which is a very low value.
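For reference, the parameter lives in sqlnet.ora on the server side (typically $ORACLE_HOME/network/admin/sqlnet.ora). The sketch below applies the workaround listed above, which disables the Dead Connection Detection probes; consider whether your environment relies on them before doing so:

# $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.EXPIRE_TIME = 0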
So, check which action is more suitable for your environment!
Hope it helps 🙂
Below is some additional information on my situation: