Logs Exploding: ORA-00904: "UTL_RAW"."CAST_FROM_NUMBER": invalid identifier

Hello all!
Last week I suddenly had several database connections being denied with "ORA-09925: Unable to create audit trail file". This happened just a few hours after a maintenance window on this database.

When investigating, I realized there were way too many traces like "$DBNAME_q00%.trc", consuming several GBs.
As you know, those traces come from the QMON (Queue Monitor) background process, usually related to Oracle Streams Advanced Queuing.
Oh! I also noticed the database had been started with srvctl, as usually recommended.
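
To spot the offenders quickly, something like this helps (the ADR path below is an assumption based on the ORACLE_HOME in the trace; adjust it to your environment):

# Hypothetical ADR trace path; adjust to your diag destination:
cd /app/oracle/diag/rdbms/greporadb/GREPORADB/trace
# List the largest QMON trace files first:
ls -lhS GREPORADB_q00*.trc | head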

After some research I found MOS Note ORA-00904 Reported by QMON When Starting Database With SRVCTL (Doc ID 1950142.1).
This situation seems to be a match to Bug 18680601/19995869. According to the Doc:
– As a solution, Oracle recommends applying the merge of patches 19995869 and 18628216.
– As an immediate action, the workaround is to shut down the database and start it with SQL*Plus instead of srvctl, as sketched below.
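
A minimal sketch of the workaround, assuming the database name GREPORADB from the trace below (the instance SID is also an assumption):

# Stop the cluster-managed database:
srvctl stop database -d GREPORADB

# Start the local instance manually with SQL*Plus instead:
export ORACLE_SID=GREPORADB1
sqlplus / as sysdba
SQL> startup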

Check one of my trace files below to see if it matches your situation.

Hope it helps.

Cheers!

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORACLE_HOME = /app/oracle/product/11.2.0.4
System name:    Linux
Node name:      racgrepora1
Release:        2.6.32-642.15.1.el6.x86_64
Version:        #1 SMP Mon Feb 20 02:26:38 EST 2017
Machine:        x86_64
Instance name: GREPORADB
Redo thread mounted by this instance: 1
Oracle process number: 77
Unix process pid: 36422, image: oracle@racgrepora1 (Q001)


*** 2017-08-01 05:05:51.352
*** SESSION ID:(646.64779) 2017-08-01 05:05:51.352
*** CLIENT ID:() 2017-08-01 05:05:51.352
*** SERVICE NAME:(SYS$BACKGROUND) 2017-08-01 05:05:51.352
*** MODULE NAME:(Streams) 2017-08-01 05:05:51.352
*** ACTION NAME:(QMON Slave) 2017-08-01 05:05:51.352

KSV 12801 error in slave process

*** 2017-08-01 05:05:51.352
ORA-12801: error signaled in parallel query server PZ99, instance racwt22:JTS012 (2)
ORA-00904: "UTL_RAW"."CAST_FROM_NUMBER": invalid identifier
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-12801: error signaled in parallel query server PZ99, instance racwt22:JTS012 (2)
ORA-00904: "UTL_RAW"."CAST_FROM_NUMBER": invalid identifier

ORA-00600: [qkswcWithQbcRefdByMain4]

Hello all,
Recently I found this in a client's 12c database when trying to create a materialized view:

ORA-00600: internal error code, arguments: [qkswcWithQbcRefdByMain4]

A perfect match to MOS ORA-00600 [qkswcWithQbcRefdByMain4] when Create MV “WITH” clause (Doc ID 2232872.1).
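
For context, the trigger here is creating a materialized view whose defining query uses a WITH (subquery factoring) clause. A minimal sketch of the pattern (table and MV names are hypothetical, not the client's objects):

-- MV whose defining query starts with a WITH clause:
CREATE MATERIALIZED VIEW sales_mv AS
WITH totals AS (
  SELECT product_id, SUM(amount) AS total_amount
  FROM sales
  GROUP BY product_id
)
SELECT product_id, total_amount FROM totals;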

The root cause is documented in BUG 22867413 – ORA-600 CALLING DBMS_ADVISOR.TUNE_MVIEW.
The given solution is to apply Patch 22867413.

After applying the patch, the issue was solved. 🙂

Continue reading

CRS Not Starting after Removing an OS User: How to Work Around and How to Solve!

Hello all!
Turns out that a few days ago a client reached out to me because his CRSD was simply not starting. Like this:

[root@proddb proddb]$ ./crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'proddb'
CRS-2676: Start of 'ora.crsd' on 'proddb' succeeded

[root@proddb proddb]$ ps -ef |grep crsd
root 19217 13424 0 11:53 pts/0 00:00:00 grep crsd

After some investigation, I found the following:

2017-01-24 14:00:06.859: [ CRSSEC][1690195712]{1:51052:2} Exception: OwnerEntry construction failed to retrieve user id by name with ACL string: owner:jacknobody:rwx and error: 1
2017-01-24 14:00:06.912: [ CRSSEC][1690195712]{1:51052:2} Exception: ACL entry creation failed for: owner:jacknobody:rwx

Hmmm, it seems some CRS resources are owned by "Jack Nobody"… Turns out this user was removed from the OS:

[root@proddb proddb]$ cat /etc/passwd |grep jacknobody
[root@proddb proddb]$ 

What to do now?
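
While the full story is in the complete post, here is a minimal sketch of one plausible direction: make the ACL owner resolvable again, then fix the ownership properly (the UID, group, and resource name below are all hypothetical):

# Workaround sketch: recreate the missing OS user so CRS can resolve
# the ACL owner again (UID/GID are assumptions; match your environment):
useradd -u 54322 -g oinstall jacknobody

# Once CRS is back up, the cleaner fix is to reassign ownership of the
# affected resources to a valid user via crsctl setperm:
crsctl setperm resource my.app.resource -o oracle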

Continue reading

ORA-07445: exception encountered: core dump [nstimexp()+45] [SIGSEGV] [ADDR:0x58] [PC:0x7F42ABB] [Address not mapped to object] []

Hello all,
I faced some occurrences of this error in an 11.2.0.1 database recently.

ORA-07445: exception encountered: core dump [nstimexp()+45] [SIGSEGV] [ADDR:0x58] [PC:0x7F42ABB] [Address not mapped to object] []

After some investigation I found a match to Bug 3934729.
The issue was originally matched to Bug 6918493, which is a reintroduction of Bug 2752985 and is fixed in 11.2.0.1.
However, upon upgrading to 11.2.0.1, we hit Bug 3934729, which is fixed in 11.2.0.2.

Recommended actions are:
– Upgrade the database to 11.2.0.2 or higher (best solution, but may require more effort to validate the upgrade).
– Apply Patch 3934729: RANDOM ORA-07445 CORE DUMPS FROM DATABASE AND ORA-3113 FROM APPLICATION.
– Set sqlnet.expire_time=0 (workaround).
– Ignore the error.

After some research I decided to apply the workaround, based on the recommended usage of sqlnet.expire_time (next week's post is about this parameter :)).
This might also be the root cause of "ORA-03135: connection lost contact" errors; the actual value of this parameter in the environment was 1, which is a very low value.
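
A minimal sketch of the workaround (the sqlnet.ora location is the default under ORACLE_HOME; adjust if TNS_ADMIN points elsewhere):

# Check the current Dead Connection Detection interval:
grep -i expire_time $ORACLE_HOME/network/admin/sqlnet.ora

# Set it to 0 to disable DCD probes; if the parameter already exists,
# edit that line instead of appending a duplicate:
echo "SQLNET.EXPIRE_TIME=0" >> $ORACLE_HOME/network/admin/sqlnet.ora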

So, check which action is more suitable for your environment!
Hope it helps 🙂

Below is some additional information on my situation:

Continue reading

Online Patch Apply with Multiple Databases on the Same Oracle Home: OPatch failed with error code 26

Hi all,
Tricky question, right? It’s easier than you think…
Actually, we don’t commonly think about this situation in the first place, but it’s pretty common, especially considering server consolidation scenarios.

The trick is to use the util enableOnlinePatch clause instead of apply after the first database has been patched.
In this example I’m applying one-off patch 14084247 in online mode. Check it out:

# First Database:

[oracle@PRODSERVER 14084247]$ opatch apply online -connectString ORA11:sys::
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_24/apply2017-04-03_14-17-24PM_1.log


The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '14084247' to OH '/opt/oracle/app/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
All checks passed.
Backing up files...

Patching component oracle.rdbms, 11.2.0.4.0...
Installing and enabling the online patch 'bug14084247.pch', on database 'ORA11'.


Verifying the update...
Patch 14084247 successfully applied
Log file location: /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_24/apply2017-04-03_14-17-24PM_1.log

OPatch succeeded.

All good, right?
Now let’s see what happens when applying to the second database with the same command:

Second Database:

[oracle@PRODSERVER 14084247]$ opatch apply online -connectString OTHERORA11:sys::
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_45/apply2017-04-03_14-17-45PM_1.log


The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '14084247' to OH '/opt/oracle/app/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
Log file location: /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_45/apply2017-04-03_14-17-45PM_1.log

Recommended actions: Please use 'opatch util applySql' for sql related patches or 'opatch util enableOnlinePatch' for online patches to add sids to already installed patch(es).

OPatch failed with error code 26

Beeep!
So, simply use the util enableOnlinePatch clause, as per below.

Second Database (right way):

[oracle@PRODSERVER 14084247]$ opatch util enableonlinepatch -connectString OTHERORA11:sys:: -id 14084247
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2017-04-03_14-20-53PM_1.log

Invoking utility "enableonlinepatch"
Installing and enabling the online patch 'bug14084247.pch', on database 'OTHERORA11'.

OPatch succeeded.
[oracle@PRODSERVER 14084247]$
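
As a quick sanity check, the patch should now show up in the Home's inventory (note the inventory is per Oracle Home, not per database, so this confirms installation rather than per-SID enablement):

[oracle@PRODSERVER 14084247]$ opatch lsinventory | grep 14084247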

Ok, that’s it for today.

See you next week!