Online Patch Apply with multiple Databases on the same Oracle Home: OPatch failed with error code 26

Hi all,
Tricky question, right? It’s easier than you think…
Actually, we don’t commonly think about these situations in the first place, but they’re pretty common, especially when considering server consolidation scenarios.

The trick is to use the util enableOnlinePatch clause instead of apply, once the patch has already been applied to the first database.
In this example I’m applying one-off patch 14084247 in online mode (the -connectString format is SID:username:password:node). Check:

# First Database:

[oracle@PRODSERVER 14084247]$ opatch apply online -connectString ORA11:sys::
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_24/apply2017-04-03_14-17-24PM_1.log


The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '14084247' to OH '/opt/oracle/app/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
All checks passed.
Backing up files...

Patching component oracle.rdbms, 11.2.0.4.0...
Installing and enabling the online patch 'bug14084247.pch', on database 'ORA11'.


Verifying the update...
Patch 14084247 successfully applied
Log file location: /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_24/apply2017-04-03_14-17-24PM_1.log

OPatch succeeded.

All good, right?
Now let’s see what happens when applying to the second database with the same command:

# Second Database:

[oracle@PRODSERVER 14084247]$ opatch apply online -connectString OTHERORA11:sys::
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_45/apply2017-04-03_14-17-45PM_1.log


The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '14084247' to OH '/opt/oracle/app/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
Log file location: /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_45/apply2017-04-03_14-17-45PM_1.log

Recommended actions: Please use 'opatch util applySql' for sql related patches or 'opatch util enableOnlinePatch' for online patches to add sids to already installed patch(es).

OPatch failed with error code 26

Beeep!
So, simply use the util enableOnlinePatch clause, as per below.

# Second Database (the right way):

[oracle@PRODSERVER 14084247]$ opatch util enableonlinepatch -connectString OTHERORA11:sys:: -id 14084247
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2017-04-03_14-20-53PM_1.log

Invoking utility "enableonlinepatch"
Installing and enabling the online patch 'bug14084247.pch', on database 'OTHERORA11'.

OPatch succeeded.
[oracle@PRODSERVER 14084247]$
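
By the way, if you want to double-check that the patch is registered in the Oracle Home inventory after all this, a quick lsinventory does the job (just a sketch; the exact output wording depends on your OPatch version):

[oracle@PRODSERVER 14084247]$ opatch lsinventory | grep -i 14084247

The interim patch should appear there once, applied to the home, while the per-database enablement is what we handled with enableOnlinePatch.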

Ok, that’s it for today.

See you next week!

ORA-00001: unique constraint (RMAN.CKP_U1) violated

Hey,
Don’t build up too many expectations for this post.

That’s because I didn’t exactly fix the issue, but worked around it…
The thing is: this error comes from the catalog database, so the workaround is simple: go RMAN-nocatalog, I mean, simply don’t connect to the catalog to perform the backup.

After completing the backup, I’d suggest forcing a synchronization with the “RESYNC CATALOG” command. In the worst case, the implicit resync on the next execution will fix everything. 🙂
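
Just to make it concrete, a minimal sketch of the whole workaround (the catalog connection rman/password@rmancat is purely illustrative; use your own credentials and TNS alias):

# Back up connecting to the target only, no catalog:
[oracle@PRODSERVER ~]$ rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> EXIT;

# Later, connect to the catalog again and force a resync:
[oracle@PRODSERVER ~]$ rman target / catalog rman/password@rmancat

RMAN> RESYNC CATALOG;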

There’s no deeper explanation on this, but you can find the same workaround in MOS Bug 12588237 – RMAN-3002 ORA-1: unique constraint (ckp_u1) violated after dataguard switchover (Doc ID 12588237.8).

And this is it for today!
See you next week!

Exadata: ORA-07445: exception encountered: core dump [ocl_lock_get_waitobj_owner()+26] [11] [0x000000000] [] [] []

Hello all,

This error is generated by unpublished bug 17891564, as described in MOS note ORA-7445 [ocl_lock_get_waitobj_owner] on an Exadata storage cell (Doc ID 1906366.1).

It affects Exadata storage cells with image versions between 11.2.1.2.0 and 11.2.3.3.0. The CELLSRV process crashes with this error, as per:

Cellsrv encountered a fatal signal 11
Errors in file /opt/oracle/cell11.2.3.3.0_LINUX.X64_131014.1/log/diag/asm/cell//trace/svtrc_11711_27.trc  (incident=257):
ORA-07445: exception encountered: core dump [ocl_lock_get_waitobj_owner()+26] [11] [0x000000000] [] [] []
Incident details in: /opt/oracle/cell11.2.3.3.0_LINUX.X64_131014.1/log/diag/asm/cell//incident/incdir_257/svtrc_11711_27_i257.trc

The CELLSRV process should restart automatically after this error.
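
If you want to check whether your cells are in the affected range, something like this on the storage cell does it (a sketch; cell01 is just an illustrative hostname, and the output labels vary slightly between releases):

[root@cell01 ~]# imageinfo | grep -i "active image version"

Or through CellCLI:

CellCLI> list cell attributes releaseVersion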


OGG-01411 – Cannot convert input file ./dirdat/xx with format RELEASE 9.0/9.5 to output file ./dirdat/zz

Hi.

If you search for solutions to this error, you will find only one documented root cause:

Error:

OGG-01411  Cannot convert input file ./dirdat/xx000549 with format RELEASE 9.0/9.5 to output file ./dirdat/zz000034 with format RELEASE 12.1.

Cause:

“The output trail of the data pump has a different format (version) than the input trail of the data pump”

If you are using GG version 12.1, and all trails (rmttrail and exttrail) are correctly set with “format release”, you have hit a bug.

Oracle recommends upgrading to GG 12.2.

To work around this issue and start the process, you need to write a new trail, perform an ETROLLOVER, and reposition the pump process, as in the sketch below.
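
A minimal sketch of that sequence, assuming the primary extract is named EXT01, the pump PMP01, and that the rollover creates trail xx000550 (all names and numbers here are illustrative):

-- On the source: roll the primary extract over to a new trail file:
GGSCI> ALTER EXTRACT EXT01, ETROLLOVER
GGSCI> START EXTRACT EXT01

-- Reposition the pump to read from the beginning of the new trail file:
GGSCI> ALTER EXTRACT PMP01, EXTSEQNO 550, EXTRBA 0
GGSCI> START EXTRACT PMP01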

On the target system, the process works fine but does not receive new trails, because the pump process is abended.

GGSCI (lab2.grepora.net) 001> info rep01

REPLICAT   rep01     Last Started 2017-03-20 12:08   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:09 ago)
Process ID           13563
Log Read Checkpoint  File ./dirdat/zz000034
                     2017-03-22 09:53:31.004144  RBA 30683971


AIX: “WARNING: Heavy swapping observed on system in last 5 mins.”

Quick one today!

Seeing the message below in your 11.2.0.3 database on AIX?

WARNING: Heavy swapping observed on system in last 5 mins. 
pct of memory swapped in [31.28%] pct of memory swapped out [3.81%]. 
Please make sure there is no memory pressure and the SGA and PGA are configured correctly. 
Look at DBRM trace file for more details.

Stand down, this issue is caused by unpublished Bug 11801934, mentioned in MOS False Swap Warning Messages Printed To Alert.log On AIX (Doc ID 1508575.1).

Basically, it happens because v$osstat does not reflect the proper stats for swap space paging.
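
You can look at the counters behind this warning directly (a sketch; these are the paging statistics exposed by v$osstat on 11.2):

SQL> SELECT stat_name, value
  2    FROM v$osstat
  3   WHERE stat_name IN ('VM_IN_BYTES', 'VM_OUT_BYTES');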

So, stay calm and see you next week!

Extract abend with ORA-03113: end-of-file on communication channel

GoldenGate Extract/Replicat abends with the error below:

Source Context :
SourceModule : [ggdb.ora.sess]
SourceID : [/scratch/aime/adestore/views/aime_stuya22/oggcore/OpenSys/src/gglib/ggdbora/ocisess.c]
SourceFunction : [OCISESS_context_def::oci_try(int, const char *, ...)]
SourceLine : [832]

2017-04-19 07:52:07 ERROR OGG-00665 OCI Error executing single row select (status = 3113-ORA-03113: end-of-file on communication channel)

This is common stuff when the Oracle Database is not stable or is taking some ORA-00600 errors.

Review items:

  • Oracle Database alert log (see the ADRCI sketch after this list)
  • Whether the user process has been killed
  • Network communication (IPTABLES rules, for instance)
  • Server crash
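
For the first item, ADRCI is a quick way to tail the alert log (a sketch; the homepath diag/rdbms/ora11/ORA11 is just an example, pick yours from the 'show homes' output):

[oracle@PRODSERVER ~]$ adrci

adrci> show homes
adrci> set homepath diag/rdbms/ora11/ORA11
adrci> show alert -tail 100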

I suggest reviewing ‘Master Note: Troubleshooting ORA-03113 (Doc ID 1506805.1)‘ on My Oracle Support and keeping your database away from ORA-03113.

🙂