Online Patch Apply with Multiple Databases on the Same Oracle Home: OPatch failed with error code 26

Hi all,
Tricky question, right? It’s easier than you think…
Actually, we don’t commonly think of these situations at first, but they’re pretty common, especially in server consolidation scenarios.

The trick is to use the util enableOnlinePatch clause instead of apply after the first database has been patched.
In this example I’m applying one-off patch 14084247 in online mode. Check:

First Database:

[oracle@PRODSERVER 14084247]$ opatch apply online -connectString ORA11:sys::
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_24/apply2017-04-03_14-17-24PM_1.log


The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '14084247' to OH '/opt/oracle/app/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
All checks passed.
Backing up files...

Patching component oracle.rdbms, 11.2.0.4.0...
Installing and enabling the online patch 'bug14084247.pch', on database 'ORA11'.


Verifying the update...
Patch 14084247 successfully applied
Log file location: /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_24/apply2017-04-03_14-17-24PM_1.log

OPatch succeeded.

All good, right?
Let’s see what happens when we apply it to the second database with the same command:

Second Database:

[oracle@PRODSERVER 14084247]$ opatch apply online -connectString OTHERORA11:sys::
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_45/apply2017-04-03_14-17-45PM_1.log


The patch should be applied/rolled back in '-all_nodes' mode only.
Converting the RAC mode to '-all_nodes' mode.
Applying interim patch '14084247' to OH '/opt/oracle/app/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
Log file location: /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/14084247_Apr_03_2017_14_17_45/apply2017-04-03_14-17-45PM_1.log

Recommended actions: Please use 'opatch util applySql' for sql related patches or 'opatch util enableOnlinePatch' for online patches to add sids to already installed patch(es).

OPatch failed with error code 26

Beeep!
So, simply use the util enableOnlinePatch clause, as shown below.

Second Database (right way):

[oracle@PRODSERVER 14084247]$ opatch util enableonlinepatch -connectString OTHERORA11:sys:: -id 14084247
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.

Oracle Home       : /opt/oracle/app/product/11.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/app/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/app/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2017-04-03_14-20-53PM_1.log

Invoking utility "enableonlinepatch"
Installing and enabling the online patch 'bug14084247.pch', on database 'OTHERORA11'.

OPatch succeeded.
[oracle@PRODSERVER 14084247]$
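
Putting it together, the whole flow for several databases sharing the same Oracle Home can be scripted. A minimal sketch, based only on the commands above (the OTHER_SIDS list is a hypothetical placeholder; it assumes all instances run off this Oracle Home and accept the same sys connect string format):

# Patch number and SID list for this example; OTHER_SIDS is hypothetical,
# fill in your remaining instances.
PATCH=14084247
FIRST_SID=ORA11
OTHER_SIDS="OTHERORA11"

# First database: regular online apply, which also installs the patch in the OH.
opatch apply online -connectString ${FIRST_SID}:sys::

# Remaining databases: just enable the already-installed online patch.
for SID in ${OTHER_SIDS}; do
  opatch util enableonlinepatch -connectString ${SID}:sys:: -id ${PATCH}
done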

Ok, that’s it for today.

See you next week!

AIX: “WARNING: Heavy swapping observed on system in last 5 mins.”

Quick one today!

Seeing the message below in your 11.2.0.3 database on AIX?

WARNING: Heavy swapping observed on system in last 5 mins. 
pct of memory swapped in [31.28%] pct of memory swapped out [3.81%]. 
Please make sure there is no memory pressure and the SGA and PGA are configured correctly. 
Look at DBRM trace file for more details.

Stand down: this issue is caused by unpublished Bug 11801934, described in MOS note False Swap Warning Messages Printed To Alert.log On AIX (Doc ID 1508575.1).

Basically, it happens because v$osstat does not reflect the proper stats for swap-space paging.
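
If you want to see the numbers the warning is based on, you can query the VM paging statistics straight from v$osstat. A minimal sketch, assuming the standard VM_IN_BYTES/VM_OUT_BYTES statistics exposed in 11.2:

sqlplus -s / as sysdba <<'EOF'
-- Swap paging as seen by the database; on the affected AIX versions
-- these values do not reflect the real swap activity (Bug 11801934).
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('VM_IN_BYTES','VM_OUT_BYTES');
EOF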

So, stay calm and see you next week!

Infiniband Error: Cable is present on Port “X” but it is polling for peer port

Facing this error? Let me guess: ports 03, 05, 06, 08, 09 and 12 are alerting? You have a Quarter Rack? Have you recently updated the Exadata plugin to version 12.1.0.3 or higher?
Don’t panic!

This is probably related to Bug 15937297 : EM 12C HAS ERRORS CABLE IS PRESENT ON PORT ‘N’ BUT IT IS POLLING FOR PEER PORT. The full message might be like “Cable is present on Port 6 but it is polling for peer port. This could happen when the peer port is unplugged/disabled“.

In fact, the bug was closed as not a bug. 🙂
As part of the 12.1.0.3 Exadata plugin, the IB switch ports are now checked for non-terminated cables, so these ‘polling for peer port’ errors are expected behavior. Since ‘polling for peer port’ is an enhanced check introduced in the 12.1.0.3 plugin, this explains why you most likely did not see these errors until you upgraded the OMS to 12.1.0.2 and then updated the plugins.

In Quarter Racks, ports 3, 5, 6, 8, 9 and 12 are usually cabled ahead of time but not terminated. In some racks port 32 may also be unterminated. Checking for incidents in OEM, you might see something like the image below:

[Screenshot: OEM incident list showing the ‘polling for peer port’ alerts.]
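
If you want to double-check from the switch side that those ports really have no link, you can list the link state on the IB switch itself. A hedged sketch, assuming the Oracle/Sun IB switch firmware CLI, where listlinkup prints the link state of every connector:

# Run on the IB switch (not on a database node); unterminated ports
# should show no link, matching the 'polling for peer port' alerts.
listlinkup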


RS-7445 [Serv MS leaking memory] [It will be restarted] [] [] [] [] [] [] [] [] [] []

Hello!
Seeing this error in your cell alerthistory.log? Don’t panic!
Take a look at MOS: Exadata Storage Cell reports error RS-7445 [Serv MS Leaking Memory] (Doc ID 1954357.1). It’s related to a known bug: RS-7445 [SERV MS LEAKING MEMORY].

The issue is a memory leak in the Java executable and affects systems running JDK 7u51 or later. It is relevant for all versions from release 11.2 through 12.1.

What happens is that the MS process consumes more memory than it should. Normally MS uses around 1GB, but because of the bug the allocated memory can grow up to 2GB. You can check it as in the example below:

[root@exaserver ~]# ps -feal|grep java
0 S root     16493 14737  0  80   0 - 15317 pipe_w 18:34 pts/0    00:00:00 grep java
0 S root     22310 27043  2  80   0 - 267080 futex_ 18:15 ?       00:00:27 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:-UseLargePages -Djava.library.path=/opt/oracle/cell/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell/cellsrv/deploy/log/ms.err

Note that the SZ column reported by ps is in 4KB pages, so 267080 * 4096 bytes ≈ 1043MB (about 1GB). If your number is significantly higher than this, it indicates the presence of the bug.
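
To avoid doing the math by hand, the conversion can be scripted. A minimal sketch, assuming Linux 4KB pages and the oc4j.jar invocation shown above:

# SZ from ps is reported in pages; multiply by the 4096-byte page size
# and convert to MB. Values well above ~1GB suggest the leak.
ps -eo sz,args | grep '[o]c4j.jar' | \
  awk '{printf "MS java size: %.0f MB\n", $1*4096/1048576}'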
