Just got an alarm from OEM with this message. How to check it?
– First thing is to be able to connect to the ILOM from the DB node.
– From there we can test the IPv4 and/or IPv6 interfaces through ping, as shown below.
This is also documented in the Oracle Integrated Lights Out Manager (ILOM) 3.0 HTML Documentation Collection – Test IPv4 or IPv6 Network Configuration (CLI).
In my case, it was only a false alarm, as I was able to connect to other DBNodes from this ILOM:
[root@greporasrv01db01 ~]# ssh greporasrv01-ilom.grepora.com
The authenticity of host 'greporasrv01-ilom.grepora.com (10.50.12.64)' can't be established.
RSA key fingerprint is 59:c5:9f:b1:60:59:15:16:94:c8:94:88:7b:4e:52:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'greporasrv01-ilom.grepora.com' (RSA) to the list of known hosts.
Oracle(R) Integrated Lights Out Manager
Version 22.214.171.124 r116695
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
-> show /SP/network
commitpending = (Cannot show property)
dhcp_clientid = none
dhcp_server_ip = none
ipaddress = 10.50.12.64
ipdiscovery = static
ipgateway = 10.50.12.1
ipnetmask = 255.255.255.0
macaddress = 00:10:E0:95:73:E6
managementport = MGMT
outofbandmacaddress = 00:10:E0:95:73:E6
pendingipaddress = 10.50.12.64
pendingipdiscovery = static
pendingipgateway = 10.50.12.1
pendingipnetmask = 255.255.255.0
pendingmanagementport = MGMT
pendingvlan_id = (none)
sidebandmacaddress = 00:10:E0:95:73:E7
state = ipv4-only
vlan_id = (none)
-> cd /SP/network/test
ping = (Cannot show property)
ping6 = (Cannot show property)
-> set ping=10.50.12.51 -- DBNode1
Ping of 10.50.12.51 succeeded
-> set ping=10.50.12.52 -- DBNode2
Ping of 10.50.12.52 succeeded
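The IPv6 side can be tested the same way through the ping6 property; a minimal sketch, with an illustrative address (this particular ILOM is running ipv4-only, so ping was enough in my case):
-> set ping6=fe80::210:e0ff:fe95:73e6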
Some time ago I needed to check the topology of a client’s Exadata due to a network issue and made a very useful note. Sharing with you now. 😀
This and other cool commands can be found here: Network Diagnostics information for Oracle Database Machine Environments (Doc ID 1053498.1)
# /opt/oracle.SupportTools/ibdiagtools/verify-topology -t quarterrack
Newer versions don’t require the -t option.
In case of a half rack, -t halfrack should be used, as in the sketch below.
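Based on the options above, the corresponding call would look like this (same path as the quarterrack example):
# /opt/oracle.SupportTools/ibdiagtools/verify-topology -t halfrack
And on newer image versions, simply:
# /opt/oracle.SupportTools/ibdiagtools/verify-topology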
OK, but how do you know which rack type you have? You can get it from here:
[root@greporaexa onecommand]# grep -i MACHINETYPES databasemachine.xml
<MACHINETYPES>X4-2 Eighth Rack HC 4TB</MACHINETYPES>
Hope it helps! 🙂
These days I had an alarm with the message below:
Message=The aggregate sensor /SYS/CABLE_CONN_STAT has a fault.
Here are some useful commands I used to verify all ports/sensors in my Exadata cluster.
In summary, these commands:
1) Use Intelligent Platform Management Interface (IPMI) to read the Sensor Data Record (SDR) repository
2) Use Intelligent Platform Management Interface (IPMI) to view the ILOM SP System Event Log (SEL)
3) Display all host nodes with ibhosts
4) Use ibcheckstate to scan InfiniBand fabric and validate the port logical and physical state
5) Use ibcheckerrors to scan InfiniBand fabric and validate the connectivity as described in the topology file
6) Check sensor health from the switch
7) Check the overall health of the InfiniBand switch, on the Exadata switch itself
The commands are:
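As a minimal sketch, assuming a standard Exadata image with ipmitool and the OFED InfiniBand diagnostics available on the DB nodes (items 6 and 7 run on the InfiniBand switch CLI itself):
1) # ipmitool sdr
2) # ipmitool sel list
3) # ibhosts
4) # ibcheckstate
5) # ibcheckerrors
6) # env_test
7) # showunhealthy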
Hitting ORA-7445 [ocl_lock_get_waitobj_owner] on an Exadata storage cell? The error is generated by unpublished bug 17891564, as described in MOS: ORA-7445 [ocl_lock_get_waitobj_owner] on an Exadata storage cell (Doc ID 1906366.1).
It affects Exadata storage cells with image versions between 126.96.36.199.0 and 188.8.131.52.0. The CELLSRV process crashes with this error, as shown below:
Cellsrv encountered a fatal signal 11
Errors in file /opt/oracle/cell184.108.40.206.0_LINUX.X64_131014.1/log/diag/asm/cell//trace/svtrc_11711_27.trc (incident=257):
ORA-07445: exception encountered: core dump [ocl_lock_get_waitobj_owner()+26]  [0x000000000]   
Incident details in: /opt/oracle/cell220.127.116.11.0_LINUX.X64_131014.1/log/diag/asm/cell//incident/incdir_257/svtrc_11711_27_i257.trc
The CELLSRV process should restart automatically after this error.
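To check whether your cells are in the affected image range, and whether CELLSRV came back up, a quick sketch using the standard cell utilities:
# imageinfo -ver
# cellcli -e list cell attributes cellsrvStatus detail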
Facing this error? Let me guess: ports 03, 05, 06, 08, 09, and 12 are alerting? You have a Quarter Rack? Have you recently updated the Exadata plugin to version 18.104.22.168 or higher?
This is probably related to Bug 15937297 : EM 12C HAS ERRORS CABLE IS PRESENT ON PORT ‘N’ BUT IT IS POLLING FOR PEER PORT. The full message might be like: “Cable is present on Port 6 but it is polling for peer port. This could happen when the peer port is unplugged/disabled.”
In fact, the bug was closed as not a bug. 🙂
As part of the 22.214.171.124 Exadata plugin, the IB switch ports are now checked for non-terminated cables, so these ‘polling for peer port’ errors are the expected behavior. Since ‘polling for peer port’ is an enhanced feature of the 126.96.36.199 plugin, this explains why you most likely did not see these errors until you upgraded the OMS to 188.8.131.52 and then updated the plugins.
In Quarter Racks, ports 3, 5, 6, 8, 9, and 12 are usually cabled ahead of time but not terminated. In some racks, port 32 may also be unterminated. Checking the incident in OEM, you might see something like the message quoted above.
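If you want to double-check from the switch side, listlinkup on the IB switch CLI lists each connector and whether its link is up, so the unterminated ports are easy to spot; a sketch:
# listlinkup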
Seeing RS-7445 [Serv MS Leaking Memory] in the cell alerthistory.log? Don’t panic!
Take a look at MOS: Exadata Storage Cell reports error RS-7445 [Serv MS Leaking Memory] (Doc ID 1954357.1). It’s related to the bug RS-7445 [SERV MS LEAKING MEMORY].
The issue is a memory leak in the Java executable and affects systems running JDK 7u51 or later. It is relevant for all versions from Release 11.2 to 12.1.
What happens is that the MS process consumes high memory (up to 2 GB). Normally MS uses around 1 GB, but because of the bug the allocated memory can grow up to 2 GB. You can check it as per the example below:
[root@exaserver ~]# ps -feal|grep java
0 S root 16493 14737 0 80 0 - 15317 pipe_w 18:34 pts/0 00:00:00 grep java
0 S root 22310 27043 2 80 0 - 267080 futex_ 18:15 ? 00:00:27 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:-UseLargePages -Djava.library.path=/opt/oracle/cell/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell/cellsrv/deploy/log/ms.err
Note that the SZ column is reported in 4 KB pages: 267080 * 4096 bytes ≈ 1043 MB (~1 GB). If your number is much higher than this, it indicates the presence of the bug.
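To do the page-to-MB arithmetic in one shot, a small sketch (SZ is in 4 KB pages here; the grep pattern just matches the MS java process by its oc4j.jar argument):
# ps -eo sz,args | grep [o]c4j.jar | awk '{printf "%d MB\n", $1*4096/1024/1024}'
If you confirm the leak, restarting MS (cellcli -e alter cell restart services ms) releases the memory until you can apply the fix referenced in the MOS note.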