Diagsnap Causing Node Eviction on 12c

Hi all,

So, you know how it goes when node evictions start happening on a RAC: lots of sessions getting killed, clients and applications feeling the impact, and the boss breathing down our necks for a quick resolution. On top of that, it's never an easy thing to drill down and understand.

Troubleshooting node reboots/evictions within Grid Infrastructure (GI) is often difficult due to the lack of network and OS-level resource information. To help with this, the diagsnap feature was developed and integrated with Grid Infrastructure. Diagsnap is triggered to collect network and OS-level resource information when a given node is about to be evicted or when Grid Infrastructure is about to crash.

The diagsnap feature is enabled automatically starting with the 12.1.0.2 Oct 2017 PSU and the 12.2.0.1 Oct 2017 RU.
For more information about the diagsnap feature, refer to MOS Document 2345654.1 “What is diagsnap resource in 12c GI and above?”
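
For context, diagsnap runs as the diagsnap.pl script under the Cluster Health Monitor daemons. As a quick sanity check (just a sketch; the process names here are assumptions based on typical CHM naming and the bug titles below), you can see whether those processes are present on a node:

$ ps -ef | egrep 'osysmond|ologgerd|diagsnap' | grep -v grep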

However, after a lot of research, SR log uploads, and back-and-forth with Oracle Support to investigate a node eviction, we ended up matching Bug 25785073 – OCSSD hangs while DIAGSNAP takes pstack causing a node reboot (Doc ID 25785073.8).

So diagsnap is not helping here; it is actually the cause of the issue.
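
If you suspect you are hitting the same bug, a first hint (just a sketch, not a formal diagnostic) is to check whether a pstack was being taken against ocssd.bin around the eviction timestamp, for example:

$ ps -ef | grep -E 'pstack.*ocssd' | grep -v grep

In our case, the actual match came from the SR diagnostics reviewed with Oracle Support, so treat this only as an initial pointer.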

After some more research, it seems this is not the only bug related to diagsnap. Here are a few more:

  • Bug 27182006 – Auxiliary commands generated by DIAGSNAP spin CPU (Doc ID 27182006.8)
  • Bug 24692439 – Auxiliary commands generated by DIAGSNAP consumes high CPU (Doc ID 24692439.8)
  • Bug 23101338 – Disable diagsnap after 12.1.0.2.160419 GI PSU patch was installed (Doc ID 23101338.8)
  • Bug 28462215 – The Process diagsnap.pl is Restarted Every 2 Minutes (Doc ID 28462215.8)

Well, at least on 12c, I'm disabling it in all my environments:

– Check whether diagsnap is currently disabled (the egrep returns DIAGSNAP=Disable when disabled, and no output when enabled)

$ egrep '^DIAGSNAP|^PSTACK' /u01/app/12.2.0.1/grid/crf/admin/crf$(hostname -s).ora

– Disable diagsnap (this will disable diagsnap on all nodes)

$ /u01/app/12.2.0.1/grid/bin/oclumon manage -disable diagsnap

– Check that diagsnap is now disabled (DIAGSNAP=Disable when disabled, no output when enabled); run this on each node

$ egrep '^DIAGSNAP|^PSTACK' /u01/app/12.2.0.1/grid/crf/admin/crf$(hostname -s).ora
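
If everything worked, the egrep should now return something like the lines below on every node (sample output based on the checks above; exact entries may vary by version):

DIAGSNAP=Disable
PSTACK=Disable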

Hope it helps!
Cheers!

High CPU usage by LMS and Node Evictions: Solved by Setting “_high_priority_processes”

Another thing that may help you in environments with highly interdependent applications:

Our environment has a high volume of block exchanges over the interconnect and, as a consequence, high CPU usage by the Global Cache Service (GCS) / Lock Manager Server (LMS) processes.
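
If you want to check whether you are in the same boat, a simple way to spot LMS burning CPU at the OS level (Linux syntax assumed, with the usual ora_lms* process naming) is:

$ ps -eo pid,pcpu,comm --sort=-pcpu | grep ora_lms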

Because of that, every small latency spike on the interconnect interface was causing a node eviction, with all the impact on the legacy application you can imagine (no GridLink or any other solution to make the relocation ‘transparent’, as is usual with legacy applications) and, of course, the business impact.

Oracle, of course, suggested that we reduce block concurrency across the cluster nodes by grouping the application workload by affinity. But that's just not applicable to our environment… 🙁

When nothing seemed to help, the workaround came from here: Top 5 Database and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1).

Here is our change:

boesing@proddb> alter system set "_high_priority_processes"='LMS*|LGWR|VKTM' scope=spfile sid='*';
System altered.
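
Since the change is in the spfile, it only takes effect after an instance restart. To double-check that the processes actually got the elevated priority, you can look at their scheduling class at the OS level; this is just a sketch assuming Linux and the usual background process names (ora_lms*, ora_lgwr, ora_vktm), where the listed processes should show up in the real-time class:

$ ps -eo pid,cls,rtprio,comm | egrep 'ora_(lms|lgwr|vktm)'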

No magic, but the problem stopped happening. After that, we started seeing some warnings about clock synchronization across the cluster nodes in the CRS alert log, like this:

CRS-2409:The clock on host proddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

I believe this happens because VKTM lost some priority. But it's OK: the node evictions have stopped! 😀
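
By the way, if you want to confirm that CTSS really is only observing (and not actively adjusting the clocks), you can run the check below on any node; the output shown is roughly what I would expect, and the exact wording may differ between versions:

$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.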

Matheus.