Are you receiving notifications like this one from your ASR?
ALERT: Oracle Auto Service Request (ASR) has detected a heartbeat failure for these assets.
[list with "Serial"; "Hostname"; "Information" of affected targets]
IMPACT: ASR would not be able to create a Service Request (SR) if a fault were to occur.
ACTION: Determine why the heartbeat has failed for these assets and resolve the issue.
This is only a notification saying that ASR is not able to reach a target.
For detailed information on how to troubleshoot, you can check MOS note Oracle Auto Service Request (ASR) No Heartbeat Issue – How to Resolve (Doc ID 1346328.2).
In general, the things to test are:
– Access from ASR server to transport.oracle.com, via https, using port 443.
– Access from ASR server to ASR assets, via http, using port 6481.
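Both checks can be scripted with a small probe; a minimal sketch assuming curl is available, where asr-asset-host is a placeholder for your asset's hostname:

```shell
# probe URL: prints "reachable" or "unreachable", capped at 10 seconds
probe() {
  if curl -sS -m 10 -o /dev/null "$1" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# The two checks above (asr-asset-host is a placeholder):
# probe https://transport.oracle.com:443
# probe http://asr-asset-host:6481
```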
Recently I started to have several alarms like this:
Internal error detected: java.lang.Throwable:oracle.sysman.gcagent.tmmain.execution.LongOpManager$ZombieDetection:1017.
Internal error detected: oracle.sysman.gcagent.task.TaskZombieException:oracle.sysman.gcagent.task.TaskFutureImpl$WrappedTask:620.
In the agent log, the message:
2017-06-13 12:54:09,232 [355:GC.Executor.14 (oracle_database:DB_DB12:%DB%)] ERROR - oracle_database:DB_DB12:%DB% oracle.sysman.gcagent.task.TaskZombieException: task declared as a zombie
To avoid generating pages and new incidents, I changed the parameter _zombieCreateIncident to false on the agents of the related servers, as described in MOS note EM12c: Incident constantly raised for oracle.sysman.gcagent.task.TaskZombieException: task declared as a zombie (Doc ID 2116834.1), regarding MOS Bug 22674258 – Zombie processes created for DB metric collection / workaround does not work.
I also added some other parameters that should help, like increasing the wait times, all mentioned in the same MOS note.
Like this, in $AGENT_INST/sysman/config/emd.properties:
#GrepOra magical fixes:
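Only the parameter named above is shown here; the extra wait-related parameters come from the MOS note and are not reproduced in this post:

```
_zombieCreateIncident=false
```

A change in emd.properties typically requires an agent restart (emctl stop agent / emctl start agent) to take effect.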
And it solved the issue. At least, the noise stopped. 🙂
Hope it helps,
Are you receiving old notifications from OEM? Like 2 or 3 days old, mostly already solved or sent after a blackout?
Yeah, this is annoying, especially when getting floods and floods of notifications.
Ok, so here goes a very good tip: you can set a grace period for notifications! 🙂
It's easy, just do it this way:
emctl set property -name oracle.sysman.core.notification.grace_period -value [provide value in minutes]
The oracle.sysman.core.notification.grace_period OMS parameter has been introduced in 12c and allows the user to configure the grace period within which the notification should be sent. The value is set in minutes.
emctl set property -name oracle.sysman.core.notification.grace_period -value 1440
With this, OMS sends only those notifications which have been raised in the past 1440 mins (last 24 hours) and ignores all the notifications for events / incidents created prior to this time period.
After this, you’ll need to restart the OMS:
emctl stop oms
emctl start oms
The oracle.sysman.core.notification.grace_period parameter applies to all the Notification methods, but if the requirement is to specify the grace period for a particular notification method only, you can use the below parameters accordingly:
oracle.sysman.core.notification.grace_period_connector: For Connectors
oracle.sysman.core.notification.grace_period_email: For email notifications
oracle.sysman.core.notification.grace_period_oscmd: For OS Command notifications
oracle.sysman.core.notification.grace_period_plsql: For PLSQL notifications
oracle.sysman.core.notification.grace_period_snmp: For SNMP Trap notifications
oracle.sysman.core.notification.grace_period_ticket: For ticketing tools
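For example, to limit only e-mail notifications to the last 24 hours and then confirm the value (1440 is just the same example value used above):

```
emctl set property -name oracle.sysman.core.notification.grace_period_email -value 1440
emctl get property -name oracle.sysman.core.notification.grace_period_email
```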
This is well described in MOS note: 12c Cloud Control: How to Prevent OMS from Sending out Old Notifications for Events / Incidents Occurred in the Past? (Doc ID 1605351.1)
Yeah, these days I got some errors, and when validating the server I found the following error:
su: cannot set user id: Resource temporarily unavailable
As you can imagine, in order to fix the issue, I adjusted /etc/security/limits.conf, increasing the oracle user's nproc to:
oracle soft nproc 4047
oracle hard nproc 20384
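To confirm the limit is being applied, check it from a fresh login as the oracle user, for example:

```shell
# Max user processes for the current session ("-u" maps to nproc);
# after a new login this should report the soft value from limits.conf
ulimit -u
```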
Ok, turns out that after a while I got the same errors again…
After some investigation I found that the EM Agent process had 5020 threads!
Take a look:
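A quick way to count a process's threads, as a sketch; the emagent pattern is an assumption, so adjust it to match your agent's process:

```shell
# thread_count PID: prints the number of threads (nlwp) for that PID
thread_count() { ps -o nlwp= -p "$1" | tr -d ' '; }

# e.g., for the EM agent (the 'emagent' pattern is an assumption):
# thread_count "$(pgrep -f emagent | head -1)"
```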
Getting this error? OMS 188.8.131.52?
Ok… After an investigation on MOS, and also through an SR opened with Oracle, it was pointed to a relation with Bug 17575631 – CLUSTER DB HOME PAGE ERRORS – REGIONS FETCH REAL TIME DATA WILL NOT BE DISPLAYED on the Database Plugin.
It’s solved in OMS 184.108.40.206 and also in 220.127.116.11.1 EM Database Plugin Bundle patch.
I’ll show you how to workaround it. 🙂
grep `date +%y%m%d` /var/log/mysqld.log | grep "\[Warning\]"
grep `date +%y%m%d` /var/log/mysqld.log | grep "\[ERROR\]"
And a Bonus!
To get entries from X days ago:
grep `date --date="46 days ago" +%y%m%d` /var/log/mysqld.log
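These pipelines can be wrapped in a small helper that summarizes a given day; the log path and the old-style yymmdd timestamp format (pre-5.7 MySQL error logs, as in the greps above) are assumptions:

```shell
# count_mysql_log DAY LOGFILE: prints [Warning] and [ERROR] counts for DAY
# DAY must be in the yymmdd format used by the old MySQL error log
count_mysql_log() {
  day=$1; log=$2
  printf 'Warnings: %s\n' "$(grep "^$day" "$log" | grep -c '\[Warning\]')"
  printf 'Errors: %s\n' "$(grep "^$day" "$log" | grep -c '\[ERROR\]')"
}

# Example (path is an assumption):
# count_mysql_log "$(date +%y%m%d)" /var/log/mysqld.log
```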
As you know, we have some commercial solutions for monitoring/alerting on MySQL, like MySQL Enterprise Monitor or Oracle Grid/Cloud Control.
But, given we are using MySQL instead of Oracle Database, we can assume it's probably a decision based on cost. So, considering Open Source solutions, we basically have Nagios, Zabbix, OpenNMS…
Thinking of Nagios, in my opinion the “supra sumo” is mysql_health_check.pl.
Below are the whitepaper and presentation:
A good one by Sheeri Cabral, posted here!
Anyway, with these two we can do lots of magic:
– Check the status of the MySQL server (slow queries, etc.)
– Queries per second graph
– Allows running SQL queries and setting thresholds for warning and critical. Ex:
check_db_query.pl -d database -q query [-w warn] [-c crit] [-C conn_file] [-p placeholder]
Ex for Nagios call:
command_line /usr/local/bin/perl $USER1$/check_db_query.pl -d "$ARG1$" -q "$ARG2$" $ARG3$
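Putting it together, a hypothetical Nagios object definition tied to a command like the one above; host_name, database, query, and thresholds are all placeholders:

```
define service {
    use                 generic-service
    host_name           mysql01
    service_description MySQL sessions
    check_command       check_db_query!mydb!SELECT COUNT(*) FROM information_schema.processlist!-w 50 -c 100
}
```

The check_command name must match a command definition such as the command_line shown above, with the `!`-separated values filling $ARG1$ through $ARG3$.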
So now it's just a matter of writing your queries and implementing your free monitoring on MySQL! 🙂