AIX: “WARNING: Heavy swapping observed on system in last 5 mins.”

Quick one today!

Having a message like the one below in your 11.2.0.3 database on AIX?

WARNING: Heavy swapping observed on system in last 5 mins. 
pct of memory swapped in [31.28%] pct of memory swapped out [3.81%]. 
Please make sure there is no memory pressure and the SGA and PGA are configured correctly. 
Look at DBRM trace file for more details.

Stand down: this issue is caused by unpublished Bug 11801934, as described in MOS note False Swap Warning Messages Printed To Alert.log On AIX (Doc ID 1508575.1).

Basically, it happens because v$osstat does not reflect the proper statistics for swap space paging.
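
If you want to double-check the figures the warning is computed from, they come from v$osstat. A minimal sketch of the check, assuming the swap activity is reported under the standard VM_IN_BYTES/VM_OUT_BYTES statistics (on AIX these are exactly the values misreported by the bug):

select stat_name, value
  from v$osstat
 where stat_name in ('VM_IN_BYTES', 'VM_OUT_BYTES');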

So, stay calm and see you next week!

Parameter Workarounds: Changing your Entire Database’s Behavior Because of a Query

This post is just a reflection, based on my last post about Adaptive Query Optimization/SQL Plan Directives. Quite often we find situations that can be solved by setting a parameter (usually an underscore one), and we just proceed with an “alter system”, disregarding all the impact this has.

This is especially important when talking about “optimizer” parameters, even for bugs causing ORA-600s or performance issues. Changing those parameters at the system level affects not only the SQL hitting the error, but every SQL in the database, which may have really bad side effects in some cases.
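
Just to illustrate the difference in scope, here is a hedged sketch using _optimizer_use_feedback purely as an example (not every parameter accepts a session-level change or the OPT_PARAM hint, so always check the specific case first):

-- Affects every SQL in the database (the broad, risky change):
alter system set "_optimizer_use_feedback"=false;

-- Same setting, restricted to the current session:
alter session set "_optimizer_use_feedback"=false;

-- Or restricted to a single statement, via hint:
select /*+ opt_param('_optimizer_use_feedback' 'false') */ count(*) from dual;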

Ok Matheus, but what can I do?
Let’s go through the obvious and generally applied methods:

Continue reading

EM Repository: ORA-00060: Deadlock detected error in alert log

Hello!
Some time ago I found some “ORA-00060: Deadlock detected” errors in a client’s OEM database, like this:

Thu Dec 22 09:01:55 2016
ORA-00060: Deadlock detected. More info in file /oracle/oemdb/diag/rdbms/oemdb/oemdb/trace/oemdb_ora_1757.trc.
Thu Dec 22 09:02:07 2016
ORA-00060: Deadlock detected. More info in file /oracle/oemdb/diag/rdbms/oemdb/oemdb/trace/oemdb_ora_1759.trc.
ORA-00060: Deadlock detected. More info in file /oracle/oemdb/diag/rdbms/oemdb/oemdb/trace/oemdb_ora_1759.trc.
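
To see which sessions and statements were involved, the deadlock graph in the trace file referenced above is the place to look. A quick sketch, assuming GNU grep and using the path from the messages (the section header may vary slightly between versions):

# Show the deadlock graph and the statements involved:
grep -A 40 "DEADLOCK DETECTED" /oracle/oemdb/diag/rdbms/oemdb/oemdb/trace/oemdb_ora_1757.trc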

In summary, after investigating the trace (as per below), I found that the issue is caused by the following command:

Continue reading

RS-7445 [Serv MS leaking memory] [It will be restarted] [] [] [] [] [] [] [] [] [] []

Hello!
Having this error in your cell alerthistory.log? Don’t panic!
Take a look at MOS: Exadata Storage Cell reports error RS-7445 [Serv MS Leaking Memory] (Doc ID 1954357.1). It’s related to Bug – RS-7445 [SERV MS LEAKING MEMORY].

The issue is a memory leak in the Java executable and affects systems running JDK 7u51 or later. It is relevant for all versions from release 11.2 to 12.1.

What happens is that the MS process consumes a lot of memory (up to 2GB). Normally MS uses around 1GB, but because of the bug the allocated memory can grow up to 2GB. You can check it as in the example below:

[root@exaserver ~]# ps -feal|grep java
0 S root     16493 14737  0  80   0 - 15317 pipe_w 18:34 pts/0    00:00:00 grep java
0 S root     22310 27043  2  80   0 - 267080 futex_ 18:15 ?       00:00:27 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:-UseLargePages -Djava.library.path=/opt/oracle/cell/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell/cellsrv/deploy/log/ms.err

Note that the SZ column from ps is reported in pages, so: 267080 * 4096 bytes ≈ 1043MB (about 1GB). If your number is higher than this, it indicates the presence of the bug.
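
If you don’t want to do the math by hand, here is a hedged one-liner doing the same check, assuming the usual 4KB page size behind the SZ column reported by ps:

# Print the memory of the MS (java) process in MB, converting SZ pages to bytes first:
[root@exaserver ~]# ps -eo pid,sz,args | grep '[j]ava' | awk '{printf "%s: %.0f MB\n", $1, $2*4096/1048576}'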

Continue reading

Error: Starting ACFS in RHEL 6 (Can’t exec “/usr/bin/lsb_release”)

Quick tip:

# Error:
[root@db1gridserver1 bin]# ./acfsload start -s
Can't exec "/usr/bin/lsb_release": No such file or directory at /grid/product/11.2.0/lib/osds_acfslib.pm line 511.
Use of uninitialized value $LSB_RELEASE in split at /grid/product/11.2.0/lib/osds_acfslib.pm line 516.

# Solution:
[root@db1gridserver1 bin]# yum install redhat-lsb-core-4.0
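
If the package installs fine, lsb_release should now be in place and acfsload should start cleanly. A quick check (output omitted, it varies per system):

# Check:
[root@db1gridserver1 bin]# /usr/bin/lsb_release -a
[root@db1gridserver1 bin]# ./acfsload start -s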

Note: Bug 17359415 – Linux: Configuring ACFS reports that cannot execute ‘/usr/bin/lsb_release’ (Doc ID 17359415.8)

Matheus.