Oracle Database Lock Performance / False Locks

Hello all!
This is a very interesting one… what about the database taking too long, or even showing locks that don’t really exist?
I faced some weird situations related to the size of DBMS_LOCK_ALLOCATED.

After some research, it seems the root cause is described in MOS note DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6).

The issue is:
– DBMS_LOCK_ALLOCATED is the table that keeps all the user lock handles allocated in the database.
– This table keeps growing even though many of the locks are probably no longer used.
– Checking the expiration column reveals that the locks’ expiration date is a year or more in the future.

Besides the storage space issue mentioned in the MOS doc, there is also the fact that we only have 1073741823 lock handles available. And what if this ‘not releasing’ makes us run out of handles? Unlikely, but possible.
A big table can also mean bad performance for DBMS_LOCK.REQUEST, which is the basic mechanism for taking these locks (passing the lock handle, lock mode, timeout and release instructions).
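A quick way to size the problem is to look at the table itself. A minimal sketch, assuming the usual SYS.DBMS_LOCK_ALLOCATED layout (NAME, LOCKID, EXPIRATION) and before doing any cleanup suggested by the MOS note:

-- How many lock handles exist, and how many only "expire" a year or more from now?
SELECT COUNT(*)                                               total_handles,
       COUNT(CASE WHEN expiration > SYSDATE + 365 THEN 1 END) expiring_in_1y_or_more
  FROM sys.dbms_lock_allocated;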


Parameter Workarounds: Changing your Entire Database’s Behavior Because of a Query

This post is just a reflection, basically based on my last post about Adaptive Query Optimization/SQL Plan Directives. Several times we find situations that can be solved by setting a parameter (usually a “_” one) and we just proceed with an “alter system”, disregarding all the impact of this.

This is especially important when talking about optimizer parameters, even for bugs causing ORA-600s or performance issues… Changing those parameters at system level will affect not only the SQL with the error but all SQLs in the database, which may have a really bad effect in some cases.


Ok Matheus, but what can I do?
Let’s go to the obvious and generally applied methods:
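For instance, a sketch of the statement- and session-scoped alternatives (the parameter and table names below are just illustrative, not a recommendation for your specific case):

-- Scope the change to a single SQL via the OPT_PARAM hint...
SELECT /*+ OPT_PARAM('_optimizer_use_feedback' 'false') */ *
  FROM my_table
 WHERE my_column = :value;

-- ...or to the current session only, instead of the whole database:
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;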


Adaptive Query Optimization: SQL Plan Directives Causing High CPU after 12c Upgrade

Hello all!

As DBAs, we are always being advised by Oracle, and advising our clients, to upgrade their databases, but we have to be aware of new features and their effects. This is the case with Adaptive Query Optimization and, in this particular post, with SQL Plan Directives.

SQL Plan Directives are one of the functionalities that compose Adaptive Query Optimization in Oracle Database 12c. The basic idea is pretty interesting: the optimizer keeps reviewing bad (“suboptimal”, as they like to say) plans, typically caused by incorrect cardinality estimations, and generates SQL plan directives, for example to create missing histograms or extended statistics.

In my case, just after the upgrade to 12c (done on Jan 27th), the CPU usage increased for the same report that had always been run in the database:

(Chart: CPU usage before and after the 12c upgrade)

Ok, how to check it?
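A starting point, as a sketch (MY_SCHEMA is a placeholder for your application schema): list the directives the optimizer has created for your objects.

-- SQL plan directives and the objects they were created for:
SELECT o.owner, o.object_name, d.type, d.state, d.reason
  FROM dba_sql_plan_directives d
  JOIN dba_sql_plan_dir_objects o
    ON o.directive_id = d.directive_id
 WHERE o.owner = 'MY_SCHEMA'
 ORDER BY o.object_name;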


Oracle TPS: Evaluating Transaction per Second

Sometimes this information has a bit of a ‘myth atmosphere’ around it… Maybe that’s because Oracle doesn’t make this information very clear, and it’s not the most useful metric.
But for comparison with other systems, and also for comparing performance/’throughput’ across different infrastructure/database configurations, it can be useful.

It can be seen in the AWR report, in the “Report Summary” section, under “Load Profile”, in the “Transactions” item:

(AWR Load Profile showing the Transactions item)

But what if you want to calculate it through a SQL query?
And what if you want a history of this metric?

I found a reference for this calculation here, using v$sysstat.
It’s the only reference I found, and it’s in the 10g documentation… It defines this metric as:

Number of Transactions = (DeltaCommits+DeltaRollbacks)/Time

It also defines DeltaCommits and DeltaRollbacks as, respectively, the “user commits” and “user rollbacks” statistics.
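For a quick point-in-time check, the raw counters come straight from V$SYSSTAT (a sketch: sample it twice, N seconds apart, and apply the formula above to the deltas):

-- Cumulative counters since instance startup; two samples give the deltas.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'user rollbacks');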

Here goes a possible SQL to do that:

WITH hist_snaps
     AS (SELECT instance_number,
                snap_id,
                round(begin_interval_time,'MI') datetime,
                (begin_interval_time + 0 - LAG (begin_interval_time + 0)
                   OVER (PARTITION BY dbid, instance_number ORDER BY snap_id)) * 86400 diff_time
           FROM dba_hist_snapshot),
     hist_stats
     AS (SELECT dbid,
                instance_number,
                snap_id,
                stat_name,
                VALUE - LAG (VALUE) OVER (PARTITION BY dbid, instance_number, stat_name ORDER BY snap_id) delta_value
           FROM dba_hist_sysstat
          WHERE stat_name IN ('user commits', 'user rollbacks'))
SELECT datetime,
       -- the division by 3600 assumes hourly AWR snapshots; adjust to your interval
       ROUND (SUM (delta_value) / 3600, 2) "Transactions/s"
  FROM hist_snaps sn, hist_stats st
 WHERE st.instance_number = sn.instance_number
   AND st.snap_id = sn.snap_id
   AND diff_time IS NOT NULL
 GROUP BY datetime
 ORDER BY 1 desc;

I like to use PL/SQL Developer to see this kind of data, as it allows us to make very good charts very quickly. I tried it on a small database here, just as an example:

(Chart: transactions per second over the last 7 days)

Jedi Master Jonathan Lewis wrote a good post about Transactions and this kind of AWR metric here.

See ya!
Matheus.

Oracle Database: Compression Algorithms for Cloud Backup

Hi all!
Again talking about cloud backups for on-premises databases: an important aspect is to compress the data, so network consumption is reduced since less data is being transferred.
It’s also important to evaluate CPU consumption: the higher the compression level, the more CPU it uses. So, pay attention!

Now, how to choose the compression algorithm? Here are the options Oracle gives us:

SQL> col ALGORITHM_NAME for a15
set line 200
SQL> select ALGORITHM_NAME,INITIAL_RELEASE,TERMINAL_RELEASE,ALGORITHM_DESCRIPTION,ALGORITHM_COMPATIBILITY  from v$rman_compression_algorithm;
ALGORITHM_NAME INITIAL_RELEASE    TERMINAL_RELEASE   ALGORITHM_DESCRIPTION                                            ALGORITHM_COMPATIB
-------------- ------------------ ------------------ ---------------------------------------------------------------- ------------------
BZIP2          10.0.0.0.0         11.2.0.0.0         good compression ratio                                           9.2.0.0.0
BASIC          10.0.0.0.0                            good compression ratio                                           9.2.0.0.0
LOW            11.2.0.0.0                            maximum possible compression speed                               11.2.0.0.0
ZLIB           11.0.0.0.0         11.2.0.0.0         balance between speed and compression ratio                      11.0.0.0.0
MEDIUM         11.2.0.0.0                            balance between speed and compression ratio                      11.0.0.0.0
HIGH           11.2.0.0.0                            maximum possible compression ratio                               11.2.0.0.0

How to identify our compression algorithm?

RMAN> show COMPRESSION ALGORITHM;
RMAN configuration parameters for database with db_unique_name EZM_PRFL are:
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default

And how to change it?

RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
new RMAN configuration parameters:
CONFIGURE COMPRESSION ALGORITHM 'HIGH' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
new RMAN configuration parameters are successfully stored
RMAN> show COMPRESSION ALGORITHM;
RMAN configuration parameters for database with db_unique_name EZM_PRFL are:
CONFIGURE COMPRESSION ALGORITHM 'HIGH' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

Ok,
But how to evaluate my compression ratio?

See the difference between INPUT_BYTES_DISPLAY and OUTPUT_BYTES_DISPLAY columns from the query:

prddb> col STATUS for a10
prddb> col INPUT_BYTES_DISPLAY for a15
prddb> col OUTPUT_BYTES_DISPLAY for a15
prddb> col TIME_TAKEN_DISPLAY for a20
prddb> SELECT SESSION_KEY,
2         INPUT_TYPE,
3         STATUS,
4         TO_CHAR(START_TIME, 'mm/dd/yy hh24:mi') start_time,
5         TO_CHAR(END_TIME, 'mm/dd/yy hh24:mi') end_time,
6  --       ELAPSED_SECONDS / 3600 hrs,
7         COMPRESSION_RATIO,
8         INPUT_BYTES_DISPLAY,
9         OUTPUT_BYTES_DISPLAY,
10         TIME_TAKEN_DISPLAY
11    FROM V$RMAN_BACKUP_JOB_DETAILS
12    where input_type like 'DB%'
13   ORDER BY SESSION_KEY
14  /

SESSION_KEY INPUT_TYPE    STATUS     START_TIME     END_TIME       COMPRESSION_RATIO INPUT_BYTES_DIS OUTPUT_BYTES_DI TIME_TAKEN_DISPLAY
----------- ------------- ---------- -------------- -------------- ----------------- --------------- --------------- --------------------
          2 DB FULL       COMPLETED  04/22/16 12:59 04/22/16 13:06        6,84838963     4.26G         636.50M       00:06:57
          9 DB FULL       COMPLETED  04/22/16 13:47 04/22/16 13:54        6,83764706     4.26G         637.50M       00:06:37
         14 DB FULL       COMPLETED  04/22/16 16:26 04/22/16 16:33        6,84189878     4.26G         637.25M       00:06:48

KB: https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfa.htm#BRADV89466

Done?
If you have any questions, please let me know in the comments! 😉
Matheus.

Optimize fragmented tables in MySQL

Table fragmentation happens on MySQL, as you know. Running an OPTIMIZE TABLE solves the issue.
BUT, be careful! During the optimize the table stays locked (writing is not possible).

(Fragmented Table)

So what?
So as not to lock every table, the script below lists and optimizes (if you want to list but not run, comment out the OPTIMIZE line) only the tables that actually have fragmentation.

It was very useful to me! 🙂

#!/bin/sh
echo -n "MySQL username: " ; read username
echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo
mysql -u $username -p"$password" -NBe "SHOW DATABASES;" | grep -v 'lost+found' | while read database ; do
  mysql -u $username -p"$password" -NBe "SHOW TABLE STATUS;" $database | while read name engine version rowformat rows avgrowlength datalength maxdatalength indexlength datafree autoincrement createtime updatetime checktime collation checksum createoptions comment ; do
    if [ "$datafree" -gt 0 ] ; then
      fragmentation=$(($datafree * 100 / $datalength))
      echo "$database.$name is $fragmentation% fragmented."
      mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
    fi
  done
done

The result will be like:

MySQL username: root
MySQL password:
...
mysql.db is 12% fragmented.
mysql.db optimize status OK
mysql.user is 9% fragmented.
mysql.db optimize status OK
...

This script is a full copy from this post by Robert de Bock.
Thanks, Robert!
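By the way, if you just want to check the fragmentation from inside MySQL before running anything, a query over information_schema like this (a sketch) shows the same Data_free information the script relies on:

-- Fragmented tables, ordered by wasted space (data_free).
SELECT table_schema, table_name,
       data_free, data_length,
       ROUND(data_free * 100 / data_length) AS pct_fragmented
  FROM information_schema.tables
 WHERE data_free > 0
   AND data_length > 0
 ORDER BY data_free DESC;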

Matheus.

DBA_TAB_MODIFICATIONS

Do you know the view “dba_tab_modifications”?
It’s very useful to know how much a table has changed since its last stats gathering, and all the decisions/information that come with that… See the example below.

The only requirement is to run “dbms_stats.flush_database_monitoring_info” before checking… take a look:

mydb> create TABLE matheus_boesing.test (nro number);
Table created.
mydb> begin
2 for i in 1..1000 loop
3 insert into matheus_boesing.test values (i);
4 end loop;
5 commit;
6 end;
7 /
PL/SQL procedure successfully completed.
mydb> select table_owner,table_name,inserts,updates,deletes from dba_tab_modifications where table_name ='test' and table_owner='MATHEUS_BOESING';
no rows selected
mydb> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
mydb> select table_owner,table_name,inserts,updates,deletes from dba_tab_modifications where table_name ='test' and table_owner='MATHEUS_BOESING';
TABLE_OWNER     TABLE_NAME    INSERTS    UPDATES    DELETES
--------------- ---------- ---------- ---------- ----------
MATHEUS_BOESING test             1000          0          0

mydb> EXEC DBMS_STATS.GATHER_TABLE_STATS('MATHEUS_BOESING','test');
PL/SQL procedure successfully completed.
mydb> select table_owner,table_name,inserts,updates,deletes from dba_tab_modifications where table_name ='test' and table_owner='MATHEUS_BOESING';
no rows selected
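One example of a decision this view supports, as a sketch (the 10% threshold here is just illustrative): finding tables whose modifications since the last stats gathering exceed a percentage of the known row count.

-- Tables with more than 10% of their rows modified since the last stats gathering:
SELECT m.table_owner, m.table_name,
       m.inserts + m.updates + m.deletes AS modifications,
       t.num_rows
  FROM dba_tab_modifications m
  JOIN dba_tables t
    ON t.owner = m.table_owner
   AND t.table_name = m.table_name
 WHERE t.num_rows > 0
   AND m.inserts + m.updates + m.deletes > t.num_rows * 0.10;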

For more information: http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4149.htm

Have a nice day! 😀
Matheus.

Application Hangs: resmgr:become active

The application APP hangs on “resmgr:become active”. There is a resource plan defined that has a specific consumer group for this application. What is wrong and how do we fix it?

Here I presume you know what Resource Manager and a resource plan are and, of course, what they exist for. You also have to know that this wait event is related to having too many active sessions in a resource plan group.

Before everything else, check whether this is acceptable behavior for the application. Then, find out in which resource group the sessions showing this event are. Is there another application in the same group behaving unacceptably? Yes? So, fix it.
No? Consider adjusting the resource plan, switching the application to a new group, or, as in my case, remapping the application APP to the right group… ¬¬

To make it clear: in my case, the mapping was missing, so the schema MYAPP (application APP) fell into OTHER_GROUPS, where we usually set minimal limits:

  SID    SERIAL#    INST_ID USERNAME RESOURCE_CONSUMER_GROUP EVENT
----- ---------- ---------- -------- ----------------------- ------------------------
  492      29459          2 MYAPP    OTHER_GROUPS            resmgr:become active
 1102      19145          2 MYAPP    OTHER_GROUPS            resmgr:become active
  955      33161          2 MYAPP    OTHER_GROUPS            resmgr:become active
 1084      33839          2 MYAPP    OTHER_GROUPS            db file sequential read
MYDB> show parameters resource_manager_plan
NAME TYPE VALUE
--------------------- ------ --------------
resource_manager_plan string MYDB_PLAN
MYDB> select group_or_subplan, active_sess_pool_p1, cpu_p1, cpu_p2, cpu_p3, cpu_p4 from DBA_RSRC_PLAN_DIRECTIVES where plan = 'MYDB_PLAN'
Enter value for plano: MYDB_PLAN
GROUP_OR_SUBPLAN ACTIVE_SESS_POOL_P1 CPU_P1 CPU_P2 CPU_P3 CPU_P4
------------------------------ ------------------- ---------- ---------- ---------- ----------
BATCH_GROUP 60 0 10 0 0
SYS_GROUP 80 0 0 0
APP_PLAN 20 0 30 0 0
OTHER_GROUPS 20 0 20 0 0
GGATE_GROUP 0 10 0 0
PAYTRUE_GROUP 40 0 30 0 0
DBA_GROUP 20 0 0 0

You can configure the mapping by user like this:

BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.oracle_user,
    -- or DBMS_RESOURCE_MANAGER.service_name (or a lot of other possibilities. Google it!)
    value          => 'MYAPP',
    consumer_group => 'APP_PLAN');
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
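To double-check which mappings are in place after that, a quick sketch:

-- Current consumer group mappings defined in the database:
SELECT attribute, value, consumer_group, status
  FROM dba_rsrc_group_mappings
 ORDER BY attribute, value;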

To switch the already connected sessions, it can be done like this:

SELECT 'EXEC DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS ('''||SID||''','''||SERIAL#||''',''APP_PLAN'');'
  FROM V$SESSION
 WHERE username = 'MYAPP'
   AND RESOURCE_CONSUMER_GROUP = 'OTHER_GROUPS';

Remember that creating a resource plan without making the mappings is a bit pointless… 😛

Matheus.

Application Looping Until Lock a Row with NOWAIT Clause

Yesterday I treated an interesting situation:
A batch process stayed on the “SQL*Net message from client” event, but LAST_CALL_ET was always 1 or 0. Seems OK, just some client-side contention to send commands to the DBMS, right? Nope.

It was caused by a loop in the application code “waiting” for a row lock, but without any “DBMS wait events” (something like a “select * from table for update nowait” being retried). Take a look at how it was identified below.

First, the session with no SQL_ID, no wait events and LAST_CALL_ET=0, belonging to a “BATCH_PROCESS” user:

proddb2> @sid
Sid:9796
Inst:
LAST_CALL_ET SQL_ID   EVENT STATUS SID SERIAL# INST_ID USERNAME
------------ ------- ------------- ---------- ------------------------
0 SQL*Net message from client INACTIVE 9796 45117 2 BATCH_PROCESS
proddb2> @trace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.

As you see, with no idea of what was happening, I started a trace. The trace was stuck with this:

*** 2015-06-15 14:03:25.755
WAIT #4574470448: nam='SQL*Net message from client' ela=993072 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833326636999
CLOSE #4574470448:c=10,e=15,dep=0,type=3,tim=12833326637228
PARSE #4574470448:c=25,e=41,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1139820409,tim=12833326637286
BINDS #4574470448:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=01 fl2=1000000 frm=00 csi=00 siz=24 off=0
kxsbbbfp=110a8d0d8 bln=22 avl=05 flg=05
value=5022011
WAIT #4574470448: nam='gc cr block 2-way' ela= 709 p1=442 p2=5944 p3=8483 obj#=0 tim=12833326638533
WAIT #4574470448: nam='gc cr block 2-way' ela= 541 p1=3 p2=2088264 p3=4367 obj#=0 tim=12833326639352
WAIT #4574470448: nam='gc cr block 2-way' ela= 651 p1=442 p2=5944 p3=8483 obj#=0 tim=12833326641673
WAIT #4574470448: nam='enq: TX - row lock contention' ela= 1093 name|mode=1415053318 usn<obj#=23141074 tim=12833326643029
EXEC #4574470448:c=1776,e=5836,p=0,cr=117,cu=1,mis=0,r=0,dep=0,og=1,plh=1139820409,tim=12833326643150
ERROR #4574470448:err=54 tim=12833326643172
WAIT #4574470448: nam='SQL*Net break/reset to client' ela= 9 driver id=1413697536 break?=1 p3=0 obj#=23141074 tim=12833326643373
WAIT #4574470448: nam='SQL*Net break/reset to client' ela= 503 driver id=1413697536 break?=0 p3=0 obj#=23141074 tim=12833326643891
WAIT #4574470448: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833326643915

AHÁ!
Did you see the “err=54” there? Yes. You know this error:

ORA-00054: Resource busy and acquire with NOWAIT specified

It’s caused by a SELECT FOR UPDATE NOWAIT in the code.
But this select is in a loop, so the session doesn’t go ahead until it gets the lock.
(Obviously it could be coded with some better treatment/logic for this loop and its errors, buuuut…)
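Just to illustrate the pattern (a sketch; the table, column and bind names are taken from later in this post and are not the real application code):

-- Retried in a loop by the application until the row lock is acquired:
SELECT col_key
  FROM owner_example.table_xpto
 WHERE recno = :recno
   FOR UPDATE NOWAIT;  -- raises ORA-00054 while another session holds the row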

What can we do now?
The easy way is to discover the holding session and kill it.
And sometimes the easy way is the best way. 😉

For that, we use the “obj#” and “value” highlighted in the trace.
As I know the application, I know that the field used in all WHERE clauses is the RECNO column. If you don’t, you’ll need to find that out. With this information in mind:

proddb2>select * from dba_objects where object_id='23141074';
OWNER OBJECT_NAME
------------------------------ ----------------
OWNER_EXAMPLE TABLE_XPTO
proddb2> select * from OWNER_EXAMPLE.TABLE_XPTO WHERE recno=5022011;
COL_KEY FSAMED0 FSAMED1 FSMNEG1 FSMNEG2 FSMNEG3 COL_DATE RECNO
------- ---------- ---------- ---------- ---------- ---------- -----
1002974 0 0 -516.8 0 0 15/06/2015 00:00:00 5022011

Ok, now I know the row that is being held by the other session.
Let’s find out which session is holding it by causing the lock myself (but, in my case, without the NOWAIT clause, so I have time to find the holder):

proddb5>select * from OWNER_EXAMPLE.TABLE_XPTO WHERE recno=5022011 for update;

In another sqlplus session:

proddb2> @me
INST_ID SID SERIAL# USERNAME EVENT BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------- ---------- ---------- --------------- ----------------------
5 14174 479 MATHEUS_BOESING enq:TX - row lock contention VALID 11006 1
2 4233 12879 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT
1 15410 7697 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT

AHÁ again!
The SID 11006. Let’s see who is there:

proddb2> @sid
Sid:11006
Inst:
SQL_ID SEQ# EVENT STATUS SID SERIAL# INST_ID USERNAME
-------------------- ---------- --------------------------------------
9jzm6vn5j06js 24919 enq: TX - row lock contention ACTIVE 11006 44627 1 DBLINK_OTHER_BATCH_SCHEMA

Ok, it’s another session of a different batch process in a remote database holding this row. As it’s less relevant, let’s kill it! Muahaha!
Then, as you’ll see, my session gets the lock and is in the middle of a transaction:

proddb1> @kill
***
sid : 11006
serial : 44627
***
System altered.
***
proddb1> @me
INST_ID SID SERIAL# USERNAME EVENT BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------- ---------- ---------- --------------- --------------------
5 14174 479 MATHEUS_BOESING transaction UNKNOWN
2 4332 56037 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT
1 12058 9 MATHEUS_BOESING class slave wait NO HOLDER

To release the locked row to the main process, let’s “suicide” (kill my own session, which in this case is now the one holding the row lock).

proddb5> @kill
***
sid : 14174
serial : 479
***
System altered.
***

After killing all the holding sessions, my BATCH_PROCESS just moved on! 😀
Take a look at the trace (running OK):

WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531019
FETCH #4576933904:c=45,e=71,p=0,cr=3,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981531062
WAIT #4576933904: nam='SQL*Net message from client' ela= 562 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531654
WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531788
FETCH #4576933904:c=55,e=86,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981531826
WAIT #4576933904: nam='SQL*Net message from client' ela= 715 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981532576
WAIT #4576933904: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981532721
FETCH #4576933904:c=61,e=96,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981532758
WAIT #4576933904: nam='SQL*Net message from client' ela= 600 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981533617
WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981534163
FETCH #4576933904:c=52,e=82,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981534203
WAIT #4576933904: nam='SQL*Net message from client' ela= 517 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981534752

Now, with the problem solved, let’s disable the trace and continue with the other daily tasks… 🙂

proddb2> @untrace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.
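As a side note: another way to list candidate holders, instead of locking the row yourself, is to check who holds DML locks on the object from the trace. A sketch (it won’t pinpoint the exact row, but it narrows down the suspects):

-- Sessions holding DML locks on the object identified by obj# in the trace:
SELECT lo.inst_id, lo.session_id, s.serial#, s.username, lo.locked_mode
  FROM gv$locked_object lo
  JOIN gv$session s
    ON s.inst_id = lo.inst_id
   AND s.sid     = lo.session_id
 WHERE lo.object_id = 23141074;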

I hope it was useful!
If it helped you, leave a comment! 😀

See ya!
Matheus.

High CPU usage by LMS and Node Evictions: Solved by Setting “_high_priority_processes”

Another thing that may help you in environments with highly interdependent applications:

Our environment has a high rate of block exchange over the interconnect network and, as a consequence, high CPU usage by the Global Cache Service (GCS)/Lock Manager Server (LMS) processes.

This way, for every little latency spike on the interconnect interface, we were getting a node eviction, with all the impact on the legacy application you can imagine (without GridLink or any other solution to make the relocation ‘transparent’, as is usual with legacy applications) and, of course, the business impact.

Oracle obviously suggested that we reduce block concurrency across the cluster nodes by grouping the application workload by affinity. But it’s just not applicable to our environment… 🙁

When nothing seemed to help, the workaround came from here: Top 5 Database and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1).

Here is our change:

boesing@proddb> alter system set "_high_priority_processes"='LMS*|LGWR|VKTM' scope=spfile sid='*';
System altered.
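To check what the parameter is currently set to, a sketch (querying the X$ views as SYS, since hidden parameters usually don’t show up in V$PARAMETER unless explicitly set):

-- Current value of the hidden parameter _high_priority_processes:
SELECT i.ksppinm  AS name,
       v.ksppstvl AS value
  FROM x$ksppi i, x$ksppcv v
 WHERE i.indx = v.indx
   AND i.ksppinm = '_high_priority_processes';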

No magic, but the problem stopped happening. After that, we’ve been getting some warnings about clock synchronization across the cluster nodes in the CRS alerts, like this:

CRS-2409:The clock on host proddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

I believe it happens because VKTM lost priority. But it’s OK: the node evictions have stopped! 😀

Matheus.