Creating a Read-Only Account on Database with VPD or Label Security

Hi all,

This is an interesting case: tricky to understand at first, but easy to resolve.

The whole story started when a client asked for a Read-Only account (let’s call it RO_USER) with access to objects under another schema (let’s call it SCHEMA_OWNER). Easy going, right?

  • Create user
  • Grant select on SCHEMA_OWNER tables
  • Possibly grant execute on SCHEMA_OWNER procedures/packages/functions
  • Possibly private synonyms on RO_USER for SCHEMA_OWNER objects
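For reference, here is a minimal sketch of that standard setup (the password is a placeholder and the object names follow the examples used below):

SQL> create user RO_USER identified by "SomePassword";
SQL> grant create session to RO_USER;
SQL> grant select on SCHEMA_OWNER.TABLE_EXAMPLE to RO_USER;
SQL> grant execute on SCHEMA_OWNER.PKG_EXAMPLE to RO_USER;
SQL> create synonym RO_USER.TABLE_EXAMPLE for SCHEMA_OWNER.TABLE_EXAMPLE;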

However, when connecting with RO_USER and executing a query on a table, this is what happened:

select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

COUNT(*)
----------
0

When connecting with SCHEMA_OWNER and executing the same query:

select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

COUNT(*)
----------
9255013

Hm, in general, the known limitations for this approach are:

  • Private database links: If this is the issue, the only alternative is using a proxy connection. Trying this:
SQL> alter user SCHEMA_OWNER grant connect through RO_USER;

User altered.

SQL> conn RO_USER[SCHEMA_OWNER]/***********
Connected.
SQL> select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

  COUNT(*)
----------
   9255013

WORKING!

However, when checking the scenario, I noticed this is a real table, not a synonym or a view over a private database link. So why the difference?

Also, this alternative creates a problem of its own: the RO user would now be able to run DML on the SCHEMA_OWNER tables, so it is not Read-Only access anymore.
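If you tested the proxy approach and want to back it out, the authorization can be revoked right away:

SQL> alter user SCHEMA_OWNER revoke connect through RO_USER;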

Well, the other option:

  • VPD or Label Security: Limit access to data depending on the current schema. That’s a match:
SQL> select object_owner,object_name,policy_name,function, PACKAGE from dba_policies where object_name='TABLE_EXAMPLE';

OBJECT_OWNER	     OBJECT_NAME		    POLICY_NAME 		   FUNCTION	         PACKAGE
-------------------- ------------------------------ ------------------------------ --------------------- ---------
SCHEMA_OWNER	     TABLE_EXAMPLE		     POLICY_EXAMPLE	           FCN_TABLE_EXAMPLE	 PKG_EXAMPLE

OK!

So what to do?

Here is the trick: https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvpoli.htm#i1006985

The answer is EXEMPT ACCESS POLICY. As per the Oracle documentation: “[…] database users granted the EXEMPT ACCESS POLICY privilege, either directly or through a database role, are exempt from VPD enforcements.”

This is also valid for Datapump and Legacy Export as per MOS When Is Privilege “Exempt Access Policy” Needed For Export? (Doc ID 2339517.1).

Let’s check for it:

SQL> GRANT EXEMPT ACCESS POLICY TO RO_USER;
Grant succeeded.

SQL> select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;
COUNT(*)
----------
 9255015

And what about Label Security?

That was not my case, as you could see, but as per the same Oracle documentation: “They are also exempt from some Oracle Label Security policy enforcement controls, such as READ_CONTROL and CHECK_CONTROL, regardless of the export mode, application, or utility used to access the database or update its data.”

I hope it helps you!

The cluster upgrade state is [ROLLING PATCH] with correct Patch Level in all nodes

Hi all,
When performing a spot health check in a client environment, I got this:

[oracle@dbserver1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@dbserver1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [26717470].

Checking the applied patches on the 3 nodes, they all match:

+ASM1@dbserver1 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM2@dbserver2 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM3@dbserver3 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

So, most likely some patching operation did not complete cleanly. Here is the quick fix:

$GI_HOME/bin/clscfg -patch
$GI_HOME/bin/crsctl stop rollingpatch
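To confirm the fix, query the active version again; on a healthy cluster the upgrade state should be back to [NORMAL] (illustrative output, based on the values above):

[oracle@dbserver1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [26717470].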

Once done, issue fixed!
I hope it helps!

AWS RDS: Reading Oracle Traces from SQL*Plus

Hi all,

One more about AWS services. Let’s say you need to read trace files from RDS: how do you do it?

Here is an example of how to list and read those files based on the directories configured in the database. To see the existing directories:

SQL> select * from dba_directories;

OWNER      DIRECTORY_NAME            DIRECTORY_PATH                                               ORIGIN_CON_ID
---------- ------------------------- ------------------------------------------------------------ -------------
SYS        BDUMP                     /rdsdbdata/log/trace                                                      0
SYS        ADUMP                     /rdsdbdata/log/audit                                                      0
SYS        OPATCH_LOG_DIR            /rdsdbbin/oracle/QOpatch                                                  0
SYS        OPATCH_SCRIPT_DIR         /rdsdbbin/oracle/QOpatch                                                  0
SYS        DATA_PUMP_DIR             /rdsdbdata/datapump                                                       0
SYS        OPATCH_INST_DIR           /rdsdbbin/oracle/OPatch                                                   0
SYS        TMP                       /rdsdbdata/userdirs/01                                                    0

7 rows selected.

Once the needed directory is identified, we can list the files as follows:

SQL> select * from table (rdsadmin.rds_file_util.listdir( p_directory => 'BDUMP')) order by mtime;

FILENAME TYPE FILESIZE MTIME
---------------------------------------- ---------- ---------- ---------
CPROD1_ora_48800.trm file 73 24-DEC-18
CPROD1_ora_48800.trc file 998 24-DEC-18
CPROD1_ora_86597.trc file 998 24-DEC-18
CPROD1_ora_86597.trm file 73 24-DEC-18
CPROD1_ora_7999.trc file 881 24-DEC-18
CPROD1_ora_7999.trm file 71 24-DEC-18
CPROD1_ora_7997.trm file 71 24-DEC-18
CPROD1_ora_7997.trc file 881 24-DEC-18
CPROD1_ora_8240.trm file 71 24-DEC-18
CPROD1_ora_8240.trc file 881 24-DEC-18
CPROD1_ora_8381.trm file 72 24-DEC-18
CPROD1_ora_8381.trc file 995 24-DEC-18
CPROD1_ora_8540.trc file 881 24-DEC-18
CPROD1_ora_8540.trm file 71 24-DEC-18
CPROD1_ora_9876.trc file 881 24-DEC-18
CPROD1_ora_9876.trm file 71 24-DEC-18
CPROD1_ora_11142.trm file 72 24-DEC-18
CPROD1_ora_11142.trc file 883 24-DEC-18
CPROD1_ora_11182.trc file 883 24-DEC-18
CPROD1_ora_11182.trm file 72 24-DEC-18
CPROD1_ora_55077.trm file 73 24-DEC-18
CPROD1_ora_55077.trc file 997 24-DEC-18
CPROD1_ora_92260.trm file 73 24-DEC-18
CPROD1_ora_92260.trc file 997 24-DEC-18
CPROD1_ora_123869.trc file 1000 24-DEC-18
CPROD1_ora_123869.trm file 74 24-DEC-18
CPROD1_ora_41305.trc file 998 24-DEC-18
CPROD1_ora_41305.trm file 73 24-DEC-18
CPROD1_j002_3293.trc file 114049 24-DEC-18
CPROD1_j002_3293.trm file 370 24-DEC-18
CPROD1_mmon_71739.trc file 7511332 24-DEC-18
CPROD1_mmon_71739.trm file 738330 24-DEC-18
CPROD1_ora_92888.trc file 997 24-DEC-18
CPROD1_ora_92888.trm file 73 24-DEC-18
trace/ directory 323584 24-DEC-18
alert_CPROD1.log file 204808 24-DEC-18
CPROD1_ora_70145.trc file 1470 24-DEC-18
CPROD1_ora_70145.trm file 109 24-DEC-18

3845 rows selected.

With the trace file name confirmed, you can see the content with the following:

SQL> select * from table (rdsadmin.rds_file_util.read_text_file( p_directory => 'BDUMP', p_filename => 'trace/CPROD1_s003_81573.trc'));

From the trace file:
=========================
..................................................................................
index undo for leaf key operations
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x000e289e.51da.47
Dump kdilk : itl=98, kdxlkflg=0x1 sdc=0 indexid=0x1c53db block=0x0019bdc0
(kdxlpu): purge leaf row
key :(24):
07 78 76 0c 17 17 09 32 08 c7 07 1a 02 28 15 01 18 06 00 1a 3c 99 00 1e

File 3 is not mirrored.

End dump previous blocks for kdsgrp
* kdsgrp1-2: ***********************************************
kdsDumpState: RID context dump

45511581 rows selected.
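If you need to keep a local copy of the trace, a simple option is spooling the query output from SQL*Plus (a minimal sketch; the local path is just an example):

SQL> set pagesize 0 linesize 32767 heading off feedback off trimspool on
SQL> spool /tmp/CPROD1_s003_81573.trc
SQL> select * from table (rdsadmin.rds_file_util.read_text_file( p_directory => 'BDUMP', p_filename => 'trace/CPROD1_s003_81573.trc'));
SQL> spool off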

Also, as a side note, something that may help you is filtering the listing with a LIKE clause:

SQL> select * from table (rdsadmin.rds_file_util.listdir( p_directory => 'BDUMP')) where filename like '%CPROD1_s003_81573.trc%';

FILENAME TYPE FILESIZE MTIME
---------------------------------------- ---------- ---------- ---------
CPROD1_s003_81573.trc file 1948134047 23-DEC-18

I hope it helps! See you next post!

ORA-19665: size % in file header does not match actual file size of %

That’s an unexpected message to get, right?
I got it together with an ORA-07445: exception encountered: core dump [kcflfi()+1016] [SIGFPE] [Integer divide by zero] [0x10047EF18] [] []

What’s next?
After some checks, I found the following (the related trace gave me the file name, which matches file_id 106):

SQL> select file_name, bytes from dba_data_files where file_id=106;

FILE_NAME                                         BYTES 
------------------------------------------------ --------------
+DATA/MYDB/DATAFILE/DATAFILE_XX.558.1015447173   14529069056

SQL> select name, bytes from v$datafile where file#= 106;

NAME                                             BYTES 
------------------------------------------------ --------------
+DATA/MYDB/DATAFILE/DATAFILE_XX.558.1015447173   14529067281

This means the data dictionary (dba_data_files) and the control file (v$datafile) report different sizes for the same datafile.
Looking at MOS, it seems to be a match to ORA-07445: Exception Encountered (Doc ID 1958870.1).

How to resolve it?

SQL> alter database datafile 106 offline;
RMAN> restore datafile 106;
RMAN> recover datafile 106;
SQL> alter database datafile 106 online;
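After the restore/recover, it is worth re-checking that both views now agree on the size (same queries as before, combined):

SQL> select 'dba_data_files' src, bytes from dba_data_files where file_id=106
  2  union all
  3  select 'v$datafile' src, bytes from v$datafile where file#=106;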

This resolved my case, fixing the views.

BE CAREFUL:

  • Make sure you have a backup before dropping the datafile.
  • Make sure you can take the datafile offline, or proceed during a non-business window.
  • Follow change procedures for Production, of course. Things may get wild.

And what if I don’t have a backup?
1. You may want to take it. It may not work, though, considering the original mismatch.
2. Export/Import logically (sketched below):
– Export the data from the related tablespace (Datapump or Legacy Export, check for limitations and datatypes).
– Drop the tablespace and recreate it.
– Import the data back.
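A rough sketch of that logical path, assuming Data Pump and an existing directory object (tablespace, directory, and file names are placeholders):

expdp system directory=DIR_BKP dumpfile=my_tbs.dmp logfile=exp_my_tbs.log tablespaces=MY_TBS
SQL> drop tablespace MY_TBS including contents and datafiles;
SQL> create tablespace MY_TBS datafile '+DATA' size 10g autoextend on;
impdp system directory=DIR_BKP dumpfile=my_tbs.dmp logfile=imp_my_tbs.log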

As usual, test it in a non-production environment to validate your plan and commands.

I hope it helps!

Automating APEX Applications Backup

Hi all,

As you might be aware (and as already posted by me here), to export APEX Applications the APEXExport tool should be used.
Here we’ll also see some alternatives to accomplish that.

Plan A) Using the same approach discussed in the previous post to export all Applications in an instance:

That said, for any backup script, 3 parts need to be backed up:

1. With the database export utilities (Datapump or Legacy Export; be aware of the limitations of each), dump the schemas your APEX applications need to run:

expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log

2. Run the APEXExport using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

3. Run the APEXExport using “-instance” which will generate a f*.sql script for every application and shared component.

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Plan B) Take separate Backups for each Application or Workspace

Here is the interesting part: you can also take backups by application ID or workspace ID, with the following:

1. Exporting by ApplicationID:

APEXExport -db hostname:listenerport:dbservicename -user -password -applicationid 31500

2. Exporting by WorkspaceID:

APEXExport -db hostname:listenerport:dbservicename -user -password -workspaceid 9999

But how to get the workspace IDs? You can use the following SQL statement in SQL Workshop:

select wwv_flow_api.get_security_group_id from dual;

Now that you have all the basic exporting steps, it’s up to you to build the script that best fits your needs.
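Just as an illustration, a minimal wrapper tying the Plan A steps together could look like the sketch below (paths, credentials, and connect strings are placeholders, and the APEXExport classpath/environment setup is assumed to be in place):

#!/bin/bash
# Hypothetical backup wrapper - adjust environment, credentials and destinations.
BKP_DIR=/backup/apex/$(date +%Y%m%d)
mkdir -p ${BKP_DIR} && cd ${BKP_DIR}
# 1. Dump the application schema(s)
expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log
# 2. Export all workspaces (w*.sql files)
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace
# 3. Export all applications and shared components (f*.sql files)
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance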

I hope it helps!

Additional notes:

  • The workspace export should export all of the shared components from the workspaces.
  • This does not mention RESTful services, but if using APEXExport from 4.2.4 or higher, they will be included.

Additional Reference:

  • APEXExport Gives: Java.lang.NoClassDefFoundError: Oracle/apex/APEXExport (Doc ID 2265534.1)

Moving APEX Applications Repository

Hello,
Most likely you landed here because you need to migrate APEX Applications/Workspaces from one database to another, correct? You are in the right place!

We’ll use the APEXExport tool for this.

Here you have a quick summary of the steps to use the tool, assuming:

  • The source APEX instance is at least 4.2.4.
  • The target instance must be 4.2.4 or higher.

Also, be aware that the APEX installation itself (the APEX and FLOWS_FILES schemas) cannot be exported in this manner, or in any other manner.
So APEX itself must already exist on the target; what we’ll do is migrate the workspaces from one installation to another.

To Export:

1. Use database Export utilities (Datapump or Legacy Export, be aware of the limitations of each) to generate a dumpfile with all DB objects and data that your APEX applications need to run.
This will normally be the objects in the schemas that your APEX workspaces are dependent upon.

2. Run the APEXExport twice as follows:

2.1 First run it using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

2.2 Now run it using “-instance” which will generate a f*.sql script for every application and shared component.

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Note that the workspace export should export all of the shared components from the workspaces.
Note that this does not mention RESTful services, but if using APEXExport from 4.2.4 or higher, they will be included.

To Import:

1. Import the dumpfile generated for the regular database schemas your APEX applications use.
2. Import the workspaces via SQL*Plus as follows:
2.1 connect sys / as sysdba
2.2 alter session set current_schema = APEX_040200;
2.3 run the scripts to create the workspaces

@<script_generated>.sql

This will create the workspaces with the same workspace IDs as the source DB.
This also prevents the need to modify the workspace ID contained in each of the application exports.

3. From the same session as above, accomplish the import of each of the application exports.

SQL> @<application_export_1>.sql
SQL> @<application_export_2>.sql
[...]
SQL> @<application_export_n>.sql
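After the import, a quick sanity check against the APEX dictionary views helps to confirm everything landed (run as a privileged user; a simple sketch):

SQL> select workspace, workspace_id from apex_workspaces order by 1;
SQL> select workspace, application_id, application_name from apex_applications order by 2;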

I hope it helps!

OEM: Metric “Tablespace Allocation Metric” not Collected – Agent is Running but Not Ready

Hi all,

That’s an interesting case with OEM. A client reported the metric “Tablespace Allocation Metric” is not being updated on OEM for a specific database. In this case, the last gathering was in Nov/2020, as you’ll see.

When checking it, the first step, as usual, was checking the OEM agent status, and here is what I got:

oracle:dbserver@mydb02 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : (unknown)
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 61641
Parent Process ID      : 61394
Currently initializing component             : Target Manager (2) (54 of 70)
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
        TargetID = oracle_pdb.c4test_PDB1 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB2 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = oracle_pdb.c4test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c6test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB3 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = rac_database.c1prod - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 


Target Manager Current Activity              : Compute Dynamic Properties (total operations: 37, active: 7, finished: 28)


Current target operations in progress
-------------------------------------
        oracle_pdb.c6test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_PDB1 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB2 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB3 - LOAD_TARGET_DYNAMIC running for 120 seconds
        rac_database.c1test - LOAD_TARGET_DYNAMIC running for 120 seconds


Dynamic property executor tasks running
------------------------------


---------------------------------------------------------------
Agent is Running but Not Ready

Agent not ready; that’s interesting.
Trying then to clear the agent state, as this has solved similar cases before:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl clearstate agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD clearstate completed successfully

Now running the problematic metric manually:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection error:The agent is running but is currently not ready to accept client requests

OK, trying to just force an upload:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl upload
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload error:The agent is running but is currently not ready to accept client requests

Maybe something is stuck, so let’s kill the process and start all over again:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ...

 stopped.

Making sure we have no agent process running:

oracle:dbserver02@c1test2 /u01/app/oracle:  ps -ef | grep java | grep agent
oracle:dbserver02@c1test2 /u01/app/oracle:

Also adjusting the threshold for the metric collection:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl  setproperty agent -a
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded
oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl setproperty agent -allow_new -name _cancelThread  -value 210
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded

And starting the agent:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting agent ........................................................................................................................... started but not ready.

On the Agent log:

oracle:dbserver02@c1test2 /u01/app/oracle: tail /u01/app/oracle/product/agent12c/agent_inst/sysman/log/gcagent.log
oracle:dbserver02@c1test2 /u01/app/oracle:
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - StatusAgentAction: satisfyRequest Begin
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - Agent Overall Health: 0
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - StatusAgentAction: satisfyRequest End
Response:
initializing
2020-12-20 13:15:03,457 [35:686116F5] INFO - >>> Reporting response: StatusAgentResponse (initializing) (request id 1) <<<
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - closing request input stream for "StatusAgentRequest (AGENT timeout:300)"
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - overriding the buffer with a thread local copy (size: 8192b)
2020-12-20 13:15:03,458 [35:686116F5] DEBUG - closing request output stream for "StatusAgentRequest (AGENT timeout:300)"
2020-12-20 13:15:03,458 [35:686116F5] DEBUG - StatusAgentAction.call() is complete.
2020-12-20 13:15:03,458 [35:B5326F3F:HTTP Listener-35 - /emd/lifecycle/main/] DEBUG - removing entry for emdctl@18081@dbserver02=>[160849530330001] completely
2020-12-20 13:15:03,458 [35:B5326F3F] DEBUG - requests executed.
2020-12-20 13:15:03,458 [35:B5326F3F] DEBUG - HTTPListener Threads deallocated resource back to LifecycleRequestHandler partition
2020-12-20 13:15:03,458 [35:3C0B0663:HTTP Listener-35] DEBUG - using connection SCEP@1197017148 [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 5 bytesProduced = 26
2020-12-20 13:15:03,780 [35:3C0B0663] DEBUG - using connection SCEP@1197017148  [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 26 bytesProduced = 5
2020-12-20 13:15:06,986 [31:858161EB] DEBUG - Submitting task SchedulerHeartbeat for execution
2020-12-20 13:15:06,986 [395:1AE716D8] DEBUG - Begin task SchedulerHeartbeat on Thread: GC.SysExecutor.8
2020-12-20 13:15:06,986 [395:F944F4C8:GC.SysExecutor.8 (SchedulerHeartbeat)] DEBUG - Scheduler heartbeat
2020-12-20 13:15:06,988 [395:F944F4C8] DEBUG - Scheduling next SchedulerHeartbeat after delay 29998 including periodShift of 0 milliseconds
2020-12-20 13:15:06,988 [395:1AE716D8:GC.SysExecutor.8] DEBUG - End task SchedulerHeartbeat
2020-12-20 13:15:07,016 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:07,017 [396:1AE716D9] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.9
2020-12-20 13:15:07,017 [396:391F60D7:GC.SysExecutor.9 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:07,017 [396:1AE716D9:GC.SysExecutor.9] DEBUG - End task HeapMonitorTask
2020-12-20 13:15:12,017 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:12,017 [37:1AE716D0] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.0
2020-12-20 13:15:12,017 [37:FE21F10E:GC.SysExecutor.0 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:12,017 [37:1AE716D0:GC.SysExecutor.0] DEBUG - End task HeapMonitorTask
2020-12-20 13:15:12,189 [33:6D553CF6] DEBUG - HTTPListener Threads deallocated resource back to LifecycleRequestHandler partition
2020-12-20 13:15:12,190 [35:3C0B0663] DEBUG - using connection SCEP@1611645943  [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 100 bytesProduced = 121
2020-12-20 13:15:12,191 [35:7107E334:HTTP Listener-35 - /emd/persistence/main/] DEBUG - HTTPListener Threads allocated resource from LifecycleRequestHandler partition
2020-12-20 13:15:17,017 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:17,018 [45:1AE716D1] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.1
2020-12-20 13:15:17,018 [45:CBCC52CF:GC.SysExecutor.1 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:17,018 [45:1AE716D1:GC.SysExecutor.1] DEBUG - End task HeapMonitorTask

Following MOS Enterprise Manager12c: Oracle Database Tablespace Monthly Space Usage shows no data (Doc ID 1536654.1), a few changes were made:

$/AGENT_INST/bin/emctl setproperty agent -allow_new -name MaxInComingConnections -value 150
$/AGENT_INST/bin/emctl setproperty agent -allow_new -name _cancelThread  -value 210
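To confirm the new values took effect, they can be read back before restarting the agent:

$/AGENT_INST/bin/emctl getproperty agent -name MaxInComingConnections
$/AGENT_INST/bin/emctl getproperty agent -name _cancelThread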

The status before the change:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : (unknown)
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 61641
Parent Process ID      : 61394
Currently initializing component             : Target Manager (2) (54 of 70)
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
        TargetID = oracle_pdb.c4test_PDB1 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB2 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = oracle_pdb.c4test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c6test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB3 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = rac_database.c1test - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 

Target Manager Current Activity              : Compute Dynamic Properties (total operations: 37, active: 7, finished: 28)

Current target operations in progress
-------------------------------------
        oracle_pdb.c6test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_PDB1 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB2 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB3 - LOAD_TARGET_DYNAMIC running for 120 seconds
        rac_database.c1test - LOAD_TARGET_DYNAMIC running for 120 seconds

Dynamic property executor tasks running
------------------------------


---------------------------------------------------------------
Agent is Running but Not Ready

And the status after the change:

oracle:dbserver02@c1test2 /u01/app/oracle:  /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 56994
Parent Process ID      : 56654
Agent URL              : https://dbserver02:3872/emd/main/
Local Agent URL in NAT : https://dbserver02:3872/emd/main/
Repository URL         : https://omsweb:4903/empbs/upload
Started at             : 2020-12-20 13:08:35
Started by user        : oracle
Operating System       : Linux version 3.10.0-957.27.2.el7.x86_64 (amd64)
Last Reload            : (none)
Last successful upload                       : 2020-12-20 13:40:41
Last attempted upload                        : 2020-12-20 13:40:41
Total Megabytes of XML files uploaded so far : 1.02
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 10.85%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2020-12-20 13:40:40
Last successful heartbeat to OMS             : 2020-12-20 13:40:40
Next scheduled heartbeat to OMS              : 2020-12-20 13:41:40

---------------------------------------------------------------
Agent is Running and Ready

Great! Agent issue resolved.
However, the metric is still not being gathered, not even after running it manually:

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_CDBROOT:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent scheduler | grep tbspAllocation
2020-12-28 23:05:14.562 : rac_database:c1test:tbspAllocation_cdb
2020-12-29 03:07:21.988 : rac_database:c4prod:tbspAllocation_cdb
2020-12-29 03:08:11.888 : rac_database:c6prod:tbspAllocation_cdb
2020-12-29 03:09:39.103 : rac_database:c2prod:tbspAllocation_cdb
2020-12-29 03:09:55.372 : rac_database:c3prod:tbspAllocation_cdb

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

On the OEM Repository database:

SQL> select TARGET_NAME,TARGET_TYPE,TARGET_GUID,max(ROLLUP_TIMESTAMP )
from mgmt$metric_daily where TARGET_NAME like '%c1test%'
and TARGET_TYPE='oracle_pdb'
and METRIC_NAME='tbspAllocation'
group by TARGET_NAME,TARGET_TYPE,TARGET_GUID;  2    3    4    5

TARGET_NAME                    TARGET_TYPE          TARGET_GUID                      MAX(ROLLUP_TIMESTAM
------------------------------ -------------------- -------------------------------- -------------------
c1test_DW         oracle_pdb           7B1DF5DD4555EB978330A6D522004D44 2020-11-12 00:00:00
c1test_CDBROOT    oracle_pdb           4CE72911295C0287E053837F649B7D0E 2020-11-12 00:00:00


SQL> select TARGET_NAME,TARGET_TYPE,TARGET_GUID,ROLLUP_TIMESTAMP from mgmt$metric_daily where TARGET_NAME like '%c1test%' and TARGET_TYPE='oracle_pdb' and ROLLUP_TIMESTAMP>sysdate-3 order by 4

TARGET_NAME                    TARGET_TYPE          ROLLUP_TIMESTAMP       AVERAGE
------------------------------ -------------------- ------------------- ----------
c1test_DW         oracle_pdb           2020-11-06 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-07 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-08 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-09 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-10 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-11 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-12 00:00:00  1575.9375
c1test_CDBROOT    oracle_pdb           2020-11-05 00:00:00 37581.5625


TARGET_NAME                    TARGET_TYPE          ROLLUP_TIMESTAMP       AVERAGE
------------------------------ -------------------- ------------------- ----------
c1test_CDBROOT    oracle_pdb           2020-11-08 00:00:00  227138.75
c1test_CDBROOT    oracle_pdb           2020-11-09 00:00:00 455087.688
c1test_CDBROOT    oracle_pdb           2020-11-10 00:00:00 278230.875
c1test_CDBROOT    oracle_pdb           2020-11-11 00:00:00 208727.188
c1test_CDBROOT    oracle_pdb           2020-11-12 00:00:00 454964.063

OK, so in summary: even after fixing all the issues on the OEM side, with everything apparently running fine, the database metrics were still not being updated.

Long story short: after some investigation, I bumped into MOS Database Hangs With Simple Queries like on view dba_data_files & dba_free_space (Doc ID 2665935.1).
It turned out to be a match. So, proceeding with the recommendation:

SQL> alter session set container=DW;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 DW                             READ WRITE NO
SQL> select count(*) from dba_recyclebin;

  COUNT(*)
----------
     28522

SQL> purge recyclebin;

Recyclebin purged.

SQL> purge dba_recyclebin;

DBA Recyclebin purged.
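And confirming the recycle bin is indeed empty now (after the purge, this is expected to return zero):

SQL> select count(*) from dba_recyclebin;

  COUNT(*)
----------
         0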

Once done, all issues were solved and the metric started being collected again.

Some additional reference:

  • Database Tablespace Metrics: Tablespace Allocation Is Not Collected (Metric tbspAllocation) (Doc ID 404692.1)
  • EM 12c : emctl start agent Fails With Error ‘Starting agent … started but not ready’ (Doc ID 1591477.1)
  • EM12c : emctl start / status agent ‘Agent Running but Not Ready’ ‘ERROR – The agent is overloaded [current requests: 30]’ Reported in gcagent.log (Doc ID 1546529.1)

I hope it helps!

Dataguard Broker: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

This seems like a simple message to be fixed, right?

The parameter is different between the broker configuration and the database parameters, most likely because it was changed directly on the database after the DG Broker configuration was created or the database was added to it. However, there is an interesting twist in this case.

Let’s check on the error first. On the primary database side of the broker configuration:

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Properties:
DGConnectIdentifier = 'myprodDB'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/', +DATADG3/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
WARNING

And checking the status on the standby database server:

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
Warning: ORA-16809: multiple warnings detected for the database

mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
WARNING (status updated 12 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

OK, let’s now check from the database parameter perspective on the primary:

SQL> show parameter convert
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string +DATA/MYDATABASE/DATAFILE/, +
DATADG/myprodDB/DATAFILE
log_file_name_convert string +DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, 
                             +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/
pdb_file_name_convert string

Comparing the settings:

  • LogFileNameConvert=’+DATA/MYDATABASE/ONLINELOG/, +DATA/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/,+DATA3/MYDATABASE/ONLINELOG/’, +DATADG3/myprodDB/ONLINELOG/’
  • log_file_name_convert=+DATA/MYDATABASE/ONLINELOG/,+DATA/myprodDB/ONLINELOG/,+DATA/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/,+DATA/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/

It seems all right!

What is the problem then?

That’s the interesting part. Checking on MOS Usage and Limitation of db_file_name_convert and log_file_name_convert (Doc ID 1367014.1):

When using the Data Guard Broker the Values for these Parameters are limited to 512 Bytes (Characters) due to the Limit of the corresponding Data Guard Broker Properties ‘DbFileNameConvert’ and ‘LogFileNameConvert’.

That’s new to me! So, possible alternatives are:

  • Use OMF (Oracle Managed Files)
  • Use the same File Structure on both Sites
  • Rename and create Datafiles/RedoLog Files manually
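If you suspect you are hitting this limit, a quick way to measure the current values on the database side is:

SQL> select name, length(value) from v$parameter
  2  where name in ('db_file_name_convert','log_file_name_convert');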

So what did I do in my case?

We checked and confirmed with the client that the only locations for the logfiles are DATA and DATA2 (multiplexed). So the fix was easy:

edit database 'myprodDB' set property 'LogFileNameConvert' = "+DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/,+DATA/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/";

Once done:

DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 3 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Database Status:
SUCCESS

DGMGRL> show database mySTDB;

Database - mySTDB

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 325.00 KByte/s
Real Time Query: OFF
Instance(s):
mySTDB

Database Status:
SUCCESS

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Properties:
DGConnectIdentifier = 'myprodDB.'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/,+DATA/2MYDATABASE/ONLINELOG/,+DATADG2/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

Did you know that?
I hope it helps!

Relying on Guaranteed Restore Points? Be careful!

Hi all,

Are you relying on Guaranteed Restore Points (GRP) as a fallback plan for your migration or upgrade strategy? Be careful!

When performing a non-Prod upgrade with the AutoUpgrade tool, after completing the upgrade I wanted to roll it back and go through the process again. This is what happened:

SQL> startup
ORA-29702: error occurred in Cluster Group Service operation

When looking into it, I found this blog post from Mike that I had missed last year: https://mikedietrichde.com/2020/11/13/ora-29702-and-your-instance-does-not-startup-in-the-cluster-anymore/

This means my database is not starting anymore! Oh man, glad that I’m in the testing phase!

This is caused by Bug 31561819 – Incompatible maxmembers at CRSD Level Causing Database Instance Not Able to Start.

As per Mike’s post, “you don’t need to even restore or flashback a database to hit this error. A simple instance in NOMOUNT state leads to the same error. Without even any datafile.”

The bug is fixed on:

  • 19.9.0.0.201020 (Oct 2020) OCW RU
  • 18.12.0.0.201020 (Oct 2020) OCW RU
  • 12.2.0.1.201020 (Oct 2020) OCW RU

That said, you should include this patch BEFORE starting any move! Do it right away if you are on one of these versions!

Also, be aware of the latest change regarding Restore Points propagation on 19c, as per MOS Automatic Propagate Restore Points from Primary to Standby site in 19C (Doc ID 2463082.1)

In my case, the usage is exactly for a 12.1 -> 19c upgrade, so the fix is not even available for the source version (no Extended Support in place). Therefore, we had to think of alternative fallback plans, like a physical standby. But that is a topic for another post.

So for YOU:

  • Apply this patch if you can!
  • If not, be very careful on the fallback plans and as usual: Test, Test and Test!

See you next post!

Opatchauto Failing on “CheckActiveFilesAndExecutables” during Prerequisite Check

Hi all,
So, very recently when applying the 2021 January CPU in a client environment, the following happened:

[root@dbserver01 32226239]# $ORACLE_HOME/OPatch/opatchauto apply

OPatchauto session is initiated at Sun Mar 14 03:00:06 2021

System initialization log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchautodb/systemconfig2021-03-14_03-00-08AM.log.

Session log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/opatchauto2021-03-14_03-00-13AM.log
The id for this session is 1J89

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/db
Patch applicability verified successfully on home /u01/app/oracle/product/19c/db


Executing patch validation checks on home /u01/app/oracle/product/19c/db
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/db


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/db
SQL patch applicability verified successfully on home /u01/app/oracle/product/19c/db


Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/grid
Patch applicability verified successfully on home /u01/app/oracle/product/19c/grid


Executing patch validation checks on home /u01/app/oracle/product/19c/grid
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/grid


Preparing to bring down database service on home /u01/app/oracle/product/19c/db
Successfully prepared home /u01/app/oracle/product/19c/db to bring down database service


Bringing down database service on home /u01/app/oracle/product/19c/db
Following database has been stopped and will be restarted later during the session: er1pprd,obiee
Database service successfully brought down on home /u01/app/oracle/product/19c/db


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/oracle/product/19c/grid
Prepatch operation log file location: /u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_03-06-15AM.log
CRS service brought down successfully on home /u01/app/oracle/product/19c/grid


Start applying binary patch on home /u01/app/oracle/product/19c/db
Failed while applying binary patches on home /u01/app/oracle/product/19c/db

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : dbserver01->/u01/app/oracle/product/19c/db Type[sidb]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/db, host: dbserver01.
Command failed: /u01/app/oracle/product/19c/db/OPatch/opatchauto apply /ora02/soft/jan21cpu/32126842/32226239 -oh /u01/app/oracle/product/19c/db -target_type oracle_database -binary -invPtrLoc /u01/app/oracle/product/19c/grid/oraInst.loc -jre /u01/app/oracle/product/19c/grid/OPatch/jre -persistresult /u01/app/oracle/product/19c/db/opatchautocfg/db/sessioninfo/sessionresult_dbserver01_sidb_2.ser -analyzedresult /u01/app/oracle/product/19c/db/opatchautocfg/db/sessioninfo/sessionresult_analyze_dbserver01_sidb_2.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_03-17-58AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Mar 14 03:19:25 2021
Time taken to complete the session 19 minutes, 19 seconds

opatchauto failed with error code 42

OK, going step by step, let's see what we have in the referenced log:

[Mar 14, 2021 3:33:32 AM] [INFO] Start fuser command /sbin/fuser /u01/app/oracle/product/19c/grid/bin/expdp at Sat Mar 14 03:33:32 PDT 2021
[Mar 14, 2021 3:33:32 AM] [INFO] Finish fuser command /sbin/fuser /u01/app/oracle/product/19c/grid/bin/expdp at Sat Mar 14 03:33:32 PDT 2021
[Mar 14, 2021 3:33:32 AM] [INFO] Following active executables are not used by opatch process :


Following active executables are used by opatch process :
/u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
[Mar 14, 2021 3:33:32 AM] [INFO] Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:


Following active executables are not used by opatch process :


Following active executables are used by opatch process :
/u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
[Mar 14, 2021 3:33:33 AM] [INFO] UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Mar 14, 2021 3:33:33 AM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Mar 14, 2021 3:33:33 AM] [INFO] Finishing UtilSession at Sat Mar 14 03:33:33 PDT 2021
[Mar 14, 2021 3:33:33 AM] [INFO] Log file location: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/oapatch_2021-03-14_03-06-15AM.log

This is an interesting situation.
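A useful check at this point is confirming which processes are actually holding that library, using the same fuser call OPatch runs (path taken from the log above):

[root@dbserver01 ~]# /sbin/fuser -v /u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1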

After some validation, making sure no service was online, the path was writable, and oracle and root had the required privileges and access, I found some relevant Oracle notes:

  • 19c Installation Fails with error “libclntsh.so: file format not recognized; treating as linker script” (Doc ID 2631283.1): Pointing to file corruption
  • While Applying a Weblogic Patch, opatch Fails with “Prerequisite check “CheckActiveFilesAndExecutables” failed” Error (Doc ID 2705809.1): Not a DB note and pointing to other processes using the files.
  • Opatch failure due to “CheckActiveFilesAndExecutables” as Remote registry service holding files (Doc ID 2462952.1): Remote registry holding the binaries.
  • Prerequisite Check “Checkactivefilesandexecutables” Failed (Doc ID 1281644.1): Missing patch prerequisite on 10g
  • Failed to apply PSU due to CheckActiveFilesAndExecutables check failure (Doc ID 2506432.1): SQLPlus holding the binaries.
  • [OCI]: Database System Patching Failed With Error “DCS-10001:Internal Error Encountered: Failure : Failed To Apply” And Opatch Log Shows “Prerequisite check “CheckActiveFilesAndExecutables” failed” (Doc ID 2687607.1): My case is neither OCI nor RAC.

So, no matches at all.

However, this last note gave me the hints I needed. From Doc ID 2687607.1, for RAC environments:

/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -unlock
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -init
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -prepatch
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -postpatch

So, in my case, a Standalone On-Premise Database (and GI):

/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -unlock
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -init
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -prepatch
[ Apply the patch! ]
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -postpatch

Check the output:

[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -unlock
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/haunlock__2021-03-14_04-00-35AM.log
2021/03/14 04:01:01 CLSRSC-347: Successfully unlock /u01/app/oracle/product/19c/grid
[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -init
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/roothas_2021-03-14_04-01-09AM.log
[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -prepatch
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_04-01-16AM.log
2021/03/14 04:01:27 CLSRSC-347: Successfully unlock /u01/app/oracle/product/19c/grid
2021/03/14 04:01:27 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.

And now resuming the Opatchauto:

[root@dbserver01 jan21cpu]# cd 32126842/32226239/
[root@dbserver01 32226239]# $ORACLE_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Sun Mar 14 04:02:07 2021
Session log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/opatchauto2021-03-14_04-02-10AM.log
Resuming existing session with id 1J89

Start applying binary patch on home /u01/app/oracle/product/19c/db
Binary patch applied successfully on home /u01/app/oracle/product/19c/db


Start applying binary patch on home /u01/app/oracle/product/19c/grid

Binary patch applied successfully on home /u01/app/oracle/product/19c/grid


Performing postpatch operations on CRS - starting CRS service on home /u01/app/oracle/product/19c/grid
Postpatch operation log file location: /u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_04-27-58AM.log
CRS service started successfully on home /u01/app/oracle/product/19c/grid


Preparing home /u01/app/oracle/product/19c/db after database service restarted
No step execution required.........


Trying to apply SQL patch on home /u01/app/oracle/product/19c/db
SQL patch applied successfully on home /u01/app/oracle/product/19c/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:dbserver01
SIDB Home:/u01/app/oracle/product/19c/db
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218663
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /ora02/soft/jan21cpu/32126842/32226239/29340594
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /ora02/soft/jan21cpu/32126842/32226239/32240590
Reason: This patch is not applicable to this specified target type - "oracle_database"


==Following patches were SUCCESSFULLY applied:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-02-36AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32222571
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-02-36AM_1.log


Host:dbserver01
SIHA Home:/u01/app/oracle/product/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /ora02/soft/jan21cpu/32126842/32226239/29340594
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218663
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32222571
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32240590
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

OPatchauto session completed at Sun Mar 14 04:31:48 2021
Time taken to complete the session 29 minutes, 43 seconds

And here is the relevant point: this has been happening to me on several environments and servers over recent weeks, always with the 2021 January CPU.
My guess is that this might have something to do with this CPU's binary set or, most likely, with the latest OPatch version:

[oracle@dbserver01 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.24

I hope it helps you as well!