Creating a Read-Only Account on Database with VPD or Label Security

Hi all,

This is an interesting case: tricky to understand, but easy to resolve.

The whole story started when a client asked for a Read-Only account (let’s call it RO_USER) with access to objects under another schema (let’s call it SCHEMA_OWNER). Easy going, right?

  • Create user
  • Grant select on SCHEMA_OWNER tables
  • Possibly grant execute on SCHEMA_OWNER procedures/packages/functions
  • Possibly create private synonyms under RO_USER for the SCHEMA_OWNER objects (see the sketch below)
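
A minimal sketch of that setup (using the placeholder names from this post; SOME_PACKAGE is a hypothetical object name, and the synonym step assumes you run it with DBA privileges):

create user RO_USER identified by "********";
grant create session to RO_USER;
grant select on SCHEMA_OWNER.TABLE_EXAMPLE to RO_USER;
grant execute on SCHEMA_OWNER.SOME_PACKAGE to RO_USER;
create synonym RO_USER.TABLE_EXAMPLE for SCHEMA_OWNER.TABLE_EXAMPLE;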

However, when connecting with RO_USER and executing a query on a table, that’s what happened:

select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

COUNT(*)
----------
0

When connecting with SCHEMA_OWNER and executing the same query:

select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

COUNT(*)
----------
9255013

Hm, in general, the known limitations for this approach are:

  • Private database links: a private database link can only be used by its owning schema, so if this is the issue, the only alternative is using a proxy connection. Trying this:
SQL> alter user SCHEMA_OWNER grant connect through RO_USER;

User altered.

SQL> conn RO_USER[SCHEMA_OWNER]/***********
Connected.
SQL> select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

  COUNT(*)
----------
   9255013

WORKING!

However, when checking the scenario, I noticed this is a real table, not a synonym or view over a private database link, so that limitation does not apply. Why the difference, then?

Also, this alternative creates a problem: the RO user would now have DML access on the SCHEMA_OWNER tables, so it is not read-only access anymore.

Well, the other option:

  • VPD or Label Security: Limit access to data depending on the current schema. That’s a match:
SQL> select object_owner,object_name,policy_name,function, PACKAGE from dba_policies where object_name='TABLE_EXAMPLE';

OBJECT_OWNER	     OBJECT_NAME		    POLICY_NAME 		   FUNCTION	         PACKAGE
-------------------- ------------------------------ ------------------------------ --------------------- ---------
SCHEMA_OWNER	     TABLE_EXAMPLE		     POLICY_EXAMPLE	           FCN_TABLE_EXAMPLE	 PKG_EXAMPLE

OK!
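
Just to illustrate why RO_USER sees zero rows: a VPD policy function returns a predicate that gets appended to every query against the protected table. The client's FCN_TABLE_EXAMPLE is their own code, but a hypothetical function with the same effect (restricting by current schema) could look like this:

create or replace function fcn_example_hypothetical (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2 is
begin
  -- No predicate (full access) for the schema owner, no rows for anyone else
  if sys_context('USERENV', 'CURRENT_SCHEMA') = 'SCHEMA_OWNER' then
    return null;
  else
    return '1=2';
  end if;
end;
/

With a policy like that in place, any user other than SCHEMA_OWNER gets COUNT(*) = 0, exactly as seen above.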

So what to do?

Here is the trick: https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvpoli.htm#i1006985

Using EXEMPT ACCESS POLICY. As per the Oracle documentation: “[…] database users granted the EXEMPT ACCESS POLICY privilege, either directly or through a database role, are exempt from VPD enforcements.”

This is also valid for Datapump and Legacy Export as per MOS When Is Privilege “Exempt Access Policy” Needed For Export? (Doc ID 2339517.1).

Let’s check for it:

SQL> GRANT EXEMPT ACCESS POLICY TO RO_USER;
Grant succeeded.

SQL> select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;
COUNT(*)
----------
 9255015

And what about Label Security?

That was not my case, as you could see, but as per the same Oracle documentation: “They are also exempt from some Oracle Label Security policy enforcement controls, such as READ_CONTROL and CHECK_CONTROL, regardless of the export mode, application, or utility used to access the database or update its data.”

I hope it helps you!

The cluster upgrade state is [ROLLING PATCH] with correct Patch Level in all nodes

Hi all,
When performing a spot health check in a client environment, I got this:

[oracle@dbserver1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@dbserver1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [26717470].

Checking the applied patches on the 3 nodes, they match:

+ASM1@dbserver1 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM2@dbserver2 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM3@dbserver3 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

So, most likely some patch operation did not complete cleanly. Here is the quick fix:

$GI_HOME/bin/clscfg -patch
$GI_HOME/bin/crsctl stop rollingpatch

Once done, issue fixed!
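
To confirm, you can query the active version again; after "crsctl stop rollingpatch" completes, the upgrade state should be back to [NORMAL], with output similar to:

$GI_HOME/bin/crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [26717470].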
I hope it helps!

ORA-19665: size % in file header does not match actual file size of %

That’s an unexpected message to get, right?
I got it together with an ORA-07445: exception encountered: core dump [kcflfi()+1016] [SIGFPE] [Integer divide by zero] [0x10047EF18] [] []

What’s next?
After some checks, I found the following (the file name came from the related trace and matches file_id 106):

SQL> select file_name, bytes from dba_data_files where file_id=106;

FILE_NAME                                         BYTES 
------------------------------------------------ --------------
+DATA/MYDB/DATAFILE/DATAFILE_XX.558.1015447173   14529069056

SQL> select name, bytes from v$datafile where file#= 106;

NAME                                             BYTES 
------------------------------------------------ --------------
+DATA/MYDB/DATAFILE/DATAFILE_XX.558.1015447173   14529067281

This means the data dictionary and the control file report different sizes for the datafile.
Looking at MOS, this seems to match ORA-07445: Exception Encountered (Doc ID 1958870.1).
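
By the way, a quick way to list every datafile where the control file and the dictionary disagree (a simple sketch) is:

select d.file#, f.file_name, d.bytes as controlfile_bytes, f.bytes as dictionary_bytes
from v$datafile d
join dba_data_files f on f.file_id = d.file#
where d.bytes <> f.bytes;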

How to resolve it?

SQL> alter database datafile 106 offline to drop;
RMAN> restore datafile 106;
RMAN> recover datafile 106;
SQL> alter database datafile 106 online;

This resolved my case and fixed the mismatch between the views.

BE CAREFUL:

  • Make sure you have a backup before dropping the datafile.
  • Make sure you can take the datafile offline, or do it outside business hours.
  • Follow change procedures for Production, of course. Things may get wild.

And what if I don’t have a backup?
1. You may want to take one now. It may not work, though, considering the original mismatch.
2. Export/Import logically (see the sketch after this list):
– Export the data from the related tablespace (Datapump or Legacy Export, check for limitations and datatypes).
– Drop the tablespace and recreate it.
– Import the data back.
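
A rough Datapump sketch of that plan (directory, dumpfile and tablespace names are placeholders, adjust to your environment):

expdp system directory=DATA_PUMP_DIR dumpfile=ts_example.dmp logfile=ts_example_exp.log tablespaces=MY_TABLESPACE
# drop and recreate the tablespace, then:
impdp system directory=DATA_PUMP_DIR dumpfile=ts_example.dmp logfile=ts_example_imp.log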

As usual, test it in a non-production environment to validate your plan and commands.

I hope it helps!

Moving APEX Applications Repository

Hello,
Most likely you landed here because you need to migrate APEX applications/workspaces from one database to another, correct? You are in the right place!

We’ll use the APEXExport utility for this.

Here is a quick summary of the steps to use the tool, assuming:

  • The source APEX instance is at least 4.2.4.
  • The target instance must be 4.2.4 or higher.

Also, be aware that the APEX installation itself (the APEX and FLOWS_FILES schemas) cannot be exported this way, or in any other way.
So APEX must already be installed on the target; what we’ll do is migrate the workspaces from one installation to another.

To Export:

1. Use database Export utilities (Datapump or Legacy Export, be aware of the limitations of each) to generate a dumpfile with all DB objects and data that your APEX applications need to run.
This will normally be the objects in the schemas that your APEX workspaces are dependent upon.
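For example, a simple schema-mode Datapump export could cover it (MY_APP_SCHEMA, directory, and file names are placeholders):

expdp system schemas=MY_APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=my_app_schema.dmp logfile=my_app_schema_exp.log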

2. Run the APEXExport twice as follows:

2.1 First run it using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

2.2 Now run it using “-instance” which will generate a f*.sql script for every application and shared component.

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Note that the workspace export should include all of the shared components from the workspaces.
Also note that RESTful services are not explicitly mentioned, but if you use the APEXExport from 4.2.4 or higher, they will be included.

To Import:

1. Import the dumpfile generated for the regular database schemas your APEX applications use.
2. Import the workspaces via sqlplus as per:
2.1 connect sys / as sysdba
2.2 alter session set current_schema = APEX_040200;
2.3 run the scripts to create the workspaces

@<script_generated>.sql

This will create the workspaces with the same workspace IDs as the source DB.
This also prevents the need to modify the workspace ID contained in each of the application exports.

3. From the same session as above, import each of the application exports.

SQL> @<application_export_1>.sql
SQL> @<application_export_2>.sql
[...]
SQL> @<application_export_n>.sql

I hope it helps!

OEM: Metric “Tablespace Allocation Metric” not Collected – Agent is Running but Not Ready

Hi all,

That’s an interesting case with OEM. A client reported the metric “Tablespace Allocation Metric” is not being updated on OEM for a specific database. In this case, the last gathering was in Nov/2020, as you’ll see.

When checking it, the first step as usual was to check the OEM agent status, and here is what I got:

oracle:dbserver@mydb02 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : (unknown)
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 61641
Parent Process ID      : 61394
Currently initializing component             : Target Manager (2) (54 of 70)
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
        TargetID = oracle_pdb.c4test_PDB1 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB2 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = oracle_pdb.c4test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c6test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB3 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = rac_database.c1prod - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 


Target Manager Current Activity              : Compute Dynamic Properties (total operations: 37, active: 7, finished: 28)


Current target operations in progress
-------------------------------------
        oracle_pdb.c6test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_PDB1 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB2 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB3 - LOAD_TARGET_DYNAMIC running for 120 seconds
        rac_database.c1test - LOAD_TARGET_DYNAMIC running for 120 seconds


Dynamic property executor tasks running
------------------------------


---------------------------------------------------------------
Agent is Running but Not Ready

Agent not ready; that's interesting.
Let's try clearing the agent state, since this has solved similar cases before:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl clearstate agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD clearstate completed successfully

Now running the problematic metric manually:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection error:The agent is running but is currently not ready to accept client requests

OK, now trying to just force an upload:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl upload
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload error:The agent is running but is currently not ready to accept client requests

Maybe something is stuck, so let's stop the agent and start all over again:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ...

 stopped.

Making sure we have no agent process running:

oracle:dbserver02@c1test2 /u01/app/oracle:  ps -ef | grep java | grep agent
oracle:dbserver02@c1test2 /u01/app/oracle:

Also adjusting the thresholds for the metric run:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl setproperty agent -allow_new -name MaxInComingConnections -value 150
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded
oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl setproperty agent -allow_new -name _cancelThread  -value 210
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded

And starting the agent:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting agent ........................................................................................................................... started but not ready.

On the Agent log:

oracle:dbserver02@c1test2 /u01/app/oracle: tail /u01/app/oracle/product/agent12c/agent_inst/sysman/log/gcagent.log
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - StatusAgentAction: satisfyRequest Begin
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - Agent Overall Health: 0
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - StatusAgentAction: satisfyRequest End
Response:
initializing
2020-12-20 13:15:03,457 [35:686116F5] INFO - >>> Reporting response: StatusAgentResponse (initializing) (request id 1) <<<
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - closing request input stream for "StatusAgentRequest (AGENT timeout:300)"
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - overriding the buffer with a thread local copy (size: 8192b)
2020-12-20 13:15:03,458 [35:686116F5] DEBUG - closing request output stream for "StatusAgentRequest (AGENT timeout:300)"
2020-12-20 13:15:03,458 [35:686116F5] DEBUG - StatusAgentAction.call() is complete.
2020-12-20 13:15:03,458 [35:B5326F3F:HTTP Listener-35 - /emd/lifecycle/main/] DEBUG - removing entry for emdctl@18081@dbserver02=>[160849530330001] completely
2020-12-20 13:15:03,458 [35:B5326F3F] DEBUG - requests executed.
2020-12-20 13:15:03,458 [35:B5326F3F] DEBUG - HTTPListener Threads deallocated resource back to LifecycleRequestHandler partition
2020-12-20 13:15:03,458 [35:3C0B0663:HTTP Listener-35] DEBUG - using connection SCEP@1197017148 [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 5 bytesProduced = 26
2020-12-20 13:15:03,780 [35:3C0B0663] DEBUG - using connection SCEP@1197017148  [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 26 bytesProduced = 5
2020-12-20 13:15:06,986 [31:858161EB] DEBUG - Submitting task SchedulerHeartbeat for execution
2020-12-20 13:15:06,986 [395:1AE716D8] DEBUG - Begin task SchedulerHeartbeat on Thread: GC.SysExecutor.8
2020-12-20 13:15:06,986 [395:F944F4C8:GC.SysExecutor.8 (SchedulerHeartbeat)] DEBUG - Scheduler heartbeat
2020-12-20 13:15:06,988 [395:F944F4C8] DEBUG - Scheduling next SchedulerHeartbeat after delay 29998 including periodShift of 0 milliseconds
2020-12-20 13:15:06,988 [395:1AE716D8:GC.SysExecutor.8] DEBUG - End task SchedulerHeartbeat
2020-12-20 13:15:07,016 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:07,017 [396:1AE716D9] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.9
2020-12-20 13:15:07,017 [396:391F60D7:GC.SysExecutor.9 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:07,017 [396:1AE716D9:GC.SysExecutor.9] DEBUG - End task HeapMonitorTask
2020-12-20 13:15:12,017 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:12,017 [37:1AE716D0] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.0
2020-12-20 13:15:12,017 [37:FE21F10E:GC.SysExecutor.0 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:12,017 [37:1AE716D0:GC.SysExecutor.0] DEBUG - End task HeapMonitorTask
2020-12-20 13:15:12,189 [33:6D553CF6] DEBUG - HTTPListener Threads deallocated resource back to LifecycleRequestHandler partition
2020-12-20 13:15:12,190 [35:3C0B0663] DEBUG - using connection SCEP@1611645943  [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 100 bytesProduced = 121
2020-12-20 13:15:12,191 [35:7107E334:HTTP Listener-35 - /emd/persistence/main/] DEBUG - HTTPListener Threads allocated resource from LifecycleRequestHandler partition
2020-12-20 13:15:17,017 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:17,018 [45:1AE716D1] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.1
2020-12-20 13:15:17,018 [45:CBCC52CF:GC.SysExecutor.1 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:17,018 [45:1AE716D1:GC.SysExecutor.1] DEBUG - End task HeapMonitorTask

Following MOS Enterprise Manager12c: Oracle Database Tablespace Monthly Space Usage shows no data (Doc ID 1536654.1), a few changes were made:

$/AGENT_INST/bin/emctl setproperty agent -allow_new -name MaxInComingConnections -value 150
$/AGENT_INST/bin/emctl setproperty agent -allow_new -name _cancelThread  -value 210

The status before the change:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : (unknown)
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 61641
Parent Process ID      : 61394
Currently initializing component             : Target Manager (2) (54 of 70)
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
        TargetID = oracle_pdb.c4test_PDB1 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB2 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = oracle_pdb.c4test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c6test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB3 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = rac_database.c1test - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 

Target Manager Current Activity              : Compute Dynamic Properties (total operations: 37, active: 7, finished: 28)

Current target operations in progress
-------------------------------------
        oracle_pdb.c6test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_PDB1 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB2 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB3 - LOAD_TARGET_DYNAMIC running for 120 seconds
        rac_database.c1test - LOAD_TARGET_DYNAMIC running for 120 seconds

Dynamic property executor tasks running
------------------------------


---------------------------------------------------------------
Agent is Running but Not Ready

And the status after the change:

oracle:dbserver02@c1test2 /u01/app/oracle:  /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 56994
Parent Process ID      : 56654
Agent URL              : https://dbserver02:3872/emd/main/
Local Agent URL in NAT : https://dbserver02:3872/emd/main/
Repository URL         : https://omsweb:4903/empbs/upload
Started at             : 2020-12-20 13:08:35
Started by user        : oracle
Operating System       : Linux version 3.10.0-957.27.2.el7.x86_64 (amd64)
Last Reload            : (none)
Last successful upload                       : 2020-12-20 13:40:41
Last attempted upload                        : 2020-12-20 13:40:41
Total Megabytes of XML files uploaded so far : 1.02
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 10.85%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2020-12-20 13:40:40
Last successful heartbeat to OMS             : 2020-12-20 13:40:40
Next scheduled heartbeat to OMS              : 2020-12-20 13:41:40

---------------------------------------------------------------
Agent is Running and Ready

Great! Agent issue resolved.
However, the metric is still not being gathered, even after running it manually:

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_CDBROOT:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent scheduler | grep tbspAllocation
2020-12-28 23:05:14.562 : rac_database:c1test:tbspAllocation_cdb
2020-12-29 03:07:21.988 : rac_database:c4prod:tbspAllocation_cdb
2020-12-29 03:08:11.888 : rac_database:c6prod:tbspAllocation_cdb
2020-12-29 03:09:39.103 : rac_database:c2prod:tbspAllocation_cdb
2020-12-29 03:09:55.372 : rac_database:c3prod:tbspAllocation_cdb

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

On OEM Repository database:

SQL> select TARGET_NAME,TARGET_TYPE,TARGET_GUID,max(ROLLUP_TIMESTAMP )
from mgmt$metric_daily where TARGET_NAME like '%c1test%'
and TARGET_TYPE='oracle_pdb'
and METRIC_NAME='tbspAllocation'
group by TARGET_NAME,TARGET_TYPE,TARGET_GUID;

TARGET_NAME                    TARGET_TYPE          TARGET_GUID                      MAX(ROLLUP_TIMESTAM
------------------------------ -------------------- -------------------------------- -------------------
c1test_DW         oracle_pdb           7B1DF5DD4555EB978330A6D522004D44 2020-11-12 00:00:00
c1test_CDBROOT    oracle_pdb           4CE72911295C0287E053837F649B7D0E 2020-11-12 00:00:00


SQL> select TARGET_NAME,TARGET_TYPE,TARGET_GUID,ROLLUP_TIMESTAMP from mgmt$metric_daily where TARGET_NAME like '%c1test%' and TARGET_TYPE='oracle_pdb' and ROLLUP_TIMESTAMP>sysdate-3 order by 4

TARGET_NAME                    TARGET_TYPE          ROLLUP_TIMESTAMP       AVERAGE
------------------------------ -------------------- ------------------- ----------
c1test_DW         oracle_pdb           2020-11-06 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-07 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-08 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-09 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-10 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-11 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-12 00:00:00  1575.9375
c1test_CDBROOT    oracle_pdb           2020-11-05 00:00:00 37581.5625


TARGET_NAME                    TARGET_TYPE          ROLLUP_TIMESTAMP       AVERAGE
------------------------------ -------------------- ------------------- ----------
c1test_CDBROOT    oracle_pdb           2020-11-08 00:00:00  227138.75
c1test_CDBROOT    oracle_pdb           2020-11-09 00:00:00 455087.688
c1test_CDBROOT    oracle_pdb           2020-11-10 00:00:00 278230.875
c1test_CDBROOT    oracle_pdb           2020-11-11 00:00:00 208727.188
c1test_CDBROOT    oracle_pdb           2020-11-12 00:00:00 454964.063

OK, so in summary: after fixing all the issues on the OEM side, with everything running fine, the database metrics are still not being updated.

Long story short: after some investigation, I bumped into MOS Database Hangs With Simple Queries like on view dba_data_files & dba_free_space (Doc ID 2665935.1).
It turned out to be a match, so I proceeded with the recommendation:

SQL> alter session set container=DW;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 DW                             READ WRITE NO
SQL> select count(*) from dba_recyclebin;

  COUNT(*)
----------
     28522

SQL> purge recyclebin;

Recyclebin purged.

SQL> purge dba_recyclebin;

DBA Recyclebin purged.

Once done, all issues were solved and the metric is being collected again.
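
To confirm from the repository side, the same query used before can be re-run; MAX(ROLLUP_TIMESTAMP) should now show a current date instead of Nov/2020:

select TARGET_NAME, TARGET_TYPE, max(ROLLUP_TIMESTAMP)
from mgmt$metric_daily
where TARGET_NAME like '%c1test%'
and TARGET_TYPE='oracle_pdb'
and METRIC_NAME='tbspAllocation'
group by TARGET_NAME, TARGET_TYPE;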

Some additional references:

  • Database Tablespace Metrics: Tablespace Allocation Is Not Collected (Metric tbspAllocation) (Doc ID 404692.1)
  • EM 12c : emctl start agent Fails With Error ‘Starting agent … started but not ready’ (Doc ID 1591477.1)
  • EM12c : emctl start / status agent ‘Agent Running but Not Ready’ ‘ERROR – The agent is overloaded [current requests: 30]’ Reported in gcagent.log (Doc ID 1546529.1)

I hope it helps!

Dataguard Broker: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

This seems like a simple message to be fixed, right?

The parameter differs between the broker configuration and the database, most likely because it was changed directly on the database after the DG Broker configuration was created or the database was added to it. However, there is an interesting twist in this case.

Let’s check on the error first. On the primary database side of the broker configuration:

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Properties:
DGConnectIdentifier = 'myprodDB'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/', +DATADG3/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
WARNING

And checking for the status in the standby database server:

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
Warning: ORA-16809: multiple warnings detected for the database

mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
WARNING (status updated 12 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

OK, now let's check from the database parameter perspective on the primary:

SQL> show parameter convert
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string      +DATA/MYDATABASE/DATAFILE/, +DATADG/myprodDB/DATAFILE
log_file_name_convert                string      +DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/
pdb_file_name_convert                string

Comparing the settings:

  • LogFileNameConvert=’+DATA/MYDATABASE/ONLINELOG/, +DATA/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/,+DATA3/MYDATABASE/ONLINELOG/’, +DATADG3/myprodDB/ONLINELOG/’
  • log_file_name_convert=+DATA/MYDATABASE/ONLINELOG/,+DATA/myprodDB/ONLINELOG/,+DATA/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/,+DATA/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/

It seems all right!

What is the problem then?

That’s the interesting part. Checking on MOS Usage and Limitation of db_file_name_convert and log_file_name_convert (Doc ID 1367014.1):

When using the Data Guard Broker the Values for these Parameters are limited to 512 Bytes (Characters) due to the Limit of the corresponding Data Guard Broker Properties ‘DbFileNameConvert’ and ‘LogFileNameConvert’.
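
A quick way to check how close a database is to that limit (a sketch; v$parameter shows each value as a single comma-separated string, so LENGTH gives a rough measure):

select name, length(value) as value_length
from v$parameter
where name in ('db_file_name_convert','log_file_name_convert');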

That’s new to me! So, possible alternatives are:

  • Use OMF (Oracle Managed Files)
  • Use the same File Structure on both Sites
  • Rename and create Datafiles/RedoLog Files manually

What did I do in my case?

We checked and confirmed with the client that the only locations for the logfiles are DATA and DATA2 (multiplexed). So the fix was easy:

edit database 'myprodDB' set property 'LogFileNameConvert' = "+DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/,+DATA/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/";

Once done:

DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 3 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Database Status:
SUCCESS

DGMGRL> show database mySTDB;

Database - mySTDB

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 325.00 KByte/s
Real Time Query: OFF
Instance(s):
mySTDB

Database Status:
SUCCESS

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Properties:
DGConnectIdentifier = 'myprodDB.'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/,+DATA/2MYDATABASE/ONLINELOG/,+DATADG2/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

Did you know that?
I hope it helps!

Relying on Guaranteed Restore Points? Be careful!

Hi all,

Are you relying on Guaranteed Restore Points (GRP) as a fallback plan for your migration or upgrade strategy? Be careful!

When performing a non-Prod upgrade with the AutoUpgrade tool, after completing the upgrade I wanted to roll it back and go through the process again. This is what happened:

SQL> startup
ORA-29702: error occurred in Cluster Group Service operation

When looking into it, I found this blog post from Mike that I had missed last year: https://mikedietrichde.com/2020/11/13/ora-29702-and-your-instance-does-not-startup-in-the-cluster-anymore/

This means my database is not starting anymore! Oh man, glad that I’m in the testing phase!

This is caused by Bug 31561819 – Incompatible maxmembers at CRSD Level Causing Database Instance Not Able to Start.

As per Mike’s post, “you don’t need to even restore or flashback a database to hit this error. A simple instance in NOMOUNT state leads to the same error. Without even any datafile.”

The bug is fixed on:

  • 19.9.0.0.201020 (Oct 2020) OCW RU
  • 18.12.0.0.201020 (Oct 2020) OCW RU
  • 12.2.0.1.201020 (Oct 2020) OCW RU

That said, you should include this patch BEFORE starting any move! Do it right away if you are on these versions!

Also, be aware of the latest change regarding Restore Points propagation on 19c, as per MOS Automatic Propagate Restore Points from Primary to Standby site in 19C (Doc ID 2463082.1)

In my case, the need is exactly for a 12.1 -> 19c upgrade, so the fix is not even available (no Extended Support in place). That being the case, we had to think of alternative fallback plans, like a physical standby. But this is a topic for another post.

So for YOU:

  • Apply this patch if you can!
  • If not, be very careful on the fallback plans and as usual: Test, Test and Test!

See you next post!

Exadata DNS Change – Pitfalls to be avoided

Hi all, it’s been a while but here I am!

There were some infrastructure changes at the place I work, and I was asked to do a DNS change on a somewhat old Exadata X5. I had never done one before, so the idea of this post is to help others who might face the issues I had.

The first thing I did was look up the documentation and review the steps. Yes, there are blog posts about it, but the documentation helps to get at least a first glance of the situation.

Long story short: Exadata has lots of components and the new DNS should be changed on all of them.

Here is a summary of the steps.

Infiniband switches

Connect to the switches, switch to the ilom-admin user, and change the DNS:

su - ilom-admin
show /SP/clients/dns
set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3
show /SP/clients/dns


Database nodes

For my image I only needed to change /etc/resolv.conf; if you have a newer one, you will need to use ipconf. That's why you need to go to the documentation: at least there we hope they mention the pitfalls (well, keep reading and you will see that was not my case).

I also changed the DNS on each database node ILOM, running ipmitool from each node:

ipmitool sunoem cli 'show /SP/clients/dns'
ipmitool sunoem cli 'set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3'
ipmitool sunoem cli 'show /SP/clients/dns'


Cell nodes – Here things start to get interesting

For the storage cells there are some points that need to be taken into consideration:

Increase the ASM disk_repair_time – the goal here is to avoid a full rebalance if you complete the work within its timeframe. If you don't know this parameter: ASM will wait for up to the interval specified by DISK_REPAIR_TIME for the disk(s) to come back online. If the disk(s) come back within this interval, a resync operation occurs, where only the extents that were modified while the disks were offline are written once they are back. If the disk(s) do not come back within this interval, ASM initiates a forced drop of the disk(s), which triggers a rebalance.
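
For example, something like this from an ASM instance (the diskgroup name and the value are illustrative; size the window to your maintenance plan):

ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '8.5h';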

On each cell node we need to make sure all disks are OK, inactivate all grid disks, stop all cell services, and use ipconf to change the DNS configuration:

#Check that putting the grid disks offline will not cause a problem for Oracle ASM - it should all say YES on the 3rd column 
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus,asmdeactivationoutcome

#Inactivate all grid disks on the cell - may take a while to complete
cellcli -e ALTER GRIDDISK ALL INACTIVE


#Confirm the grid disks are offline, it should show asmmodestatus=OFFLINE or asmmodestatus=UNUSED, and asmdeactivationoutcome=Yes for all grid disks
cellcli -e LIST GRIDDISK ATTRIBUTES name, asmmodestatus,asmdeactivationoutcome

#Confirm that the disks are offline
cellcli -e LIST GRIDDISK

#Shut down the cell services and ocrvottargetd service
cellcli -e ALTER CELL SHUTDOWN SERVICES ALL
service ocrvottargetd stop #on some images this service does not exist

To run ipconf the old way, we only need to call it and follow the prompts; if you have a newer image, you will need to provide its parameters as shown in the documentation.

The documentation says that after this we could start the cell services back up, but I would recommend validating the DNS before doing that. Why? Because mine did not work, and I would have had a bigger issue with a cell node without DNS trying to start its services.

So, how to test? Use nslookup, dig and curl:

nslookup dns_domain.com
curl -v 192.168.16.1:53
dig another_server_in_the_network


My tests did not work: I was able to ping the DNS servers but not to resolve any name. I had an SR open on MOS, but it did not help much either. As this is a production system, I checked whether the firewall was up on the Linux side, and to my surprise it was.

I tried to manually add rules to iptables, but it did not work. Then I came across the note Exadata: New DNS server is not accessible after changing using IPCONF (Doc ID 1581417.1).

And there it was: I needed to restart the cellwall service to recreate the iptables rules.

# Restart cellwall service
service cellwall restart
service cellwall status

One final point: check whether ASM started a rebalance or not. If it did, do not bring down another cell node until the rebalance is finished, otherwise you may run into deeper issues.
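
From any ASM instance, a quick check is querying gv$asm_operation; if it returns no rows, there is no rebalance running:

select inst_id, operation, state, est_minutes from gv$asm_operation;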


I hope it helps!

Elisson Almeida

Opatchauto Failing on “CheckActiveFilesAndExecutables” during Prerequisite Check

Hi all,
So, very recently when applying the 2021 January CPU in a client environment, the following happened:

[root@dbserver01 32226239]# $ORACLE_HOME/OPatch/opatchauto apply

OPatchauto session is initiated at Sun Mar 14 03:00:06 2021

System initialization log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchautodb/systemconfig2021-03-14_03-00-08AM.log.

Session log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/opatchauto2021-03-14_03-00-13AM.log
The id for this session is 1J89

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/db
Patch applicability verified successfully on home /u01/app/oracle/product/19c/db


Executing patch validation checks on home /u01/app/oracle/product/19c/db
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/db


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/db
SQL patch applicability verified successfully on home /u01/app/oracle/product/19c/db


Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/grid
Patch applicability verified successfully on home /u01/app/oracle/product/19c/grid


Executing patch validation checks on home /u01/app/oracle/product/19c/grid
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/grid


Preparing to bring down database service on home /u01/app/oracle/product/19c/db
Successfully prepared home /u01/app/oracle/product/19c/db to bring down database service


Bringing down database service on home /u01/app/oracle/product/19c/db
Following database has been stopped and will be restarted later during the session: er1pprd,obiee
Database service successfully brought down on home /u01/app/oracle/product/19c/db


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/oracle/product/19c/grid
Prepatch operation log file location: /u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_03-06-15AM.log
CRS service brought down successfully on home /u01/app/oracle/product/19c/grid


Start applying binary patch on home /u01/app/oracle/product/19c/db
Failed while applying binary patches on home /u01/app/oracle/product/19c/db

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : dbserver01->/u01/app/oracle/product/19c/db Type[sidb]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/db, host: dbserver01.
Command failed: /u01/app/oracle/product/19c/db/OPatch/opatchauto apply /ora02/soft/jan21cpu/32126842/32226239 -oh /u01/app/oracle/product/19c/db -target_type oracle_database -binary -invPtrLoc /u01/app/oracle/product/19c/grid/oraInst.loc -jre /u01/app/oracle/product/19c/grid/OPatch/jre -persistresult /u01/app/oracle/product/19c/db/opatchautocfg/db/sessioninfo/sessionresult_dbserver01_sidb_2.ser -analyzedresult /u01/app/oracle/product/19c/db/opatchautocfg/db/sessioninfo/sessionresult_analyze_dbserver01_sidb_2.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_03-17-58AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Mar 14 03:19:25 2021
Time taken to complete the session 19 minutes, 19 seconds

opatchauto failed with error code 42

OK, let's take it step by step and see what we have in the referenced log:

[Mar 14, 2021 3:33:32 AM] [INFO] Start fuser command /sbin/fuser /u01/app/oracle/product/19c/grid/bin/expdp at Sat Mar 14 03:33:32 PDT 2021
[Mar 14, 2021 3:33:32 AM] [INFO] Finish fuser command /sbin/fuser /u01/app/oracle/product/19c/grid/bin/expdp at Sat Mar 14 03:33:32 PDT 2021
[Mar 14, 2021 3:33:32 AM] [INFO] Following active executables are not used by opatch process :


Following active executables are used by opatch process :
/u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
[Mar 14, 2021 3:33:32 AM] [INFO] Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:


Following active executables are not used by opatch process :


Following active executables are used by opatch process :
/u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
[Mar 14, 2021 3:33:33 AM] [INFO] UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Mar 14, 2021 3:33:33 AM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Mar 14, 2021 3:33:33 AM] [INFO] Finishing UtilSession at Sat Mar 14 03:33:33 PDT 2021
[Mar 14, 2021 3:33:33 AM] [INFO] Log file location: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/oapatch_2021-03-14_03-06-15AM.log

This is an interesting situation.
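
One thing worth doing first is confirming which process is actually holding the library flagged in the log. A quick sketch (the library path comes from the log above; replace <pid> with whatever fuser returns):

/sbin/fuser /u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
ps -fp <pid>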

After some validation (making sure no service was online, the path was writable, and both oracle and root had the required privileges and access), I found some relevant Oracle notes:

  • 19c Installation Fails with error “libclntsh.so: file format not recognized; treating as linker script” (Doc ID 2631283.1): Pointing to file corruption
  • While Applying a Weblogic Patch, opatch Fails with “Prerequisite check “CheckActiveFilesAndExecutables” failed” Error (Doc ID 2705809.1): Not a DB note and pointing to other processes using the files.
  • Opatch failure due to “CheckActiveFilesAndExecutables” as Remote registry service holding files (Doc ID 2462952.1): Remote registry holding the binaries.
  • Prerequisite Check "Checkactivefilesandexecutables" Failed (Doc ID 1281644.1): Patch prerequisite missing on 10g
  • Failed to apply PSU due to CheckActiveFilesAndExecutables check failure (Doc ID 2506432.1): SQLPlus holding the binaries.
  • [OCI]: Database System Patching Failed With Error “DCS-10001:Internal Error Encountered: Failure : Failed To Apply” And Opatch Log Shows “Prerequisite check “CheckActiveFilesAndExecutables” failed” (Doc ID 2687607.1): My case is not an OCI and not in RAC.

So, no matches at all.

However, this last note gave me the hints I needed. From Doc ID 2687607.1, for RAC environments:

/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -unlock
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -init
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -prepatch
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -postpatch

So, in my case, a Standalone On-Premise Database (and GI):

/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -unlock
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -init
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -prepatch
[ Apply the patch! ]
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -postpatch

Check the output:

[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -unlock
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/haunlock__2021-03-14_04-00-35AM.log
2021/03/14 04:01:01 CLSRSC-347: Successfully unlock /u01/app/oracle/product/19c/grid
[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -init
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/roothas_2021-03-14_04-01-09AM.log
[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -prepatch
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_04-01-16AM.log
2021/03/14 04:01:27 CLSRSC-347: Successfully unlock /u01/app/oracle/product/19c/grid
2021/03/14 04:01:27 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.

And now resuming the Opatchauto:

[root@dbserver01 jan21cpu]# cd 32126842/32226239/
[root@dbserver01 32226239]# $ORACLE_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Sun Mar 14 04:02:07 2021
Session log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/opatchauto2021-03-14_04-02-10AM.log
Resuming existing session with id 1J89

Start applying binary patch on home /u01/app/oracle/product/19c/db
Binary patch applied successfully on home /u01/app/oracle/product/19c/db


Start applying binary patch on home /u01/app/oracle/product/19c/grid

Binary patch applied successfully on home /u01/app/oracle/product/19c/grid


Performing postpatch operations on CRS - starting CRS service on home /u01/app/oracle/product/19c/grid
Postpatch operation log file location: /u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_04-27-58AM.log
CRS service started successfully on home /u01/app/oracle/product/19c/grid


Preparing home /u01/app/oracle/product/19c/db after database service restarted
No step execution required.........


Trying to apply SQL patch on home /u01/app/oracle/product/19c/db
SQL patch applied successfully on home /u01/app/oracle/product/19c/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:dbserver01
SIDB Home:/u01/app/oracle/product/19c/db
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218663
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /ora02/soft/jan21cpu/32126842/32226239/29340594
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /ora02/soft/jan21cpu/32126842/32226239/32240590
Reason: This patch is not applicable to this specified target type - "oracle_database"


==Following patches were SUCCESSFULLY applied:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-02-36AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32222571
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-02-36AM_1.log


Host:dbserver01
SIHA Home:/u01/app/oracle/product/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /ora02/soft/jan21cpu/32126842/32226239/29340594
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218663
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32222571
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32240590
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

OPatchauto session completed at Sun Mar 14 04:31:48 2021
Time taken to complete the session 29 minutes, 43 seconds

And here is the relevant point: this has been happening to me on several environments and servers over the recent weeks, always for the 2021 January CPU.
My guess is that it has something to do with this CPU's binary set or, most likely, with the latest OPatch version:

[oracle@dbserver01 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.24

I hope it helps you as well!

5 Best Practices for Setting Dispatchers for Shared Connections

Hi all,

Here are 5 best practices / tips for setting dispatchers with shared server connections:

1. Set local_listener on both instances on the database

alter system set LOCAL_LISTENER='(address=(protocol=tcp)(port=1521)(host=yourhost))' scope=both sid='instance_name';
Ref: Shared Server: Dispatchers Are Not Registered With Listener (Doc ID 465881.1)

2. Dispatchers parameter should be set to utilize the VIP name of the host

alter system set dispatchers='(address=(protocol=tcp)(host=node1-vip))(dispatchers=2)' scope=both sid='instance_name';
Ref: How To Configure Shared Server Dispatchers For RAC Environment (Doc ID 578524.1)

3. Dispatchers count should be set appropriately considering the number of sessions expected to connect to the database

A general rule of thumb is that 1 dispatcher can handle 50 shared server connections with minimal performance impact.
Ref: Shared Server Only: TNS-12518, TNS-12564 and TNS-12602 Errors at Connect Time (Doc ID 1539104.1)

4. Arguments can be used with the dispatchers parameter for closer control of how the shared server sessions are used

SESSIONS – Determines the max sessions allowed for each dispatcher.
CONNECTIONS – The maximum number of network connections to allow for each dispatcher.
Ref: https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DISPATCHERS.html#GUID-DCBCCF94-8A73-4805-9138-412DA413FC7C

5. Shared_servers parameter can be set to control total number of shared servers spawned by the database

shared_servers set to 1 –> Enables shared server sessions on the database.
max_shared_servers –> Specifies the maximum number of shared servers that can run simultaneously.
shared_server_sessions –> Specifies the total number of shared server user sessions that can run simultaneously. Setting this parameter lets you reserve user sessions for dedicated servers.
Ref: Automatic Shared Server Configuration (Doc ID 265931.1)
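
A minimal sketch tying these parameters together (the values are illustrative only; size them for your workload and memory):

alter system set shared_servers=1 scope=both sid='*';
alter system set max_shared_servers=20 scope=both sid='*';
alter system set shared_server_sessions=300 scope=both sid='*';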

See you next post!