Dataguard Broker: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

This seems like a simple message to fix, right?

The parameter differs between the broker configuration and the database parameters, most likely because it was changed directly on the database after the Data Guard Broker configuration was created or the database was added to it. However, there is an interesting twist in this case.

Let’s check on the error first. On the primary database side of the broker configuration:

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
rmprdb01
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Properties:
DGConnectIdentifier = 'myprodDB'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
WARNING

And checking for the status in the standby database server:

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
Warning: ORA-16809: multiple warnings detected for the database

mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
WARNING (status updated 12 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE
Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting

Database Warning(s):
ORA-16707: the value of the property LogFileNameConvert is invalid, valid values are pairs of file specifications

Database Status:
WARNING

OK, let's now check the database parameters on the Primary:

SQL> show parameter convert
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string +DATA/MYDATABASE/DATAFILE/, +
DATADG/myprodDB/DATAFILE
log_file_name_convert string +DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, 
                             +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/
pdb_file_name_convert string

Comparing the settings:

  • LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/'
  • log_file_name_convert = '+DATA/MYDATABASE/ONLINELOG/, +DATADG/myprodDB/ONLINELOG/, +DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/, +DATA3/MYDATABASE/ONLINELOG/, +DATADG3/myprodDB/ONLINELOG/'

It seems all right!

What is the problem then?

That’s the interesting part. Checking on MOS Usage and Limitation of db_file_name_convert and log_file_name_convert (Doc ID 1367014.1):

When using the Data Guard Broker the Values for these Parameters are limited to 512 Bytes (Characters) due to the Limit of the corresponding Data Guard Broker Properties ‘DbFileNameConvert’ and ‘LogFileNameConvert’.

That’s new to me! So, possible alternatives are:

  • Use OMF (Oracle Managed Files)
  • Use the same File Structure on both Sites
  • Rename and create Datafiles/RedoLog Files manually
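
Before choosing one of these, you can confirm you are actually over the limit by checking the current parameter length on the database side (a quick check, not from the original case):

SQL> select name, length(value) from v$parameter where name in ('db_file_name_convert','log_file_name_convert');

Anything longer than 512 characters simply won't fit in the corresponding broker property.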

What did I do in my case?

We checked and confirmed with the client that the only locations for the online logs are DATA and DATA2 (multiplexed), so the fix was easy:

edit database 'myprodDB' set property 'LogFileNameConvert' = "+DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/,+DATA2/MYDATABASE/ONLINELOG/, +DATADG2/myprodDB/ONLINELOG/";
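
To double-check just that property after the change, DGMGRL can display a single property (quick sketch):

DGMGRL> show database 'myprodDB' 'LogFileNameConvert';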

Once done:

DGMGRL> show configuration;

Configuration - my_dg_configuration

Protection Mode: MaxPerformance
Members:
myprodDB - Primary database
mySTDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 3 seconds ago)

DGMGRL> show database myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Database Status:
SUCCESS

DGMGRL> show database mySTDB;

Database - mySTDB

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 325.00 KByte/s
Real Time Query: OFF
Instance(s):
mySTDB

Database Status:
SUCCESS

DGMGRL> show database verbose myprodDB;

Database - myprodDB

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYDATABASE

Properties:
DGConnectIdentifier = 'myprodDB'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '1800'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+DATA/MYDATABASE/DATAFILE/, +DATA/myprodDB/DATAFILE'
LogFileNameConvert = '+DATA/MYDATABASE/ONLINELOG/,+DATADG/myprodDB/ONLINELOG/,+DATA2/MYDATABASE/ONLINELOG/,+DATADG2/myprodDB/ONLINELOG/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.100)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=myprodDB_DGMGRL)(INSTANCE_NAME=MYDATABASE)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'MYDATABASE_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

Did you know that?
I hope it helps!

OEM: Metric “Tablespace Allocation Metric” not Collected – Agent is Running but Not Ready

Hi all,

That's an interesting case with OEM. A client reported that the metric “Tablespace Allocation Metric” was not being updated in OEM for a specific database. In this case, the last gathering was in Nov/2020, as you'll see.

The first step, as usual, was to check the OEM agent status, and here is what I got:

oracle:dbserver@mydb02 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : (unknown)
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 61641
Parent Process ID      : 61394
Currently initializing component             : Target Manager (2) (54 of 70)
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
        TargetID = oracle_pdb.c4test_PDB1 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB2 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = oracle_pdb.c4test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c6test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB3 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = rac_database.c1prod - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 


Target Manager Current Activity              : Compute Dynamic Properties (total operations: 37, active: 7, finished: 28)


Current target operations in progress
-------------------------------------
        oracle_pdb.c6test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_PDB1 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB2 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB3 - LOAD_TARGET_DYNAMIC running for 120 seconds
        rac_database.c1test - LOAD_TARGET_DYNAMIC running for 120 seconds


Dynamic property executor tasks running
------------------------------


---------------------------------------------------------------
Agent is Running but Not Ready

Agent not ready; that's interesting.
Let's try clearing the agent state first, since this has solved similar cases before:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl clearstate agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD clearstate completed successfully

Now running the problematic metric manually:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection error:The agent is running but is currently not ready to accept client requests

OK, trying to just force an agent upload:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl upload
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload error:The agent is running but is currently not ready to accept client requests

Maybe something is stuck, so let's stop the agent and start all over again:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ...

 stopped.

Making sure we have no agent process running:

oracle:dbserver02@c1test2 /u01/app/oracle:  ps -ef | grep java | grep agent
oracle:dbserver02@c1test2 /u01/app/oracle:

Also adjusting the thresholds for the metric collection:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl  setproperty agent -a
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded
oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl setproperty agent -allow_new -name _cancelThread  -value 210
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded

And starting the agent:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting agent ........................................................................................................................... started but not ready.

On the Agent log:

oracle:dbserver02@c1test2 /u01/app/oracle: tail /u01/app/oracle/product/agent12c/agent_inst/sysman/log/gcagent.log
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - StatusAgentAction: satisfyRequest Begin
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - Agent Overall Health: 0
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - StatusAgentAction: satisfyRequest End
Response:
initializing
2020-12-20 13:15:03,457 [35:686116F5] INFO - >>> Reporting response: StatusAgentResponse (initializing) (request id 1) <<<
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - closing request input stream for "StatusAgentRequest (AGENT timeout:300)"
2020-12-20 13:15:03,457 [35:686116F5] DEBUG - overriding the buffer with a thread local copy (size: 8192b)
2020-12-20 13:15:03,458 [35:686116F5] DEBUG - closing request output stream for "StatusAgentRequest (AGENT timeout:300)"
2020-12-20 13:15:03,458 [35:686116F5] DEBUG - StatusAgentAction.call() is complete.
2020-12-20 13:15:03,458 [35:B5326F3F:HTTP Listener-35 - /emd/lifecycle/main/] DEBUG - removing entry for emdctl@18081@dbserver02=>[160849530330001] completely
2020-12-20 13:15:03,458 [35:B5326F3F] DEBUG - requests executed.
2020-12-20 13:15:03,458 [35:B5326F3F] DEBUG - HTTPListener Threads deallocated resource back to LifecycleRequestHandler partition
2020-12-20 13:15:03,458 [35:3C0B0663:HTTP Listener-35] DEBUG - using connection SCEP@1197017148 [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 5 bytesProduced = 26
2020-12-20 13:15:03,780 [35:3C0B0663] DEBUG - using connection SCEP@1197017148  [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 26 bytesProduced = 5
2020-12-20 13:15:06,986 [31:858161EB] DEBUG - Submitting task SchedulerHeartbeat for execution
2020-12-20 13:15:06,986 [395:1AE716D8] DEBUG - Begin task SchedulerHeartbeat on Thread: GC.SysExecutor.8
2020-12-20 13:15:06,986 [395:F944F4C8:GC.SysExecutor.8 (SchedulerHeartbeat)] DEBUG - Scheduler heartbeat
2020-12-20 13:15:06,988 [395:F944F4C8] DEBUG - Scheduling next SchedulerHeartbeat after delay 29998 including periodShift of 0 milliseconds
2020-12-20 13:15:06,988 [395:1AE716D8:GC.SysExecutor.8] DEBUG - End task SchedulerHeartbeat
2020-12-20 13:15:07,016 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:07,017 [396:1AE716D9] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.9
2020-12-20 13:15:07,017 [396:391F60D7:GC.SysExecutor.9 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:07,017 [396:1AE716D9:GC.SysExecutor.9] DEBUG - End task HeapMonitorTask
2020-12-20 13:15:12,017 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:12,017 [37:1AE716D0] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.0
2020-12-20 13:15:12,017 [37:FE21F10E:GC.SysExecutor.0 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:12,017 [37:1AE716D0:GC.SysExecutor.0] DEBUG - End task HeapMonitorTask
2020-12-20 13:15:12,189 [33:6D553CF6] DEBUG - HTTPListener Threads deallocated resource back to LifecycleRequestHandler partition
2020-12-20 13:15:12,190 [35:3C0B0663] DEBUG - using connection SCEP@1611645943  [d=true,io=1,w=true,b=false|false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 100 bytesProduced = 121
2020-12-20 13:15:12,191 [35:7107E334:HTTP Listener-35 - /emd/persistence/main/] DEBUG - HTTPListener Threads allocated resource from LifecycleRequestHandler partition
2020-12-20 13:15:17,017 [31:858161EB] DEBUG - Submitting task HeapMonitorTask for execution
2020-12-20 13:15:17,018 [45:1AE716D1] DEBUG - Begin task HeapMonitorTask on Thread: GC.SysExecutor.1
2020-12-20 13:15:17,018 [45:CBCC52CF:GC.SysExecutor.1 (HeapMonitorTask)] DEBUG - Scheduling next HeapMonitorTask after delay 5000 including periodShift of 0 milliseconds
2020-12-20 13:15:17,018 [45:1AE716D1:GC.SysExecutor.1] DEBUG - End task HeapMonitorTask

Following MOS Enterprise Manager 12c: Oracle Database Tablespace Monthly Space Usage shows no data (Doc ID 1536654.1), a few changes were made:

$/AGENT_INST/bin/emctl setproperty agent -allow_new -name MaxInComingConnections -value 150
$/AGENT_INST/bin/emctl setproperty agent -allow_new -name _cancelThread  -value 210
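
Keep in mind that agent property changes generally only take effect after an agent bounce, so a stop/start (as done earlier in this post) may be needed:

$/AGENT_INST/bin/emctl stop agent
$/AGENT_INST/bin/emctl start agent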

The status before the change:

oracle:dbserver02@c1test2 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : (unknown)
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 61641
Parent Process ID      : 61394
Currently initializing component             : Target Manager (2) (54 of 70)
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
        TargetID = oracle_pdb.c4test_PDB1 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB2 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = oracle_pdb.c4test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c6test_CDBROOT - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:29 
        TargetID = oracle_pdb.c3test_PDB3 - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 
        TargetID = rac_database.c1test - EventType - TARGET_EVENT for operation LOAD_TARGET submitted at 2020-12-20 12:54:30 

Target Manager Current Activity              : Compute Dynamic Properties (total operations: 37, active: 7, finished: 28)

Current target operations in progress
-------------------------------------
        oracle_pdb.c6test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_PDB1 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB2 - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c4test_CDBROOT - LOAD_TARGET_DYNAMIC running for 120 seconds
        oracle_pdb.c3test_PDB3 - LOAD_TARGET_DYNAMIC running for 120 seconds
        rac_database.c1test - LOAD_TARGET_DYNAMIC running for 120 seconds

Dynamic property executor tasks running
------------------------------


---------------------------------------------------------------
Agent is Running but Not Ready

And the status after the change:

oracle:dbserver02@c1test2 /u01/app/oracle:  /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agent12c/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agent12c/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agent12c/core/12.1.0.5.0
Agent Process ID       : 56994
Parent Process ID      : 56654
Agent URL              : https://dbserver02:3872/emd/main/
Local Agent URL in NAT : https://dbserver02:3872/emd/main/
Repository URL         : https://omsweb:4903/empbs/upload
Started at             : 2020-12-20 13:08:35
Started by user        : oracle
Operating System       : Linux version 3.10.0-957.27.2.el7.x86_64 (amd64)
Last Reload            : (none)
Last successful upload                       : 2020-12-20 13:40:41
Last attempted upload                        : 2020-12-20 13:40:41
Total Megabytes of XML files uploaded so far : 1.02
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 10.85%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2020-12-20 13:40:40
Last successful heartbeat to OMS             : 2020-12-20 13:40:40
Next scheduled heartbeat to OMS              : 2020-12-20 13:41:40

---------------------------------------------------------------
Agent is Running and Ready

Great! Agent issue resolved.
However, the metric is still not being gathered, not even after running it manually:

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_CDBROOT:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl status agent scheduler | grep tbspAllocation
2020-12-28 23:05:14.562 : rac_database:c1test:tbspAllocation_cdb
2020-12-29 03:07:21.988 : rac_database:c4prod:tbspAllocation_cdb
2020-12-29 03:08:11.888 : rac_database:c6prod:tbspAllocation_cdb
2020-12-29 03:09:39.103 : rac_database:c2prod:tbspAllocation_cdb
2020-12-29 03:09:55.372 : rac_database:c3prod:tbspAllocation_cdb

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

On the OEM repository database:

SQL> select TARGET_NAME,TARGET_TYPE,TARGET_GUID,max(ROLLUP_TIMESTAMP )
from mgmt$metric_daily where TARGET_NAME like '%c1test%'
and TARGET_TYPE='oracle_pdb'
and METRIC_NAME='tbspAllocation'
group by TARGET_NAME,TARGET_TYPE,TARGET_GUID;  2    3    4    5

TARGET_NAME                    TARGET_TYPE          TARGET_GUID                      MAX(ROLLUP_TIMESTAM
------------------------------ -------------------- -------------------------------- -------------------
c1test_DW         oracle_pdb           7B1DF5DD4555EB978330A6D522004D44 2020-11-12 00:00:00
c1test_CDBROOT    oracle_pdb           4CE72911295C0287E053837F649B7D0E 2020-11-12 00:00:00


SQL> select TARGET_NAME,TARGET_TYPE,TARGET_GUID,ROLLUP_TIMESTAMP from mgmt$metric_daily where TARGET_NAME like '%c1test%' and TARGET_TYPE='oracle_pdb' and ROLLUP_TIMESTAMP>sysdate-3 order by 4

TARGET_NAME                    TARGET_TYPE          ROLLUP_TIMESTAMP       AVERAGE
------------------------------ -------------------- ------------------- ----------
c1test_DW         oracle_pdb           2020-11-06 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-07 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-08 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-09 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-10 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-11 00:00:00  1575.9375
c1test_DW         oracle_pdb           2020-11-12 00:00:00  1575.9375
c1test_CDBROOT    oracle_pdb           2020-11-05 00:00:00 37581.5625


TARGET_NAME                    TARGET_TYPE          ROLLUP_TIMESTAMP       AVERAGE
------------------------------ -------------------- ------------------- ----------
c1test_CDBROOT    oracle_pdb           2020-11-08 00:00:00  227138.75
c1test_CDBROOT    oracle_pdb           2020-11-09 00:00:00 455087.688
c1test_CDBROOT    oracle_pdb           2020-11-10 00:00:00 278230.875
c1test_CDBROOT    oracle_pdb           2020-11-11 00:00:00 208727.188
c1test_CDBROOT    oracle_pdb           2020-11-12 00:00:00 454964.063

OK, so in summary: even after fixing all the issues on the OEM side, with everything running fine there, the database metrics were still not being updated.

Long story short: after some investigation, I bumped into MOS Database Hangs With Simple Queries like on view dba_data_files & dba_free_space (Doc ID 2665935.1).
It turned out this was a match, so I proceeded with the recommendation:

SQL> alter session set container=DW;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 DW                             READ WRITE NO
SQL> select count(*) from dba_recyclebin;

  COUNT(*)
----------
     28522

SQL> purge recyclebin;

Recyclebin purged.

SQL> purge dba_recyclebin;

DBA Recyclebin purged.
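
After the purge, it's worth re-checking the recyclebin and re-running the manual collection, reusing the commands shown earlier (a quick sketch):

SQL> select count(*) from dba_recyclebin;

oracle:dbserver01@c1test1 /u01/app/oracle: /u01/app/oracle/product/agent12c/core/12.1.0.5.0/bin/emctl control agent runCollection c1test_DW:oracle_pdb tbspAllocation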

Once done, all issues were solved and the metric started being collected again.

Some additional references:

  • Database Tablespace Metrics: Tablespace Allocation Is Not Collected (Metric tbspAllocation) (Doc ID 404692.1)
  • EM 12c : emctl start agent Fails With Error ‘Starting agent … started but not ready’ (Doc ID 1591477.1)
  • EM12c : emctl start / status agent ‘Agent Running but Not Ready’ ‘ERROR – The agent is overloaded [current requests: 30]’ Reported in gcagent.log (Doc ID 1546529.1)

I hope it helps!

Moving APEX Applications Repository

Hello,
Most likely you landed here because you need to migrate APEX applications/workspaces from one database to another, correct? You are in the right place!

We'll use the APEXExport utility for this.

Here you have a quick summary of the steps to use the tool, assuming:

  • The source APEX instance is at least 4.2.4.
  • The target instance must be 4.2.4 or higher.

Also, be aware that the APEX installation itself (the APEX and FLOWS_FILES schemas) cannot be exported in this manner or in any other manner.
So APEX itself must already exist on the target; what we'll do is migrate the workspaces from one installation to the other.

To Export:

1. Use the database export utilities (Data Pump or legacy export; be aware of the limitations of each) to generate a dumpfile with all the database objects and data that your APEX applications need to run.
This will normally be the objects in the schemas that your APEX workspaces depend on (a sample Data Pump call is sketched after these steps).

2. Run the APEXExport twice as follows:

2.1 First run it using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

2.2 Now run it using “-instance”, which will generate an f*.sql script for every application and shared component.

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Note that the workspace export should include all of the shared components from the workspaces.
It does not explicitly mention RESTful services, but if you use the APEXExport from 4.2.4 or higher, they will be included as well.
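
As a reference for step 1 (and for the import later), a minimal Data Pump sketch; the schema name MY_APP_SCHEMA, the directory DATA_PUMP_DIR and the file names below are placeholders, not from a real case:

expdp system/systems_password schemas=MY_APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=my_app_schema.dmp logfile=my_app_schema_exp.log

And on the target, before importing the workspaces (import step 1 below):

impdp system/systems_password schemas=MY_APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=my_app_schema.dmp logfile=my_app_schema_imp.log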

To Import:

1. Import the dumpfile generated for the regular database schemas your APEX Application use.
2. Import the workspaces via sqlplus as per:
2.1 connect sys / as sysdba
2.2 alter session set current_schema = APEX_040200;
2.3 run the scripts to create the workspaces

@<script_generated>.sql

This will create the workspaces with the same workspace IDs as the source DB.
This also prevents the need to modify the workspace ID contained in each of the application exports.

3. From the same session as above, accomplish the import of each of the application exports.

SQL> @<application_export_1>.sql
SQL> @<application_export_2>.sql
[...]
SQL> @<application_export_n>.sql

I hope it helps!

ORA-19665: size % in file header does not match actual file size of %

That’s an unexpected message to get, right?
I got it related to an ORA-07445: exception encountered: core dump [kcflfi()+1016] [SIGFPE] [Integer divide by zero] [0x10047EF18] [] []

What’s next?
After some checks, I found the following (the related trace gave the file name, which matches file_id 106):

SQL> select file_name, bytes from dba_data_files where file_id=106;

FILE_NAME                                         BYTES 
------------------------------------------------ --------------
+DATA/MYDB/DATAFILE/DATAFILE_XX.558.1015447173   14529069056

SQL> select name, bytes from v$datafile where file#= 106;

NAME                                             BYTES 
------------------------------------------------ --------------
+DATA/MYDB/DATAFILE/DATAFILE_XX.558.1015447173   14529067281

This means the data dictionary (dba_data_files) and the control file (v$datafile) register different sizes for the datafile.
Looking at MOS, it seems to match ORA-07445: Exception Encountered (Doc ID 1958870.1).

How to resolve it?

SQL> alter database datafile 106 offline to drop;
RMAN> restore datafile 106;
RMAN> recover datafile 106;
SQL> alter database datafile 106 online;
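
After bringing the datafile back online, it's worth re-running the two queries from above as a sanity check; the dictionary and the control file should now report matching sizes:

SQL> select file_name, bytes from dba_data_files where file_id=106;
SQL> select name, bytes from v$datafile where file#=106;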

This resolved my case, fixing the views.

BE CAREFUL:

  • Make sure you have a backup before dropping the datafile.
  • Make sure you can put the datafile offline or proceed in a non-business hour.
  • Follow change procedures for Production, of course. Things may get wild.

And what if I don't have a backup?
1. You may want to take one now. It may not work, though, considering the original mismatch.
2. Export/Import logically:
– Export the data from the related tablespace (Datapump or Legacy Export, check for limitations and datatypes).
– Drop the tablespace and recreate it.
– Import the data back.

As usual, test it in a non-production environment to validate your plan and commands.

I hope it helps!

Exadata Healthcheck – Top 5 Tools and Features!

Hi all,
It's nothing new for Oracle DBAs that we have countless great tools out of the box to help us with our daily tasks, such as ORAchk/EXAchk/ODAchk, Database Security Assessment Tool (DBSAT), Hang Manager, Cluster Health Advisor (CHA), Cluster Verification Utility (CVU), Memory Guard, Trace File Analyzer (TFA) with tools like oratop, procwatcher, oswatcher, pstack, RDA, and the list goes on and on…

The good news is that most of these tools are now bundled in the Autonomous Health Framework (AHF) since version 12.2. None of them runs by default, though, so you might need to choose some to start and enable in your environment.

But out of this whole list, what if we could pick the top 5 features we can and should use as a starting point for an Exadata environment? Well, I picked mine; see it below.

Oh, and by the way, you don't pay anything extra for them, provided you already have Oracle Support Services!

1. Cluster Health Advisor – Calibrate your Exa Environment!

Available along with the AHF since 12.2, CHA works together with the Cluster Health Monitor to provide fine-grained notifications and correlations about your environment. And when I say it, I mean it: YOUR environment. This is because CHA works better if you calibrate it with your own statistics. As usual, not the worst problematic day or the low-workload night, but an average day that can be used as a reference. All of this is stored in the GIMR (as shown below) and used for future comparison and model inference.

This means CHA is not a long list of IFs with fixed metrics, but an intelligent tool monitoring over 127 processes that perform work based on your workload. Not only that, CHA is enriched with machine learning algorithms that model over 30 known database problems based on over 150 metric predictors.
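
For reference, a minimal calibration and diagnosis sketch with chactl (database name, model name and time ranges below are placeholders; check the exact syntax for your version):

$ chactl status
$ chactl calibrate database -db mydb -model daytime -timeranges 'start=2021-06-14 08:00:00,end=2021-06-14 18:00:00'
$ chactl query diagnosis -db mydb -start "2021-06-16 10:00:00" -end "2021-06-16 11:00:00"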

 

An example of inference can be seen below, where network and Global Cache statistics are used to infer a network issue.

Not rocket science, but it's always nice to have someone digesting tons of logs and metrics and reaching this sort of conclusion unassisted, right? You, as the DBA, can steal all the credit for the finding, no hard feelings.

And this is just one of the things CHA provides. It has tons of other functionalities. You should try using it more!

 

2. EXAchk – Daily Automated Runs (and Reports)

Most likely, if you have an Exadata, you are used to running EXAchk from time to time to review the recommendations and best practices for your environment. It requires almost no effort to run and to copy the reports out, or you have most likely created a script to do so. What if I told you Oracle has now automated this with AHF?

All you need to do is confirm the scheduled runs and set the address the reports should be sent to. Find below a quick cheatsheet:

a. Checking Status of the EXAchk

[root@exa01dbadm01 ~]# exachk -d info
------------------------------------------------------------

Master node = exa01dbadm01

exachk daemon version = 211300

Install location = /opt/oracle.ahf/exachk

Started at = Wed Jun 16 11:58:03 MDT 2021

Scheduler type = TFA Scheduler


[root@exa01dbadm01 ~]# exachk -d status
exachk is using TFA Scheduler. TFA PID: 369350

b. Checking Status of TFA Daemon Status and Auto Start

[root@exa01dbadm01 ~]# ahfctl statusahf

.-----------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+--------------+---------------+--------+------+------------+----------------------+------------------+
| exa01dbadm01 | RUNNING | 369350 | 5000 | 21.1.3.0.0 | 21130020210607124914 | COMPLETE |
| exa01dbadm02 | RUNNING | 118950 | 5000 | 21.1.3.0.0 | 21130020210607124914 | COMPLETE |
'--------------+---------------+--------+------+------------+----------------------+------------------'

------------------------------------------------------------

Master node = exa01dbadm01

exachk daemon version = 211300

Install location = /opt/oracle.ahf/exachk

Started at = Wed Jun 16 11:58:03 MDT 2021

Scheduler type = TFA Scheduler

------------------------------------------------------------
ID: exachk.autostart_client_exatier1
------------------------------------------------------------
AUTORUN_FLAGS = -usediscovery -profile exatier1 -syslog -dball -showpass -tag autostart_client_exatier1 -readenvconfig
COLLECTION_RETENTION = 7
AUTORUN_SCHEDULE = 3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: exachk.autostart_client
------------------------------------------------------------
AUTORUN_FLAGS = -usediscovery -syslog -tag autostart_client -readenvconfig
COLLECTION_RETENTION = 14
AUTORUN_SCHEDULE = 3 3 * * 0
------------------------------------------------------------

Next auto run starts on Jun 17, 2021 02:03:00

ID:exachk.AUTOSTART_CLIENT_EXATIER1

c. Gather EXAchk Next Automated Run

[root@exa01dbadm01 ~]# exachk -d nextautorun

Next auto run starts on Jun 17, 2021 02:03:00

ID:exachk.AUTOSTART_CLIENT_EXATIER1

[root@exa01dbadm01 ~]#

d. Changing EXAchk Notifications:

[root@exa01dbadm01 ~]# exachk -get NOTIFICATION_EMAIL,AUTORUN_SCHEDULE,COLLECTION_RETENTION
------------------------------------------------------------
ID: exachk.autostart_client_exatier1
------------------------------------------------------------
COLLECTION_RETENTION = 7
AUTORUN_SCHEDULE = 3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: exachk.autostart_client
------------------------------------------------------------
COLLECTION_RETENTION = 14
AUTORUN_SCHEDULE = 3 3 * * 0
------------------------------------------------------------


[root@exa01dbadm01 ~]# exachk -id autostart_client -set NOTIFICATION_EMAIL=boesing@pythian.com

Updated attribute ['NOTIFICATION_EMAIL=boesing@pythian.com'] for Id[exachk.AUTOSTART_CLIENT]

Successfully copied Daemon Store to Remote Nodes


[root@exa01dbadm01 ~]# exachk -get NOTIFICATION_EMAIL,AUTORUN_SCHEDULE,COLLECTION_RETENTION
------------------------------------------------------------
ID: exachk.autostart_client_exatier1
------------------------------------------------------------
COLLECTION_RETENTION = 7
AUTORUN_SCHEDULE = 3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: exachk.autostart_client
------------------------------------------------------------
NOTIFICATION_EMAIL = boesing@pythian.com
COLLECTION_RETENTION = 14
AUTORUN_SCHEDULE = 3 3 * * 0
------------------------------------------------------------

[root@exa01dbadm01 ~]# exachk -id autostart_client_exatier1 -set NOTIFICATION_EMAIL=boesing@pythian.com
Updated attribute ['NOTIFICATION_EMAIL=boesing@pythian.com'] for Id[exachk.AUTOSTART_CLIENT_EXATIER1]

Successfully copied Daemon Store to Remote Nodes


[root@exa01dbadm01 ~]# exachk -get NOTIFICATION_EMAIL,AUTORUN_SCHEDULE,COLLECTION_RETENTION
------------------------------------------------------------
ID: exachk.autostart_client_exatier1
------------------------------------------------------------
NOTIFICATION_EMAIL = boesing@pythian.com
COLLECTION_RETENTION = 7
AUTORUN_SCHEDULE = 3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: exachk.autostart_client
------------------------------------------------------------
NOTIFICATION_EMAIL = boesing@pythian.com
COLLECTION_RETENTION = 14
AUTORUN_SCHEDULE = 3 3 * * 0
------------------------------------------------------------

e. Change EXAchk Schedule and Retention

[root@exa01dbadm01 ~]# exachk -id autostart_client_exatier1 -set "AUTORUN_SCHEDULE=0 3 * * *"   -> Time = 3 AM daily
[root@exa01dbadm01 ~]# exachk -id autostart_client -set "collection_retention=90"

f. EXAchk: Testing Email Sending and Running EXAchk Report over email

This is for ad-hoc testing of the email sending, outside of the scheduled runs.

[root@exa01dbadm01 ~]# exachk -testemail notification_email=boesing@pythian.com
Email Successfully sent to ['boesing@pythian.com'] from 'root@exa01dbadm01
[root@exa01dbadm01 ~]# exachk -sendemail notification_email=boesing@pythian.com


Searching for running databases . . . . .

. . . . . . . . . . . .
List of running databases registered in OCR

1. xxxxxx
2. yyyy
3. None of above

Select databases from list for checking best practices. For multiple databases, select 3 for All or comma separated number like 1,2 etc [1-3][3].
[...]
Detailed report (html) - /u01/app/oracle/oracle.ahf/data/exa01dbadm01/exachk/user_root/output/exachk_exa01dbadm01_xxxxx_061621_134748/exachk_exa01dbadm01_xxxxx_061621_134748.html

UPLOAD [if required] - /u01/app/oracle/oracle.ahf/data/exa01dbadm01/exachk/user_root/output/exachk_exa01dbadm01_xxxxxx_061621_134748.zip
Email Successfully sent to ('boesing@pythian.com',) from 'root@exa01dbadm01' with attachment

3. TFA – Sanitize and Mask Options

Even with all the growing concern about sensitive data, this one actually surprised me: it is possible to sanitize and mask data in TFA collections. For example, mask will hide your inner data (let's say, table names):

[root@exa01dbadm01 ~]# tfactl diagcollect -srdc ORA-00600 -mask

Sanitize will hide your hardware settings. Not that useful if you have an Exadata, but it might be interesting if you have commodity hardware you don't want Oracle to know about.

[root@exa01dbadm01 ~]# tfactl diagcollect -srdc ORA-00600 -sanitize

4. TFA Changes – “Nothing was Changed” Resolver Tool

This is for all the DBAs who have already had this dialogue:

Client: Yesterday was running fine, and today it’s veeeery slow. Nothing was changed!
DBA: Something changed, that’s for sure.
Client: Absolutely nothing changed.

So now we can assess whether indeed nothing changed from the client's perspective (perhaps it was an automatic statistics gathering or something similar), or whether somebody did change something that is hard to identify.

It takes parameters from the OS and the database, keeps track of old and new values, and reports the changes:

[root@exa01dbadm01 ~]# tfactl changes

Output from host : exa01dbadm02
------------------------------
No Changes Found

Output from host : exa01dbadm01
------------------------------
[Nov/14/2021 00:08:33.000]: [db.dbprod19.dbprod191]: Parameter: log_archive_dest_2: Value: service=dbprod19stb => ASYNC NOAFFIRM delay=240 optional compression=disable max_failure=0 reopen=300 db_unique_name=dbprod19stb net_timeout=300
[Nov/14/2021 00:08:33.000]: [db.dbprod19.dbprod191]: Parameter: log_archive_dest_2: Value: service=dbprod19stb => valid_for=(online_logfile,all_roles)

5. Oracle Health Check Collections Manager

It's not a surprise if you don't know this tool, but I'd really recommend you look it up now. It's a great tool and, as with everything in this post, it's free!

Oracle Health Check Collections Manager is an APEX companion application to Oracle EXAchk that gives you an enterprise-wide view of your health check collection data. All you need is an APEX 4.2 or 5 installation to deploy the tool. The main idea is that you can consolidate all your reports in one place and, as a plus, manage all your EXAchk reports over time, including a view of any regressions on best-practice items.

This is an example of the view of the collections:

And this is an example of a new best practices failure:

Do you agree with my top list? Let me know your thoughts!

Creating a Read-Only Account on Database with VPD or Label Security

Hi all,

This is an interesting case: a bit tricky to understand, but easy to resolve.

The whole story started when a client asked for a Read-Only account (let's call it RO_USER) with access to objects under another schema (let's call it SCHEMA_OWNER). Easy going, right? (A quick SQL sketch of these steps follows the list below.)

  • Create user
  • Grant select on SCHEMA_OWNER tables
  • Possibly grant execute on SCHEMA_OWNER procedures/packages/functions
  • Possibly private synonyms on RO_USER for SCHEMA_OWNER objects
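
In SQL, a minimal sketch of those steps, using the names from this example (extend the grants/synonyms to the actual object list):

SQL> create user RO_USER identified by "a_strong_password";
SQL> grant create session to RO_USER;
SQL> grant select on SCHEMA_OWNER.TABLE_EXAMPLE to RO_USER;
SQL> create synonym RO_USER.TABLE_EXAMPLE for SCHEMA_OWNER.TABLE_EXAMPLE;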

However, when connecting with RO_USER and executing a query on a table, that’s what happened:

select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

COUNT(*)
----------
0

When connecting with SCHEMA_OWNER and executing the same query:

select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

COUNT(*)
----------
9255013

Hm, in general, the known limitations for this approach are:

  • Private database links: In case this is the issue, the only alternative is using Proxy Connection. Trying this:
SQL> alter user SCHEMA_OWNER grant connect through RO_USER;

User altered.

SQL> conn RO_USER[SCHEMA_OWNER]/***********
Connected.
SQL> select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;

  COUNT(*)
----------
   9255013

WORKING!

However, when checking the scenario, I noticed this is a real table, not a synonym or a view over a private database link. So why is that?

Also, this alternative creates some problems, as the RO user would now be able to run DML on the SCHEMA_OWNER tables; not read-only access anymore.

Well, the other option:

  • VPD or Label Security: Limit access to data depending on the current schema. That’s a match:
SQL> select object_owner,object_name,policy_name,function, PACKAGE from dba_policies where object_name='TABLE_EXAMPLE';

OBJECT_OWNER	     OBJECT_NAME		    POLICY_NAME 		   FUNCTION	         PACKAGE
-------------------- ------------------------------ ------------------------------ --------------------- ---------
SCHEMA_OWNER	     TABLE_EXAMPLE		     POLICY_EXAMPLE	           FCN_TABLE_EXAMPLE	 PKG_EXAMPLE

OK!

So what to do?

Here is the trick: https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvpoli.htm#i1006985

Using EXEMPT ACCESS POLICY. As per Oracle Document “[…] database users granted the EXEMPT ACCESS POLICY privilege, either directly or through a database role, are exempt from VPD enforcements.”

This is also valid for Datapump and Legacy Export as per MOS When Is Privilege “Exempt Access Policy” Needed For Export? (Doc ID 2339517.1).

Let’s check for it:

SQL> GRANT EXEMPT ACCESS POLICY TO RO_USER;
Grant succeeded.

SQL> select count(*) from SCHEMA_OWNER.TABLE_EXAMPLE;
COUNT(*)
----------
 9255015

And what about Label Security?

That was not my case, as you could see, but as per the same Oracle Document: ” They are also exempt from some Oracle Label Security policy enforcement controls, such as READ_CONTROL and CHECK_CONTROL, regardless of the export mode, application, or utility used to access the database or update its data.”

I hope it helps you!

The cluster upgrade state is [ROLLING PATCH] with correct Patch Level in all nodes

Hi all,
When performing a spot health check in a client environment, I got this:

[oracle@dbserver1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@dbserver1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [26717470].

Checking the applied patches on the 3 nodes, they all match:

+ASM1@dbserver1 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM2@dbserver2 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

+ASM3@dbserver3 > kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328
21125181
21359749
21436941
21527488
21694919
21949015
22806133
23144544
24007012
24340679
24732088
24846605
25397136
25869760
26392164
26392192
26609798
26717470

So, most likely some patching operation didn't complete cleanly. Here is the quick fix:

$GI_HOME/bin/clscfg -patch
$GI_HOME/bin/crsctl stop rollingpatch
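
Once that completes, re-checking the active version (same command as above) should report the cluster upgrade state back to [NORMAL]:

[oracle@dbserver1 ~]$ crsctl query crs activeversion -f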

Once done, issue fixed!
I hope it helps!

OMS Opatch out of space on /tmp

This is a quick and old one, but still a good one. While installing OPatch using the JAR file, I got an error saying I had no space under /tmp.

This is a new server that I'm building, and I missed requesting a bigger /tmp.

So what to do now?

This installer does not honor environment variables such as TMPDIR, so you need to set _JAVA_OPTIONS with a path to replace its default temp location.

grepora01[oracle] /u01/software/6880880 $ # $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar  -silent oracle_home=$ORACLE_HOME -J-Djava.io.tmpdir=/u01/tmp
Launcher log file is /tmp/OraInstall2021-03-24_12-53-25PM/launcher2021-03-24_12-53-25PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2593.207 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 4099 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 0 MB    Failed <<<<
Some system prerequisite checks failed.
You must fulfill these requirements before continuing.
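
Before the fix, make sure the alternative temp location exists and has enough free space (paths follow the example in this post):

grepora01[oracle] /u01/software/6880880 $ # df -h /u01
grepora01[oracle] /u01/software/6880880 $ # mkdir -p /u01/tmp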

To fix it, you can simply:

grepora01[oracle] /u01/software/6880880 $ # export _JAVA_OPTIONS="-Djava.io.tmpdir=/u01/tmp"
grepora01[oracle] /u01/software/6880880 $ # $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar  -silent oracle_home=$ORACLE_HOME
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/u01/tmp
Launcher log file is /u01/tmp/OraInstall2021-03-24_01-30-12PM/launcher2021-03-24_01-30-12PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2593.207 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 4099 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 198503 MB    Passed
Preparing to launch the Oracle Universal Installer from /u01/tmp/OraInstall2021-03-24_01-30-12PM
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/u01/tmp

Installation Summary

Disk Space : Required 34 MB, Available 198,462 MB
Feature Sets to Install:
        Next Generation Install Core 13.9.4.0.1
        OPatch 13.9.4.2.5
        OPatch Auto OPlan 13.9.4.2.5
Session log file is /u01/tmp/OraInstall2021-03-24_01-30-12PM/install2021-03-24_01-30-12PM.log

Loading products list. Please wait.

 1%
 40%
Loading products. Please wait.
 42%
[...]
 99%

Updating Libraries

Starting Installations
 1%
[...]
 95%

Install pending

Installation in progress

Component : oracle.glcm.logging 1.6.4.0.0

Copying files for oracle.glcm.logging 1.6.4.0.0

Component : oracle.glcm.comdev 7.8.4.0.0

Copying files for oracle.glcm.comdev 7.8.4.0.0

 Component : oracle.glcm.dependency 1.8.4.0.0

Copying files for oracle.glcm.dependency 1.8.4.0.0

 Component : oracle.glcm.xmldh 3.4.4.0.0

Copying files for oracle.glcm.xmldh 3.4.4.0.0

 Component : oracle.glcm.wizard 7.8.4.0.0

Copying files for oracle.glcm.wizard 7.8.4.0.0

 Component : oracle.nginst.common 13.9.4.0.0

Copying files for oracle.nginst.common 13.9.4.0.0

 Component : oracle.nginst.core 13.9.4.0.0

Copying files for oracle.nginst.core 13.9.4.0.0

 Component : oracle.glcm.opatch.common.api 13.9.4.0.0

Copying files for oracle.glcm.opatch.common.api 13.9.4.0.0

 Component : oracle.glcm.encryption 2.7.4.0.0

Copying files for oracle.glcm.encryption 2.7.4.0.0

 Component : oracle.swd.opatch 13.9.4.2.5

Copying files for oracle.swd.opatch 13.9.4.2.5

 Component : oracle.glcm.osys.core 13.9.1.0.0

Copying files for oracle.glcm.osys.core 13.9.1.0.0

Install successful

Post feature install pending

Post Feature installing

 Feature Set : glcm_encryption_lib

 Feature Set : oracle.glcm.osys.core.classpath

 Feature Set : commons-cli_1.3.1.0.0

 Feature Set : oracle.glcm.opatch.common.api.classpath

Post Feature installing commons-cli_1.3.1.0.0

Post Feature installing oracle.glcm.opatch.common.api.classpath

 Feature Set : glcm_common_lib

 Feature Set : glcm_common_logging_lib

Post Feature installing glcm_common_lib

Post Feature installing glcm_common_logging_lib

 Feature Set : oracle.glcm.opatchauto.core.classpath

 Feature Set : oracle.glcm.opatchauto.core.binary.classpath

Post Feature installing oracle.glcm.opatchauto.core.classpath

Post Feature installing oracle.glcm.opatchauto.core.binary.classpath

Post Feature installing oracle.glcm.osys.core.classpath

Post Feature installing glcm_encryption_lib

 Feature Set : oracle.glcm.oplan.core.classpath

Post Feature installing oracle.glcm.oplan.core.classpath

 Feature Set : oracle.glcm.opatchauto.core.actions.classpath

Post Feature installing oracle.glcm.opatchauto.core.actions.classpath

 Feature Set : oracle.glcm.opatchauto.core.wallet.classpath

Post Feature installing oracle.glcm.opatchauto.core.wallet.classpath

Post feature install complete

String substitutions pending

String substituting

 Component : oracle.glcm.logging 1.6.4.0.0

String substituting oracle.glcm.logging 1.6.4.0.0

 Component : oracle.glcm.comdev 7.8.4.0.0

String substituting oracle.glcm.comdev 7.8.4.0.0

 Component : oracle.glcm.dependency 1.8.4.0.0

String substituting oracle.glcm.dependency 1.8.4.0.0

 Component : oracle.glcm.xmldh 3.4.4.0.0

String substituting oracle.glcm.xmldh 3.4.4.0.0

 Component : oracle.glcm.wizard 7.8.4.0.0

String substituting oracle.glcm.wizard 7.8.4.0.0

 Component : oracle.nginst.common 13.9.4.0.0

String substituting oracle.nginst.common 13.9.4.0.0

 Component : oracle.nginst.core 13.9.4.0.0

String substituting oracle.nginst.core 13.9.4.0.0

 Component : oracle.glcm.opatch.common.api 13.9.4.0.0

String substituting oracle.glcm.opatch.common.api 13.9.4.0.0

 Component : oracle.glcm.encryption 2.7.4.0.0

String substituting oracle.glcm.encryption 2.7.4.0.0

 Component : oracle.swd.opatch 13.9.4.2.5

String substituting oracle.swd.opatch 13.9.4.2.5

 Component : oracle.glcm.osys.core 13.9.1.0.0

String substituting oracle.glcm.osys.core 13.9.1.0.0

String substitutions complete

Link pending

Linking in progress

 Component : oracle.glcm.logging 1.6.4.0.0

Linking oracle.glcm.logging 1.6.4.0.0

 Component : oracle.glcm.comdev 7.8.4.0.0

Linking oracle.glcm.comdev 7.8.4.0.0

 Component : oracle.glcm.dependency 1.8.4.0.0

Linking oracle.glcm.dependency 1.8.4.0.0

 Component : oracle.glcm.xmldh 3.4.4.0.0

Linking oracle.glcm.xmldh 3.4.4.0.0

 Component : oracle.glcm.wizard 7.8.4.0.0

Linking oracle.glcm.wizard 7.8.4.0.0

 Component : oracle.nginst.common 13.9.4.0.0

Linking oracle.nginst.common 13.9.4.0.0

 Component : oracle.nginst.core 13.9.4.0.0

Linking oracle.nginst.core 13.9.4.0.0

 Component : oracle.glcm.opatch.common.api 13.9.4.0.0

Linking oracle.glcm.opatch.common.api 13.9.4.0.0

 Component : oracle.glcm.encryption 2.7.4.0.0

Linking oracle.glcm.encryption 2.7.4.0.0

 Component : oracle.swd.opatch 13.9.4.2.5

Linking oracle.swd.opatch 13.9.4.2.5

 Component : oracle.glcm.osys.core 13.9.1.0.0

Linking oracle.glcm.osys.core 13.9.1.0.0

Linking in progress

Link successful

Setup pending

Setup in progress

 Component : oracle.glcm.logging 1.6.4.0.0

Setting up oracle.glcm.logging 1.6.4.0.0

 Component : oracle.glcm.comdev 7.8.4.0.0

Setting up oracle.glcm.comdev 7.8.4.0.0

 Component : oracle.glcm.dependency 1.8.4.0.0

Setting up oracle.glcm.dependency 1.8.4.0.0

 Component : oracle.glcm.xmldh 3.4.4.0.0

Setting up oracle.glcm.xmldh 3.4.4.0.0

 Component : oracle.glcm.wizard 7.8.4.0.0

Setting up oracle.glcm.wizard 7.8.4.0.0

 Component : oracle.nginst.common 13.9.4.0.0

Setting up oracle.nginst.common 13.9.4.0.0

 Component : oracle.nginst.core 13.9.4.0.0

Setting up oracle.nginst.core 13.9.4.0.0

 Component : oracle.glcm.opatch.common.api 13.9.4.0.0

Setting up oracle.glcm.opatch.common.api 13.9.4.0.0

 Component : oracle.glcm.encryption 2.7.4.0.0

Setting up oracle.glcm.encryption 2.7.4.0.0

Component : oracle.swd.opatch 13.9.4.2.5

Setting up oracle.swd.opatch 13.9.4.2.5

Component : oracle.glcm.osys.core 13.9.1.0.0

Setting up oracle.glcm.osys.core 13.9.1.0.0

Setup successful

Save inventory pending

Saving inventory

96%

Saving inventory complete

 97%

Configuration complete

Component : glcm_common_logging_lib

Component : glcm_common_lib

Saving the inventory glcm_common_lib

Saving the inventory glcm_common_logging_lib

Component : oracle.glcm.opatch.common.api.classpath

Saving the inventory oracle.glcm.opatch.common.api.classpath

Component : cieCfg_common_rcu_lib

Saving the inventory cieCfg_common_rcu_lib

Component : oracle.glcm.osys.core.classpath

Saving the inventory oracle.glcm.osys.core.classpath

Component : cieCfg_common_lib

Saving the inventory cieCfg_common_lib

Component : oracle.glcm.logging

Saving the inventory oracle.glcm.logging

Component : oracle.glcm.oplan.core.classpath

Saving the inventory oracle.glcm.oplan.core.classpath

Component : glcm_encryption_lib

Saving the inventory glcm_encryption_lib

Component : svctbl_lib

Saving the inventory svctbl_lib

Component : com.bea.core.binxml_dependencies

Saving the inventory com.bea.core.binxml_dependencies

Component : svctbl_jmx_client

Saving the inventory svctbl_jmx_client

Component : cieCfg_wls_shared_lib

Saving the inventory cieCfg_wls_shared_lib

Component : rcuapi_lib

Saving the inventory rcuapi_lib

Component : rcu_core_lib

Saving the inventory rcu_core_lib

Component : cieCfg_cam_lib

Saving the inventory cieCfg_cam_lib

Component : cieCfg_cam_external_lib

Saving the inventory cieCfg_cam_external_lib

Component : cieCfg_cam_impl_lib

Saving the inventory cieCfg_cam_impl_lib

Component : cieCfg_wls_lib

Saving the inventory cieCfg_wls_lib

Component : cieCfg_wls_external_lib

Saving the inventory cieCfg_wls_external_lib

Component : cieCfg_wls_impl_lib

Saving the inventory cieCfg_wls_impl_lib

Component : rcu_dependencies_lib

Saving the inventory rcu_dependencies_lib

Component : oracle.fmwplatform.fmwprov_lib

Saving the inventory oracle.fmwplatform.fmwprov_lib

Component : fmwplatform-wlst-dependencies

Saving the inventory fmwplatform-wlst-dependencies

Component : oracle.fmwplatform.ocp_lib

Saving the inventory oracle.fmwplatform.ocp_lib

Component : oracle.fmwplatform.ocp_plugin_lib

Saving the inventory oracle.fmwplatform.ocp_plugin_lib

Component : wlst.wls.classpath

Saving the inventory wlst.wls.classpath

Component : maven.wls.classpath

Saving the inventory maven.wls.classpath

Component : com.oracle.webservices.fmw.ws-assembler

Saving the inventory com.oracle.webservices.fmw.ws-assembler

Component : sdpmessaging_dependencies

Saving the inventory sdpmessaging_dependencies

Component : sdpclient_dependencies

Saving the inventory sdpclient_dependencies

Component : oracle.jrf.wls.classpath

Saving the inventory oracle.jrf.wls.classpath

Component : oracle.jrf.wlst

Saving the inventory oracle.jrf.wlst

Component : fmwshare-wlst-dependencies

Saving the inventory fmwshare-wlst-dependencies

Component : oracle.fmwshare.pyjar

Saving the inventory oracle.fmwshare.pyjar

Component : mapviewer-client

Saving the inventory mapviewer-client

Component : bitech-analysis-application-thirdparty

Saving the inventory bitech-analysis-application-thirdparty

Component : glcm_common_logging_lib

Component : glcm_common_lib

Saving the inventory glcm_common_logging_lib

Saving the inventory glcm_common_lib

Component : oracle.glcm.opatch.common.api.classpath

Component : glcm_encryption_lib

Saving the inventory oracle.glcm.opatch.common.api.classpath

Saving the inventory glcm_encryption_lib

Component : oracle.glcm.oplan.core.classpath

Component : oracle.glcm.osys.core.classpath

Saving the inventory oracle.glcm.oplan.core.classpath

Saving the inventory oracle.glcm.osys.core.classpath

The install operation completed successfully.

Logs successfully copied to /u01/app/oraInventory/logs.

 

 

Relying on Guaranteed Restore Points? Be careful!

Hi all,

Are you relying on Guaranteed Restore Points (GRP) as a fallback plan for your migration or upgrade strategy? Be careful!
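For context, here is a minimal sketch of how such a fallback GRP is usually created and checked before the maintenance; the restore point name is just illustrative:

-- Create a guaranteed restore point before starting the upgrade
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- Confirm it exists and that the guarantee flag is set
SELECT name, scn, time, guarantee_flashback_database FROM v$restore_point;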

While performing a non-Prod upgrade with the AutoUpgrade tool, I wanted to roll it back after completing the upgrade and go through the process again. This is what happened:

SQL> startup
ORA-29702: error occurred in Cluster Group Service operation

While looking into it, I found this blog post from Mike that I had missed last year: https://mikedietrichde.com/2020/11/13/ora-29702-and-your-instance-does-not-startup-in-the-cluster-anymore/

This means my database is not starting anymore! Oh man, glad that I’m in the testing phase!

This is caused by Bug 31561819 – Incompatible maxmembers at CRSD Level Causing Database Instance Not Able to Start.

As per Mike’s post, “you don’t need to even restore or flashback a database to hit this error. A simple instance in NOMOUNT state leads to the same error. Without even any datafile.”

The bug is fixed on:

  • 19.9.0.0.201020 (Oct 2020) OCW RU
  • 18.12.0.0.201020 (Oct 2020) OCW RU
  • 12.2.0.1.201020 (Oct 2020) OCW RU

That said, you should include this fix BEFORE starting any move! Do it right away if you are on these versions!
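If you are not sure whether your Grid Infrastructure home already carries the fix, one quick sanity check is to list the installed patches and confirm the OCW RU level; GRID_HOME below is just a placeholder for your GI home path:

# Run as the GI owner; check that the OCW RU is at or above the Oct 2020 level
$GRID_HOME/OPatch/opatch lspatches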

Also, be aware of the latest change regarding Restore Point propagation in 19c, described in the MOS note Automatic Propagate Restore Points from Primary to Standby site in 19C (Doc ID 2463082.1).

In my case, the GRP was intended exactly for a 12.1 -> 19c upgrade, so the fix is not even available (no Extended Support in place). Because of that, we had to think of alternate fallback plans, like a physical standby. But that is a topic for another post.

So for YOU:

  • Apply this patch if you can!
  • If not, be very careful with your fallback plans and, as usual: test, test and test!

See you next post!

Exadata DNS Change – Pitfalls to be avoided

Hi all, it’s been a while but here I am!

There were some infrastructure changes at the place I work, and I was asked to do a DNS change on a somewhat old Exadata X5. I had never done one before, so the idea of this post is to help others who might face the same issues I had.

The first thing I did was look up the documentation and review the steps. Yes, there are blog posts about it, but the documentation helps to get at least a first overview of the situation.

Long story short: Exadata has lots of components and the new DNS should be changed on all of them.

Here is a summary of the steps.

Infiniband switches

Connect to each switch, switch to the ilom-admin user, and change the DNS servers:

su - ilom-admin
show /SP/clients/dns
set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3
show /SP/clients/dns

 

Database nodes

For my image I only needed to change /etc/resolv.conf; if you have a newer one, you will need to use ipconf. That's why you need to go to the documentation: at least there we hope they will mention the pitfalls (well, keep reading and you will see that was not my case).
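As a minimal sketch, assuming the same nameserver IPs used throughout this post and a purely illustrative search domain, /etc/resolv.conf would end up looking something like this:

# /etc/resolv.conf - illustrative values only
search mycompany.com
nameserver 192.168.16.1
nameserver 192.168.16.2
nameserver 192.168.16.3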

I also changed the DNS on each database node ILOM, running ipmitool from each node:

ipmitool sunoem cli 'show /SP/clients/dns'
ipmitool sunoem cli 'set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3'
ipmitool sunoem cli 'show /SP/clients/dns'


Cell nodes – Here things start to get interesting

For the storage cells there are some points that need to be taken into consideration:

Increase the ASM disk_repair_time – the goal here is to avoid a full rebalance if the work is completed within that window. If you don't know this parameter: ASM will wait up to the interval specified by DISK_REPAIR_TIME for the disk(s) to come back online. If they come back within that interval, a resync operation occurs, where only the extents modified while the disks were offline are written to them once they are back online. If the disk(s) do not come back within that interval, ASM initiates a forced drop of the disk(s), which triggers a rebalance.
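A minimal sketch of how to check and temporarily raise the attribute from an ASM instance; the disk group name DATA and the 8.5h value are just examples:

-- Check the current value for each disk group
SELECT dg.name AS diskgroup, a.name, a.value
  FROM v$asm_diskgroup dg, v$asm_attribute a
 WHERE dg.group_number = a.group_number
   AND a.name = 'disk_repair_time';

-- Raise it so it comfortably covers the maintenance window
ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '8.5h';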

On each cell node we need to make sure all disks are OK, inactivate all grid disks, stop all cell services, and use ipconf to change the DNS configuration:

#Check that putting the grid disks offline will not cause a problem for Oracle ASM - the 3rd column should say YES for all disks
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus,asmdeactivationoutcome

#Inactivate all grid disks on the cell - may take a while to complete
cellcli -e ALTER GRIDDISK ALL INACTIVE


#Confirm the grid disks are offline, it should show asmmodestatus=OFFLINE or asmmodestatus=UNUSED, and asmdeactivationoutcome=Yes for all grid disks
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus,asmdeactivationoutcome

#Confirm that the disks are offline
cellcli -e LIST GRIDDISK

#Shut down the cell services and ocrvottargetd service
cellcli -e ALTER CELL SHUTDOWN SERVICES ALL
service ocrvottargetd stop # on some images this service does not exist

To execute ipconf the old way, we only need to call it and follow the prompts; but if you have a newer image, you will need to provide its parameters, as shown in the documentation.

The documentation says that after this we could start the cell services back up, but I would recommend validating the DNS before doing that. Why? Because mine did not work, and I could have had a bigger issue: a cell node without DNS trying to start its services.

So, how to test? Use nslookup, dig and curl:

nslookup dns_domain.com
curl -v 192.168.16.1:53
dig another_server_in_the_network

 

My tests did not work: I was able to ping the DNS servers but not to resolve any name. I had an SR open on MOS, but it did not help much either. Since this is a production system, I checked whether the firewall was up on the Linux side, and to my surprise it was.
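For reference, a quick way to check that on the cell node (a sketch of what I looked at, assuming the classic iptables service used on these images):

# Check whether the firewall is active and list the current rules
service iptables status
iptables -L -n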

I tried to manually add rules to iptables, but it did not work. Then I came across this note: Exadata: New DNS server is not accessible after changing using IPCONF (Doc ID 1581417.1).

And there it was, I needed to restart the cellwall service to recreate the iptables rules.

# Restart cellwall service
service cellwall restart
service cellwall status

One final point: check whether ASM started a rebalance or not. If it did, do not bring down another cell node until the rebalance is finished, otherwise you may run into deeper issues.
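A simple way to check is to query the ongoing ASM operations from an ASM instance; any REBAL row still showing up means the rebalance has not finished yet:

-- No rows returned means no rebalance (or other operation) is running
SELECT inst_id, group_number, operation, state, power, est_minutes
  FROM gv$asm_operation;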

 

I hope it helps!

Elisson Almeida