ORA-02019 While SELECT From A View Owned By Another User Using Dblink

Quick case today.

This week I had a client hitting ORA-02019 when running a SELECT against a view over a dblink, with CONNECT BY PRIOR … START WITH in the query.
The situation: views on one database needed to be accessed by a group of users from another database, through a proxy user and a DB link, but every attempt raised this ORA error.

The root cause?

ORA-02019 while selecting from a view owned by another user, over a dblink, is a match to Bug 26558437 – DATABASE LINK FAILS WITH ORA-2019 WHEN SELECT ANOTHER USER VIEW.

But MOS doesn’t offer any workaround besides applying the patch, as usual. So what did we do to solve it in our case?

We created a materialized view, refreshed every 15 minutes (the solution supported this delay), using the DB link as the view owner.
This way, instead of executing the query against the view, the other users are actually querying the table created (and refreshed) by the mview, whose code is executed only by the mview owner.
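A minimal sketch of the mview (all names here are placeholders: app_owner is the local schema, remote_view is the view on the other DB, and remote_db is the existing DB link connecting as the view owner; adjust the interval to whatever delay your case supports):

-- Created once by the mview owner; users then query this object instead of the remote view
CREATE MATERIALIZED VIEW app_owner.mv_remote_view
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 15/1440   -- every 15 minutes
AS
SELECT * FROM remote_view@remote_db;

Then just grant SELECT on the mview to the group of users, so their sessions never touch the dblink directly.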

By the way, 2 good side effects:
1) Since the view was queried more often than the refresh period, this solution also saves some effort database-wise: selecting from the mview is way cheaper than running the view code, besides not touching the network/dblink on every execution.
2) If the remote database gets slow, or goes down, the data is still available from the last mview refresh.

Conclusions:
1) Use MATERIALIZED VIEW!
2) MOS doesn’t always give you all the steps. Sometimes you can easily solve your problem by thinking a little bit more about the root cause.

Cheers

12c Datapatch Failing on ORA-65108: Applying Datapatch Manually

Hi all,

So, in a new 12c environment datapatch failed on the following:

ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 1
ORA-65108: invalid use of a cursor belonging to another container
kpdbaKillPdbSessions: Starting kill.
2019-01-29 07:09:21.486584 :kjcipctxinit(): (pid|psn)=(25|2): initialised and linked pctx 0x00007FFB1A7B49C8 into process list
ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 1
ORA-65108: invalid use of a cursor belonging to another container

Point is that it was failing for the PDB$SEED. When upgrading, there are phases where you need the seed opened read-write, but in general you don’t do that yourself.
The scripts to run in each container are called through catcon.pl which, by default, opens the seed read-write and ensures that the initial open mode is restored at the end, even in case of error.
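Before doing anything manually, it’s worth checking the open mode of the seed and of the other PDBs; a quick check from the root:

SYS@CDB$ROOT SQL> select con_id, name, open_mode from v$pdbs order by con_id;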

So how to manually proceed with this?

1. Apply the datapatch first to the other PDBs using the -pdbs clause:

./datapatch -verbose -pdbs PDB1,PDB2

2. Open PDB$SEED for Oracle Scripts:

SYS@CDB$ROOT SQL> alter session set "_oracle_script"=true;
Session altered.
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open read write force;
Pluggable database PDB$SEED altered.

3. Apply the datapatch to the PDB$SEED

$ORACLE_HOME/OPatch/datapatch -verbose -pdbs 'PDB$SEED'

4. Close PDB$SEED and open it READ ONLY again

SYS@CDB$ROOT SQL> alter session set container=PDB$SEED;
Session altered.
SYS@CDB$ROOT SQL> shutdown immediate
Pluggable Database closed.
++ From Root
SYS@CDB$ROOT SQL> alter session set container=CDB$ROOT;
SYS@CDB$ROOT SQL> alter pluggable database PDB$SEED open read only;
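Finally, a quick sanity check (from the root; just a sketch using the standard dictionary views) that the seed is back to READ ONLY and that the patch got registered in every container:

SYS@CDB$ROOT SQL> select name, open_mode from v$pdbs where name = 'PDB$SEED';
SYS@CDB$ROOT SQL> select con_id, patch_id, action, status from cdb_registry_sqlpatch order by con_id;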

Hope it helps, cheers!

OMSPatcher finds that previous patching session is not yet completed – What to do?

Hey all,
As usual, a client reached out with this issue:

OMSPatcher finds that previous patching session is not yet completed.
Please refer log file "/u01/app/oracle/middleware/cfgtoollogs/omspatcher/28018178/omspatcher_2018-07-09_23-44-58PM_deploy.log" 
for the previous session and execute the script "/u01/app/oracle/middleware/.omspatcher_storage/oms_session/scripts_2018-07-09_23-44-39PM/run_script_singleoms_resume.sh"  to complete the previous session. OMSPatcher can proceed to execute new operations only if previous session is completed successfully.

Interesting, right?
This means a patch execution in July failed and it wasn’t noticed.

What to do? Point is, the error itself already says what needs to be done.
You just may want to do it properly. How? Here is a quick Action Plan:

Zero) Check the deploy log to understand the root cause of the failure in the previous patch attempt, and fix it.

In my case?
Not all required components were down.

A simple “stop oms” stops only the OMS managed server, JVMD engine, and HTTP server but leaves Node Manager and Administration Server running.
However, a “stop oms -all” stops all Enterprise Manager processes, including the Administration Server, OMS, HTTP Server, Node Manager, Management Server, JVMD engine, and Oracle BI Publisher (if it is configured on the host). That was the fix here.

Step-by-Step:

1. Blackout targets to avoid unwanted pages.
– On OEM: Enterprise–>Monitoring–>Blackouts
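If you prefer the command line, the blackout can also be created with emcli create_blackout. Just a sketch with a placeholder target name and a 2-hour duration; double-check the -schedule syntax against your EM version:

$OMS_HOME/bin/emcli login -username=sysman
$OMS_HOME/bin/emcli create_blackout -name="oms_patching" -add_targets="myhost.mydomain:host" -reason="OMSPatcher maintenance" -schedule="duration:2"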

2. Shutdown OMS and AGENT

cd $AGENT_HOME/bin
./emctl stop agent
cd $OMS_HOME/bin
./emctl stop oms -all

3. Resume the patching with the script provided in the error message
(in my case):

/u01/app/oracle/middleware/.omspatcher_storage/oms_session/scripts_2018-07-09_23-44-39PM/run_script_singleoms_resume.sh

4. Verify patches got installed

$OMS_HOME/OPatch/opatch lsinventory
$OMS_HOME/OMSPatcher/omspatcher lspatches

5. Start the OMS and agent

cd $AGENT_HOME/bin
./emctl start agent
cd $OMS_HOME/bin
./emctl start oms
./emctl status oms -details

6. Sync EMCLI with server changes:

$OMS_HOME/bin/emcli login -username=sysman
Enter password : <-- sysman password
$OMS_HOME/bin/emcli sync


dba_registry_sqlpatch/cdb_registry_sqlpatch Empty after Patch

Hi all!
So, I was checking a new environment and noticed that dba_registry_sqlpatch was empty, when it actually shouldn’t be:

SQL> select patch_id, patch_uid, version, action, action_time, status, description from dba_registry_sqlpatch;

no rows selected

SQL>

The expected output should be (from another CDB in same home):

 PATCH_ID  PATCH_UID VERSION		   ACTION	   ACTION_TIME								       STATUS	       DESCRIPTION
---------- ---------- -------------------- --------------- --------------------------------------------------------------------------- --------------- ----------------------------------------------------------------------------------------------------
  24917972   20791781 12.1.0.2		   APPLY	   17-APR-17 11.19.49.103261 AM 					       SUCCESS	       Database PSU 12.1.0.2.170117, Oracle JavaVM Component (JAN2017)
  24732082   20904347 12.1.0.2		   APPLY	   17-APR-17 11.19.49.322985 AM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.170117
  24917972   20791781 12.1.0.2		   ROLLBACK	 29-NOV-17 08.35.57.888426 PM 					       SUCCESS	       Database PSU 12.1.0.2.170117, Oracle JavaVM Component (JAN2017)
  26635845   21564421 12.1.0.2		   APPLY	   29-NOV-17 08.35.57.890421 PM 					       SUCCESS	       Database PSU 12.1.0.2.171017, Oracle JavaVM Component (OCT2017)
  26713565   21602269 12.1.0.2		   APPLY	   29-NOV-17 08.35.57.956378 PM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.171017
  27338041   22036385 12.1.0.2		   APPLY	   12-JUN-18 01.45.24.163558 PM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.180417

The result is basically the same when querying cdb_registry_sqlpatch.

First I found the MOS note dba_registry_sqlpatch or registry$sqlpatch View Is Not Reflecting the Complete Updated Information after Patching (Doc ID 2039738.1).
Problem is that it applies to 12.1 and is caused by a bug in OPatch version 12.1.0.1.6, while my OPatch version was already 12.2.0.1.8:

$ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.8

If this is a match for you, the proposed solution in that note is:

1. Download and use latest opatch version 12.1.0.1.8. (Patch 6880880)
2. Take the backup & delete the contents of dba_registry_sqlpatch to remove the invalid entries:

    SQL> delete from registry$sqlpatch;

3. Re-run the datapatch

But what was my problem then?
Well, after a while I noticed the MOS note Datapatch may skip the application of SQL payload for certain patches included in a given bundle in a RAC environment (Doc ID 2069046.1).
It includes a PLSQL validation script, by the way. Have a look in case it’s a suspect.

And it was a match for me: it seems the client had problems with opatchauto in the past and had to run ‘datapatch -verbose’ manually.

The solution? Run ‘datapatch -verbose’ again for every CDB contained in the cluster. The registry$sqlpatch table is now reporting the correct patch history for all CDBs.
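A minimal sketch of that re-run, assuming the environment (ORACLE_SID/ORACLE_HOME) is set for each CDB in turn:

# repeat for each CDB of the cluster
export ORACLE_SID=CDB1
$ORACLE_HOME/OPatch/datapatch -verbose

Then re-check dba_registry_sqlpatch (or cdb_registry_sqlpatch from the root) as shown above.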

Hope it helps!

After Patch: MRP0: Background Media Recovery terminated with error 10485

Ok,
I had that some time ago after applying Patch 27475598 – Oracle JavaVM Component 11.2.0.4.180417 Database PSU.
Why? Well, this patch is not RAC Rolling Installable and is also not Data Guard Standby-First Installable.

This means there is downtime for this patch, no escape.

I had to (skipping all the standard opatch steps, you can see those in the README):

  • Stop DG Replication:
dgmgrl /
show configuration
show database mydg
edit database 'mydg' set state='apply-off';
show database mydg
  • Run postinstall.sql in upgrade mode with only 1 instance on (disable RAC):
cd $ORACLE_HOME/sqlpatch/27475598
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> alter system set cluster_database=false scope=spfile;
SQL> SHUTDOWN
SQL> STARTUP UPGRADE
SQL> @postinstall.sql
SQL> alter system set cluster_database=true scope=spfile;
SQL> SHUTDOWN
SQL> STARTUP

Ok, all good, seems all fine.

But now when starting my DG replication:

dgmgrl /
show configuration
show database mydg
edit database 'mydg' set state='apply-on';
show database mydg

What I see is:

DGMGRL> show database mydg

Database - mydg

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       41 minutes 53 seconds (computed 1 second ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    myprod

  Database Error(s):
    ORA-16766: Redo Apply is stopped

Database Status:
ERROR

DGMGRL>

And on Database Alert Log:

MRP0: Background Media Recovery terminated with error 10485
Errors in file /u01/app/oracle/diag/rdbms/axwest/greporaprod/trace/greporaprod_pr00_42628.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Recovery interrupted!
MRP0: Background Media Recovery process shutdown (greporaprod)

Well, in my case it happened because I use Active Data Guard, with the standby open read only. The solution? Start your standby in MOUNT mode to apply the migration redo generated by the patch!
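Something like this on the standby (just a sketch; ‘mydg’ is the same broker alias used above). First restart it in mount mode:

SQL> shutdown immediate
SQL> startup mount

Then re-enable redo apply through the broker:

DGMGRL> edit database 'mydg' set state='apply-on';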

This is well described as per MOS: MRP process getting terminated with error ORA-10485 (Doc ID 1618485.1).

After it gets back in sync, you can simply open it in read only mode again.
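Again just a sketch with the same alias: stop apply for a moment, open read only, and turn apply back on to get real-time query again:

DGMGRL> edit database 'mydg' set state='apply-off';
SQL> alter database open read only;
DGMGRL> edit database 'mydg' set state='apply-on';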

Hope it helps!