ORA-02019 While SELECT From A View Owned By Another User Using Dblink

Quick case today.

This week I had a client experiencing ORA-02019 while running a SELECT from a view over a dblink, with CONNECT BY PRIOR … START WITH in the SELECT.
The situation: views on one database needed to be accessed by a group of users from another database, through a proxy user and a DB link, but every attempt ended in this ORA error.

The root cause?

ORA-02019 while performing a SELECT on a view owned by another user, over a dblink, is a match for Bug 26558437 – DATABASE LINK FAILS WITH ORA-2019 WHEN SELECT ANOTHER USER VIEW.

But MOS doesn’t offer a workaround besides applying the patch, as usual. So what did we do to solve it in our case?

We created a materialized view, refreshed every 15 minutes (the solution supported this delay), using the DB link of the view owner.
This way, instead of executing the query on the view, the other users are actually querying the table created (and refreshed) by the mview code, which is executed only by the mview owner.
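
Just to illustrate (object, DB link and role names below are hypothetical, and the refresh clause matches the 15-minute delay the client accepted), the materialized view created by the view owner would look something like this:

-- Run as the view owner, the only user accepted on the remote side over the DB link.
-- remote_view, remote_db and app_users are placeholder names.
CREATE MATERIALIZED VIEW mv_remote_view
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 15/1440   -- refresh every 15 minutes
AS
SELECT * FROM remote_view@remote_db;

-- Then point the other users to the mview instead of the remote view:
GRANT SELECT ON mv_remote_view TO app_users;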

By the way, two good side effects:
1) As the view was queried more often than the refresh period, this solution also saves some database effort, since selecting from the mview is much cheaper than running the view code, besides not hitting the network through the dblink.
2) If the remote database gets slow, or goes down, the data is still available from the last mview refresh.

Conclusions:
1) Use MATERIALIZED VIEW!
2) MOS doesn’t always give you all the steps. Sometimes you can easily solve your problem by thinking a little more about the root cause.

Cheers

OEM Real-Time Data not Displayed

Hi all,
So I was facing the following situation at a client:
– SYS as SYSDBA password working to connect to the database.
– Connecting to targets with these credentials working fine.
– However, when loading real-time graphs, it was simply not working.
– Tried to create the named credentials manually; still not working, now with the error:

Listener refused the connection with the following error: ORA-12514, TNS:listener does not currently know of the service requested in connect descriptor.

See what I was getting on the database home page:

[Screenshot: error shown on the database home page]

Weird, but hey, the solution was already posted in Grepora!

OK, anyway, I discovered from MOS BUG 28513706 – 13c2EM: DB home page shows Connection Cache with this Cache Name does not exist that it might be related to connection pools.
Even though this was closed as not a bug, I learned a new workaround for it:

1. [OMS_HOME]/bin/emctl set property -name use_pooled_target_connections -value false 

2. Restart oms 

[OMS_HOME]/bin/emctl stop oms -all 

[OMS_HOME]/bin/emctl start oms
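
By the way, if you want to double-check the property before and after the change (just a sanity check, not part of the MOS workaround), emctl can read it back:

[OMS_HOME]/bin/emctl get property -name use_pooled_target_connections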

Hope it helps!
Cheers!

12.2 Scheduler Job Disappeared After Creation

Hi all,
Just sharing an experience here. Recently in a migrated 12.2 environment I created a Scheduler Job and it simply disappeared. How come?
Well, it seems that since 12.2 the AUTO_DROP attribute is set to TRUE by default. I absolutely don’t know why. So I had to recreate the job and set AUTO_DROP to FALSE, simple as that.

I couldn’t find any related bug on MOS, but I’d treat this as a bug, since the default behavior changed with no prior notice.
So, from now on, I strongly suggest you check the AUTO_DROP attribute of your jobs right after creation.

Here is a test case that reproduces the problem:

In a 12.2 database:

begin 
  DBMS_SCHEDULER.create_job (
    job_name        => 'TEST_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN null; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'SYSTIMESTAMP + INTERVAL ''10'' SECOND',
    end_date        => NULL,
    enabled         => false,
    comments        => 'Test job');
end;
/
select ENABLED, AUTO_DROP, STATE, RUN_COUNT, FAILURE_COUNT,DEFERRED_DROP from dba_scheduler_jobs where job_name = 'TEST_JOB';
-- Job should appear here

begin 
  dbms_scheduler.enable('TEST_JOB'); 
  dbms_scheduler.run_job('TEST_JOB',false); 
end;
/
 
select ENABLED, AUTO_DROP, STATE, RUN_COUNT, FAILURE_COUNT,DEFERRED_DROP from dba_scheduler_jobs where job_name = 'TEST_JOB';

-- There is no job now in 12.2.

You can check AUTO_DROP easily with the query above, or by getting the job DDL, where you’ll see AUTO_DROP set to TRUE, as shown at the bottom of this page by Tim Hall.
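
If you don’t want to recreate the job, a minimal sketch of fixing it in place (using the same TEST_JOB from the test case, before it runs and gets dropped) is to flip the attribute with DBMS_SCHEDULER.SET_ATTRIBUTE:

begin
  -- keep the job in the dictionary after its last run instead of letting the scheduler drop it
  dbms_scheduler.set_attribute(
    name      => 'TEST_JOB',
    attribute => 'auto_drop',
    value     => FALSE);
end;
/

select job_name, auto_drop from dba_scheduler_jobs where job_name = 'TEST_JOB';
-- AUTO_DROP should now show FALSE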

Hope it helps!
Cheers!

ORA-00600 [ipc_recreate_que_2]

Hi all,
So this week I started receiving this error in a client environment. It was happening due to Bug 26803191 – Getting ORA-00600 [ipc_recreate_que_2] instead of ORA-27515 – superseded (Doc ID 26803191.8).
In summary, failures due to a low memlock limit were raising ORA-00600 [ipc_recreate_que_2] instead of ORA-27515; the patch ensures that ORA-27515 is raised on these failures instead of the ORA-600.

As a workaround, the memlock limit can be changed, as also described in MOS ORA-00600: Internal Error Code, Arguments: [ipc_recreate_que_2] When Running Exachk (Doc ID 2480088.1).

Before:

[boesing@exa1dbadm01 trace]$ grep memlock /etc/security/limits.conf |grep oracle
# - memlock - max locked-in-memory address space (KB)
oracle soft memlock 237809520
oracle hard memlock 237809520

After:

[boesing@exa1dbadm01 ~]$ grep memlock /etc/security/limits.conf
# - memlock - max locked-in-memory address space (KB)
oracle soft memlock unlimited
oracle hard memlock unlimited
* soft memlock 32768
* hard memlock 32768
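
Just as a quick sanity check (not from the MOS note, and the prompt/output below is illustrative), remember the new limit only applies to fresh sessions, so log in again as oracle and confirm it with ulimit:

# from a brand new oracle session, after changing limits.conf
[oracle@exa1dbadm01 ~]$ ulimit -l
unlimited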

Hope it helps,
Cheers!

OEM13c: Regions that display real-time data will not be displayed. Exception while loading RAC Database Home Page: null

Hi all,
So I was having this issue from a Database Home page on OEM 13c:

[Screenshot: error on the OEM 13c Database Home page]

Actually, the OMS log was showing several null pointer exceptions… So, what’s the deal?

Everything seemed to match to MOS Bug 22957131 – OEM13C: Exception while loading RAC Database Home Page: null.

The solution?
– Patch 25197714 for the EM 13.2 OMS
– Patch 25155095 for the EM 13.1 OMS

Also, these fixes are included in the following Bundle Patches:
– 13.2.1.0.161231
– 13.1.1.0.161220

I applied the patch and it solved my case. Hope it helps you!

More Reference:
– EM 13C: Target Database Home Page Displays Message in Enterprise Manager 13c Cloud Control: Regions that display real-time data will not be displayed. Exception while loading RAC Database Home Page: null (Doc ID 2210123.1)
– Note 2219797.1 Enterprise Manager 13.2 Master Bundle Patch List
– Note 2124038.1 Enterprise Manager 13.1 Master Bundle Patch List for the Management Agent and Plug-ins

ORA 600 [kdt_bseg_srch_cbk PITL1] / ORA-00700 [PITL6] / ORA-00700 [PITL5]

Hi all,
So, right after patching, a client environment started receiving entries in the alert log like:

ORA-00700: soft internal error, arguments: [PITL6], [], [], [], [], [], [], [], [], [], [], []
ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [11], [], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [kdt_bseg_srch_cbk PITL1], [2], [], [], [], [], [], [], [], [], [], []

First, a quick word regarding ORA-700: an ORA-700 is a so-called ‘soft’ assert. Soft asserts are triggered when the caller wants to make a note of the fact that something unexpected has happened, but would like to continue because the failure is not fatal to the process or the instance. This was introduced in 12c and absorbed some of the ORA-600 messages (the informative ones), leaving ORA-600 for the more critical issues.

Now to the errors: this is a clear match for Bug 28483184 (Bug 28483184 – ORA-600[PITL1] ON UPDATE TO COMPRESSED BLOCK WITH FIX FOR BUG 28364411 INSTALLED), which consists of a known defect in the Oracle code that allows continued insertion of non-header block rows past the number of block ITLs in data blocks of OLTP-compressed tables.

More info: MOS Errors Noted in 12.2 and Above During DML on Compressed Tables: ORA-00600 [PITL1] / ORA-00600 [kdt_bseg_srch_cbk PITL1] / ORA-00700 [PITL6] / ORA-00700 [kdt_bseg_srch_cbk PITL5] (Doc ID 2420831.1)

To fix it: patch the RDBMS for bug 28483184 and reorganize the involved tables. It seems to me that the patch only fixes the incorrect creation of non-header blocks, but doesn’t fix the blocks that already have the problem in place.

My two cents: I’d recommend increasing the PCTFREE a little during the table reorganization, also based on Oracle’s recommendation. Recreating the tables with a larger PCTFREE leaves more room for ITLs. This is advantageous since the fix replaces a block that has the maximum number of ITLs with one that has fewer ITLs.

If arranging a patching window is a big problem, note this patch is online installable. However, I’d recommend applying it offline using RAC Rolling, as this seems a more consistent process in general.

So in summary, the recommended actions are:
– Apply Patch 28483184 (RAC Rolling, Standby-First and online installable).
– Reorganize the involved tables, using the methods mentioned by Suresh (e.g. Create Table As Select (CTAS), ALTER TABLE MOVE, Data Pump export/import, online redefinition), as sketched right below.
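
Just to illustrate the reorganization bullet (owner, table, index and tablespace names are hypothetical, and 20 is only an example value for the larger PCTFREE), an ALTER TABLE MOVE would look like:

-- Rebuild the OLTP-compressed table leaving more free space per block, so there is room for extra ITLs
ALTER TABLE app_owner.compressed_tab MOVE PCTFREE 20 TABLESPACE users_data ROW STORE COMPRESS ADVANCED;

-- Indexes go UNUSABLE after the move and need to be rebuilt
ALTER INDEX app_owner.compressed_tab_pk REBUILD;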

Hope that helps you!

ORA-01450 on Index Rebuild

Hello all,
So, I got this:

SQL> ALTER INDEX MATHEUS.INDEX_1 REBUILD online TABLESPACE NEW_TBS;
ALTER INDEX MATHEUS.INDEX_1 REBUILD online TABLESPACE NEW_TBS
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded

So, is this a bug? Yes it is.

From MOS ALTER INDEX REBUILD ONLINE Fails with ORA-01450 (Doc ID 236329.1):
“This is caused by issue Bug:2525767. The online rebuild of the index creates a journal table and index. This internal journal IOT table contains more columns in its index. Their total length is greater than number reported in ORA-01450 error message. This is a feature of online rebuild.
Maximum key length is calculated with respect to the database block size. It means that current value of the initialization parameter db_block_size is not large enough so that the internal journal IOT can be created without errors.”

Want to know more? There is a great article on how this works exactly, by Alex Gorbachev: ORA-01450 During Online Index Rebuild.

The solution?
It simply cannot be done online as explained, so doing it offline is the solution:

SQL> ALTER INDEX MATHEUS.INDEX_1 REBUILD TABLESPACE NEW_TBS;

Index altered.

Hope it helps, cheers!

dba_registry_sqlpatch/cdb_registry_sqlpatch Empty after Patch

Hi all!
So, I was checking a new environment and noticed that dba_registry_sqlpatch was empty, when it actually shouldn’t be:

SQL> select patch_id, patch_uid, version, action, action_time, status, description from dba_registry_sqlpatch;

no rows selected

SQL>

The expected output should be (from another CDB in same home):

 PATCH_ID  PATCH_UID VERSION		   ACTION	   ACTION_TIME								       STATUS	       DESCRIPTION
---------- ---------- -------------------- --------------- --------------------------------------------------------------------------- --------------- ----------------------------------------------------------------------------------------------------
  24917972   20791781 12.1.0.2		   APPLY	   17-APR-17 11.19.49.103261 AM 					       SUCCESS	       Database PSU 12.1.0.2.170117, Oracle JavaVM Component (JAN2017)
  24732082   20904347 12.1.0.2		   APPLY	   17-APR-17 11.19.49.322985 AM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.170117
  24917972   20791781 12.1.0.2		   ROLLBACK	 29-NOV-17 08.35.57.888426 PM 					       SUCCESS	       Database PSU 12.1.0.2.170117, Oracle JavaVM Component (JAN2017)
  26635845   21564421 12.1.0.2		   APPLY	   29-NOV-17 08.35.57.890421 PM 					       SUCCESS	       Database PSU 12.1.0.2.171017, Oracle JavaVM Component (OCT2017)
  26713565   21602269 12.1.0.2		   APPLY	   29-NOV-17 08.35.57.956378 PM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.171017
  27338041   22036385 12.1.0.2		   APPLY	   12-JUN-18 01.45.24.163558 PM 					       SUCCESS	       DATABASE PATCH SET UPDATE 12.1.0.2.180417

The result is basically the same when querying cdb_registry_sqlpatch.

First I found the MOS note dba_registry_sqlpatch or registry$sqlpatch View Is Not Reflecting the Complete Updated Information after Patching (Doc ID 2039738.1).
The problem is that it applies to 12.1 and is caused by a bug in OPatch version 12.1.0.1.6, while my OPatch version was 12.2.0.1.8:

$ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.8

If this is a match for you, the proposed solution in that note is:

1. Download and use the latest OPatch version 12.1.0.1.8 (Patch 6880880).
2. Take a backup and delete the contents of dba_registry_sqlpatch to remove the invalid entries:

    SQL> delete from registry$sqlpatch;   -- base table behind the dba_registry_sqlpatch view

3. Re-run the datapatch

But what was my problem then?
Well, after a while I noticed the MOS Note Datapatch may skip the application of SQL payload for certain patches included in a given bundle in a RAC environment (Doc ID 2069046.1).
It includes a PL/SQL validation script, by the way. Have a look in case it’s a suspect.

And it was a match for me: it seems the client had problems with opatchauto in the past and had to run ‘datapatch -verbose’ manually.

The solution? Run ‘datapatch -verbose’ again for every CDB in the cluster. After that, the registry$sqlpatch table reported the correct patch history for all CDBs.
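
For reference, a minimal sketch of that loop (the SIDs are just examples, run it from the patched home on one node of the cluster):

# repeat for every CDB registered in the cluster
export ORACLE_SID=CDB1
$ORACLE_HOME/OPatch/datapatch -verbose

export ORACLE_SID=CDB2
$ORACLE_HOME/OPatch/datapatch -verbose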

Hope it helps!

Orphan AWR Data Consuming SYSAUX Space

Hi all!
Some time ago I faced the following situation at a client: SYSAUX was simply growing for no apparent reason, even though the retention and all related settings had not changed.

After some research, I could map the issue to Bug 14084247: STBH: ORA-01555 DUE TO WRH$_ACTIVE_SESSION_HISTORY NOT PURGED.

There is a one-off patch, but it also needs extra steps to take effect, as stated on MOS: Excessive AWR Growth From Partitioned Objects Such as SYS.WRH$_EVENT_HISTOGRAM Causing Sysaux to Grow (Doc ID 1912201.1):

However, even after applying bug patch 14084247, this fix will not have an immediate effect and may take some time to "catch up".

The good news is that this can also be applied in online patching mode. I wrote another post documenting this patch application: Online Data Patch Apply with multiple Databases on same Oracle Home: OPatch failed with error code 26.

Some other relevant MOS Documents under this subject are:
WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged Based Upon the Retention Policy (Doc ID 387914.1)
Suggestions if Your SYSAUX Tablespace Grows Rapidly or Too Large (Doc ID 1292724.1)

OK, but what can I do right now?
You can purge the top AWR tables, as described on MOS: WRH$_LATCH, WRH$_SYSSTAT, and WRH$_PARAMETER Consume the Majority of Space within SYSAUX (Doc ID 2099998.1).

A sketch of how it can be done is below:

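The detailed steps are in the MOS note above; as a minimal sketch of the general idea (assuming the space is held by old snapshot ranges; the snapshot IDs below are placeholders, check DBA_HIST_SNAPSHOT for your real range), you can purge a snapshot range manually:

-- find the snapshot range currently kept
select min(snap_id), max(snap_id) from dba_hist_snapshot;

-- purge an old snapshot range explicitly (IDs are placeholders)
begin
  dbms_workload_repository.drop_snapshot_range(
    low_snap_id  => 1000,
    high_snap_id => 2000);
end;
/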