ORA-02019 While SELECT From A View Owned By Another User Using Dblink

Quick case today.

This week I had a client experiencing ORA-02019 while running a SELECT from a view over a dblink, with CONNECT BY PRIOR … START WITH in the SELECT.
The situation involved views on one database which needed to be accessed by a group of users from another database, using a proxy user and a DB link, but they kept hitting this ORA error.

The root cause?

ORA-02019 while selecting from a view owned by another user, over a dblink, is a match to Bug 26558437 – DATABASE LINK FAILS WITH ORA-2019 WHEN SELECT ANOTHER USER VIEW.

But MOS doesn’t offer a workaround besides applying the patch, as usual. So what did we do to solve it in our case?

We created a materialized view, refreshed every 15 minutes (the solution supported this delay), using the DBLink as the view owner.
This way, instead of executing the query on the view, the other users are actually querying the table created (and refreshed) by the mview code, which is only executed by the mview owner.
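
Just to illustrate the idea, a minimal sketch of what the mview owner could create (all names here – the mview, the remote view, the role and the DB link GREPORA_REMOTE – are hypothetical):

CREATE MATERIALIZED VIEW mv_remote_view
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 15/1440   -- refresh every 15 minutes
AS
SELECT * FROM remote_view@GREPORA_REMOTE;

-- Other users then read the local mview instead of the remote view:
GRANT SELECT ON mv_remote_view TO grepora_readers;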

By the way, 2 good side effects:
1) Since the view was queried more often than the refresh period, this solution also saves some effort database-wise, as selecting from the mview is much cheaper than running the view code, besides not hitting the network over the dblink on every execution.
2) If the remote database gets slow, or goes down, the data is still available from the last mview refresh.

Conclusions:
1) Use MATERIALIZED VIEW!
2) MOS doesn’t always give you all the steps. Sometimes you can easily solve your problem by thinking a little bit more about the root cause.

Cheers

ORA-00600: internal error code, arguments: [kghfrempty:ds]

Hi all,
Annoying thing, just because it sounded veeery unusual to me. I would expect a different error.

I have seen this “[kghfrempty:ds]” a couple of times in the past, and if you have seen it too you may have noticed that it is almost as generic as the ORA-600 itself. It happens because this function usually sits at the top of KGH errors in general. Some examples:

1.

kghnerror <- kghfrempty <- kghfrempty_ex <- qerhjFreeSpace
<- qerhjFetch <- qersoProcessULS <- qersoFetch <- opifch2 <- kpoal8
<- opiodr <- kpoodrc <- rpiswu2 <- kpoodr <- upirtrc
<- kpurcsc <- kpufch0 <- kpufch <- OCIStmtFetch2 <- qksanExecSql <- qksanAnalyzeSql

2.

kghfrempty <- kghfrempty_partial <- kghgex <- kghfnd <- kghalo <- kghgex <- kghalf <- ktmrProcessCRClone 
<- ktmbRead <- ktmrget_int <- ktmrget <- kdmsTransGet <- kdst_fetch_imc  <- kdsttgr  <- qertbFetch

And it can be related to analyze bugs (as per stack 1), In-Memory Column Store – IMC (as per stack 2), view creation, rollback over clusterwide transactions, trigger creation, function-based indexes, or several other generic situations (there is no clear pattern, as you can see).

So what to do if you get this message? Well, generically, read all the MOS notes available and try to match your case. If you match one, pray there is already a fix for your current version, or open an SR to either get a backport or start a new investigation.

A few examples of things to check:

  • ALERT: Bug 13605839 ORA-600 [ktbsdp1] ORA-600 [kghfrempty:ds] ORA-600 [kdBlkCheckError]. Corruption in Rollback with Clusterwide Global Transactions in RAC (Doc ID 1527740.1)
  • SQL Analyze Fails with ORA-600[kghfrempty:ds] (Doc ID 2247180.1)
  • Bug 21556276 – dataguide createviewonpath hit ora 600 [kghfrempty:ds] (Doc ID 21556276.8)
  • Bug 20878625 – ORA-600[kghfrempty:ds] with IMC enabled (Doc ID 20878625.8)
  • Bug 19212166 – ORA-600 [KGHFREMPTY:DS] With Cache Line Alignment Enabled (Doc ID 19212166.8)
  • Bug 6797925 – ORA-600 [kghfrempty:ds] During Creation Of Trigger In XML Schema (Doc ID 6797925.8)
  • 11.1: ORA-600 [17147] AND [kghfrempty:ds] On Select Using Functional Indexes (Doc ID 884882.1)
  • ORA-600 [kghfrempty:ds] And ORA-600 [kghrcdepth:ds] While Gathering Statistics (Doc ID 1480132.1)

BUUT I just noticed an undocumented scenario causing this, and it might help you fix it as well.
I noticed I was getting this error for different queries, but always from the same application. So I did a complete review on the application layer and noticed this application was still on JDBC 7. It happens we had just upgraded the database from 12.1 to 19c.

SO, after upgrading the JDBC driver, it started to work properly again and no MOS SR was required. So lesson learned: when you receive this [kghfrempty:ds], especially after a migration, also check your application’s JDBC version.
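
If you want to check which driver versions are actually connecting before blaming the application, something like this can help (just a sketch; the CLIENT_DRIVER and CLIENT_VERSION columns exist in recent versions, check the documentation for yours):

select distinct s.sid, s.program, ci.client_driver, ci.client_version
  from v$session s
  join v$session_connect_info ci
    on ci.sid = s.sid and ci.serial# = s.serial#
 where s.type = 'USER';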

For the compatibility matrix: Starting With Oracle JDBC Drivers – Installation, Certification, and More! (Doc ID 401934.1)

Hope it helps, cheers!

ORA-65532: cannot alter or drop automatically created indexes

Hello all,
Quickly walking through this, as it was new to me (as for everybody, I guess): so what if I’m not happy with the Oracle algorithm and I want to drop an AUTO Index on 19c?

Should be easy, like “drop index XXXX;” right? Wrong. See the struggle:

SQL> drop index GREPORA."SYS_AI_9xu652x5fyu5i";
drop index GREPORA."SYS_AI_9xu652x5fyu5i"
                 *
ERROR at line 1:
ORA-65532: cannot alter or drop automatically created indexes

OK, so let’s have a look at my auto-created indexes. You can see them with the AUTO flag as YES in the query below:

SQL> select owner, index_name, auto, tablespace_name from dba_indexes where auto='YES';
OWNER              INDEX_NAME    AUTO    TABLESPACE_NAME
________ _______________________ _______ __________________
GREPORA  SYS_AI_9xu652x5fyu5i    YES     GREPORA
GREPORA  SYS_AI_few32swe423dw    YES     GREPORA
GREPORA  SYS_AI_94osd824n202f    YES     GREPORA

Ok, so I cannot drop it. Let me alter it and set it as unusable then, as that would achieve my intent anyway:

SQL> alter index GREPORA."SYS_AI_9xu652x5fyu5i" unusable;
alter index GREPORA."SYS_AI_9xu652x5fyu5i" unusable
 *
ERROR at line 1:
ORA-65532: cannot alter or drop automatically created indexes

Hmmm, so it means I cannot alter it as well?
Well, kind of, there is something I can do:

SQL> alter index GREPORA."SYS_AI_9xu652x5fyu5i" rebuild tablespace GREPORA2 online;
Index altered.

Noice, thanks for nothing. What has changed?
Well, it means you can use this as a workaround: rebuild the index into a new tablespace (as above) and then drop that tablespace:

SQL> drop tablespace GREPORA2 including contents;
Tablespace dropped.

And index will be gone:

SQL> select owner, index_name, auto, tablespace_name from dba_indexes where auto='YES';
OWNER              INDEX_NAME    AUTO    TABLESPACE_NAME
________ _______________________ _______ __________________
GREPORA  SYS_AI_few32swe423dw    YES     GREPORA
GREPORA  SYS_AI_94osd824n202f    YES     GREPORA

OK, so that’s bad, right… And the worst part? If you simply drop it, Oracle will probably recreate it, based on the algorithm, right?
Gee, thanks for nothing (again)!

This, though, is easier to change: simply set Auto Indexing to report mode:

exec dbms_auto_index.configure('AUTO_INDEX_MODE','REPORT');

This is interesting right?
Ok, we have more options, we can even exclude specific schemas from AUTO_INDEX, choose a default tablespace, define retentions and a lot of things.
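
For reference, a few of those options (just a sketch using documented DBMS_AUTO_INDEX.CONFIGURE parameters; the schema, tablespace and retention values below are only examples):

-- exclude a specific schema from auto indexing
exec dbms_auto_index.configure('AUTO_INDEX_SCHEMA', 'GREPORA', FALSE);

-- set a default tablespace for auto-created indexes
exec dbms_auto_index.configure('AUTO_INDEX_DEFAULT_TABLESPACE', 'GREPORA_AI');

-- keep unused auto indexes for 90 days before they are dropped
exec dbms_auto_index.configure('AUTO_INDEX_RETENTION_FOR_AUTO', '90');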

As usual, the best reference I could find: https://oracle-base.com/articles/19c/automatic-indexing-19c

Just one thing you might want to consider that is not in Tim Hall’s post: if the index is bad for one specific query, you may want to avoid using it only during that query. We can do it with the session/system parameter:

“_optimizer_use_auto_indexes”=OFF
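
For example, at session level (note this is an underscore/hidden parameter, so test it carefully and ideally validate its use with Oracle Support):

alter session set "_optimizer_use_auto_indexes"=OFF;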

Well, hope it helps you. Cheers!

MySQL 8 requested authentication method unknown

Hey folks,
Be aware when you’re migrating from MySQL 5.7 to 8.0 of how you’re doing it and what kind of applications have access to it. Version 8.0 introduced a new authentication plugin called caching_sha2_password, and it is now the default auth plugin. Here is the list of compatible connectors to check whether your app is ok with that or whether you should call the dev team.

Example: if, for some reason, you get stuck with the following error (PHP sample), you have to make a few changes.

Connect Error: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client

Edit your config file, set the option default-authentication-plugin to mysql_native_password, restart the server, and set the password again with the following command.

Config File:

[mysqld]
/... other configs .../
default-authentication-plugin=mysql_native_password

SQL:

ALTER USER 'adv'@'localhost' IDENTIFIED WITH mysql_native_password BY '1AB@8CD#E91F22!';

New users should be created the same way. Be aware that you only have to do that for the users the application will use. For regular users, DBAs and ad-hoc queries, you don’t have to make that change.
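
To check which authentication plugin each user is currently using, a quick query over the standard mysql.user columns does the trick:

SELECT user, host, plugin FROM mysql.user ORDER BY user, host;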

So, look out before just running an upgrade on your server, or you could break your app 😉

Everything Stopped on ORA-02002!

Hi all,
So, I got contacted by a client in an emergency because no more sessions could be opened on the database, failing with error ORA-02002. Even worse, everything stopped working and started raising this error.

Unusual, right?

So, checking on the trace generated for this error, here is what I found:

ORA-02002: error while writing to audit trail
ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.AUD$ by 8 in tablespace AUDIT_TBS

Seems the client had moved AUD$ to another tablespace to avoid filling SYSTEM, just like described in this post. However, due to bad monitoring, the tablespace got full.

As the auditing facility was unable to write to the AUDIT_TRAIL table, SQL statements that were currently being audited also failed.

What to do?

  1. Turn off auditing for the database
  2. Restart database
  3. Add space to tablespace
  4. Re-enable auditing, if required.

To avoid application back and forth, I did it all in restricted mode until everything was fixed.
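
Just as an illustration, a rough sketch of the steps, assuming the audit trail lives in AUDIT_TBS, a standard AUDIT_TRAIL=DB setup and an spfile (the datafile path and sizes below are hypothetical):

-- 1. Turn off auditing (requires a restart to take effect)
ALTER SYSTEM SET audit_trail=NONE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP RESTRICT

-- 2. Add space to the audit tablespace
ALTER TABLESPACE AUDIT_TBS ADD DATAFILE '/u01/oradata/AUDIT_TBS02.dbf' SIZE 4G AUTOEXTEND ON MAXSIZE 16G;

-- 3. Re-enable auditing and reopen to the application
ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP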

To avoid this sort of issue:

  • Be sure you are properly monitoring the new tablespace.
  • Place the audit tablespace on a reliable disk location, perhaps mirrored for protection.

Hope it helps,
Cheers!

12.2 Scheduler Job Disappeared After Creation

Hi all,
Just sharing an experience here. Recently, in a migrated 12.2 environment, I created a Scheduler Job and it simply disappeared. How come?
Well, it seems that since 12.2 the AUTO_DROP attribute is set to TRUE by default. I absolutely don’t know why. So I had to recreate the job and set auto_drop to false, simple as that.

I couldn’t find any related bug on MOS, but I’d consider this a bug, since the default behavior changed with no prior warning.
So, from now on, I strongly suggest you check your job’s AUTO_DROP attribute after creating it.

Here is a test case that reproduces the problem:

In a 12.2 database:

begin 
  DBMS_SCHEDULER.create_job (
    job_name        => 'TEST_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN null; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'SYSTIMESTAMP + INTERVAL ''10'' SECOND',
    end_date        => NULL,
    enabled         => false,
    comments        => 'Test job');
end;
/
select ENABLED, AUTO_DROP, STATE, RUN_COUNT, FAILURE_COUNT,DEFERRED_DROP from dba_scheduler_jobs where job_name = 'TEST_JOB';
-- Job should appear here

begin 
  dbms_scheduler.enable('TEST_JOB'); 
  dbms_scheduler.run_job('TEST_JOB',false); 
end;
/
 
select ENABLED, AUTO_DROP, STATE, RUN_COUNT, FAILURE_COUNT,DEFERRED_DROP from dba_scheduler_jobs where job_name = 'TEST_JOB';

-- There is no job now in 12.2.

You can take the AUTO_DROP value easily from the query above, or by getting the job DDL, and you’ll see AUTO_DROP set to TRUE, as in the bottom of this page by Tim Hall.
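
If you want the job to survive, a minimal sketch of setting the attribute explicitly right after creation (using the test job name from above):

begin
  -- keep the job after it completes instead of letting it be auto dropped
  dbms_scheduler.set_attribute('TEST_JOB', 'auto_drop', FALSE);
end;
/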

Hope it helps!
Cheers!

ORA-00600 [ipc_recreate_que_2]

Hi all,
So this week I started receiving this error in a client environment. This was happening due to Bug 26803191 – Getting ORA-00600 [ipc_recreate_que_2] instead of ORA-27515 – superseded (Doc ID 26803191.8).
In summary, failures due to a low memlock limit lead to ORA-00600 [ipc_recreate_que_2] instead of ORA-27515; patching ensures that ORA-27515 is raised on failures due to a low memlock limit instead of this ORA-600.

As a workaround, the memlock limit can be increased, as also described in MOS note ORA-00600: Internal Error Code, Arguments: [ipc_recreate_que_2] When Running Exachk (Doc ID 2480088.1).

Before:

[boesing@exa1dbadm01 trace]$ grep memlock /etc/security/limits.conf |grep oracle
# - memlock - max locked-in-memory address space (KB)
oracle soft memlock 237809520
oracle hard memlock 237809520

After:

[boesing@exa1dbadm01 ~]$ grep memlock /etc/security/limits.conf
# - memlock - max locked-in-memory address space (KB)
oracle soft memlock unlimited
oracle hard memlock unlimited
* soft memlock 32768
* hard memlock 32768
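
Keep in mind that limits.conf only applies to new sessions. To double-check the effective limit, a couple of simple checks (the PID below is just an example):

# limit for the current shell
ulimit -l
# limit of an already running database process
grep "Max locked memory" /proc/12345/limits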

Hope it helps,
Cheers!

ORA-00600: internal error code, arguments: [13011], [7907] on SYS.DBMS_AQ_INV

Hi all,

So a couple weeks ago I was facing the following on a client environment. It was appearing every 10 mins on alert.log:

ORA-00600: internal error code, arguments: [13011], [7907], [12679954], [1], [13461738], [0], [], [], [], [], [], []
ORA-01403: no data found
ORA-06512: at "SYS.DBMS_AQ_INV", line 1248

I could find several references to ORA-600 [13011], always related to some internal corruption. As this package is related to Advanced Queueing, which the Scheduler relies on, I decided to check the Scheduler job table structures:

SQL> analyze table SYS.SCHEDULER$_EVENT_LOG validate structure cascade; 

Table analyzed. 

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade; 
analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade 
* 
ERROR at line 1: 
ORA-01499: table/index cross reference failure - see trace file

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure;

Table analyzed.

Ahá!
Also on the generated trace file:

2019-03-01 22:26:37.736 
SESSION ID:(39.32751) 2019-03-01 22:26:37.736 
CLIENT ID:() 2019-03-01 22:26:37.736 
SERVICE NAME:(SYS$USERS) 2019-03-01 22:26:37.736 
MODULE NAME:(sqlplus.exe) 2019-03-01 22:26:37.736 
CLIENT DRIVER:(SQL*PLUS) 2019-03-01 22:26:37.736 
ACTION NAME:() 2019-03-01 22:26:37.736 
CONTAINER ID:(1) 2019-03-01 22:26:37.736 
Table/Index row count mismatch 
table 273184 : index 275017, 1832 
Index root = tsn: 1 rdba: 0x00c0128a

So the ANALYZE on table SCHEDULER$_JOB_RUN_DETAILS fails with CASCADE but succeeds without CASCADE. This means there is a problem with one of the indexes of this table: it has gone out of sync with the table.

SQL> select 'alter index '||owner||'.'||index_name||' rebuild;' from dba_indexes where table_name='SCHEDULER$_JOB_RUN_DETAILS';

'ALTERINDEX'||OWNER||'.'||INDEX_NAME||'REBUILD;'
--------------------------------------------------------------------------------
alter index SYS.I_SCHEDULER_JOB_RUN_DETAILS rebuild;

SQL> alter index SYS.I_SCHEDULER_JOB_RUN_DETAILS rebuild online;

Index altered.
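
To confirm the fix, you can simply re-run the cascade validation that was failing before; it should now come back clean:

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade;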

After this, the error was solved and there was no more recurrence of that ORA-600.

Also note I couldn’t find any document about this on MOS, so this is kind of exclusive by now. 🙂

Cheers!

Starting ASM: ORA-29701: unable to connect to Cluster Synchronization Service

Hey all,
So, I bet you have seen this error already, as it is quite common when messing with Cluster configuration, which DBAs love to do… no?

Well, here is what you may be facing:

SQL> startup
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service
SQL>

The error is kind of clear: Cluster Synchronization Service (CSS) is not available. So, let’s start it from ASM Cluster (or HAS).

$GRID_HOME/bin/crsctl start resource -all

Or, for Standalone:

$GRID_HOME/bin/crsctl start has

To check on status:

$GRID_HOME/bin/crsctl status resource -t

Complete example (pay attention to CSSD):

[root@greporasrv1 ~]# crsctl start has
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@greporasrv1 ~]# crsctl start resource -all
CRS-5702: Resource 'ora.evmd' is already running on 'greporasrv1'
CRS-2501: Resource 'ora.ons' is disabled
CRS-2672: Attempting to start 'ora.cssd' on 'greporasrv1'
CRS-2672: Attempting to start 'ora.diskmon' on 'greporasrv1'
CRS-2676: Start of 'ora.diskmon' on 'greporasrv1' succeeded
CRS-2676: Start of 'ora.cssd' on 'greporasrv1' succeeded
CRS-4000: Command Start failed, or completed with errors.
[root@greporasrv1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ons
OFFLINE OFFLINE greporasrv1 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE greporasrv1 STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.evmd
1 ONLINE ONLINE greporasrv1 STABLE
--------------------------------------------------------------------------------
[root@greporasrv1 ~]#

Hope that worked! 😀

Oh, it didn’t? Did you change the hostname or something? In this case, you may want to deconfig HAS and reconfigure it using root.sh (part of the regular installation):

cd $ORACLE_HOME
./crs/install/roothas.pl -deconfig -force
./crs/install/roothas.pl -delete -force
./root.sh

Hey! Be careful with that, it might be unrecoverable. 😉

12cR1 – Stop CRS: OC4J Fails to Stop

Hi all!
So, I started to see this error when stopping a 12.1.0.2 CRS:

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'grepora-srv-01'
CRS-2673: Attempting to stop 'ora.crsd' on 'grepora-srv-01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'grepora-srv-01'
CRS-2679: Attempting to clean 'ora.oc4j' on 'grepora-srv-01'
CRS-2680: Clean of 'ora.oc4j' on 'grepora-srv-01' failed
CRS-2799: Failed to shut down resource 'ora.oc4j' on 'grepora-srv-01'
CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'grepora-srv-01' has failed
CRS-2675: Stop of 'ora.crsd' on 'grepora-srv-01' failed
CRS-2799: Failed to shut down resource 'ora.crsd' on 'grepora-srv-01'
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'grepora-srv-01' has failed
CRS-4687: Shutdown command has completed with errors.

When running crsctl stat res -t, it will show something like this:

ora.oc4j 1 ONLINE UNKNOWN grepora-srv-01 CLEANING

Solution? Follow the note The rootupgrade.sh fails because the oc4j resource fails to stop (Doc ID 1921629.1):

1) Stop the OC4J resource, as grid user:

 srvctl stop oc4j -f

Note: Even if oc4j does not stop, continue with step 2.
2) Disable the OC4J resource, as grid user:

 srvctl disable oc4j

3) Confirm the OC4J resource is NOT running, as grid user:

 srvctl status oc4j

If the OC4J resource is still running, go to the node where oc4j is running and recycle GI by issuing the following as root:

 crsctl stop crs -f
 crsctl start crs

Why?
oc4j is used only for Quality of Service Management (QoS feature), as per:
https://docs.oracle.com/database/121/APQOS/install_config.htm#APQOS151
http://docs.oracle.com/cd/E11882_01/server.112/e24611.pdf

I searched on MOS and only found notes saying we cannot remove default services (it is not supported). But as a DBA, in the middle of the night with the environment down, this may be a solution for you, especially if you are not using QoS.

Hope it helps. Cheers!