ORA-02056: 2PC: k2lcom: bad two-phase command number rdonly from coord

Hi all,

So, you found this error?

Well, I received it in a client environment. After checking MOS note ORA-02056: 2PC: K2gCheckGlobalCommit: Bad Two-phase Command Number 1 From Coord (Doc ID 2498134.1), which points to a known issue when committing transactions (Bug 22016049), I noticed my problem could be different…

My real root cause was different: the transaction was being done on a remote object, and when checking the remote database I found pending distributed transactions.

For more information about distributed transactions, please check: https://docs.oracle.com/database/121/ADMIN/ds_txns.htm#ADMIN12211


So, if you hit this situation, before planning and applying the patch I suggest you:

  • Check whether your process works on remote databases.
  • Check for any pending distributed remote transactions.

How? Easy check (on remote db):

select * from dba_2pc_pending;
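If you do find rows there and need to resolve them manually, here is a minimal sketch (the transaction id below is just an example; confirm with the application owner whether to commit or roll back before forcing anything):

-- On the remote database holding the in-doubt transaction:
select local_tran_id, state from dba_2pc_pending;

-- Force the outcome, depending on the case:
rollback force '1.21.17';   -- or: commit force '1.21.17';

-- Then clean up the pending entry:
exec dbms_transaction.purge_lost_db_entry('1.21.17');
commit;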

I hope it helps!

PS NOTE: I just found this very good and complete post by Rodrigo Mufalani on the same topic: ORA-01591: lock held by in-doubt distributed transaction

Have a look for an Action Plan!
Cheers!

ORA-04030: out of process memory when trying to allocate % bytes (kkoutlCreatePh,ub1 : kkoabr)

Hi all,

So, I started seeing this in a client environment. Researching the case, after finding no clear reference on MOS, I noticed some high PGA allocation, as per below.

SYS@proddb AS SYSDBA PROD> select pid, serial#,category, allocated/1024/1024 MB, used/1024/1024 MB_used, max_allocated/1024/1024 MB_MAX_ALLOCATED_ON_PGA
2 from v$process_memory where pid=852;

PID SERIAL# CATEGORY MB MB_USED MB_MAX_ALLOCATED_ON_PGA
---------- ---------- --------------- ---------- ---------- -----------------------
852 91 SQL .086807251 .00806427 .672416687
852 91 PL/SQL .087730408 .078926086 .126182556
852 91 Freeable .5625 0
852 91 Other 2.25187302 2.25187302
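By the way, if you still need to find which process to inspect, a quick helper query (mine, not from the client trace) lists the top PGA consumers:

select pid, spid, pga_used_mem/1024/1024 used_mb, pga_alloc_mem/1024/1024 alloc_mb, pga_max_mem/1024/1024 max_mb
  from v$process
 order by pga_alloc_mem desc
 fetch first 10 rows only;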

It seems to match MOS ORA-04030 Error With High “kkoutlCreatePh” (Doc ID 1618444.1).

The solution? Simply disable the following parameter:

"_b_tree_bitmap_plans"=false

 
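A minimal sketch of how to apply it (it is an underscore parameter, so validate with Oracle Support before changing it in your environment):

SQL> alter system set "_b_tree_bitmap_plans"=false scope=both sid='*';

Or, to test the effect in a single session first:

SQL> alter session set "_b_tree_bitmap_plans"=false;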

Trace for additional info:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1
System name: Linux
Node name: proddb.local
Release: 4.1.12-94.8.3.el7uek.x86_64
Version: #2 SMP Wed Apr 25 19:57:32 PDT 2018
Machine: x86_64
Instance name: proddb
Redo thread mounted by this instance: 1
Oracle process number: 854
Unix process pid: 28899, image: oracle@proddb.local

*** 2019-01-09 11:20:09.130
*** SESSION ID:(5429.55092) 2019-01-09 11:20:09.130
*** CLIENT ID:() 2019-01-09 11:20:09.130
*** SERVICE NAME:(XXXXX) 2019-01-09 11:20:09.130
*** MODULE NAME:(JDBC Thin Client) 2019-01-09 11:20:09.130
*** CLIENT DRIVER:(jdbcthin) 2019-01-09 11:20:09.130
*** ACTION NAME:() 2019-01-09 11:20:09.130

[TOC00000]
Jump to table of contents
Dump continued from file: /u01/app/oracle/diag/rdbms/proddb/proddb/trace/proddb_ora_28899.trc
[TOC00001]
ORA-04030: out of process memory when trying to allocate 34392040 bytes (kkoutlCreatePh,ub1 : kkoabr)

[TOC00001-END]
[TOC00002]
========= Dump for incident 2710965 (ORA 4030) ========
[TOC00003]
----- Beginning of Customized Incident Dump(s) -----
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
---------------------------------------
72% 18 GB, 1356 chunks: "free memory " SQL
kxs-heap-c ds=0x7fa8df330220 dsprt=0x7fa8df49dbe0
27% 6809 MB, 3366 chunks: "permanent memory " SQL
kxs-heap-c ds=0x7fa8df330220 dsprt=0x7fa8df49dbe0
0% 100 MB, 894 chunks: "permanent memory " SQL
kkoutlCreatePh ds=0x7fa8cce16708 dsprt=0x7fa8df330220
0% 23 MB, 589515 chunks: "chedef : qcuatc "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 18 MB, 150124 chunks: "opndef: qcopCreateOpnViaM "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 16 MB, 214829 chunks: "logdef: qcopCreateLog "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 11 MB, 241 chunks: "free memory "
top call heap ds=0x7fa8df49dbe0 dsprt=(nil)
0% 9643 KB, 4623 chunks: "qkkele " SQL
kxs-heap-c ds=0x7fa8df330220 dsprt=0x7fa8df49dbe0
0% 6534 KB, 58578 chunks: "optdef: qcopCreateOptInte "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 4134 KB, 15399 chunks: "kccdef : qcsvwsci "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0

=======================================
PRIVATE MEMORY SUMMARY FOR THIS PROCESS
---------------------------------------
******************************************************
PRIVATE HEAP SUMMARY DUMP
25 GB total:
25 GB commented, 794 KB permanent
12 MB free (0 KB in empty extents),
24 GB, 1 heap: "kxs-heap-c " 67 KB free held
------------------------------------------------------
Summary of subheaps at depth 1
25 GB total:
203 MB commented, 6809 MB permanent
18 GB free (0 KB in empty extents),
6398 MB, 9243 chunks: "allocator state " 6398 MB free held
3758 MB, 4623 chunks: "qkkele " 3749 MB free held
3650 MB, 4623 chunks: "qkkkey " 3650 MB free held

=========================================
REAL-FREE ALLOCATOR DUMP FOR THIS PROCESS
-----------------------------------------

Dump of Real-Free Memory Allocator Heap [0x7fa8df317000]
mag=0xfefe0001 flg=0x5000003 fds=0x0 blksz=65536
blkdstbl=0x7fa8df317010, iniblk=524288 maxblk=524288 numsegs=318
In-use num=2965 siz=641597440, Freeable num=0 siz=0, Free num=254 siz=3586195456

==========================================
INSTANCE-WIDE PRIVATE MEMORY USAGE SUMMARY
------------------------------------------

Dumping Work Area Table (level=1)
=====================================

Global SGA Info
---------------

global target: 102400 MB
auto target: 62376 MB
max pga: 2048 MB
pga limit: 4096 MB
pga limit known: 0
pga limit errors: 0

pga inuse: 33104 MB
pga alloc: 35238 MB
pga freeable: 1225 MB
pga freed: 433664026 MB
pga to free: 0 %
broker request: 0

pga auto: 12 MB
pga manual: 0 MB

pga alloc (max): 35238 MB
pga auto (max): 12284 MB
pga manual (max): 2 MB

# workareas : 0
# workareas(max): 551

ORA-02019 While SELECT From A View Owned By Another User Using Dblink

Quick case today.

This week I had a client experiencing ORA-02019 while running a SELECT against a view over a dblink, with CONNECT BY PRIOR … START WITH in the SELECT.
The situation involved views on one DB which needed to be accessed by a group of users from another DB through a proxy user and a DB link, but they kept hitting this ORA error.

The root cause?

ORA-02019 while selecting from a view owned by another user over a dblink is a match for Bug 26558437 – DATABASE LINK FAILS WITH ORA-2019 WHEN SELECT ANOTHER USER VIEW.

But MOS doesn’t offer a workaround besides applying the patch, as usual. So what did we do to solve it in our case?

We created a materialized view, refreshed every 15 minutes (the solution supported this delay), using the DB link of the view owner.
In this case, instead of executing the query on the view, the other users actually query the table created (and refreshed) by the mview, whose code is executed only by the mview owner.
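A minimal sketch of the idea (object names, the DB link and the refresh interval below are hypothetical; the DB link is assumed to connect as the remote view owner):

-- On the database the users connect to, in the mview owner's schema:
create materialized view mv_remote_data
  build immediate
  refresh complete
  start with sysdate next sysdate + 15/1440   -- every 15 minutes
as
  select * from owner_view@view_owner_link;

-- Users then query the mview instead of the remote view:
grant select on mv_remote_data to app_users;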

By the way, 2 good side effects:
1) Since the view was queried more often than the refresh period, this solution also saves some effort database-wise: selecting from the mview is much cheaper than running the view code, besides not using the network over the dblink.
2) If the remote database gets slow, or goes down, the data is still available from the last mview refresh.

Conclusions:
1) Use MATERIALIZED VIEW!
2) MOS doesn’t always give you all the steps. Sometimes you can easily solve your problem by thinking a little more about the root cause.

Cheers

ORA-00600: internal error code, arguments: [kghfrempty:ds]

Hi all,
Annoying thing, just because it sounded veeery unusual to me. I would expect a different error.

I have seen this “[kghfrempty:ds]” a couple of times in the past, and if you have seen it too you may have noticed it is almost as generic as the ORA-600 itself. That happens because it usually sits at the top of KGH errors in general. Some examples:

1.

kghnerror <- kghfrempty <- kghfrempty_ex <- qerhjFreeSpace
<- qerhjFetch <- qersoProcessULS <- qersoFetch <- opifch2 <- kpoal8
<- opiodr <- kpoodrc <- rpiswu2 <- kpoodr <- upirtrc
<- kpurcsc <- kpufch0 <- kpufch <- OCIStmtFetch2 <- qksanExecSql <- qksanAnalyzeSql

2.

kghfrempty <- kghfrempty_partial <- kghgex <- kghfnd <- kghalo <- kghgex <- kghalf <- ktmrProcessCRClone 
<- ktmbRead <- ktmrget_int <- ktmrget <- kdmsTransGet <- kdst_fetch_imc  <- kdsttgr  <- qertbFetch

And it can be related either to analyze bugs (as per stack 1), In-Memory Column Store – IMC (as per stack 2), view creation, rollback with clusterwide global transactions in RAC, trigger creation, function-based indexes, or several other generic situations (there is no pattern, as you can see).

So what to do if you get this message? Well, generically, read all the MOS notes available and try to match your case. If you match, pray there is a fix for your current version, or open an SR to either get a backport or start a new investigation.

A few examples of things to check:

  • ALERT: Bug 13605839 ORA-600 [ktbsdp1] ORA-600 [kghfrempty:ds] ORA-600 [kdBlkCheckError]. Corruption in Rollback with Clusterwide Global Transactions in RAC (Doc ID 1527740.1)
  • SQL Analyze Fails with ORA-600[kghfrempty:ds] (Doc ID 2247180.1)
  • Bug 21556276 – dataguide createviewonpath hit ora 600 [kghfrempty:ds] (Doc ID 21556276.8)
  • Bug 20878625 – ORA-600[kghfrempty:ds] with IMC enabled (Doc ID 20878625.8)
  • Bug 19212166 – ORA-600 [KGHFREMPTY:DS] With Cache Line Alignment Enabled (Doc ID 19212166.8)
  • Bug 6797925 – ORA-600 [kghfrempty:ds] During Creation Of Trigger In XML Schema (Doc ID 6797925.8)
  • 11.1: ORA-600 [17147] AND [kghfrempty:ds] On Select Using Functional Indexes (Doc ID 884882.1)
  • ORA-600 [kghfrempty:ds] And ORA-600 [kghrcdepth:ds] While Gathering Statistics (Doc ID 1480132.1)

BUT I just noticed an undocumented scenario causing this, and it might help you fix it as well.
I noticed I was getting this error for different queries, but always from the same application. So I did a complete review of the application layer and noticed this application was still using the JDBC 7 driver. It happens we had just upgraded the database from 12.1 to 19c.

So, after upgrading the JDBC driver, everything started to work properly again and no MOS SR was required. Lesson learned: when you receive this [kghfrempty:ds], especially after a migration, also check your application's JDBC driver version.

For the compatibility matrix: Starting With Oracle JDBC Drivers – Installation, Certification, and More! (Doc ID 401934.1)
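If you want to check from the database side which driver and version your sessions are actually using, a simple helper query (not from the original investigation) against v$session_connect_info can help:

select distinct s.username, ci.client_driver, ci.client_version
  from v$session_connect_info ci
  join v$session s on s.sid = ci.sid and s.serial# = ci.serial#
 where s.type = 'USER';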

Hope it helps, cheers!

ORA-65532: cannot alter or drop automatically created indexes

Hello all,
Quickly walking through this, as it was new to me (as for everybody, I guess): so what if I'm not happy with the Oracle algorithm and I want to drop an AUTO index on 19c?

Should be easy, like “drop index XXXX;” right? Wrong. See the struggle:

SQL> drop index GREPORA."SYS_AI_9xu652x5fyu5i";
drop index GREPORA."SYS_AI_9xu652x5fyu5i"
                 *
ERROR at line 1:
ORA-65532: cannot alter or drop automatically created indexes

OK, so let’s have a look at my auto-created indexes. You can see them with the AUTO flag set to YES in the query below:

SQL> select owner, index_name, auto, tablespace_name from dba_indexes where auto='YES';
OWNER              INDEX_NAME    AUTO    TABLESPACE_NAME
________ _______________________ _______ __________________
GREPORA  SYS_AI_9xu652x5fyu5i    YES     GREPORA
GREPORA  SYS_AI_few32swe423dw    YES     GREPORA
GREPORA  SYS_AI_94osd824n202f    YES     GREPORA

Ok, so I cannot drop it; let me alter it and set it as unusable then, as that would achieve my intent anyway:

SQL> alter index ADMIN."SYS_AI_9xu652x5fyu5i" unusable;
alter index ADMIN."SYS_AI_9xu652x5fyu5i" unusable
 *
ERROR at line 1:
ORA-65532: cannot alter or drop automatically created indexes

Hmmm, so it means I cannot alter it as well?
Well, kind of, there is something I can do:

SQL> alter index ADMIN."SYS_AI_9xu652x5fyu5i" rebuild tablespace GREPORA2 online;
Index altered.

Noice, thanks for nothing. So what changed?
Well, you can use this as a workaround: move the index to a new tablespace (as done above) and then drop that tablespace:

SQL> drop tablespace GREPORA2 including contents;
Tablespace dropped.

And index will be gone:

SQL> select owner,index_name,auto,tablespace_name from dba_indexes where auto='YES';
OWNER              INDEX_NAME    AUTO    TABLESPACE_NAME
________ _______________________ _______ __________________
GREPORA  SYS_AI_few32swe423dw    YES     GREPORA
GREPORA  SYS_AI_94osd824n202f    YES     GREPORA

OK, so that’s bad, right… And the worst part? If you simply drop it like this, Oracle will probably recreate it, based on its algorithm, right?
Gee, thanks for nothing (again)!

This, though, is easier to change: simply set Auto Indexing to report-only mode:

exec dbms_auto_index.configure('AUTO_INDEX_MODE','REPORT ONLY');

This is interesting, right?
Ok, we have more options: we can even exclude specific schemas from AUTO_INDEX, choose a default tablespace, define retention and a lot of other things.
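A few hedged examples of those options (the schema, tablespace and retention values below are illustrative only):

exec dbms_auto_index.configure('AUTO_INDEX_SCHEMA', 'GREPORA', FALSE);            -- exclude a schema
exec dbms_auto_index.configure('AUTO_INDEX_DEFAULT_TABLESPACE', 'AUTO_IDX_TBS');  -- dedicated tablespace
exec dbms_auto_index.configure('AUTO_INDEX_RETENTION_FOR_AUTO', '90');            -- keep unused auto indexes for 90 days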

As usual, the best reference I could find: https://oracle-base.com/articles/19c/automatic-indexing-19c

Just one thing you might want to consider, and it's not in Tim Hall's post: especially if the index is bad for one specific query, you may want to avoid using it just for that query. We can do it with a session/system parameter:

“_optimizer_use_auto_indexes”=OFF
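For example, at session level (again an underscore parameter, so check with Oracle Support before relying on it):

SQL> alter session set "_optimizer_use_auto_indexes"=OFF;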

Well, hope it helps you. Cheers!

MySQL 8 requested authentication method unknown

Hey folks,
Be aware, when you’re migrating from MySQL 5.7 to 8.0, of how you’re doing it and what kind of applications have access to it. Version 8.0 introduced a new authentication plugin called caching_sha2_password, and it’s now the default auth plugin. Here is the list of compatible connectors, to check whether your app is OK with that or whether you should call the dev team.

Example: If, for some reason, you get stuck with the following error (PHP sample), you have to do a few changes.

Connect Error: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client

Edit your config file and set the option default-authentication-plugin to mysql_native_password, restart the server and set the password again with the following command.

Config File:

[mysqld]
/... other configs .../
default-authentication-plugin=mysql_native_password

SQL:

ALTER USER 'adv'@'localhost' IDENTIFIED WITH mysql_native_password BY '1AB@8CD#E91F22!';

New users should be created the same way. Note you only have to do this for the users the application will use; for regular DBA and ad-hoc query users you don’t need this change.
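For example (user, host and password below are just placeholders):

CREATE USER 'app_user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'S0me#Pass!';

-- And to check which authentication plugin each account is currently using:
SELECT user, host, plugin FROM mysql.user;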

So, look out before just doing an update on your server, or you could break your app 😉

Everything Stopped on ORA-02002!

Hi all,
So, I got contacted by a client in an emergency because no new sessions could be opened on the database, failing with error ORA-02002. Even worse, everything already running stopped working and started raising this error too.

Unusual, right?

So, checking the trace generated for the error, here is what I found:

ORA-02002: error while writing to audit trail
ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.AUD$ by 8 in tablespace AUDIT_TBS

It seems the client had moved AUD$ to another tablespace to avoid filling SYSTEM, just as described in this post. Due to bad monitoring, however, that tablespace got full.

As the auditing facility was unable to write to the AUDIT_TRAIL table, SQL statements that were currently being audited also failed.

What to do?

  1. Turn off auditing for the database
  2. Restart database
  3. Add space to tablespace
  4. Re-enable auditing, if required.

To avoid application back and forth, I did it in restricted mode until everything was fixed.
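A rough sketch of those steps (traditional auditing to AUD$ assumed; the datafile path and size are hypothetical):

SQL> alter system set audit_trail=NONE scope=spfile;
SQL> shutdown immediate
SQL> startup restrict
SQL> alter tablespace AUDIT_TBS add datafile '/u01/oradata/proddb/audit_tbs02.dbf' size 4g autoextend on;
SQL> alter system set audit_trail=DB scope=spfile;
SQL> shutdown immediate
SQL> startup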

To avoid this sort of issue:

  • Be sure you are monitoring properly the new tablespace.
  • Place the audit tablespace on a reliable disk location and perhaps mirrored for protection.

Hope it helps,
Cheers!

12.2 Scheduler Job Disappeared After Creation

Hi all,
Just sharing an experience here. Recently, in a migrated 12.2 environment, I created a Scheduler job and it simply disappeared. How come?
Well, it seems since 12.2 the AUTO_DROP attribute is set to TRUE by default. I absolutely don't know why. So I had to recreate the job and set auto_drop to false, as simple as that.

I couldn’t find any related bug on MOS, but I’d class this as a bug, since the default behavior changed with no prior notice.
So, from now on, I strongly suggest you check your job's AUTO_DROP attribute after job creation.

Here is a test case that reproduces the problem:

In a 12.2 database:

begin 
  DBMS_SCHEDULER.create_job (
    job_name        => 'TEST_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN null; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'SYSTIMESTAMP + INTERVAL ''10'' SECOND',
    end_date        => NULL,
    enabled         => false,
    comments        => 'Test job');
end;
/
select ENABLED, AUTO_DROP, STATE, RUN_COUNT, FAILURE_COUNT,DEFERRED_DROP from dba_scheduler_jobs where job_name = 'TEST_JOB';
-- Job should appear here

begin 
  dbms_scheduler.enable('TEST_JOB'); 
  dbms_scheduler.run_job('TEST_JOB',false); 
end;
/
 
select ENABLED, AUTO_DROP, STATE, RUN_COUNT, FAILURE_COUNT,DEFERRED_DROP from dba_scheduler_jobs where job_name = 'TEST_JOB';

-- There is no job now in 12.2.

You can take the AUTO_DROP value easily from the query above, or by getting the job DDL, where you’ll see AUTO_DROP set to TRUE, as at the bottom of this page by Tim Hall.
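To keep a job from disappearing, a minimal sketch (either create it with auto_drop disabled or change the attribute before it runs):

begin
  dbms_scheduler.set_attribute('TEST_JOB', 'auto_drop', FALSE);
end;
/
-- or simply pass auto_drop => FALSE to DBMS_SCHEDULER.create_job at creation time.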

Hope it helps!
Cheers!

ORA-00600 [ipc_recreate_que_2]

Hi all,
So this week I started receiving this error in a client environment. It was happening due to Bug 26803191 – Getting ORA-00600 [ipc_recreate_que_2] instead of ORA-27515 – superseded (Doc ID 26803191.8).
In summary, failures due to a low memlock limit were raising ORA-00600 [ipc_recreate_que_2] instead of ORA-27515; the patch ensures that ORA-27515 is raised on failures caused by a low memlock limit, instead of this ORA-600.

As a workaround, the memlock limit can be raised, as also described in MOS ORA-00600: Internal Error Code, Arguments: [ipc_recreate_que_2] When Running Exachk (Doc ID 2480088.1).

Before:

[boesing@exa1dbadm01 trace]$ grep memlock /etc/security/limits.conf |grep oracle
# - memlock - max locked-in-memory address space (KB)
oracle soft memlock 237809520
oracle hard memlock 237809520

After:

[boesing@exa1dbadm01 ~]$ grep memlock /etc/security/limits.conf
# - memlock - max locked-in-memory address space (KB)
oracle soft memlock unlimited
oracle hard memlock unlimited
* soft memlock 32768
* hard memlock 32768
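To confirm the new limit is picked up, a quick check from a fresh login as the oracle user (the value is reported in KB):

[oracle@exa1dbadm01 ~]$ ulimit -l
unlimited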

Hope it helps,
Cheers!

ORA-00600: internal error code, arguments: [13011], [7907] on SYS.DBMS_AQ_INV

Hi all,

So a couple of weeks ago I was facing the following in a client environment. It was appearing every 10 minutes in the alert.log:

ORA-00600: internal error code, arguments: [13011], [7907], [12679954], [1], [13461738], [0], [], [], [], [], [], []
ORA-01403: no data found
ORA-06512: at "SYS.DBMS_AQ_INV", line 1248

I could find several references to ORA-600 [13011], always related to some internal corruption. As this is related to Advanced Queueing and the Scheduler, I decided to check the Scheduler job table structures:

SQL> analyze table SYS.SCHEDULER$_EVENT_LOG validate structure cascade; 

Table analyzed. 

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade; 
analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure cascade 
* 
ERROR at line 1: 
ORA-01499: table/index cross reference failure - see trace file

SQL> analyze table SYS.SCHEDULER$_JOB_RUN_DETAILS validate structure;

Table analyzed.

Ahá!
Also on the generated trace file:

2019-03-01 22:26:37.736 
SESSION ID:(39.32751) 2019-03-01 22:26:37.736 
CLIENT ID:() 2019-03-01 22:26:37.736 
SERVICE NAME:(SYS$USERS) 2019-03-01 22:26:37.736 
MODULE NAME:(sqlplus.exe) 2019-03-01 22:26:37.736 
CLIENT DRIVER:(SQL*PLUS) 2019-03-01 22:26:37.736 
ACTION NAME:() 2019-03-01 22:26:37.736 
CONTAINER ID:(1) 2019-03-01 22:26:37.736 
Table/Index row count mismatch 
table 273184 : index 275017, 1832 
Index root = tsn: 1 rdba: 0x00c0128a

So ANALYZE on table SCHEDULER$_JOB_RUN_DETAILS fails with CASCADE but succeeds without it. This means there is a problem with one of the indexes of this table: it has gone out of sync with the table.
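If you want to confirm which segment owns that index root block from the trace (rdba 0x00c0128a), a hedged way, assuming relative and absolute file numbers match, is to decode the rdba and look it up in dba_extents:

select owner, segment_name, segment_type
  from dba_extents
 where file_id = dbms_utility.data_block_address_file(to_number('00c0128a','xxxxxxxx'))
   and dbms_utility.data_block_address_block(to_number('00c0128a','xxxxxxxx'))
       between block_id and block_id + blocks - 1;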

SQL> select 'alter index '||owner||'.'||index_name||' rebuild;' from dba_indexes where table_name='SCHEDULER$_JOB_RUN_DETAILS';

'ALTERINDEX'||OWNER||'.'||INDEX_NAME||'REBUILD;'
--------------------------------------------------------------------------------
alter index SYS.I_SCHEDULER_JOB_RUN_DETAILS rebuild;

SQL> alter index SYS.I_SCHEDULER_JOB_RUN_DETAILS rebuild online;

Index altered.

After this, error solved and no more recurrence of that ORA-600.

Also note I couldn’t find any document about this on MOS, so this is kind of exclusive for now. 🙂

Cheers!