Advice: Setting underScore/Hidden Parameters

Please be aware that hidden parameters (parameters starting with “_”) should not be set unless explicitly recommended by Oracle or when there is a perfect match to your case.

This is explained in MOS note Best Practices and Guidelines for Setting underScore/Hidden Parameter to Workaround and/or Resolve Customers Problems (Doc ID 2524119.1).

So, in case your diagnostics lead you to setting one, it is suggested that you:

  • Get the explicit approval/agreement from Oracle.
  • Include evidence of the root cause for this setting. A comment should work, something like:
alter system set "_replace_virtual_columns"=false comment='Workaround Doc ID 1512347.1' sid='*' scope=both;
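
Also, it can be handy to keep track of which hidden parameters are already set to non-default values before and after any change. A minimal sketch (assuming SYSDBA access, since it relies on X$ views):

-- List hidden parameters currently set to non-default values
select p.ksppinm  as parameter,
       v.ksppstvl as value,
       v.ksppstdf as is_default
  from x$ksppi p, x$ksppsv v
 where p.indx = v.indx
   and p.ksppinm like '\_%' escape '\'
   and v.ksppstdf = 'FALSE'
 order by p.ksppinm;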

Cheers!

ORA-04030: out of process memory when trying to allocate % bytes (kkoutlCreatePh,ub1 : kkoabr)

Hi all,

So, I started seeing this in a client environment. Researching the case, after finding no clear reference on MOS, I noticed some high PGA allocation, as per below.

SYS@proddb AS SYSDBA PROD> select pid, serial#,category, allocated/1024/1024 MB, used/1024/1024 MB_used, max_allocated/1024/1024 MB_MAX_ALLOCATED_ON_PGA
2 from v$process_memory where pid=852;

PID SERIAL# CATEGORY MB MB_USED MB_MAX_ALLOCATED_ON_PGA
---------- ---------- --------------- ---------- ---------- -----------------------
852 91 SQL .086807251 .00806427 .672416687
852 91 PL/SQL .087730408 .078926086 .126182556
852 91 Freeable .5625 0
852 91 Other 2.25187302 2.25187302
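
For reference, the PID to inspect can be found by listing the top PGA consumers, with something like this sketch over v$process (assuming SYSDBA access):

-- Top 10 processes by allocated PGA
select pid, spid, serial#,
       round(pga_used_mem/1024/1024,2)  as pga_used_mb,
       round(pga_alloc_mem/1024/1024,2) as pga_alloc_mb,
       round(pga_max_mem/1024/1024,2)   as pga_max_mb
  from v$process
 order by pga_alloc_mem desc
 fetch first 10 rows only;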

Seems a match to MOS note ORA-04030 Error With High “kkoutlCreatePh” (Doc ID 1618444.1).

The solution? Simply disable the following parameter:

"_b_tree_bitmap_plans"=false

 

Trace for additional info:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1
System name: Linux
Node name: proddb.local
Release: 4.1.12-94.8.3.el7uek.x86_64
Version: #2 SMP Wed Apr 25 19:57:32 PDT 2018
Machine: x86_64
Instance name: proddb
Redo thread mounted by this instance: 1
Oracle process number: 854
Unix process pid: 28899, image: oracle@proddb.local

*** 2019-01-09 11:20:09.130
*** SESSION ID:(5429.55092) 2019-01-09 11:20:09.130
*** CLIENT ID:() 2019-01-09 11:20:09.130
*** SERVICE NAME:(XXXXX) 2019-01-09 11:20:09.130
*** MODULE NAME:(JDBC Thin Client) 2019-01-09 11:20:09.130
*** CLIENT DRIVER:(jdbcthin) 2019-01-09 11:20:09.130
*** ACTION NAME:() 2019-01-09 11:20:09.130

[TOC00000]
Jump to table of contents
Dump continued from file: /u01/app/oracle/diag/rdbms/proddb/proddb/trace/proddb_ora_28899.trc
[TOC00001]
ORA-04030: out of process memory when trying to allocate 34392040 bytes (kkoutlCreatePh,ub1 : kkoabr)

[TOC00001-END]
[TOC00002]
========= Dump for incident 2710965 (ORA 4030) ========
[TOC00003]
----- Beginning of Customized Incident Dump(s) -----
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
---------------------------------------
72% 18 GB, 1356 chunks: "free memory " SQL
kxs-heap-c ds=0x7fa8df330220 dsprt=0x7fa8df49dbe0
27% 6809 MB, 3366 chunks: "permanent memory " SQL
kxs-heap-c ds=0x7fa8df330220 dsprt=0x7fa8df49dbe0
0% 100 MB, 894 chunks: "permanent memory " SQL
kkoutlCreatePh ds=0x7fa8cce16708 dsprt=0x7fa8df330220
0% 23 MB, 589515 chunks: "chedef : qcuatc "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 18 MB, 150124 chunks: "opndef: qcopCreateOpnViaM "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 16 MB, 214829 chunks: "logdef: qcopCreateLog "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 11 MB, 241 chunks: "free memory "
top call heap ds=0x7fa8df49dbe0 dsprt=(nil)
0% 9643 KB, 4623 chunks: "qkkele " SQL
kxs-heap-c ds=0x7fa8df330220 dsprt=0x7fa8df49dbe0
0% 6534 KB, 58578 chunks: "optdef: qcopCreateOptInte "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0
0% 4134 KB, 15399 chunks: "kccdef : qcsvwsci "
TCHK^2a9688d9 ds=0x7fa8df33feb8 dsprt=0x7fa8df49c9e0

=======================================
PRIVATE MEMORY SUMMARY FOR THIS PROCESS
---------------------------------------
******************************************************
PRIVATE HEAP SUMMARY DUMP
25 GB total:
25 GB commented, 794 KB permanent
12 MB free (0 KB in empty extents),
24 GB, 1 heap: "kxs-heap-c " 67 KB free held
------------------------------------------------------
Summary of subheaps at depth 1
25 GB total:
203 MB commented, 6809 MB permanent
18 GB free (0 KB in empty extents),
6398 MB, 9243 chunks: "allocator state " 6398 MB free held
3758 MB, 4623 chunks: "qkkele " 3749 MB free held
3650 MB, 4623 chunks: "qkkkey " 3650 MB free held

=========================================
REAL-FREE ALLOCATOR DUMP FOR THIS PROCESS
-----------------------------------------

Dump of Real-Free Memory Allocator Heap [0x7fa8df317000]
mag=0xfefe0001 flg=0x5000003 fds=0x0 blksz=65536
blkdstbl=0x7fa8df317010, iniblk=524288 maxblk=524288 numsegs=318
In-use num=2965 siz=641597440, Freeable num=0 siz=0, Free num=254 siz=3586195456

==========================================
INSTANCE-WIDE PRIVATE MEMORY USAGE SUMMARY
------------------------------------------

Dumping Work Area Table (level=1)
=====================================

Global SGA Info
---------------

global target: 102400 MB
auto target: 62376 MB
max pga: 2048 MB
pga limit: 4096 MB
pga limit known: 0
pga limit errors: 0

pga inuse: 33104 MB
pga alloc: 35238 MB
pga freeable: 1225 MB
pga freed: 433664026 MB
pga to free: 0 %
broker request: 0

pga auto: 12 MB
pga manual: 0 MB

pga alloc (max): 35238 MB
pga auto (max): 12284 MB
pga manual (max): 2 MB

# workareas : 0
# workareas(max): 551

Quick ways to transfer files to Oracle Support – MOS directly from the Database Server

Hi all,

This will be a quick one but helped me a lot last week.

I was working on an ORA-00600 issue on a 2-node RAC cluster. Working with Oracle Support, I was asked to transfer TFA data as well as ADRCI data so they could move forward with the SR. Very common process, right?

After I generated all the files, there was over 3 GB of data to be sent over. If I were to get the files from the database server to my laptop, I would need to transfer them multiple times due to the jump servers in the middle.

This time I was lucky, as the database server was able to reach https://transport.oracle.com, so I tried sending the files directly to MOS.

I will show you how I did it. This is not a mystery, but it can really save you time.

You have two options, both of which are straightforward:

Using curl

curl -T FILE_YOU_WANT_TO_SEND -u MOS_USER https://transport.oracle.com/upload/issue/SR_NUMBER/

Using tfactl

tfactl upload -sr SR_NUMBER -user MOS_USER FILE_YOU_WANT_TO_SEND

For both, you need to provide your MOS credentials, the SR number and the file you want to upload.
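
For illustration, with hypothetical values (the file path, user and SR number below are made up):

curl -T /tmp/tfa_collection_proddb.zip -u my.user@company.com https://transport.oracle.com/upload/issue/3-12345678901/
tfactl upload -sr 3-12345678901 -user my.user@company.com /tmp/tfa_collection_proddb.zip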

After the upload is done, you will see the file in the SR attachments with a status like “Transferring to the SR”. A few minutes later you will see an update on the SR saying that the file was uploaded.

Hope this helps!

Elisson Almeida

ORA 600 [kdt_bseg_srch_cbk PITL1] / ORA-00700 [PITL6] / ORA-00700 [PITL5]

Hi all,
So, right after patching, a client environment started receiving entries in the alert log like:

ORA-00700: soft internal error, arguments: [PITL6], [], [], [], [], [], [], [], [], [], [], []
ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [11], [], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [kdt_bseg_srch_cbk PITL1], [2], [], [], [], [], [], [], [], [], [], []

First, a quick word regarding ORA-700: an ORA-700 is a so-called ‘soft’ assert. Soft asserts are triggered when the caller wants to make a note of the fact that something unexpected has happened, but would like to continue on because the failure is not fatal to the process or the instance. This was introduced in 12c and took over some of the ORA-600 messages (the informative ones), leaving ORA-600 for the more critical issues.

Now to the errors: this is a clear match to Bug 28483184 (Bug 28483184 – ORA-600[PITL1] ON UPDATE TO COMPRESSED BLOCK WITH FIX FOR BUG 28364411 INSTALLED), which consists of a known defect in the Oracle code that allows continued insertion of non-header block rows past the number of block ITLs in data blocks of OLTP-compressed tables.

More info: MOS Errors Noted in 12.2 and Above During DML on Compressed Tables: ORA-00600 [PITL1] / ORA-00600 [kdt_bseg_srch_cbk PITL1] / ORA-00700 [PITL6] / ORA-00700 [kdt_bseg_srch_cbk PITL5] (Doc ID 2420831.1)

To fix it: apply the RDBMS patch for bug 28483184 and reorganize the involved tables. It seems to me that the patch only fixes the incorrect creation of non-header blocks, but doesn’t fix those that already have the problem in place.

My twist: I’d recommend increasing the PCTFREE a little during the table reorganization, also based on Oracle’s recommendation. Recreating the tables with a larger PCTFREE leaves more space for ITLs. This is advantageous since the fix swaps out a block that has the maximum number of ITLs for one that has fewer.

If arranging a patching window is a big problem, note this patch is online installable. However, I’d recommend doing it offline using RAC Rolling, as this seems a more consistent process in general.

So, in summary, the recommended actions are:
– Apply Patch 28483184 (RAC Rolling, Standby-First and online installable).
– Reorganize the affected tables, for example with Create Table As Select (CTAS), ALTER TABLE MOVE, Data Pump export/import or online redefinition (see the sketch below).
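
A minimal sketch of the reorganization (object names are hypothetical; adjust PCTFREE to your case):

-- Move the affected OLTP-compressed table with a slightly larger PCTFREE
alter table app_owner.compressed_tab move pctfree 20;
-- The move marks the table's indexes UNUSABLE, so rebuild them afterwards
alter index app_owner.compressed_tab_pk rebuild online;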

Hope that helps you!

ORA-01450 on Index Rebuild

Hello all,
So, I got this:

SQL> ALTER INDEX MATHEUS.INDEX_1 REBUILD online TABLESPACE NEW_TBS;
ALTER INDEX MATHEUS.INDEX_1 REBUILD online TABLESPACE NEW_TBS
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded

So, is this a bug? Yes, it is.

From MOS ALTER INDEX REBUILD ONLINE Fails with ORA-01450 (Doc ID 236329.1):
“This is caused by issue Bug:2525767. The online rebuild of the index creates a journal table and index. This internal journal IOT table contains more columns in its index. Their total length is greater than number reported in ORA-01450 error message. This is a feature of online rebuild.
Maximum key length is calculated with respect to the database block size. It means that current value of the initialization parameter db_block_size is not large enough so that the internal journal IOT can be created without errors.”
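
As a quick sanity check, you can compare the raw key length of the index columns against the limit reported in ORA-01450, keeping in mind that the journal IOT of the online rebuild adds extra columns on top of it. A sketch using the names from the example:

-- Sum of the index column lengths, to compare against the ORA-01450 limit
select index_owner, index_name, sum(column_length) as key_length
  from dba_ind_columns
 where index_owner = 'MATHEUS'
   and index_name  = 'INDEX_1'
 group by index_owner, index_name;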

Wanna know more? There is a great article by Alex Gorbachev explaining exactly how this works: ORA-01450 During Online Index Rebuild.

The solution?
As explained, it simply cannot be done online, so doing it offline is the way to go:

SQL> ALTER INDEX MATHEUS.INDEX_1 REBUILD TABLESPACE NEW_TBS;

Index altered.

Hope it helps, cheers!

ORA-00932 to Export Statistics After 12c Upgrade – Undocumented

Hello All,
I have a client that used to export and import dictionary statistics using a stats table. However, after upgrading the database to 12c, it started to raise the errors below when exporting data to a stats table (created in 11g) or even when updating stats in this table.

SQL> exec dbms_stats.upgrade_stat_table('GREPORA','MYSTATSTABLE');
DECLARE
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
ORA-06512: at "SYS.DBMS_STATS", line 40183
ORA-06512: at line 28

SQL> EXEC DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS (stattab => 'MYSTATSTABLE',statid  => 'TEST_MATHEUS',statown => 'GREPORA' );
BEGIN DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS (stattab => 'MYSTATSTABLE',statid  => 'TEST_MATHEUS',statown => 'GREPORA' ); END;

*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
ORA-06512: at "SYS.DBMS_STATS", line 37085
ORA-06512: at line 1

I reviewed several references (all very interesting, by the way; I recommend you take a look):
– ORA-20002 on Importing Statistics using DBMS_STATS Procedures (Doc ID 740345.1)
– Datapump Export Fails With Ora-00932 (Doc ID 1300803.1)
– Error “ORA-00932 – Inconsistent Datatypes: Expected BINARY Got NUMBER” When Executing the Primavera Gather-Statistics Scripts on an Oracle 12c Database (Doc ID 2111114.1)
– https://oracle-base.com/articles/12c/concurrent-statistics-collection-12cr1
– http://www.morganslibrary.org/reference/pkgs/dbms_stats.html
– How To Export and Import Dictionary Statistics (Doc ID 1450820.1)
– Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Doc ID 798257.1)

But I found no reference to this specific issue, not even on MOS.

After some tests, I realized that maybe the column order in the stats table had changed in the implementation between 11g and 12c. Bingo! The same columns and datatypes are in place, but in a different order.

This is not documented in MOS nor on the internet/independent blogs, but it seems the EXPORT% procedures in DBMS_STATS assume the stats table structure in their internal code without specifying column order.
As we know, a good development practice is to specify column names when performing any operation. Bad on you, Oracle!
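
To see the mismatch yourself, one way (a sketch; the 12c table name is hypothetical, the owner and 11g table are the ones from the example) is to create a fresh stats table on 12c and compare both column lists by position:

-- Create a brand-new stats table under 12c
exec dbms_stats.create_stat_table('GREPORA','MYSTATSTABLE_12C');

-- Compare column order of the 11g-created table against the 12c-created one
select table_name, column_id, column_name, data_type
  from dba_tab_columns
 where owner = 'GREPORA'
   and table_name in ('MYSTATSTABLE','MYSTATSTABLE_12C')
 order by table_name, column_id;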


Oracle ASR: Communication Issues

Hello all,
Are you getting notifications like this one from your ASR?

ALERT: Oracle Auto Service Request (ASR) has detected a heartbeat failure for these assets.

[list with "Serial"; "Hostname"; "Information" of affected targets]

IMPACT: ASR would not be able to create a Service Request (SR) if a fault were to occur.
ACTION: Determine why the heartbeat has failed for these assets and resolve the issue.

This is only a notification saying that ASR was not able to reach a target.
For detailed information on how to troubleshoot, you can check MOS note Oracle Auto Service Request (ASR) No Heartbeat Issue – How to Resolve (Doc ID 1346328.2).

In general, the things to test are (quick checks are sketched after the list):
– Access from ASR server to transport.oracle.com, via https, using port 443.
– Access from ASR server to ASR assets, via http, using port 6481.
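
A minimal sketch of those checks from the ASR Manager host (the asset hostname is a placeholder):

curl -v https://transport.oracle.com
curl -v http://asset-hostname:6481/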
