These days I found the following in a client’s 12c database when trying to create a Materialized View:
ORA-00600: internal error code, arguments: [qkswcWithQbcRefdByMain4]
A perfect match to MOS ORA-00600 [qkswcWithQbcRefdByMain4] when Create MV “WITH” clause (Doc ID 2232872.1).
The root cause is documented in Bug 22867413 – ORA-600 CALLING DBMS_ADVISOR.TUNE_MVIEW.
The given solution is to apply Patch 22867413.
After applying the patch, the issue was solved. 🙂
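For reference, the failing statement had this general shape — a minimal sketch, where the MV name, table, and column names (sales_mv, sales, region_id, amount) are illustrative and not from the client system:

```shell
# Hedged sketch of the failing pattern: a CREATE MATERIALIZED VIEW whose query
# uses a WITH clause referenced by the main query (all object names illustrative).
sqlplus -s / as sysdba <<'SQL'
CREATE MATERIALIZED VIEW sales_mv AS
WITH per_region AS (
  SELECT region_id, SUM(amount) AS total_amount
  FROM   sales
  GROUP  BY region_id
)
SELECT region_id, total_amount
FROM   per_region;
SQL
```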
I recently faced some occurrences of this error in a 184.108.40.206 database.
ORA-07445: exception encountered: core dump [nstimexp()+45] [SIGSEGV] [ADDR:0x58] [PC:0x7F42ABB] [Address not mapped to object] 
After some investigation I found a match to Bug 3934729.
The issue was originally matched to Bug 6918493, a reintroduction of Bug 2752985, which is fixed in 220.127.116.11.
However, after upgrading to 18.104.22.168 it hits Bug 3934729, which is fixed in 22.214.171.124.
Recommended actions are:
– Upgrade databases to 126.96.36.199 or higher (the best solution, but it may require more effort to validate the upgrade).
– Apply Patch 3934729: RANDOM ORA-07445 CORE DUMPS FROM DATABASE AND ORA-3113 FROM APPLICATION
– Set sqlnet.expire_time=0 (workaround)
– Ignore the error.
After some research I decided to apply the workaround, based on the recommended usage of sqlnet.expire_time (next week’s post is about this parameter 🙂).
This might also be the root cause of the ORA-03135: connection lost contact errors; the current value of this parameter in the environment was 1, which is very low.
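The workaround boils down to one line in sqlnet.ora. A minimal sketch, writing to a scratch file for illustration — on a real server you would edit $TNS_ADMIN/sqlnet.ora (or $ORACLE_HOME/network/admin/sqlnet.ora) instead:

```shell
# Workaround sketch: disable Dead Connection Detection probes entirely.
# /tmp/sqlnet.ora is a scratch path used here for illustration only.
SQLNET=/tmp/sqlnet.ora
printf 'sqlnet.expire_time = 0\n' > "$SQLNET"   # 0 disables DCD; something like 10 (minutes) is a saner probe interval than 1
cat "$SQLNET"
```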
So, check which action is more suitable for your environment!
Hope it helps 🙂
Below is some additional information on my situation:
Having this error? Probably something like this:
ORA-00600: internal error code, arguments: , [0x7F0315C53788], , , , , , , , , , 
This is a known error. The best match on MOS is GoldenGate 12c IntRep Crash W/ ORA-600 [kghfrmrg:nxt] ;ORA-600[kghfrh:ds]; ORA 600 ;  (Doc ID 1666909.1)
Don’t set your expectations too high for this post.
That’s because I didn’t exactly fix the issue, but worked around it…
The thing is: this error is raised in the catalog database, so the workaround is simple: run RMAN in nocatalog mode — I mean, simply don’t connect to the catalog when performing the backup.
After completing the backup, I’d suggest forcing a synchronization with the command “RESYNC CATALOG“. In the worst case, the implicit resync on the next execution will fix everything. 🙂
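A sketch of the two steps, assuming OS authentication on the target database; the catalog connect string (rcat_user@rcatdb) is a placeholder, not one from my environment:

```shell
# Step 1 -- back up WITHOUT the catalog (nocatalog is RMAN's default behavior
# when no CATALOG clause is given):
rman target / <<'RMAN'
BACKUP DATABASE PLUS ARCHIVELOG;
RMAN

# Step 2 -- later, once the catalog database is healthy, reconcile it:
rman target / catalog rcat_user@rcatdb <<'RMAN'
RESYNC CATALOG;
RMAN
```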
There isn’t much more explanation for this, but you can find the same workaround in MOS Bug 12588237 – RMAN-3002 ORA-1: unique constraint (ckp_u1) violated after dataguard switchover (Doc ID 12588237.8).
And this is it for today!
See you next week!
This is because the error is generated by unpublished Bug 17891564, as described in MOS ORA-7445 [ocl_lock_get_waitobj_owner] on an Exadata storage cell (Doc ID 1906366.1).
It affects Exadata storage cells with image versions between 188.8.131.52.0 and 184.108.40.206.0. The CELLSRV process crashes with this error, as shown below:
Cellsrv encountered a fatal signal 11
Errors in file /opt/oracle/cell220.127.116.11.0_LINUX.X64_131014.1/log/diag/asm/cell//trace/svtrc_11711_27.trc (incident=257):
ORA-07445: exception encountered: core dump [ocl_lock_get_waitobj_owner()+26]  [0x000000000]   
Incident details in: /opt/oracle/cell18.104.22.168.0_LINUX.X64_131014.1/log/diag/asm/cell//incident/incdir_257/svtrc_11711_27_i257.trc
The CELLSRV process should auto restart after this error.
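To check whether a cell falls in the affected image range, and that CELLSRV came back up, something like this on the storage cell should work — a sketch using imageinfo and CellCLI, the standard cell utilities (run as root or celladmin):

```shell
# Show the active cell image version (compare against the affected range):
imageinfo -ver

# Confirm CELLSRV restarted after the crash:
cellcli -e "LIST CELL ATTRIBUTES cellsrvStatus"
```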
Yeah, these days I got some errors, and when validating the server I found the following error:
su: cannot set user id: Resource temporarily unavailable
As you can imagine, in order to fix the issue, I adjusted /etc/security/limits.conf, increasing the oracle user’s nproc limits to:
oracle soft nproc 4047
oracle hard nproc 20384
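To verify the change takes effect — a quick sketch, assuming Linux and that oracle is the database owner (note a fresh login session is needed for limits.conf changes to apply):

```shell
# Effective max user processes for the current session:
ulimit -u

# Threads currently owned by the oracle user (each thread counts against
# nproc on Linux); adjust the user name to your environment:
ps -L -u oracle --no-headers 2>/dev/null | wc -l
```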
OK, it turns out that after a while I got the same errors again…
After some investigation I found that the EM Agent process had 5020 threads!
Take a look:
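One way to count a process’s threads yourself — a sketch assuming Linux /proc; it inspects the current shell for illustration, and the pgrep pattern for the EM Agent is an assumption, so adjust it to your install:

```shell
# Count threads via /proc/<pid>/task (one directory entry per thread).
# On the real server: PID=$(pgrep -f 'emagent' | head -1)   # pattern is an assumption
PID=$$
ls /proc/"$PID"/task | wc -l
```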
Quick one today!
Seeing a message like the one below in your 22.214.171.124 database on AIX?
WARNING: Heavy swapping observed on system in last 5 mins.
pct of memory swapped in [31.28%] pct of memory swapped out [3.81%].
Please make sure there is no memory pressure and the SGA and PGA are configured correctly.
Look at DBRM trace file for more details.
Stand down: this issue is caused by unpublished Bug 11801934, mentioned in MOS False Swap Warning Messages Printed To Alert.log On AIX (Doc ID 1508575.1).
Basically, it happens because v$osstat does not reflect proper statistics for swap space paging.
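You can look at the raw counters behind the warning yourself — a sketch, querying v$osstat as a privileged user; VM_IN_BYTES and VM_OUT_BYTES are the paging statistics the check reads:

```shell
sqlplus -s / as sysdba <<'SQL'
-- Paging counters compared against total memory for the swap warning:
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('VM_IN_BYTES', 'VM_OUT_BYTES');
SQL
```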
So, stay calm and see you next week!