Having trouble mapping some information from your backups made with SnapManager for Oracle (more info here)?
Here are some quick tips for a first mapping:
Listing Profiles in Server
[root@server-db ~]# smo profile list
Profile name: PRODDB
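With the profile in hand, you can map the backups themselves. A minimal follow-up, assuming the standard SMO CLI syntax (the label argument is a placeholder for one of the labels returned by the listing):

[root@server-db ~]# smo backup list -profile PRODDB
[root@server-db ~]# smo backup show -profile PRODDB -label your_backup_label -verbose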
GoldenGate Datapump or Extract Abend with OGG-01235?
ERROR OGG-01235 Command not allowed by receiving manager process.
ERROR OGG-01668 PROCESS ABENDING.
It sounds like the remote GoldenGate manager is denying the Datapump / Extract remote connection.
I suggest checking and editing the GoldenGate manager parameter file (mgr.prm):
-- IP addresses of the source GoldenGate Pump
ACCESSRULE, PROG *, IPADDR your.ip.addr.here, ALLOW
The Oracle documentation is good here.
Valid for Manager
Use ACCESSRULE to control connection access to the Manager process and the processes under its control. You can establish multiple rules by specifying multiple ACCESSRULE statements in the parameter file and control their priority. There is no limit to the number of rules that you can specify. To establish priority, you can either list the rules in order from most important to least important, or you can explicitly set the priority of each rule with the PRI option.
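For illustration, a minimal mgr.prm sketch using these rules; the port, IP addresses and priorities below are placeholders to adjust for your environment:

PORT 7809
-- Allow the source pump host to reach the local collector:
ACCESSRULE, PROG COLLECTOR, IPADDR 10.0.0.15, ALLOW, PRI 1
-- Allow GGSCI access from the admin subnet:
ACCESSRULE, PROG GGSCI, IPADDR 10.0.0.*, ALLOW, PRI 2
-- Deny everything else:
ACCESSRULE, PROG *, DENY, PRI 99

Remember to stop and start the manager (STOP MGR / START MGR in GGSCI) so the new rules take effect.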
Don’t set your expectations too high for this post.
That’s because I didn’t exactly fix the issue; I worked around it…
The thing is: this error is raised in the catalog database, so the workaround is simple: run an RMAN nocatalog backup. I mean, simply don’t connect to the catalog to perform the backup.
After completing the backup, I’d suggest forcing a synchronization with the command “RESYNC CATALOG“. In the worst case, the implicit resync on the next execution will fix everything. 🙂
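A minimal sketch of the whole workaround (the catalog connect string is a placeholder):

$ rman target / nocatalog
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> EXIT;

Then, once the backup completes, reconnect with the catalog and force the synchronization:

$ rman target / catalog rman_owner@rcatdb
RMAN> RESYNC CATALOG;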
There’s no deeper explanation for this, but you can find the same workaround in MOS Bug 12588237 – RMAN-3002 ORA-1: unique constraint (ckp_u1) violated after dataguard switchover (Doc ID 12588237.8).
And this is it for today!
See you next week!
GoldenGate replicat issuing OGG-01296 Error mapping, then abending?
Review the replicat report for the error:
GGSCI> view report rep01
If you find something like the message below, you should probably review the partitioning of the target table.
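A quick way to check how the target table is partitioned, using placeholder owner/table names:

SQL> SELECT table_name, partitioned FROM dba_tables
     WHERE owner = 'APP' AND table_name = 'MY_TABLE';
SQL> SELECT partition_name, high_value FROM dba_tab_partitions
     WHERE table_owner = 'APP' AND table_name = 'MY_TABLE';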
This is because the error is generated by unpublished bug 17891564, as described in MOS note ORA-7445 [ocl_lock_get_waitobj_owner] on an Exadata storage cell (Doc ID 1906366.1).
It affects Exadata storage cells on the image versions listed in that note. The CELLSRV process crashes with this error, as per:
Cellsrv encountered a fatal signal 11
Errors in file /opt/oracle/cell<image_version>_LINUX.X64_131014.1/log/diag/asm/cell//trace/svtrc_11711_27.trc (incident=257):
ORA-07445: exception encountered: core dump [ocl_lock_get_waitobj_owner()+26] [0x000000000]
Incident details in: /opt/oracle/cell<image_version>_LINUX.X64_131014.1/log/diag/asm/cell//incident/incdir_257/svtrc_11711_27_i257.trc
The CELLSRV process should auto-restart after this error.
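To check which image your cells are actually running, imageinfo on the storage cell shows the active version (hostname and output are illustrative):

[root@cell01 ~]# imageinfo -ver
11.2.3.3.0.131014.1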
Need to configure GoldenGate data replication for the first time?
Is it hard to configure?
Which strategies should I check?
What steps should I take?
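While each of those questions deserves its own post, a minimal classic topology (extract and pump on the source, replicat on the target) looks roughly like this; all process names, hosts, credentials and schemas below are placeholders:

-- Source: ext01.prm (capture changes to a local trail)
EXTRACT ext01
USERID ogg, PASSWORD ogg_pwd
EXTTRAIL ./dirdat/xx
TABLE app.*;

-- Source: pmp01.prm (pump the local trail to the target host)
EXTRACT pmp01
RMTHOST target-host, MGRPORT 7809
RMTTRAIL ./dirdat/zz
TABLE app.*;

-- Target: rep01.prm (apply from the remote trail)
REPLICAT rep01
USERID ogg, PASSWORD ogg_pwd
ASSUMETARGETDEFS
MAP app.*, TARGET app.*;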
Yeah, these days I got some errors, and when validating the server I found the following error:
su: cannot set user id: Resource temporarily unavailable
As you can imagine, to fix the issue I adjusted /etc/security/limits.conf, increasing the oracle user's nproc limits to:
oracle soft nproc 4047
oracle hard nproc 20384
OK, it turns out that after a while I got the same errors again…
After some investigation, I found that the EM Agent process had 5020 threads!
Take a look:
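If you want to spot this kind of offender yourself, count threads per process; ps prints one line per thread with -L, so grouping by PID (column 2) gives the thread count:

# Top thread consumers for the oracle user:
ps -eLf | awk '$1 == "oracle" {print $2}' | sort | uniq -c | sort -rn | head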
Struggling with that, right?
As you know, in Oracle Database 9i the view V$SESSION doesn't have the SQL_ID column…
So how do you map SQLs in your database? And, for example, how do you get the SQLs causing a lock?
At the end of the day, the SQL_ID is only a representation of the hash_value of a SQL statement. You can even translate a SQL_ID to its hash value, as you can check in this post by Tanel Poder.
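And if you happen to be on 10g or later (not on the 9i instance itself, obviously), there is even a built-in for that translation:

-- Converts a SQL_ID to its hash_value (DBMS_UTILITY.SQLID_TO_SQLHASH, 10g+):
SELECT dbms_utility.sqlid_to_sqlhash('&sql_id') AS hash_value FROM dual;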
OK, but I have to map which SQL is causing the lock in my 9i database. How can I do that?
Here it goes:
If session status is ACTIVE:
SELECT s1.sql_text
  FROM v$sqlarea s1, v$session s2
 WHERE s2.sid = &sid
   AND s2.sql_address = s1.address;
If session status is INACTIVE:
SELECT s1.sql_text
  FROM v$sqlarea s1, v$session s2
 WHERE s2.sid = &sid
   AND s2.prev_sql_addr = s1.address;
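And to find which SID to plug into those queries, a classic blocker/waiter query that works fine on 9i:

-- Sessions holding a lock (BLOCK = 1) that another session is waiting on:
SELECT l1.sid AS blocker, l2.sid AS waiter
  FROM v$lock l1, v$lock l2
 WHERE l1.block = 1
   AND l2.request > 0
   AND l1.id1 = l2.id1
   AND l1.id2 = l2.id2;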
You’re welcome! 😉
See you next week!
If you search for solutions to this error, you will find only one documented root cause:
OGG-01411 Cannot convert input file ./dirdat/xx000549 with format RELEASE 9.0/9.5 to output file ./dirdat/zz000034 with format RELEASE 12.1.
“The output trail of the data pump has a different format (version) than the input trail of the data pump”
If you are using GoldenGate 12.1 and all trails (RMTTRAIL and EXTTRAIL) are correctly set with “FORMAT RELEASE”, you have fallen into a bug.
Oracle recommends upgrading to GG 12.2.
To work around this issue and start the process, you need to write to a new trail: perform an ETROLLOVER and reposition the pump process (see the sketch after the output below).
On the target system, the replicat runs fine but does not receive new trails, because the pump process is abended.
GGSCI (lab2.grepora.net) 001> info rep01

REPLICAT   rep01     Last Started 2017-03-20 12:08   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:09 ago)
Process ID           13563
Log Read Checkpoint  File ./dirdat/zz000034
                     2017-03-22 09:53:31.004144  RBA 30683971
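A sketch of that workaround in GGSCI; the pump name pmp01 is a placeholder, rep01 matches the output above, and the sequence numbers (549 for the input trail, 35 for the new output trail) are illustrative, taken from the trail names in the error message:

-- The pump is abended, so roll its output trail with ALTER (SEND needs a running process):
GGSCI> ALTER EXTRACT pmp01, ETROLLOVER
-- If needed, reposition the pump to the start of the current input trail file:
GGSCI> ALTER EXTRACT pmp01, EXTSEQNO 549, EXTRBA 0
GGSCI> START EXTRACT pmp01
-- On the target, point the replicat at the beginning of the new output trail:
GGSCI> ALTER REPLICAT rep01, EXTSEQNO 35, EXTRBA 0
GGSCI> START REPLICAT rep01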