GoldenGate Datapump or Extract Abend with OGG-01235?
ERROR OGG-01235 Command not allowed by receiving manager process.
ERROR OGG-01668 PROCESS ABENDING.
It sounds like the remote GoldenGate Manager is denying the remote connection from the Datapump / Extract.
I suggest checking and editing the GoldenGate Manager parameter file:
-- IP address of the source GoldenGate Pump host
ACCESSRULE, PROG *, IPADDR your.ip.addr.here, ALLOW
The Oracle documentation covers this well:
Valid for: Manager
Use ACCESSRULE to control connection access to the Manager process and the processes under its control. You can establish multiple rules by specifying multiple ACCESSRULE statements in the parameter file and control their priority. There is no limit to the number of rules that you can specify. To establish priority, you can either list the rules in order from most important to least important, or you can explicitly set the priority of each rule with the PRI option.
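As a minimal sketch (the port, IP addresses, and priorities below are illustrative, not from any real environment), a Manager parameter file granting access to the pump host could look like:

```
PORT 7809
-- Allow the source Datapump / Extract host (highest priority)
ACCESSRULE, PROG *, IPADDR 192.168.0.10, PRI 1, ALLOW
-- Allow GGSCI connections from the admin subnet
ACCESSRULE, PROG GGSCI, IPADDR 192.168.0.*, PRI 10, ALLOW
-- Deny any other incoming connection
ACCESSRULE, PROG *, PRI 99, DENY
```

After editing, restart the Manager (STOP MGR / START MGR) so the new rules take effect.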
GoldenGate Replicat issuing OGG-01296 Error mapping and then abending?
Review the Replicat report for the error:
GGSCI> view report rep01
If you find something like the output below, you should review the partitioning of the target table.
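To review the partitioning on the target side, a quick look at the data dictionary helps (the owner and table name below are placeholders):

```
SELECT partition_name, high_value, partition_position
  FROM dba_tab_partitions
 WHERE table_owner = 'APP_OWNER'
   AND table_name  = 'TARGET_TABLE'
 ORDER BY partition_position;
```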
Need to configure your first GoldenGate data replication?
Is it hard to configure?
Which strategies should you check?
Which steps should you take?
Have you read the previous post, ‘Failure unregister integrated extract’, and still cannot unregister the Integrated Extract on the Oracle Database?
Is it ‘Mission Impossible’ to unregister the extract?
If you search for solutions to this error, you will find only one documented root cause:
OGG-01411 Cannot convert input file ./dirdat/xx000549 with format RELEASE 9.0/9.5 to output file ./dirdat/zz000034 with format RELEASE 12.1.
“The output trail of the data pump has a different format (version) than the input trail of the data pump”
If you are using GG version 12.1, and all trails (RMTTRAIL and EXTTRAIL) are correctly set with “FORMAT RELEASE”, you have hit a bug.
Oracle recommends upgrading to GG 12.2.
To work around this issue and start the process, you need to write to a new trail: perform an ETROLLOVER and reposition the pump process.
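One common variant of that workaround, sketched with the trail names from the error above (xx on the source, zz on the target) and illustrative process names PMP01 / REP01 — adjust the sequence numbers to your environment:

```
-- Source: roll the pump over to a new output trail file with a fresh header
GGSCI> STOP EXTRACT PMP01
GGSCI> ALTER EXTRACT PMP01, ETROLLOVER
GGSCI> START EXTRACT PMP01

-- Target: reposition the Replicat to the new remote trail file (zz000035 after the rollover)
GGSCI> STOP REPLICAT REP01
GGSCI> ALTER REPLICAT REP01, EXTSEQNO 35, EXTRBA 0
GGSCI> START REPLICAT REP01
```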
On the target system, the Replicat runs fine but receives no new trail files, because the pump process is abended.
GGSCI (lab2.grepora.net) 001> info rep01
REPLICAT rep01 Last Started 2017-03-20 12:08 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:09 ago)
Process ID 13563
Log Read Checkpoint File ./dirdat/zz000034
2017-03-22 09:53:31.004144 RBA 30683971
After deploying GoldenGate with the downstream database option, are the archived logs from the downstream database not being purged? All database transactions complete gracefully, but the archives (from the downstream database) are not being cleaned?
Are there any long-running transactions registered in the GG Extract?
GGSCI (dbcloud) 3> send ETL01 showtrans
Sending SHOWTRANS request to EXTRACT ETL01 ...
Oldest redo log files necessary to restart Extract are:
Redo Thread: 1
Start Time: 2017-00-00:00:00:00
SCN: 1682.4049305132 (7228184297004)
Redo Seq: 8612
Redo RBA: 20965491728
If you are in the same situation, do the following.
The script should find (and delete) ONLY archived logs that have already been read by GoldenGate.
set serveroutput on size unlimited
set line 1000
set trimsp on
set feed off
set pagesize 0
spool [[ some_dir ]]/delete_archives__dowstream_goldengate.sh
-- SR 3-14409179111 - Golden Gate Configuration, How To Delete Archive Logs From Downstream Database (Doc ID 2011174.1)
SELECT 'rm ' || r.NAME
-- case when (r.next_scn > c.required_checkpoint_scn) then 'NO' else 'YES' end purgable
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME and r.source_database = c.source_database and r.next_scn < ( select min(required_checkpoint_scn) from dba_capture where capture_name = c.capture_name )
order by modified_time;
spool off
Schedule the output in an automation tool (crontab / dba_jobs / Windows Task Scheduler, whatever):
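For example, a crontab entry to run the spooled script daily (the paths below are illustrative):

```
# 02:00 daily: purge downstream archives already read by GoldenGate
0 2 * * * /bin/sh /some_dir/delete_archives__dowstream_goldengate.sh >> /tmp/purge_archives.log 2>&1
```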
GoldenGate Extract / Replicat abends with the error below:
Source Context :
SourceModule : [ggdb.ora.sess]
SourceID : [/scratch/aime/adestore/views/aime_stuya22/oggcore/OpenSys/src/gglib/ggdbora/ocisess.c]
SourceFunction : [OCISESS_context_def::oci_try(int, const char *, ...)]
SourceLine : 
2017-04-19 07:52:07 ERROR OGG-00665 OCI Error executing single row select (status = 3113-ORA-03113: end-of-file on communication channel)
This is common when the Oracle Database is unstable or is hitting ORA-00600 errors. Check:
- Oracle Database alert log
- If user process has been killed
- Network communication (IPTABLES stuff)
- Server crash
I suggest reviewing ‘Master Note: Troubleshooting ORA-03113 (Doc ID 1506805.1)’ on My Oracle Support and keeping the database away from ORA-03113.