GGate: STOP request pending. There are open, long-running transactions.

Hey all!
So, these days I was about to stop the GGate extract to add a new table to replication and saw this:

GGSCI (greporasrvr) 2> stop extract GREPORA_EXT

Sending STOP request to EXTRACT GREPORA_EXT ...

STOP request pending. There are open, long-running transactions.
Before you stop Extract, make the archives containing data for those transactions available for when Extract restarts.
To force Extract to stop, use the SEND EXTRACT GREPORA_EXT, FORCESTOP command.
Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 616, RBA 135673360.

2018-04-12 15:55:41 WARNING OGG-01742 Command sent to EXTRACT GREPORA_EXT returned with an invalid response.

Interesting, right?
Let’s discover what is causing the situation:

GGSCI (greporasrvr) 2> send extract GREPORA_EXT, showtrans duration 20 MIN

Sending showtrans request to EXTRACT GREPORA_EXT ...

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 616, RBA 135673360

------------------------------------------------------------
XID: 92.1.1460254
Items: 1
Extract: GREPORA_EXT
Redo Thread: 1
Start Time: 2018-04-12:08:27:03
SCN: 22.120909138 (94610189650)
Redo Seq: 616
Redo RBA: 135673360
Status: Running

------------------------------------------------------------
XID: 69.7.1771671
Items: 1
Extract: GREPORA_EXT
Redo Thread: 1
Start Time: 2018-04-12:13:27:29
SCN: 22.121387178 (94610667690)
Redo Seq: 629
Redo RBA: 1457168
Status: Running

15:59:12 SYS@GREPDB>select * from gv$transaction where xidusn=92;
1 000000018FEA2170 92 1 1460254 121 548870 24255 48 ACTIVE 04/12/18 08:27:02 120909138 22 168 2
131578 24097 43 0000000198A49D68 7683 NO NO NO NO

0 0 0 0 0 0 0 0 81875 4581610 23108750 1415217 -1.675E+09 197 12-APR-2018 08:27:02 0 0
9.4610E+10 0 5C0001001E481600 0000000000000000 0000000000000000

1 row selected.

16:00:08 SYS@GREPDB>select sid,serial#,event,machine,sql_id,seconds_in_wait,prev_sql_id,module,program,action from gv$session where taddr='000000018FEA2170';
885 55577 db file sequential read feedprocs12 39za7ttrfjbr1 55 6x9pg77q982g3
JDBC Thin Client JDBC Thin Client

1 row selected.

16:00:37 SYS@GREPDB>set verify off
set pagesize 999
col username format a13
col prog format a22
col sql_text format a140
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col etime format 9,999,999.99
col execs for 99999,999
select sql_id, child_number, plan_hash_value plan_hash, executions execs, elapsed_time/1000000 etime,
(elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime, u.username,
sql_text
from v$sql s, dba_users u
where
sql_id like nvl('&sql_id',sql_id)
and u.user_id = s.parsing_user_id
/
Enter value for sql_id: 39za7ttrfjbr1

SQL_ID CHILD PLAN_HASH EXECS ETIME AVG_ETIME USERNAME
------------- ------ ---------- ---------- ------------- ------------- -------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------
39za7ttrfjbr1 0 72089250 0 25,434.97 25,434.97 GREPORA
--SQL Name:summary/envision/insertMyTable INSERT INTO mytable ( .......) 

1 row selected.

Nice!
Now we know it’s being caused by this long insert in mytable, but how can I proceed?
There are basically two options:
1. Kill the session on the DB: a long rollback is predicted, so be aware of the impact of the rollback activity, plus application effects, if any.
2. Force stop of the Extract with SEND EXTRACT GREPORA_EXT, FORCESTOP: in this case there is no effect on the database and no rollback. HOWEVER, when the Extract starts again it will need to start from this cancelled sequence. First question: do I still have this sequence available? As you could see before, sequence 616 onwards would be needed. Let's check our availability:
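For option 1, the session holding the transaction can be killed using the sid and serial# found in gv$session earlier. A sketch — the values below are the ones from this example, and the instance id is an assumption:

```sql
-- Kill the session holding the long-running transaction.
-- sid/serial# come from the gv$session query above; inst_id 1 is assumed.
alter system kill session '885,55577,@1' immediate;
```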

16:01:41 SYS@GREPDB>archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +FRA_GREPDB
Oldest online log sequence 619
Next log sequence to archive 634
Current log sequence 634

Ouch!
So, in case it is needed, be aware you'll need your archivelogs and, if they were already removed, a restore may be needed.
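A quick way to confirm whether the needed sequence is still registered (and whether it was already deleted) is querying v$archived_log. A sketch, assuming thread 1 and the sequence from this example:

```sql
-- Check availability of archived logs from sequence 616 onwards (thread 1 assumed)
select thread#, sequence#, name, deleted, status
  from v$archived_log
 where thread# = 1
   and sequence# >= 616
 order by sequence#;
```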

There is also a third option: 3. Wait until the transaction completes. 🙂

If 1 or 2 need to be chosen, choose wisely. 🙂

Hope this helps.
Cheers!

OEM 13C Patching Agent: [ERROR- Failed to Update Target Type Metadata]

While applying a patch to OEM 13c, specifically to the OMS Agent, I got this error when trying to start it back up:

Collection Status                            : Collections enabled
Heartbeat Status       : OMS responded illegally [ERROR- Failed to Update Target Type Metadata]
Last attempted heartbeat to OMS              : 2018-04-17 12:12:51
Last successful heartbeat to OMS             : (none)
Next scheduled heartbeat to OMS              : 2018-04-17 12:13:21
---------------------------------------------------------------
Agent is Running and Ready
[oracle@oem13c oms]$ ./emctl upload agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)
[oracle@oem13c oms]$ ./emctl pingOMS
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD pingOMS error: OMS sent an invalid response: “ERROR- Failed to Update Target Type Metadata”

Nice, huh?
I found the MOS note EM 13c Agent: pingOMS error: OMS sent an invalid response: “ERROR- Failed to Update Target Type Metadata” (Doc ID 2318564.1), which says:

“This particular issue is caused when any Agent Plugin is upgraded to a higher level than the OMS plugin.”

The solution, according to the MOS Doc, is to roll back the Agent Plugins that are ahead of the OMS version. Checking it:
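To compare plugin versions, the plugins deployed on the Agent and on the OMS can be listed. A sketch, assuming default home locations (the emcli session must be logged in first):

```shell
# On the Agent host: list plugins deployed on the agent
$AGENT_HOME/bin/emctl listplugins agent -type all

# From an emcli session: list plugins deployed on the OMS
emcli login -username=sysman
emcli list_plugins_on_server
```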

Opening New PDB 12.1.0.2 – Warning: PDB altered with errors

Hi,
Some time ago, after creating a pluggable database from seed, it simply would not open. Check it out:

SQL> alter pluggable database mypdb open;

Warning: PDB altered with errors.

When checking the PDB status on v$pdbs, I found it was in restricted mode. Checking the alert.log, not a single error to help solve the issue, as you can see:

CREATE PLUGGABLE DATABASE mypdb ADMIN USER PDBADMIN identified by *
Fri Feb 02 16:06:05 2018
APEX_040200.WWV_FLOW_TEMPLATES (JAVASCRIPT_CODE_ONLOAD) - CLOB populated
Fri Feb 02 16:06:37 2018
****************************************************************
Pluggable Database mypdb with pdb id - 7 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
This instance was first to open pluggable database mypdb (container=7)
Database Characterset for mypdb is WE8MSWIN1252
Deleting old file#2 from file$
Deleting old file#4 from file$
Adding new file#108 to file$(old file#2)
Adding new file#109 to file$(old file#4)
Successfully created internal service mypdb at open
ALTER SYSTEM: Flushing buffer cache inst=1 container=7 local
****************************************************************
Post plug operations are now complete.
Pluggable database mypdb with pdb id - 7 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE mypdb ADMIN USER PDBADMIN identified by *
alter pluggable database mypdb open
Fri Feb 02 16:06:52 2018
This instance was first to open pluggable database mypdb (container=7)
Pluggable database mypdb dictionary check beginning
Pluggable Database mypdb Dictionary check complete
Database Characterset for mypdb is WE8MSWIN1252
Fri Feb 02 16:07:07 2018
Opening pdb mypdb (7) with no Resource Manager plan active
Fri Feb 02 16:07:08 2018
Logminer Bld: Build started
Resize operation completed for file# 108, old size 296960K, new size 307200K
Resize operation completed for file# 108, old size 307200K, new size 317440K
Fri Feb 02 16:07:15 2018
Logminer Bld: Done
Pluggable database mypdb opened read write
Completed: alter pluggable database mypdb open

At the end of the day, I discovered this matches MOS Bug 19174942 – After upgrade from 12.1.0.2 new cloned PDB opens with warnings in restricted mode (Doc ID 19174942.8).

To fix the problem you need to create the missing default tablespaces for each common user (you can see them in the DBA_USERS view in ROOT). In my case, it was the tablespace USERS (it didn't exist, according to the dba_data_files view on the PDB). It seems it was not created correctly during the creation of the pluggable database. So, to fix it:

create tablespace users datafile '+DGDATA/MYCDB/DATAFILE/user_01.dbf' size 100m;
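More generally, to list every default tablespace referenced by a user but missing from the PDB, a query like this can help. A sketch — run it with the session container set to the new PDB:

```sql
-- Default tablespaces referenced by users but not present in the PDB
select distinct u.default_tablespace
  from dba_users u
 where u.default_tablespace not in
       (select tablespace_name from dba_tablespaces);
```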

Then you just have to restart your PDB and it will open without problems:

SYS@MYCDB>alter session set container=CDB$ROOT;

SQL> alter pluggable database mypdb close;

Pluggable database altered.

SQL> alter pluggable database mypdb open;

Pluggable database altered.

See you in the next post!

Scheduler Job Start Time Changed After DST Change

Hi all,
Some time ago a client asked me to check why a job had started running at a different time. This was a week after the DST change. Easy guess, right?

The job was scheduled using a timezone offset (-04:00) instead of a timezone region. Considering the DST change, the actual offset changed as well (to -05:00), but the job schedule did not.
To fix the job time for good, I changed the schedule to use the timezone region the database is in. See below the investigation and solution:

SQL> select owner,job_name, repeat_interval, last_start_date, next_run_date from dba_scheduler_jobs where job_name='MYJOB';

OWNER			 JOB_NAME    REPEAT_INTERVAL	LAST_START_DATE                      NEXT_RUN_DATE
---------- ---------- ----------------- -----------------------------------  --------------------
GREPORA		 MYJOB      FREQ=DAILY;			   08-NOV-17 09.00.00.330739 PM -04:00 08-NOV-17 09.00.00.300000 PM -04:00
							  
												  
SQL> SELECT * FROM   dba_scheduler_global_attribute  WHERE  attribute_name = 'DEFAULT_TIMEZONE';

ATTRIBUTE_NAME	  VALUE
----------------- ---
DEFAULT_TIMEZONE  EST5EDT


SQL> SELECT tzabbrev, TZ_OFFSET(tzname), tzname FROM V$TIMEZONE_NAMES 
WHERE tzname IN ('EST', 'EDT') OR tzabbrev IN ('EST', 'EDT') ORDER BY 1,2;

TZABBREV	TZ_OFFS TZNAME
--------- ------- ---------------------------
EDT       -04:00  America/Santo_Domingo
EDT       -05:00  America/Fort_Wayne
EDT       -05:00  America/Grand_Turk
EDT       -05:00  America/Indiana/Indianapolis
EDT       -05:00  America/Indiana/Marengo
EDT       -05:00  America/Indiana/Petersburg
EDT       -05:00  US/Michigan
EDT       -05:00  America/Detroit
EDT       -05:00  US/Eastern
EDT       -05:00  US/East-Indiana
EDT       -05:00  Jamaica
EDT       -05:00  EST5EDT
EDT       -05:00  Canada/Eastern
EDT       -05:00  America/Toronto
EDT       -05:00  America/Thunder_Bay
EDT       -05:00  America/Port-au-Prince
EDT       -05:00  America/Pangnirtung
EDT       -05:00  America/Nipigon
EDT       -05:00  America/New_York
EDT       -05:00  America/Nassau
EDT       -05:00  America/Montreal
EDT       -05:00  America/Louisville
EDT       -05:00  America/Kentucky/Monticello
EDT       -05:00  America/Kentucky/Louisville
EDT       -05:00  America/Jamaica
EDT       -05:00  America/Iqaluit
EDT       -05:00  America/Indiana/Vevay
EDT       -05:00  America/Indiana/Vincennes
EDT       -05:00  America/Indiana/Winamac
EDT       -05:00  America/Indianapolis
EDT       -06:00  America/Indiana/Tell_City
EDT       -06:00  America/Cancun
EST       +10:00  Australia/Queensland
EST       +10:00  Australia/Lindeman
EST       +10:00  Australia/Brisbane
EST       +10:30  Australia/Broken_Hill
EST       +10:30  Australia/Yancowinna
EST       +11:00  Australia/Currie
EST       +11:00  Australia/Canberra
EST       +11:00  Australia/ACT
EST       +11:00  Antarctica/Macquarie
EST       +11:00  Australia/Hobart
EST       +11:00  Australia/LHI
EST       +11:00  Australia/Lord_Howe
EST       +11:00  Australia/Melbourne
EST       +11:00  Australia/NSW
EST       +11:00  Australia/Sydney
EST       +11:00  Australia/Tasmania
EST       +11:00  Australia/Victoria
EST       -04:00  America/Antigua
EST       -04:00  America/Moncton
EST       -04:00  America/Santo_Domingo
EST       -05:00  US/Eastern
EST       -05:00  America/Grand_Turk
EST       -05:00  America/Atikokan
EST       -05:00  Jamaica
EST       -05:00  EST5EDT
EST       -05:00  EST
EST       -05:00  America/Indiana/Indianapolis
EST       -05:00  Canada/Eastern
EST       -05:00  America/Toronto
EST       -05:00  America/Thunder_Bay
EST       -05:00  America/Resolute
EST       -05:00  America/Indiana/Marengo
EST       -05:00  America/Indiana/Petersburg
EST       -05:00  America/Cayman
EST       -05:00  America/Indiana/Vevay
EST       -05:00  America/Indiana/Vincennes
EST       -05:00  America/Indiana/Winamac
EST       -05:00  America/Indianapolis
EST       -05:00  America/Iqaluit
EST       -05:00  America/Jamaica
EST       -05:00  America/Kentucky/Louisville
EST       -05:00  America/Kentucky/Monticello
EST       -05:00  US/Michigan
EST       -05:00  America/Louisville
EST       -05:00  America/Coral_Harbour
EST       -05:00  America/Detroit
EST       -05:00  America/Fort_Wayne
EST       -05:00  America/Montreal
EST       -05:00  America/Nassau
EST       -05:00  America/New_York
EST       -05:00  America/Nipigon
EST       -05:00  America/Panama
EST       -05:00  America/Pangnirtung
EST       -05:00  America/Port-au-Prince
EST       -05:00  US/East-Indiana
EST       -06:00  US/Central
EST       -06:00  CST
EST       -06:00  America/Rankin_Inlet
EST       -06:00  America/Merida
EST       -06:00  America/Menominee
EST       -06:00  America/Managua
EST       -06:00  America/Knox_IN
EST       -06:00  America/Indiana/Tell_City
EST       -06:00  America/Indiana/Knox
EST       -06:00  America/Chicago
EST       -06:00  America/Cancun
EST       -06:00  US/Indiana-Starke
EST       -07:00  America/Cambridge_Bay

100 rows selected.


SQL> BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
   name                 => 'GREPORA.MYJOB',
   attribute            => 'start_date',
   value                => TO_TIMESTAMP_TZ('09-11-2017 21:00:00 EST5EDT' ,'DD-MM-YYYY HH24:MI:SS TZR'));
END;
/

PL/SQL procedure successfully completed.

SQL> select owner,job_name, repeat_interval, last_start_date, next_run_date from dba_scheduler_jobs where job_name='MYJOB';

OWNER			 JOB_NAME    REPEAT_INTERVAL	LAST_START_DATE                  NEXT_RUN_DATE
---------- ---------- ----------------- -------------------------------  --------------------
GREPORA		 MYJOB      FREQ=DAILY;			  08-NOV-17 09.00.00.3 09-NOV-17   09-NOV-17 09.00.00.0
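If the scheduler's DEFAULT_TIMEZONE were also set to a plain offset, it could be changed to a region the same way. A sketch, reusing the region from this example:

```sql
BEGIN
  -- A named region handles DST transitions automatically
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
    attribute => 'default_timezone',
    value     => 'EST5EDT');
END;
/
```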

Hope this helps you!
Cheers!

OGG-02077 Extract encountered a read error in the asynchronous reader thread and is abending: Error code 1343

It’s been a long time since my last post here.
Well, the time has arrived. New GoldenGate runtime errors (lol):

ERROR   OGG-02077  Extract encountered a read error in the asynchronous reader thread and is abending: Error code 1343, error message: 
ORA-01343: LogMiner encountered corruption in the logstream.

OK then, don’t worry.

Check the database alert log for messages about RFS retries. Wait until they stop, then try to restart the GoldenGate Extracts.

It appears to be a GG 12.2 bug, and it seems to fix itself. There is no published MOS Doc regarding it so far.

Hope it solves your Extract errors too… \o/

Exacheck: The bundle patch version installed does not match the bundle patch version registered in the database

Hi all!
So, running Exacheck on a recently created database, I found this error:

 FAIL => The bundle patch version installed does not match the bundle patch version registered in the database: [host]:[sid],...

This means that a bundle patch with sqlpatch was applied to the OH but not to this database. It happens because Exacheck tries to match the patch info stored in oraInventory with the patch info stored in dba_registry_sqlpatch.

Also note that in some situations running datapatch may require the database to be in upgrade mode, and if you are patching Exadata, which is generally a RAC-based environment, you need to set cluster_database=false and job_queue_processes to at least 1 before starting the database with the startup upgrade command. This should be described in the readme of the related patch.
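The steps above can be sketched like this — always confirm against the patch readme, and note the parameter scopes below are assumptions:

```sql
-- Prepare a RAC database for datapatch in upgrade mode
alter system set cluster_database=false scope=spfile sid='*';
alter system set job_queue_processes=1 scope=spfile sid='*';
shutdown immediate
startup upgrade
-- run datapatch, then revert:
-- alter system set cluster_database=true scope=spfile sid='*';
-- and restart the database normally
```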

When checking for this, I found a really interesting validation script here. As per:

# Bundle patch ID installed in the ORACLE_HOME, according to OPatch
opatch_bp=$($ORACLE_HOME/OPatch/opatch lspatches 2>/dev/null | grep -iwv javavm | grep -wi database | head -1 | awk -F';' '{print $1}')
# Status of that patch as registered in dba_registry_sqlpatch
database_bp_status=$(echo -e "set heading off feedback off timing off \n select STATUS from dba_registry_sqlpatch where PATCH_ID = $opatch_bp;" | $ORACLE_HOME/bin/sqlplus -s " / as sysdba" | sed -e '/^ *$/d')
if [ "$database_bp_status" == "SUCCESS" ]
then
    echo "SUCCESS: Bundle patch installed in the database matches the software home and is installed successfully."
else
    echo "FAILURE: Bundle patch installed in the database does not match the software home, or is installed with errors."
fi

To fix it, just set the environment variables to the correct database, go to $ORACLE_HOME/OPatch and run:

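A minimal sketch of the usual invocation, assuming ORACLE_HOME and ORACLE_SID already point to the target database:

```shell
# Re-run the sqlpatch actions so dba_registry_sqlpatch matches the installed home
cd $ORACLE_HOME/OPatch
./datapatch -verbose
```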

Application Named Variables Returning ORA-01008: not all variables bound

Hi all,
I saw this a long time ago, and last week I was involved in a discussion about this same error. So, it deserves a post for future reference. 🙂

Imagine the situation: you are building an application or a module to perform queries on the database. You want to use variables in the query and fill the values using text fields in the application. Sounds easy, and it works fine for SQL Server and others, but the Oracle database returns:

ORA-01008: not all variables bound

What to do?

First, let's clarify the issue: this is not related to the database layer or the Oracle interpreter/parser. This error happens when a bind variable used in the SQL has no value. The official reasoning is:

Cause: A SQL statement containing substitution variables was executed without all variables bound. All substitution variables must have a substituted value before the SQL statement is executed.

In my case it was ODP.NET with C# (but this applies to other languages as well). Interesting fact:
“The ODP.NET provider from Oracle uses bind-by-position as default. To change the behavior to bind-by-name, set the property BindByName to true.”

This means Oracle may be expecting “:1”, “:2” as bind variables, which the application may not be setting correctly.
In this case, try setting BindByName to true on the Oracle command in the application code. Example below:

using (OracleCommand cmd = con.CreateCommand()) {
    cmd.BindByName = true; // bind by name instead of by position
    // hypothetical query for illustration:
    cmd.CommandText = "SELECT * FROM mytable WHERE id = :id";
    cmd.Parameters.Add("id", OracleDbType.Int32).Value = 42;
    // execute as usual...
}

Now, try again. 🙂

Hope this helps you!
Cheers!

RAC: ORA-01265: Unable to delete ARCHIVED LOG

Weird?
Found this some time ago. The error was reported because a race condition between the 2 instances made them try to delete the same archivelog file. This can involve more than 2 instances, depending on your RAC configuration, of course.

This is a quick way to identify whether you are facing the same situation (and can ignore the error), using the sequence number reported in the error (SEQ) and the last line number reported by grep (LINE):

# Node1:

grep -n SEQ /oracle/greporadb/diag/rdbms/greporadb/greporadb1/trace/alert_greporadb1.log | tail
tail -n +(LINE-10) /oracle/greporadb/diag/rdbms/greporadb/greporadb1/trace/alert_greporadb1.log | head

# Node2:

grep -n SEQ /oracle/greporadb/diag/rdbms/greporadb/greporadb2/trace/alert_greporadb2.log | tail
tail -n +(LINE-10) /oracle/greporadb/diag/rdbms/greporadb/greporadb2/trace/alert_greporadb2.log | head

The result should be the archivelog sequence deleted on one of the nodes and the error reported on the other, at the same timestamp.
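If the failed deletion left expired entries behind, they can be cleaned up from the RMAN catalog with a crosscheck. A sketch — review the list before deleting:

```
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE EXPIRED ARCHIVELOG ALL;
```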

Hope it helps!

ERROR OGG-03533: Conversion from character set {???} of source column {???}

Hi.

If you need to replicate data using GoldenGate between different database types, you may get the following error:

” ERROR OGG-03533: Conversion from character set {???} of source column {???} to character set {???} of target column {???} failed because the source column contains a character ‘{?}’ at offset {?}  that is not available in the target character set. “

To replace characters not accepted on the target, try using the replication parameter REPLACEBADCHAR.

In my case, I set up data replication between MS SQL Server and Oracle; these databases use the charsets windows-1252 and US-ASCII, respectively.

” ERROR OGG-03533 Conversion from character set windows-1252 of source column CATEGORY to character set US-ASCII of target column CATEGORY failed because the source column contains a character ‘d3’ at offset 2 that is not available in the target character set. “

I am using the following parameter in the replicat:

REPLACEBADCHAR ESCAPE

Replication works with no data loss.
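In context, the parameter simply goes in the replicat parameter file. A sketch with hypothetical names (replicat, alias, and table names are assumptions):

```
REPLICAT rep1
USERIDALIAS ggadmin
-- Escape source characters not available in the target character set
REPLACEBADCHAR ESCAPE
MAP dbo.mytable, TARGET grepora.mytable;
```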

See you!

ERROR OGG-05290 The Oracle GoldenGate CDC cleanup job is not enabled for database Msql_DB

Hi.

When you try to start the GoldenGate extraction process on SQL Server, you may receive the following error:

” ERROR OGG-05290 The Oracle GoldenGate CDC cleanup job is not enabled for database Msql_DB Create the Oracle GoldenGate CDC cleanup job prior to starting the capture process. “

To create the cleanup job for GoldenGate on SQL Server, use the .bat script in the GoldenGate home directory.

Command syntax:

ogg_cdc_cleanup_setup.bat createJob [goldengate user] [goldengate password] [database name] [database host] [instance]

Example:

ogg_cdc_cleanup_setup.bat createJob GGATE welcome1 Msql_DB msql-db01.net dbo

 

In some cases, it may return an error stating that the job already exists:

” Msg 50000, Level 16, State 1, Server msql-db01, Line 34 The specified @name (‘OracleGGCleanup_Msql_DB_Job’) already exists. “

In this case, you may drop and recreate the job: just change “createJob” to “dropJob”.
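Reusing the same example values from above, the drop call would look like this (a sketch):

```
ogg_cdc_cleanup_setup.bat dropJob GGATE welcome1 Msql_DB msql-db01.net dbo
```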

The following is the success message of the job creation:

” INFO OGG-05281 Current OGG cleanup Job Settings – Job Name: OracleGGCleanup_Msql_DB_Job, JobSchedRec: , JobSchedFreq: , DatabaseName: Msql_DB, Tranlogoption managecdccleanup: 1, threshold: 500, retention: 4.320. “

Hope this helps!