GGATE ABENDED: ORA-00308: Cannot Open Archived Log

Hi all!
Ok, so this is one of the most common issues in GGate administration. How to solve it? It’s an easy one.

First, let’s understand what it means: the redo logs no longer have the required information (assuming an integrated extract) and the archivelogs the extract needs have already been deleted. Why? Probably because you already backed up those archivelogs and they were no longer needed by the database.

Unfortunately, we don’t have any kind of ARCHIVELOG DELETION POLICY to guarantee that extracts have already read the logs, like the one we have for Data Guard, shown below for reference:
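RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

So, what can we do?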

Restore the missing archivelogs.

But first, let’s confirm the errors. Some examples:

ERROR OGG-00446 Oracle GoldenGate Capture for Oracle, ext1.prm: Opening ASM file +ARCH/2_11055_742755632.dbf in DBLOGREADER mode: (308) ORA-00308: cannot open archived log '+ARCH/2_11055_742755632.dbf' ORA-17503.

or

ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext1.prm: Getting attributes for ASM file +ARCH/2_86720_716466928.dbf,

SQL : (15056)

ORA-15056: additional error message ORA-15173: entry '2_86720_716466928.dbf' does not exist in directory '/...


SOLUTION:

Restore all archivelogs starting from the recovery checkpoint up to the current checkpoint, then restart the extract:
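A minimal sketch (the extract name comes from the errors above; the thread and sequence numbers are hypothetical, take the real range from the extract’s checkpoints):

GGSCI> INFO EXTRACT ext1, SHOWCH
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 11055 UNTIL SEQUENCE 11060 THREAD 2;
GGSCI> START EXTRACT ext1

INFO EXTRACT with SHOWCH shows the recovery and current checkpoints, which give you the sequence range to restore.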

Scheduler Job for OS Audit Trail Cleanup Routine

Hello all!
As you all know, most OS files generated by Oracle, like traces and logs, can be cleared/managed using ADRCI. However, more than once I have seen filesystems get full of audit trail files.
In general, I see DBAs and companies implementing shell scripts to housekeep those files, usually some variation of the find with rm I posted some weeks ago. However, we have a very good “official” solution for this, using DBMS_AUDIT_MGMT.
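The shell approach is usually some variation of the hypothetical one-liner below (the adump path depends on your environment):

find /u01/app/oracle/admin/$ORACLE_SID/adump -name '*.aud' -mtime +7 -exec rm -f {} \;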

Basically, we can create Scheduler Jobs in our databases to keep track of this. The advantages are several: the job cleans up on both nodes, all Oracle information is kept correctly updated, you may receive OEM notifications on failure if you monitor your jobs, etc.
You can see more details in MOS Note 731908.1: New Feature DBMS_AUDIT_MGMT to Manage and Purge Audit Information.

How did I do it?

1. Ran an initial cleanup to clear files older than a week.
2. Set the last archive timestamp to a week ago.
3. Created a purge job, purging anything older than a week.

As per the sketch below (a minimal example following the DBMS_AUDIT_MGMT API from the MOS note; the job name and 24-hour purge interval are illustrative):
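BEGIN
  -- 2. Set the last archive timestamp to a week ago
  --    (re-run periodically to keep the window rolling)
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP - INTERVAL '7' DAY);

  -- 1. Initial cleanup: purge OS audit files older than that timestamp
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    use_last_arch_timestamp => TRUE);

  -- 3. Recurring purge job (name and interval are illustrative)
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'OS_AUDIT_PURGE_JOB',
    use_last_arch_timestamp    => TRUE);
END;
/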


Database in Cloud: Quickest Procedure to Clone a Schema

Hey all!
So, we all know that operating with files/dump files can be tricky when using DBaaS in a public cloud. In some situations, like the Amazon RDS service, we simply don’t have OS access.

In this scenario, how can we quickly clone a schema in the database? Using IMPDP with Database Link.

Also note that when working in AWS environments, avoiding dump files when dealing with expdp/impdp is also encouraged, to save IOPS on the local disks (they are capped based on machine type).

This is, of course, also valid for on-premises environments with limited space for dump files.

For this to work, we only need to create a database link pointing to the same database.
Also, of course, it is mandatory to use remap_schema; the remap_tablespace clause is optional.

In the following example, the link name is loop and impdp runs from a jumpbox with an Oracle client and TNS configuration pointing to the RDS database.

1. Creating Database Link:

SQL> create database link loop connect to my_sysdba identified by "***" using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=service-name)))';

Database link created.

SQL> select * from dual@loop;

D
-
X

2. Running IMPDP:

[oracle@jumpbox ~]$ impdp schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log

Import: Release 11.2.0.4.0 - Production on Wed Feb 7 21:03:54 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Starting "SYS"."SYS_IMPORT_SCHEMA_02": /******** AS SYSDBA schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.481 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NEW_SCHEMA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
...
. . imported "NEW_SCHEMA"."TABLE1" 0 rows
. . imported "NEW_SCHEMA"."TABLE2" 0 rows
. . imported "NEW_SCHEMA"."TABLE3" 0 rows
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Wed Fev 7 21:08:52 2018 elapsed 0 00:04:54

Hope it helps,
Cheers!

DFS Lock Handle During RMAN Backup on Standby Database

Hi all!
Some time ago I faced sessions waiting on the DFS Lock Handle event during an RMAN backup on a standby database. By the way, running backups on a standby is a very interesting approach to avoid running them on the primary, so all primary nodes can be fully dedicated to the application.

It turned out that, in my situation, there was a lock involving my apply process. The fix? Quite simple: cancel the apply process, run the backup, and restart the apply. In my case, using DG Broker:

dgmgrl
connect /
show configuration
show database <dbname>
edit database <dbname> set state = 'apply-off';

-- Run Backup

edit database <dbname> set state = 'apply-on';

(Here <dbname> is the standby’s db_unique_name.)

Hope it helps!
Cheers!

ORA-00001: unique constraint (RMAN.CKP_U1) violated

Hey,
Don’t set your expectations too high for this post.

That’s because I didn’t exactly fix the issue, but worked around it…
The thing is: this error is raised in the catalog database, so the workaround is simple: run RMAN nocatalog, I mean, simply don’t connect to the catalog to perform the backup.

After completing the backup, I’d suggest forcing a synchronization with the “RESYNC CATALOG” command. In the worst case, the implicit resync on the next execution will fix everything. 🙂
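A minimal sketch of the workaround (connect strings are illustrative):

# run the backup without connecting to the catalog
rman target /
RMAN> backup database plus archivelog;

# once done, force the synchronization from a catalog-connected session
rman target / catalog rman_owner@catdb
RMAN> resync catalog;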

There is no deeper explanation for this, but you can find the same workaround in MOS Bug 12588237 – RMAN-3002 ORA-1: unique constraint (ckp_u1) violated after dataguard switchover (Doc ID 12588237.8).

And this is it for today!
See you next week!

Back Up Not-Yet-Backed-Up Archivelogs and Delete Input

Hi all!
Sometimes you are caught in a situation where your database is not backing up archivelogs and you need to quickly generate backup commands for those not yet backed up, deleting them afterwards, right?
I saw this situation in an archived discussion on OTN. Unfortunately, I couldn’t post my answer… but here is how I do it:

select
'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||' thread '||thread#||' filesperset=1 delete input;',first_time
from v$archived_log where backup_count=0 and name is not null
order by first_time desc;

It generates an output like:

greporadb> select
  2  'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||' thread '||thread#||' filesperset=1 delete input;',first_time
  3  from v$archived_log where backup_count=0 and name is not null
  4  order by first_time desc;
'BACKUPARCHIVELOGFROMSEQUEN                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
backup archivelog from sequence 152153 until sequence 152153 thread 1 filesperset=1 delete input;                                                                       
backup archivelog from sequence 152152 until sequence 152152 thread 1 filesperset=1 delete input;                                                                       
backup archivelog from sequence 152151 until sequence 152151 thread 1 filesperset=1 delete input;                                                                       
3 rows selected.

And be happy!

One observation, though! It does not work this way for databases with Data Guard. For those cases, you’ll need to add “and name <> '&dgname'” to the select’s where clause, as shown below… 😉
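That is, something like the variation below (assuming, as an illustration, that &dgname holds the Data Guard destination name appearing in v$archived_log.name):

select
'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||' thread '||thread#||' filesperset=1 delete input;',first_time
from v$archived_log where backup_count=0 and name is not null
and name <> '&dgname'
order by first_time desc;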

See ya!
Matheus.