Segment Creation on Demand or Deferred Segment Creation

Hi All!
I was reviewing some Oracle features and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to make a series of posts with really quick notes about each one of them.
You can see all posts in this series, and some others more, on my posts page.

Ready? Here it goes:

Segment Creation on Demand or Deferred Segment Creation

I also talked about it in the post EXP Missing Tables on 11.2.

Segment creation on demand, or deferred segment creation as it is also known, is a space-saving feature of Oracle Database 11g Release 2. When non-partitioned tables are created, none of the associated segments (table, implicit index and LOB segments) are created until rows are inserted into the table. For systems with lots of empty tables, this can represent large space savings. This functionality can be controlled by the DEFERRED_SEGMENT_CREATION initialization parameter, which is set to TRUE by default.
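A quick way to see it in action (a minimal sketch; the table name is illustrative):

SQL> show parameter deferred_segment_creation

NAME                       TYPE     VALUE
-------------------------- -------- -----
deferred_segment_creation  boolean  TRUE

SQL> create table test_deferred (id number);

Table created.

-- No segment exists yet:
SQL> select segment_name from user_segments where segment_name = 'TEST_DEFERRED';

no rows selected

SQL> insert into test_deferred values (1);

1 row created.

-- The segment was created on demand by the first insert:
SQL> select segment_name from user_segments where segment_name = 'TEST_DEFERRED';

SEGMENT_NAME
--------------
TEST_DEFERRED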

Continue reading

/bin/rm: cannot execute [Argument list too long]

Hey all!

Just a quickie but useful thing today. How many times have you seen this?

/bin/rm: cannot execute [Argument list too long]

Ok, so, first thing: is it related to Oracle logs? If so, you may want to use ADRCI. Check this post for more info: ADRCI Retention Policy and Ad-Hoc Purge Script for all Bases.

If not, you can solve this using find with rm. Ok, but what if you want to keep the most recent files?

Some examples for you, removing audit files:

# Remove files older than 1 day:

find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -mtime +1 -exec rm {} \;

# Remove files older than 1 hour:

find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -cmin +60 -exec rm {} \;
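Since -exec rm {} \; runs one rm per file, it avoids the argument limit but can be slow with many thousands of files. Two common alternatives, assuming the same path and pattern as above (-delete requires a find implementation that supports it, like GNU find):

# Batch the file names into as few rm calls as fit within the argument limit:
find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -mtime +1 -print0 | xargs -0 rm -f

# Or let find delete the files itself, with no rm at all:
find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -mtime +1 -delete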

Continue reading

GGATE ABENDED: ORA-00308: Cannot Open Archived Log

Hi all!
Ok, so this is one of the most common issues in GGate administration. How do we solve it? Easy thing.

First let’s understand what it means: the redo logs no longer contain the required information (assuming an integrated extract) and you have already deleted the archivelogs the extract needs. Why? Probably because you had already backed up those archivelogs and they were no longer needed by the database.

Unfortunately we don’t have any kind of ARCHIVELOG DELETION POLICY to guarantee the extracts have already read them, like we have for Data Guard. So, what can we do?

Restore the missing archivelogs.

But first let’s confirm the errors. Some examples:

ERROR OGG-00446 Oracle GoldenGate Capture for Oracle, ext1.prm: Opening ASM file +ARCH/2_11055_742755632.dbf in DBLOGREADER mode: (308) ORA-00308: cannot open archived log '+ARCH/2_11055_742755632.dbf' ORA-17503.

or

ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext1.prm: Getting attributes for ASM file +ARCH/2_86720_716466928.dbf,
SQL : (15056)
ORA-15056: additional error message ORA-15173: entry '2_86720_716466928.dbf' does not exist in directory '/...


SOLUTION:

Restore all archive logs starting from recovery checkpoint until the current checkpoint and restart the extract:
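A sketch of the usual sequence (the group name ext1 comes from the errors above; the sequence numbers are illustrative and should come from the extract's recovery checkpoint):

-- In GGSCI, find the recovery checkpoint (SCN and log sequence):
GGSCI> info extract ext1, showch

-- In RMAN, restore from that sequence up to the current one:
RMAN> restore archivelog from logseq=11055 until logseq=11060 thread=2;

-- Back in GGSCI, restart the extract:
GGSCI> start extract ext1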
Continue reading

Scheduler Job for OS Audit Trail Cleanup Routine

Hello all!
As you all know, most OS files generated by Oracle, like traces and logs, can be cleared/managed using ADRCI. However, more than once I have seen the filesystem get full of audit trail files.
In general, I see DBAs and companies implementing shell scripts to housekeep those files, usually some variation of the find with rm I posted some weeks ago. However, we have a very good “official” solution for this, using DBMS_AUDIT_MGMT.

Basically, we can create Scheduler Jobs in our databases to take care of this. The advantages are several: the job clears files on both nodes, it keeps all Oracle information correctly updated, you may receive OEM notifications on failure if you monitor jobs, etc.
You can see more details in MOS Note 731908.1 New Feature DBMS_AUDIT_MGMT to Manage and Purge Audit Information.

How did I do it?

1. Initial cleanup to clear files older than a week.
2. Set the last archive timestamp to a week ago.
3. Created the purge job, purging files older than a week.

As per:
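A minimal sketch of those steps, assuming the OS audit trail and a 7-day window (the job name is illustrative; on 11g RAC, SET_LAST_ARCHIVE_TIMESTAMP may also need the rac_instance_number parameter per instance):

-- Initialization is required once before any cleanup:
BEGIN
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    default_cleanup_interval => 24 /* hours */);
END;
/

-- Mark files older than a week as purgeable (this timestamp must be
-- advanced periodically, e.g. by another scheduler job):
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP - INTERVAL '7' DAY);
END;
/

-- Initial cleanup of everything older than that timestamp:
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    use_last_arch_timestamp => TRUE);
END;
/

-- Scheduler job repeating the purge every 24 hours:
BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'OS_AUDIT_TRAIL_PURGE',
    use_last_arch_timestamp    => TRUE);
END;
/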

Continue reading

Database in Cloud: Quickest Procedure to Clone a Schema

Hey all!
So, we all know that operating with files/dump files can be tricky when using DBaaS in a Public Cloud. In some situations, like the Amazon RDS service, we simply don’t have OS access.

In this scenario, how can we quickly clone a schema in the database? Using IMPDP with Database Link.

Also note that in AWS environments, avoiding dump files with expdp/impdp is also encouraged to save IOPS on the local disks (IOPS are capped based on machine type).

This is, of course, also valid for on-premises environments with limited space for dump files.

For this to work, we only need to create a database link pointing to the same database.
Also, of course, it is mandatory to use remap_schema, optionally with remap_tablespace.

In the following example the link name is loop, and we run impdp from a jumpbox with an Oracle client and TNS configuration pointing to the RDS database.

1. Creating Database Link:

SQL> create database link loop connect to my_sysdba identified by "***" using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=service-name)))';

Database link created.

SQL> select * from dual@loop;

D
-
X

2. Running IMPDP:

[oracle@jumpbox ~]$ impdp schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log

Import: Release 11.2.0.4.0 - Production on Wed Fev 7 21:03:54 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Starting "SYS"."SYS_IMPORT_SCHEMA_02": /******** AS SYSDBA schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.481 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NEW_SCHEMA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
...
. . imported "NEW_SCHEMA"."TABLE1" 0 rows
. . imported "NEW_SCHEMA"."TABLE2" 0 rows
. . imported "NEW_SCHEMA"."TABLE3" 0 rows
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Wed Fev 7 21:08:52 2018 elapsed 0 00:04:54

Hope it helps,
Cheers!

DFS Lock Handle During RMAN Backup on Standby Database

Hi all!
Some time ago I faced sessions waiting on the DFS Lock Handle event during an RMAN backup on a standby database. By the way, running backups on a standby is a very interesting approach to avoid running them on the primary, so all primary nodes can be fully dedicated to the application.

It turned out that, in my situation, there was a lock involving my apply process. The fix? Quite simple: stop the apply, run the backup, and restart the apply. In my case, using DG Broker (the standby database name below is a placeholder):

dgmgrl
connect /
show configuration
show database 'standby_db'
edit database 'standby_db' set state = 'apply-off';

-- Run Backup

edit database 'standby_db' set state = 'apply-on';

Hope it helps!
Cheers!

ORA-00001: unique constraint (RMAN.CKP_U1) violated

Hey,
Don’t set your expectations too high for this post.

That’s because I didn’t exactly fix the issue, I worked around it…
The thing is: this error is raised in the catalog database, so the workaround is simple: run RMAN nocatalog, I mean, simply don’t connect to the catalog to perform the backup.

After completing the backup, I’d suggest you force a synchronization with the command “RESYNC CATALOG“. In the worst case, the implicit resync on the next execution will fix everything. 🙂
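A minimal sketch (the catalog connect string is illustrative):

# Backup without connecting to the catalog:
$ rman target /
RMAN> backup database;

# Later, reconnect with the catalog and force the synchronization:
$ rman target / catalog rman_owner/password@catdb
RMAN> RESYNC CATALOG;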

There is no deeper explanation for this, but you can find the same workaround in MOS Bug 12588237 – RMAN-3002 ORA-1: unique constraint (ckp_u1) violated after dataguard switchover (Doc ID 12588237.8).

And this is it for today!
See you next week!

Backup Not Backed-Up Archivelogs and Delete Input

Hi all!
Sometimes you get caught in a situation where your database is not backing up archivelogs and you need to quickly generate backup commands for those not backed up yet, deleting them afterwards, right?
I saw this situation in this archived discussion at OTN. Unfortunately I couldn’t post my answer there… But this is how I do it:

select
'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||' thread '||thread#||' filesperset=1 delete input;',first_time
from v$archived_log where backup_count=0 and name is not null
order by first_time desc;

It generates an output like:

greporadb> select
  2  'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||' thread '||thread#||' filesperset=1 delete input;',first_time
  3  from v$archived_log where backup_count=0 and name is not null
  4  order by first_time desc;
'BACKUPARCHIVELOGFROMSEQUEN                                                                                                                                             
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
backup archivelog from sequence 152153 until sequence 152153 thread 1 filesperset=1 delete input;                                                                       
backup archivelog from sequence 152152 until sequence 152152 thread 1 filesperset=1 delete input;                                                                       
backup archivelog from sequence 152151 until sequence 152151 thread 1 filesperset=1 delete input;                                                                       
3 rows selected.

And be happy!

One observation, though! It doesn’t work this way for databases with Data Guard. For those cases you’ll need to add something like “and name != '&dgname'” to the select’s where clause… 😉
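For reference, a sketch of the adjusted query, assuming &dgname holds the Data Guard destination name to be excluded:

select
'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||' thread '||thread#||' filesperset=1 delete input;',first_time
from v$archived_log
where backup_count=0 and name is not null
and name != '&dgname'
order by first_time desc;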

See ya!
Matheus.

RMAN Raise ORA-19913 ORA-28365 On Restore from Cloud Backup

At first I thought it was some error with Database Backup To Cloud while testing. Then I realized it was a simple mistake of my own.

Let me show you. First, trying to restore a datafile:

[oracle@mydbsrvr archivelogs]$ rman target /
RMAN> restore datafile 6;
Starting restore at 03-MAY-2016 20:00:30
using channel ORA_SBT_TAPE_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=178 device type=DISK
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00006 to /db/u1001/test/cloud_test/test_restore.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 0sr4mdun_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/03/2016 20:00:34
ORA-19870: error while restoring backup piece 0sr4mdun_1_1
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open

Ok, it might have happened because I forgot to set the encryption password:

RMAN> SET ENCRYPTION ON IDENTIFIED BY "matheusdba" only;
executing command: SET encryption
RMAN> restore datafile 6;
Starting restore at 03-MAY-2016 20:00:30
using channel ORA_SBT_TAPE_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=178 device type=DISK
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00006 to /db/u1001/test/cloud_test/test_restore.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 0sr4mdun_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/03/2016 20:00:34
ORA-19870: error while restoring backup piece 0sr4mdun_1_1
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open

It happened again?
At this point I suspected some kind of bug… But it was my own mistake, not related to the Cloud but to the use of Encryption. To understand:
For Backup: Use ENCRYPTION
For Restore/Recover: Use DECRYPTION

Obvious, but it took me a minute to realize…

Setting decryption, and problem solved:

RMAN> set DECRYPTION identified by "matheusdba";
executing command: SET decryption
RMAN> restore datafile 6;
Starting restore at 03-MAY-2016 20:00:58
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00006 to /db/u1001/test/cloud_test/test_restore.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 0sr4mdun_1_1
channel ORA_SBT_TAPE_1: piece handle=0sr4mdun_1_1 tag=TAG20160503T193239
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:03
Finished restore at 03-MAY-2016 20:01:02

See ya!
Matheus.