Script: Copy Large Table Through DBLink

To understand the situation:

Task: Migrate a large 11.1.0.6 database to a 12c Multitenant database with minimum downtime.
To better use the new features, reorganize objects, and compress data, I decided to migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump over a database link. There are no LONG columns.

Problem 1: The database was veeery slow, a perfect match for Bug 7722575 - DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: None.
Solution: Upgrade to 11.2. (No way.)
Other things: Yes, I tried changing cursor sharing, the estimate from blocks to statistics, and everything else documented. It didn’t work.

Okey doke! Let’s use the traditional exp/imp tools (with a staging area), right?
Problem 2: ORA-12899 on import, related to single-byte vs. multibyte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
:)

Done? Not for all of them. For some tables, this error happened:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? “Solution: Use Data Pump!” 😛

Well, well… I realized I was on my own…
OK, so let’s create table as select through the database link. For most of them, fine…
But one of the remaining tables, for example, has 700 million rows (350GB of compressed, non-partitioned data).
Just remember that queries over a DBLink exclude parallel options (always serial).

The solution was to pull a MacGyver, this way:
1) Creating an aux table (source database):

alter session force parallel query parallel 12;
create table SCHEMA_OWNER.AUX_ROWID(ROW_ID,NUM) as select rowid, rownum from SCHEMA_OWNER.TABLE;
alter session disable parallel query;

* This table will be used to break the big table into chunks.
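By the way, instead of calculating the boundaries for step 3 by hand, a query like this sketch (run on the source database, assuming even chunks of 50 million rows) can generate them:

select trunc((num - 1) / 50000000) as chunk,
       min(num) as range_start,
       max(num) as range_end
from SCHEMA_OWNER.AUX_ROWID
group by trunc((num - 1) / 50000000)
order by 1;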

2) Script run_chunck.sql, to copy each chunk of data:

set serveroutput on size unlimited
DECLARE
  counter number;
  -- Rows whose position (NUM) falls between the two positional parameters &1 and &2
  CURSOR cur_data is
    select row_id from (
      select row_id, num from SCHEMA_OWNER.AUX_ROWID@SOURCEDB order by num)
    where num >= &1
    and num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      -- Copy one source row at a time, located by its rowid
      insert into SCHEMA_OWNER.TABLE select * from SCHEMA_OWNER.TABLE@SOURCEDB where rowid = x.row_id;
      if counter = 1000 then -- commit every 1000 rows
        commit;
        counter := 0;
      end if;
    EXCEPTION
      when OTHERS then
        dbms_output.put_line('Error ROW_ID: '||x.row_id||' '||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;
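To smoke-test the script before firing all the chunks, you can run a single small range interactively (the range below is just an example):

sqlplus user/pass@targetdb @run_chunck.sql 1 1000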

3) Run it through a BAT or SH, like this (my example is a BAT, with “chunks” of 50 million rows and commits every 1k rows, as defined in item 2):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053
REM 702060053 = count(*) from the source table
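When every chunk finishes, a simple sanity check is to compare row counts on both sides (run this on the target database):

select (select count(*) from SCHEMA_OWNER.TABLE) as target_rows,
       (select count(*) from SCHEMA_OWNER.TABLE@SOURCEDB) as source_rows
from dual;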


Watching the inserts running…

targetdb>@sess
User:MATHEUS
USERNAME EVENT SQL_ID
---------- ---------- -------------------------
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink gt3mq5ct7mt6r
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from client
MATHEUS_BOESING SQL*Net message from client
MATHEUS_BOESING SQL*Net message to client c7a5tcc3a84k6

After a while (26 hours), the copy was successfully completed. :)

Matheus.


ORA-19751: could not create the change tracking file

Let’s keep it simple and solve the problem:

# Error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DGDATA/mydb/changetracking/ctf.470.859997781'
ORA-17502: ksfdcre:1 Failed to create file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-17501: logical block size 4294967295 is invalid
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-17503: ksfdopn:2 Failed to open file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted

# Solution:

SQL> alter database disable BLOCK CHANGE TRACKING;
Database altered.
SQL> alter database open;
Database altered.

Then, after everything is OK, you fix the situation by recreating the block change tracking file (BCTF):

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DGDATANEW';
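To confirm it worked, v$block_change_tracking shows the status and the current file:

SQL> select status, filename from v$block_change_tracking;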

MTFBWU!

Matheus.

Database Migration/Move with RMAN: Are you sure nothing is missing?

Forced by destiny to do a migration using backup/restore (with a little downtime), how can you be sure nothing will be lost during the migration?
Here is a way: Create your own data just before migrating. 🙂

Seems like a kludge, and it is… haha… But it works. Take a look:

# Original Database

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup restrict;
ORACLE instance started.
Total System Global Area 2689060864 bytes
Fixed Size 2229520 bytes
Variable Size 1996491504 bytes
Database Buffers 671088640 bytes
Redo Buffers 19251200 bytes
Database mounted.
Database opened.
SQL> create table matheus_boesing.migration (text varchar2(10));
Table created.
SQL> insert into matheus_boesing.migration values ('well done!');
1 row created.
SQL> commit;
Commit complete.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> shu immediate;
SQL> exit;
$ rman target /
connect catalog rman_mydb/password@catalogdb
run { backup archivelog all;}

# Destination Database

$ rman target /
connect catalog rman_mydb/password@catalogdb
run { recover database;}
$ sqlplus / as sysdba
SQL> select count(1), to_char(CHECKPOINT_TIME, 'DD/MM/YYYY HH24:MI:SS') from V$DATAFILE_HEADER t
group by to_char(CHECKPOINT_TIME, 'DD/MM/YYYY HH24:MI:SS') order by 2;
COUNT(1) TO_CHAR(CHECKPOINT_
---------- -------------------
51 27/06/2015 22:15:28
-- All datafiles with synchronized headers...
-- Opening read only to check; if more recovery were needed, it could still be applied at this point...
SQL> alter database open read only;
Database altered.
SQL> select * from matheus_boesing.migration;
TEXT
----------
well done!
-- Means no more recover is needed :)
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database open resetlogs;
Database altered.
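A last sanity check, just to confirm the database is open under the new incarnation:

SQL> select open_mode, resetlogs_time from v$database;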

And be Happy! 😀

Matheus.

Windows: “ORA-12514” After Database Migration/Moving (Using DNS Alias)

It’s usual to use DNS aliases pointing to the SCAN listener. This way, we create an abstraction layer between clients/applications and the cluster where the database is. Activities like tiering/consolidation and moving databases between clusters (converting to Pluggable, etc.) become much more transparent.

Buuuut, if after a database migration all the services are online and listening, and your client is still stuck with:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Remember you are using DNS to build this layer. Have you tried flushing the DNS cache? (You can also check what the alias currently resolves to with nslookup.)
I faced this problem with a Windows application. The solution:

C:\Users\matheus_boesing>ipconfig /flushdns
Windows IP Configuration
Successfully flushed the DNS Resolver Cache.

All working fine after that. 🙂

Matheus.

Create SPFILE on ASM from PFILE on Filesystem

Some basics, right?
Another thing that is not usual, and every time I do it someone is surprised: “shu” as an abbreviation for “shutdown”:

SQL> create spfile='+DGDATA/MYDB/spfilemydb.ora' from pfile='/oracle/product/11.2/dbs/init_mydb.ora';
File created.
SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
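One detail the transcript skips: for the instance to actually pick up the new SPFILE on the next startup, the stub init file in $ORACLE_HOME/dbs must point to it. A minimal sketch, assuming the default dbs location and the names from the example above:

$ echo "SPFILE='+DGDATA/MYDB/spfilemydb.ora'" > $ORACLE_HOME/dbs/initmydb.ora
$ sqlplus / as sysdba
SQL> startup
SQL> show parameter spfile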

Master Burleson also wrote about it. Take a look at his more detailed post on this subject: http://www.dba-oracle.com/concepts/pfile_spfile.htm.

Matheus.

ORA-29760: instance_number parameter not specified

I felt stupid when I lost a few minutes trying to understand this error:

SQL> startup pfile=init_corpdb.ora
ORA-29760: instance_number parameter not specified

Do you believe the solution was simply to include the instance number in the ORACLE_SID? (The pfile most likely has instance-specific entries like corpdb_1.instance_number, so the SID has to match.)
Take a look:

dbsrvr>echo $ORACLE_SID
corpdb
dbsrvr>export ORACLE_SID=corpdb_1
dbsrvr>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sun Jun 28 00:18:05 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup pfile=init_corpdb.ora
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2220200 bytes
Variable Size 889196376 bytes
Database Buffers 3372220416 bytes
Redo Buffers 12144640 bytes
Database mounted.
Database opened.
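Once it’s up, you can confirm the instance picked up the right number:

SQL> select instance_number, instance_name from v$instance;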

I hope I never waste time on this again… 😛

Matheus.