Automating APEX Applications Backup

Hi all,

As you might be aware (and as I've already posted here), to export APEX applications the APEXExport tool should be used.
Here we’ll also see some alternatives to accomplish that.

Plan A) Using the same approach discussed in the previous post to export all Applications in an instance:

For any backup script, three parts need to be backed up:

1. With the database export utilities (Data Pump or legacy Export; be aware of the limitations of each), dump the schemas your APEX applications need to run:

expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log

2. Run APEXExport using “-expWorkspace” to export all workspaces (this generates a w*.sql script for each workspace):

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

3. Run APEXExport using “-instance”, which generates an f*.sql script for every application and shared component:

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Plan B) Take separate Backups for each Application or Workspace

Here is the interesting part: you can also take backups by application ID or workspace ID, with the following:

1. Exporting by ApplicationID:

APEXExport -db hostname:listenerport:dbservicename -user -password -applicationid 31500

2. Exporting by WorkspaceID:

APEXExport -db hostname:listenerport:dbservicename -user -password -workspaceid 9999

But how do you get all the workspace IDs? You can use the following SQL statement in SQL Workshop:

select wwv_flow_api.get_security_group_id from dual;
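Note this returns the security group ID (i.e., the workspace ID) of the current workspace only. If you can query the APEX dictionary views as a privileged user, a hedged alternative is to list every workspace at once (assuming the APEX_WORKSPACES view is available in your release):

select workspace, workspace_id from apex_workspaces order by workspace;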

Now you have all the basic export steps; it's up to you to build the script that best fits your needs.
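As a starting point, here is a minimal sketch of a Plan A wrapper (the backup directory is hypothetical, and it assumes the Oracle JDBC driver and the APEXExport classes are already on the CLASSPATH):

#!/bin/sh
# Hypothetical nightly wrapper: schema dump + all workspaces + all applications
BKP_DIR=/backup/apex/$(date +%Y%m%d)
mkdir -p "$BKP_DIR" && cd "$BKP_DIR" || exit 1
expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance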

I hope it helps!

Additional notes:

  • The workspace export should export all of the shared components from the workspaces.
  • This post does not mention RESTful services, but if you use APEXExport from 4.2.4 or higher, they will be included.

Additional Reference:

  • APEXExport Gives: java.lang.NoClassDefFoundError: oracle/apex/APEXExport (Doc ID 2265534.1)

Oracle Cloud Computing Hands-on

Oracle and other cloud providers are doing nice work building out their cloud strategies.

Oracle is granting US$ 300 in credits to quickly deploy OLTP/DW and other technology platforms.

With cloud and distributed infrastructure, we can code, test, deploy and quickly build any lab environment as a free* offer.

Let's review and run free sandbox machines for cloud experimentation purposes.


It runs well enough 🙂


Machines deploy quickly, and you receive the classic middleware connection files.

\o/


It runs so easily, but it may cost money/digital value, as it quickly burns through the Oracle credits.

Why not test it and build a multi-cloud solution?

Don’t forget security and data privacy standards and Oracle availability.

With this cloud solution it is so simple to run a "DB at the push of a button": it took about 2 minutes in my lab, which is an amazing time for a typical lab or quick tech hands-on.

Let’s test it and CHEERS.

Script: Copy Large Table Through DBLink

To understand the situation:

Task: Need to migrate a large 11.1.0.6 database to a 12c multitenant database with minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump over a database link. There are no LONG columns.

Problem 1: The database was veeery slow, a perfect match for Bug 7722575 - DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: none.
Solution: upgrade to 11.2 (no way).
Other things: yes, I tried changing cursor sharing, the estimate from blocks to statistics, and everything else documented. It didn't work.

Okey dokey! Let's use the traditional exp/imp tools (with some staging area), right?
Problem 2: ORA-12899 on import, related to multi-byte vs. single-byte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
:)

Done? Not quite. For some tables, this error popped up:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? “Solution: Use Datapump!” 😛

Well, well… I realized I was on my own…
OK, so let's create table as select through the database link. For most tables, fine…
But, for example, one of the missing tables had 700 million rows (350 GB of compressed, non-partitioned data).
Just to remember: queries over a DBLink exclude parallel options (they always run serially).

The solution was to pull a MacGyver, this way:
1) Creating an aux table (source database):

alter session force parallel query parallel 12;
-- ROW_ID = source rowid; NUM = sequential number used to split the table into chunks
create table SCHEMA_OWNER.AUX_ROWID(ROW_ID,NUM) as select rowid, rownum from SCHEMA_OWNER.TABLE;
alter session disable parallel query;

* This table will be used to break the source table into chunks.
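To work out the chunk boundaries used in step 3, here is a quick helper sketch against the aux table (assuming 50-million-row chunks, as in my BAT below):

-- List the start/end pair of each 50M-row chunk
select (level-1)*50000000 + 1 as chunk_start,
       least(level*50000000, max_num) as chunk_end
  from (select max(num) as max_num from SCHEMA_OWNER.AUX_ROWID)
connect by level <= ceil(max_num/50000000);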

2) Script run_chunck.sql to run each chunk of data:

set serveroutput on
DECLARE
  counter number;
  CURSOR cur_data is
    select row_id from (
      select row_id, num from SCHEMA_OWNER.AUX_ROWID@SOURCEDB order by num)
    where num >= &1
      and num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      insert into SCHEMA_OWNER.TABLE
        select * from SCHEMA_OWNER.TABLE@SOURCEDB where rowid = x.row_id;
      if counter = 1000 then -- commit every 1000 rows
        commit;
        counter := 0;
      end if;
    EXCEPTION
      when OTHERS then
        dbms_output.put_line('Error ROW_ID: '||x.row_id||' - '||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;

3) Run it from a BAT or SH script (my example was made for a BAT, with “chunks” of 50 million rows, and commits every 1k rows, as defined in item 2):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053
REM 702060053 = count(*) from the source table
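If you prefer SH, here is a minimal equivalent sketch (the credentials are placeholders; it reuses the same run_chunck.sql and chunk size):

#!/bin/sh
# Hypothetical SH equivalent: one background sqlplus session per 50M-row chunk
DB=targetdb; DBUSER=matheus; DBPASS=secret   # placeholders
CHUNK=50000000
MAX=702060053   # count(*) from the source table
START=1
while [ "$START" -le "$MAX" ]; do
  END=$((START + CHUNK - 1))
  [ "$END" -gt "$MAX" ] && END="$MAX"
  sqlplus "$DBUSER/$DBPASS@$DB" @run_chunck.sql "$START" "$END" &
  START=$((END + 1))
done
wait   # wait for all chunk sessions to finish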

 

Watching the inserts running…

targetdb>@sess
User: MATHEUS

USERNAME         EVENT                        SQL_ID
---------------- ---------------------------- -------------
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  gt3mq5ct7mt6r
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING  SQL*Net message from client
MATHEUS_BOESING  SQL*Net message from client
MATHEUS_BOESING  SQL*Net message to client    c7a5tcc3a84k6
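By the way, @sess above is just a private helper; a hypothetical version of it (all names are illustrative) could be as simple as:

-- Hypothetical @sess: what each session of a given user is waiting on
select username, event, sql_id
  from v$session
 where username like upper('%&user%');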

After a few hours (26, actually), the copy was successfully concluded. :)

Matheus.

ORA-19751: could not create the change tracking file

Let’s make it simple to solve the problem:

# Error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DGDATA/mydb/changetracking/ctf.470.859997781'
ORA-17502: ksfdcre:1 Failed to create file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-17501: logical block size 4294967295 is invalid
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-17503: ksfdopn:2 Failed to open file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted

# Solution:

SQL> alter database disable BLOCK CHANGE TRACKING;
Database altered.
SQL> alter database open;
Database altered.

Then, after everything is OK, you fix the situation by recreating the BCT file:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DGDATANEW';
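To confirm the result, a quick check (the status and the new filename are exposed through V$BLOCK_CHANGE_TRACKING):

SQL> select status, filename from v$block_change_tracking;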

MTFBWU!

Matheus.

Database Migration/Move with RMAN: Are you sure nothing is missing?

Forced by destiny to make a migration using backup/restore (with a little downtime), how can you be sure nothing will be lost during the migration?
Here is a way: create your own data just before migrating. 🙂

It seems like a kludge, and it is… haha… But it works. Take a look:

# Original Database

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup restrict;
ORACLE instance started.
Total System Global Area 2689060864 bytes
Fixed Size 2229520 bytes
Variable Size 1996491504 bytes
Database Buffers 671088640 bytes
Redo Buffers 19251200 bytes
Database mounted.
Database opened.
SQL> create table matheus_boesing.migration (text varchar2(10));
Table created.
SQL> insert into matheus_boesing.migration values ('well done!');
1 row created.
SQL> commit;
Commit complete.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> shu immediate;
SQL> exit;
$ rman target /
connect catalog rman_mydb/password@catalogdb
run { backup archivelog all;}

# Destination Database

$ rman target /
connect catalog rman_mydb/password@catalogdb
run { recover database;}
$ sqlplus / as sysdba
SQL> select count(1), to_char(CHECKPOINT_TIME, 'DD/MM/YYYY HH24:MI:SS') from V$DATAFILE_HEADER t
group by to_char(CHECKPOINT_TIME, 'DD/MM/YYYY HH24:MI:SS') order by 2;
  COUNT(1) TO_CHAR(CHECKPOINT_
---------- -------------------
        51 27/06/2015 22:15:28
-- All datafiles with synchronized headers...
SQL> alter database open read only;
Database altered.
-- Opened read only: if needed, you can still apply more recover this way...
SQL> select * from matheus_boesing.migration;
TEXT
----------
well done!
-- Means no more recover is needed :)
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database open resetlogs;
Database altered.

And be Happy! 😀

Matheus.

Windows: “ORA-12514” After Database Migration/Moving (Using DNS Alias)

It's usual to use DNS aliases pointing to the SCAN listener. This way, we create an abstraction layer between clients/applications and the cluster where the database is. Activities like tierization/consolidation and moving databases between clusters (converting to pluggable, etc.) become much more transparent.

Buuuut, if after a database migration all the services are online and listening but your client is still stuck with:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Remember that you are using DNS to build this layer. Have you tried to flush the DNS cache?
I faced this problem with a Windows application. The solution:

C:\Users\matheus_boesing>ipconfig /flushdns
Windows IP Configuration
Successfully flushed the DNS Resolver Cache.
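To double-check that the alias already resolves to the new cluster's SCAN before reconnecting, nslookup does the job (the alias name below is hypothetical):

C:\Users\matheus_boesing>nslookup mydb-alias.mycompany.com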

All working fine after that. 🙂

Matheus.

Create SPFILE on ASM from PFILE on Filesystem

Some basics, right?
Another thing that is not usual, and every time I do it someone is surprised: “shu” as an abbreviation for “shutdown”:

SQL> create spfile='+DGDATA/MYDB/spfilemydb.ora' from pfile='/oracle/product/11.2/dbs/init_mydb.ora';
File created.
SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
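One detail the listing doesn't show: for the instance to pick up the SPFILE on ASM at the next startup, the usual approach is a stub PFILE in $ORACLE_HOME/dbs pointing to it (a minimal sketch, assuming the instance name is mydb and the same paths as above):

# $ORACLE_HOME/dbs/initmydb.ora -- stub pointing to the SPFILE on ASM
SPFILE='+DGDATA/MYDB/spfilemydb.ora'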

Master Burleson also wrote about it. Take a look at his more detailed post on this subject: http://www.dba-oracle.com/concepts/pfile_spfile.htm.

Matheus.

ORA-29760: instance_number parameter not specified

I felt stupid when I lost a few minutes trying to understand this error:

SQL> startup pfile=init_corpdb.ora
ORA-29760: instance_number parameter not specified

Do you believe the solution was simply to set the instance number in ORACLE_SID?
Take a look:

dbsrvr>echo $ORACLE_SID
corpdb
dbsrvr>export ORACLE_SID=corpdb_1
dbsrvr>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sun Jun 28 00:18:05 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup pfile=init_corpdb.ora
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2220200 bytes
Variable Size 889196376 bytes
Database Buffers 3372220416 bytes
Redo Buffers 12144640 bytes
Database mounted.
Database opened.
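The reason: the PFILE came from a RAC database, so its parameters are keyed by instance name, and the plain SID “corpdb” matches none of them. A hypothetical excerpt of such a PFILE (the instance names are assumptions):

corpdb_1.instance_number=1
corpdb_2.instance_number=2
corpdb_1.thread=1
corpdb_2.thread=2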

I hope to never lose time with this again… 😛

Matheus.

Service Detected on OEM but not in SRVCTL or SERVICE_NAMES Parameter?

Okay, it happens.
To me, it was after moving a database from one cluster to another. The service was registered via SRVCTL in the old cluster but was no longer needed, so it was not registered in the new cluster.
But OEM insists on listing, for example, “service3” as offline. The problem is that you cannot remove it with SRVCTL, because you never registered it there, right? See the example below:

Listing services:
srvdatabase1:/home/oracle>srvctl status service -d systemdb
Service service1_systemdb is running on nodes: srvdatabase1
Service service2 is running on nodes: srvdatabase1
Service service2_systemdb is running on nodes: srvdatabase1

In the service_names parameter:
srvdatabase1:/home/oracle>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 8 15:21:00 2015
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> show parameters service;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      service2,test,systemdb

And yet the offline alarm points to “service3”?
The easiest fix:

SQL> exec dbms_service.DELETE_SERVICE('service3');
PL/SQL procedure successfully completed.
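To confirm the service is gone from the data dictionary, you can check DBA_SERVICES:

SQL> select name from dba_services order by name;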

Matheus.