Automating APEX Applications Backup

Hi all,

As you might be aware (and as I already posted here), to export APEX Applications the APEXExport tool should be used.
Here we’ll also see some alternatives to accomplish that.

Plan A) Using the same approach discussed in the previous post to export all Applications in an instance:

For any backup script, 3 parts need to be covered:

1. Use database export utilities (Data Pump or Legacy Export, being aware of the limitations of each) to dump the DB objects and data your APEX applications need to run:

expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log

2. Run the APEXExport using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

3. Run the APEXExport using “-instance” which will generate a f*.sql script for every application and shared component.

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Plan B) Take separate Backups for each Application or Workspace

Here is the interesting part: you can also take backups by application ID or workspace ID, as follows:

1. Exporting by ApplicationID:

APEXExport -db hostname:listenerport:dbservicename -user -password -applicationid 31500

2. Exporting by WorkspaceID:

APEXExport -db hostname:listenerport:dbservicename -user -password -workspaceid 9999

But how to get the workspace IDs? You can run the following SQL statement in SQL Workshop (it returns the security group ID, i.e. the workspace ID, of the workspace you are connected to):

select wwv_flow_api.get_security_group_id from dual;
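If you need to list every workspace ID at once (not only the one you are connected to), a hedged alternative is to query the APEX_WORKSPACES view from a privileged connection (for example, a user granted the APEX_ADMINISTRATOR_ROLE); adjust the connection to your environment:

sqlplus / as sysdba <<EOF
select workspace_id, workspace from apex_workspaces order by workspace;
exit
EOF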

Now that you have all the basic exporting steps, it's up to you to build the script that best fits your needs; a minimal starting point is sketched below.
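Just as a starting point, here is a hedged sketch of how the pieces above could be glued together in a single backup script. The backup directory is illustrative, and the credentials/connect string are the same placeholders used in the examples above:

#!/bin/bash
# Minimal sketch: APEX backup combining steps 1-3 above (paths/credentials are illustrative)
BKP_DIR=/backup/apex/$(date +%Y%m%d)
mkdir -p ${BKP_DIR} && cd ${BKP_DIR}

# 1. Schema-level dump of the supporting database objects
expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log

# 2. All workspaces (w*.sql) and 3. all applications/shared components (f*.sql)
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance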

I hope it helps!

Additional notes:

  • The workspace export should export all of the shared components from the workspaces.
  • This does not mention RESTful services, but if you use the APEXExport from 4.2.4 or higher, they will be included.

Additional Reference:

  • APEXExport Gives: Java.lang.NoClassDefFoundError: Oracle/apex/APEXExport (Doc ID 2265534.1)

Moving APEX Applications Repository

Hello,
Most likely you landed here because you need to migrate APEX Applications/Workspaces from one database to another, correct? You are in the right place!

We’ll use the APEXExport tool to this end.

Here you have a quick summary of the steps to use the tool, assuming:

  • The source APEX instance is at least 4.2.4.
  • The target instance must be 4.2.4 or higher.

Also, be aware that the APEX installation itself (the APEX and FLOWS_FILES schemas) cannot be exported this way (or any other way, for that matter).
So APEX must already exist on the target; what we’ll do is migrate the workspaces from one installation to another.

To Export:

1. Use database Export utilities (Datapump or Legacy Export, be aware of the limitations of each) to generate a dumpfile with all DB objects and data that your APEX applications need to run.
This will normally be the objects in the schemas that your APEX workspaces are dependent upon.

2. Run the APEXExport twice as follows:

2.1 First run it using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

2.2 Now run it using “-instance” which will generate a f*.sql script for every application and shared component.

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Note that the workspace export should export all of the shared components from the workspaces.
Note that this does not mention RESTful services, but if you use the APEXExport from 4.2.4 or higher, they will be included.

To Import:

1. Import the dumpfile generated for the regular database schemas your APEX applications use.
2. Import the workspaces via sqlplus as per:
2.1 connect sys / as sysdba
2.2 alter session set current_schema = APEX_040200;
2.3 run the scripts to create the workspaces

@<script_generated>.sql

This will create the workspaces with the same workspace IDs as the source DB.
This also prevents the need to modify the workspace ID contained in each of the application exports.

3. From the same session as above, import each of the application exports.

SQL> @<application_export_1>.sql
SQL> @<application_export_2>.sql [...]
SQL> @<application_export_n>.sql
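To make it more concrete, here is a hedged sketch of the whole import session; the script names are only illustrative examples of the w*.sql and f*.sql files generated on the export side:

cd /backup/apex/export_dir   # illustrative path where the export scripts were placed
sqlplus / as sysdba <<EOF
alter session set current_schema = APEX_040200;
@w123456789.sql
@f31500.sql
@f31501.sql
exit
EOF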

I hope it helps!

Retrieve the SQL Server Version from a Backup File

Have you ever needed to retrieve the SQL Server version that was used to create a backup file?

Well, if for some weird reason that happens to you, the following SQL can help. It won’t restore the database; it’ll just retrieve some basic info about it.

With that, you’ll have the DatabaseVersion (internal version number) the backup came from. You can also grab other useful information, like the Server Name, Creation Date, and more.

RESTORE HEADERONLY FROM DISK = N'b:\backup\data_backup.bak'
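If you prefer running it from the command line, a hedged sketch using sqlcmd (server name and authentication mode are illustrative) would be:

sqlcmd -S localhost -E -Q "RESTORE HEADERONLY FROM DISK = N'b:\backup\data_backup.bak'"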

Below is a table of versions vs. internal version numbers and compatibility levels.

Version             Internal Version Number    Compat. Level
SQL Server 2019     895 – 904                  150
SQL Server 2017     868 – 869                  140
SQL Server 2016     852                        130
SQL Server 2014     782                        120
SQL Server 2012     706                        110

I hope it helps!

Microsoft Ignite – Certification Voucher

Hey folks,

Do you enjoy learning new things? Between September 22 and September 24, Microsoft will be hosting their yearly Microsoft Ignite event. Of course, it’ll be online this time and free of charge :).

This year, you can earn a certification voucher, and that’s awesome. You’ll be able to choose one certification from a specific list. Check the list of available certifications and the Terms and Conditions by accessing the following link: https://docs.microsoft.com/pt-br/learn/certifications/microsoft-ignite-cloud-skills-challenge-2020-free-certification-exam.

Apache Airflow Schedule: The scheduler does not appear to be running. Last heartbeat was received % seconds ago.

Hello everyone,

Are you facing the same issue?

Well, after opening some tasks to investigate an Apache Airflow test environment, I decided to check the Apache Airflow configuration files to try to find something that could cause this error. I noticed that every time the error happens, the Apache Airflow console shows a message like this:

The scheduler does not appear to be running. Last heartbeat was received 14 seconds ago.

The DAGs list may not update, and new tasks will not be scheduled.

In general, we see this message when the environment doesn’t have resources available to execute a DAG. But this case was different: CPU usage was at 2%, memory usage at 50%, no swap, no disk at 100% usage. I checked the DAG logs from the last hours and there were no errors. I also went through the airflow.cfg file: database connection parameters, task memory, and parallelism settings. Nothing wrong. Long story short: everything was fine!

I then searched for the message in the Apache Airflow Git and found a very similar bug: AIRFLOW-1156 BugFix: Unpausing a DAG with catchup=False creates an extra DAG run. In summary, it seems this situation happens when the parameter catchup_by_default is set to False in the airflow.cfg file.

This parameter tells Apache Airflow to ignore past execution dates and start the schedule from now. To confirm the case, I checked with change management whether we had any changes in this environment. To my surprise, this very parameter had been changed one month earlier.

I then changed the Apache Airflow configuration file, setting the parameter catchup_by_default back to True. The environment was released to the developer team to check that everything was all right. One week later, no issues have been reported.
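If you want to double-check this in your own environment, a minimal sketch (assuming the default AIRFLOW_HOME layout) is:

grep -n "catchup_by_default" ${AIRFLOW_HOME:-~/airflow}/airflow.cfg
# expected value after the fix:
#   catchup_by_default = True
# remember to restart the scheduler afterwards so the change takes effect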

Conclusion?

This issue showed us that the development environment is a no man’s land. The change management process exists, but without an approval process to support it. The lack of an approval process led us to a 4-hour outage and 2 teams unable to work.

I hope you enjoy it!

And please be responsible with your environments!

Apache Airflow Rest API

Hello everyone,

Today I’ll talk about a nice Apache Airflow feature: its REST API.

I frequently have customers asking about Apache Airflow’s integration with their own applications. “How can I execute a job from my application?” or “how can I get my job status in my dashboard?” are good examples of the questions I receive the most.

I’ll use the following question from a customer to show this great feature in Apache Airflow:

“I would like to call one specific job orchestrated in the Apache Airflow environment from my application. Is it possible?”

Quick answer: “Yes, all you need to do is trigger the Airflow DAG using the REST API…”

Details:

The simplest way to show how to achieve this is by using curl to call my Apache Airflow environment. I have one DAG that runs a bash operator. Quick example:

curl -X POST \
  http://localhost:8080/api/experimental/dags/my_bash_operator/dag_runs \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{"conf":"{\"key\":\"value\"}"}'

The curl execution returns the execution date ID; you can use this ID to get the execution status, like this:

curl -X GET http://localhost:8080/api/experimental/dags/my_bash_operator/dag_runs/2020-04-05T00:26:35

{"state":"running"}

This command can also return other states, such as {"state":"failed"} or {"state":"success"}.
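If your application needs to wait for the job to finish, a simple option is to keep polling that status endpoint. Below is a minimal sketch, assuming the same DAG as above; the execution date is illustrative and should be taken from the POST response:

EXEC_DATE="2020-04-05T00:26:35"   # illustrative; use the execution date returned by the POST
while true; do
  STATE=$(curl -s http://localhost:8080/api/experimental/dags/my_bash_operator/dag_runs/${EXEC_DATE})
  echo "`date` - ${STATE}"
  echo "${STATE}" | grep -q '"state":"running"' || break
  sleep 30
done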

I hope you enjoy it!

Orphan ASM File Cleanup Script

Hi all,

So I was asked by a client to perform a check on ASM for orphan files, as they frequently create/drop databases in this environment, it being a development env.

Also, lots of databases share the same data diskgroup, so I had to work this out for all databases, including possibly nonexistent ones.

Some basic approaches I raised:

1) Locating uncataloged files in ASM per database.
– Source: https://oraganism.wordpress.com/2012/09/09/orphaned-files-in-asm/
– This approach assumes that files in ASM not cataloged to any database are the orphaned ones, which is a fair assumption.
– But I understand that files can be cataloged and unmounted, which would break this approach.

2) Listing files in ASM but not in database (v$datafile, v$datafile_copy, v$controlfile, v$tempfile, v$logfile) by database.
– Source: https://oracledba.blogspot.com/2018/11/orphaned-files-in-asm.html
– This seems a fair assumption. It would need to be run from each database.
– There is no guarantee that this works properly.
– It is also not clear whether PDB files are included.
– There is another similar one: https://dbaliveblog.wordpress.com/asm-orphaned-file-identification-script/
– Also this one: https://anjo.pt/wp/keyword-oracle/2013/02/26/find-orphan-asm-files/

3) MOS: Query That Can Be Used to Find Orphaned Datafiles on a 12c ASM Instance (Doc ID 2228573.1)
– From MOS, seems the most recommended approach.
– Attention point: PDB$SEED may not be shown as per: PDB$SEED Datafiles Not Appear In CDB_DATA_FILES (Doc ID 1940806.1)
— On 12.1.0.2, it is recommended to use “EXCLUDE_SEED_CDB_VIEW”. Still to be checked whether it can be set at session level.

I downloaded and ran the script referenced in the MOS note Script to report the list of files stored in ASM and CURRENTLY NOT OPENED (Doc ID 552082.1) on the environment.
But the results didn’t look correct.

After a while, I ended up building my own script based on all the mentioned approaches, and it worked very well.

After approval, I dropped all the listed files, freeing several TBs of space with no database affected. So I’d assume it is correct and would really recommend it to you.

So here is what I used:

SQL to Check ASM Space per Database:

set pages 350 timing on
col gname form a10
col dbname form a10
col file_type form a16
break on gname skip 2 on dbname skip 1
compute sum label total_db of gb on dbname
compute sum label total_diskg of gb on gname  
SELECT
    gname,
    dbname,
    file_type,
    round(SUM(space)/1024/1024) mb,
    round(SUM(space)/1024/1024/1024) gb,
    COUNT(*) "#FILES"
FROM
    (
        SELECT
            gname,
            regexp_substr(full_alias_path, '[[:alnum:]_]*',1,4) dbname,
            file_type,
            space,
            aname,
            system_created,
            alias_directory
        FROM
            (
                SELECT
                    concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path,
                    system_created,
                    alias_directory,
                    file_type,
                    space,
                    level,
                    gname,
                    aname
                FROM
                    (
                        SELECT
                            b.name            gname,
                            a.parent_index    pindex,
                            a.name            aname,
                            a.reference_index rindex ,
                            a.system_created,
                            a.alias_directory,
                            c.type file_type,
                            c.space
                        FROM
                            v$asm_alias a,
                            v$asm_diskgroup b,
                            v$asm_file c
                        WHERE
                            a.group_number = b.group_number
                        AND a.group_number = c.group_number(+)
                        AND a.file_number = c.file_number(+)
                        AND a.file_incarnation = c.incarnation(+) ) START WITH (mod(pindex, power(2, 24))) = 0
                AND rindex IN
                    (
                        SELECT
                            a.reference_index
                        FROM
                            v$asm_alias a,
                            v$asm_diskgroup b
                        WHERE
                            a.group_number = b.group_number
                        AND (
                                mod(a.parent_index, power(2, 24))) = 0
                    ) CONNECT BY prior rindex = pindex )
        WHERE
            NOT file_type IS NULL
            and system_created = 'Y' )
GROUP BY
    gname,
    dbname,
    file_type
ORDER BY
    gname,
    dbname,
    file_type
/

Expected Output:

SQL> @asm_sizebydb

GNAME	   DBNAME     FILE_TYPE 	       MB	  GB	 #FILES
---------- ---------- ---------------- ---------- ---------- ----------
DATAC1	   DATABSE1   CONTROLFILE	     2316	   2	      1
		      DATAFILE		  7620756	7442	     49
		      DATAGUARDCONFIG	       16	   0	      2
		      ONLINELOG 	    82536	  81	     14
		      PARAMETERFILE		8	   0	      1
	   **********				  ----------
	   total_db					7525

	   DATABSE2   CONTROLFILE	      492	   0	      1
		      DATAFILE		  3081604	3009	     47
		      ONLINELOG 	      416	   0	      4
		      PARAMETERFILE	       16	   0	      2
		      PASSWORD			0	   0	      2
		      TEMPFILE		    83372	  81	      3
	   **********				  ----------
	   total_db					3090

	   DATABSE3   CONTROLFILE	      588	   1	      1
		      DATAFILE		  1430712	1397	      8
		      DATAGUARDCONFIG	       16	   0	      2
		      ONLINELOG 	   147816	 144	     18
		      PARAMETERFILE		8	   0	      1
	   **********				  ----------
	   total_db					1542
[...]

**********					  ----------
total_disk					       76868

SQL To list Orphan files per Database:

SET VERIFY OFF
SET LINESIZE 200
SET SERVEROUTPUT ON
SET PAGESIZE 50000

DECLARE
   cmd   CLOB;
BEGIN
   FOR c IN (SELECT name Diskgroup
               FROM V$ASM_DISKGROUP)
   LOOP
      FOR l
         IN (SELECT 'rm ' || files files
               FROM
                    (SELECT '+' || c.Diskgroup || files files, TYPE
                       FROM (    SELECT UPPER
                                        (
                                           SYS_CONNECT_BY_PATH (aa.name, '/')
                                        )
                                           files
                                      , aa.reference_index
                                      , b.TYPE
                                   FROM (SELECT file_number
                                              , alias_directory
                                              , name
                                              , reference_index
                                              , parent_index
                                           FROM v$asm_alias) aa
                                      , (SELECT parent_index
                                           FROM (SELECT parent_index
                                                   FROM v$asm_alias
                                                  WHERE     group_number =
                                                               (SELECT group_number
                                                                  FROM v$asm_diskgroup
                                                                 WHERE name =
                                                                          c.Diskgroup)
                                                        AND alias_index = 0)) a
                                      , (SELECT file_number, TYPE
                                           FROM (SELECT file_number, TYPE
                                                   FROM v$asm_file
                                                  WHERE group_number =
                                                           (SELECT group_number
                                                              FROM v$asm_diskgroup
                                                             WHERE name =
                                                                      c.Diskgroup)))
                                        b
                                  WHERE     aa.file_number = b.file_number(+)
                                        AND aa.alias_directory = 'N'
                                        AND b.TYPE IN
                                               ('DATAFILE'
                                              , 'ONLINELOG'
                                              , 'CONTROLFILE'
                                              , 'TEMPFILE')
                             START WITH aa.PARENT_INDEX = a.parent_index
                             CONNECT BY PRIOR aa.reference_index =
                                           aa.parent_index)
                      WHERE SUBSTR
                            (
                               files
                             , INSTR (files, '/', 1, 1)
                             ,   INSTR (files, '/', 1, 2)
                               - INSTR (files, '/', 1, 1)
                               + 1
                            ) =
                               (SELECT dbname
                                  FROM (SELECT    '/'
                                               || UPPER (db_unique_name)
                                               || '/'
                                                  dbname
                                          FROM v$database))
                     MINUS
                     (SELECT UPPER (name) files, 'DATAFILE' TYPE
                        FROM v$datafile
                      UNION ALL
                      SELECT UPPER (name) files, 'TEMPFILE' TYPE
                        FROM v$tempfile
                      UNION ALL
                      SELECT UPPER (name) files, 'CONTROLFILE' TYPE
                        FROM v$controlfile
                       WHERE name LIKE '+' || c.Diskgroup || '%'
                      UNION ALL
                      SELECT UPPER (name), 'CONTROLFILE' TYPE
                        FROM v$datafile_copy
                       WHERE deleted = 'NO'
                      UNION ALL
                      SELECT UPPER (MEMBER) files, 'ONLINELOG' TYPE
                        FROM v$logfile
                       WHERE MEMBER LIKE '+' || c.Diskgroup || '%')))
      LOOP
         DBMS_OUTPUT.put_line (l.files);
      END LOOP;
   END LOOP;
END;
/

Expected Output:

rm +DATA/XPTODB/CONTROLFILE/CURRENT.4928.955985765
rm +DATA/XPTODB/CONTROLFILE/CURRENT.4934.955986589
rm +DATA/XPTODB/CONTROLFILE/CURRENT.4962.955998825
rm +DATA/XPTODB/CONTROLFILE/CURRENT.5063.956480113
rm +DATA/XPTODB/CONTROLFILE/CURRENT.6374.955984145
rm +DATA/XPTODB/CONTROLFILE/CURRENT.7547.955968953
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4936.955985803
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4966.955998847
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7540.955968995
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7574.955984177
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4937.955985803
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4967.955998847
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7542.955968995
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7558.955984177
rm +DATA/XPTODB/DATAFILE/SYSAUX.4935.955986599
rm +DATA/XPTODB/DATAFILE/SYSAUX.4963.955998847
rm +DATA/XPTODB/DATAFILE/SYSAUX.6286.955984161
rm +DATA/XPTODB/DATAFILE/SYSAUX.7544.955968963
rm +DATA/XPTODB/DATAFILE/SYSTEM.4930.955986599
rm +DATA/XPTODB/DATAFILE/SYSTEM.4964.955998847
rm +DATA/XPTODB/DATAFILE/SYSTEM.7536.955968965

To run this for all databases on the server (RAC databases):

export ORAENV_ASK=NO
for DBSID in `ps -ef | grep ora_pmon | grep -v grep | awk -F_ '{ print $3}'`
do
  echo "######" ${DBSID}
  export ORACLE_SID=${DBSID}
  . oraenv
  # feed the SQL script to sqlplus via heredoc so it runs non-interactively
  sqlplus -s / as sysdba <<EOF
@script.sql
exit
EOF
done
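Once the generated list is reviewed and approved, one hedged way to execute it is to spool the PL/SQL output to a file and replay each line through asmcmd as the Grid Infrastructure owner. The file name and ASM instance name below are illustrative:

export ORACLE_SID=+ASM1      # illustrative ASM instance name
. oraenv
while read RMCMD; do
  asmcmd ${RMCMD}            # each line is already in the form: rm +DISKGROUP/DB/TYPE/FILE
done < /tmp/orphan_rm_list.txt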

Hope it helps you!

Issues with Email attachments for Crontab Jobs

Hello ALL,

In the last few days I faced a strange issue with a shell script running as a crontab job.

I had a shell script working properly when executed manually; its function was simply to send an email with embedded attachments. However, when scheduled in crontab, it did not send the attachments (I received the email but with BLANK attachments, even though the code ran fine and sent the attachments when executed manually).

It made my brain burn for some time; then I found a lot of people facing the same issue with scripts in crontab.

It seems to be related to how crontab handles the environment of sh/bash scripts.

Here are the workarounds I found and deployed in my script. Now, the email attachments are being included in emails when running as crontab jobs.

  • Include: ‘#!/bin/sh’ or ‘#!/bin/bash’ in the first line of your shell script.
  • Use the full path of the commands in your statements. E.g.: are you using ‘mailx’? Then execute ‘whereis mailx’ and rewrite every statement that runs mailx to something like ‘/usr/bin/mailx -s “SUBJECT” myemail@domain.com’.
  • Source your sh/bash profile before the email statement. This was the step that solved my error: cron runs jobs with a very limited environment, so sourcing the profile restores PATH and the other variables the script relies on.
    • ‘. ~/.bash_profile’

Here is my code:

#!/bin/sh
################################################################################
# Email alerting to
################################################################################
EMAILTO="youremail@company.com"      # The email to send the alert to
SUBJECT="My shell script in crontab" # email subject

. ~/.bash_profile

sqlplus -s /nolog << EOF > /dev/null 2>&1
connect / as sysdba
spool /tmp/temp_out.txt
select * from dual;
spool off
EOF

cat /tmp/temp_out.txt | grep "no rows selected" > /dev/null
if [ $? -eq 0 ]; then
  echo "Nothing to do"
  /usr/bin/echo "Nothing to do" | /usr/bin/mail -s "${SUBJECT}" ${EMAILTO}
else
  /usr/bin/cat /tmp/temp_out.txt | /usr/bin/mail -s "${SUBJECT}" ${EMAILTO}
fi
rm /tmp/temp_out.txt
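For completeness, this is roughly how the job ends up scheduled in crontab; the schedule and path are illustrative:

# run the script every day at 07:00 and keep a log of the execution
0 7 * * * /home/oracle/scripts/email_report.sh >> /tmp/email_report.log 2>&1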

 

Checking Basic Licensing Info on a DB Server

Hi all!
So, I got a new client and started checking on their licensing and hardware. Then I realized how “non-standard” this is, and that Oracle should probably provide a better way to do it. So I decided to share a few things:

# Checking Oracle Version Installed:

[oracle@greporaSRV inventory]$ cd /opt/oraInventory/logs
[oracle@greporaSRV logs]$  grep "\- Database edition" installActions*.log
INFO: - Database edition : Standard Edition One (Create and configure a database)

# Checking number of Sockets

[root@greporaSRV ~]# cat /proc/cpuinfo | grep "physical id" | sort -u | wc -l
1

# Checking number of CPU Cores per Socket

[root@greporaSRV ~]# lscpu | grep 'socket'
Core(s) per socket:    4
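If you want a single number, here is a quick (hedged) sketch combining both values to estimate the total of physical cores:

SOCKETS=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)
CORES_PER_SOCKET=$(lscpu | awk '/^Core\(s\) per socket/ {print $NF}')
echo "Physical cores: $((SOCKETS * CORES_PER_SOCKET))"

Keep in mind this only reflects the cores visible to the OS; for licensing purposes, always confirm against your contract and hardware documentation.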

Parallel file transfer on Linux

Hi all,

I had a request to copy a ton of files from one file system to another. I know there are tools that can help with that, like rsync, but due to some requirements and me wanting to do some scripting, I put something together to help with this request. This is not the 1st time I’ve done something like this, but it is the 1st time I share it 🙂

What I’m sharing is not exactly what I did for the request I mentioned, but you will get the idea.

The script will copy a list of files from one server to another. I usually create this list using find, like this:

find /Directory_which_I_want_to_copy -type f > file_list.txt

The script will receive some parameters as listed below

parallel_xfer.ksh <file_list> <destination_dir> <source_server> <parallel_sessions>

Also, a requirement for this to work is that you can ssh to the remote server without a password.

It will keep X parallel sessions running at all times while there are still files left to start copying. After all copies are started, it will monitor them until completion. Also, the script assumes that the source and target directories are the same, but this is easily changed if needed.

The logging needs to be improved, but it will show the files it started as well as the running process count.

Hope it helps

Elisson

#!/bin/ksh
# Usage: parallel_xfer.ksh <file_list> <destination_dir> <source_server> <parallel_sessions>
DBLIST=${1}        # file (inside DEST_DIR) listing the files to be transferred
DEST_DIR=${2}      # destination directory (assumed equal to the source path)
SERVER=${3}        # remote server to pull the files from
NUM_SESS=${4}      # number of concurrent sftp sessions to keep running
STARTED_COUNT=0
RUN_COUNT=0

trim() {
    local var=$@
    var="${var#"${var%%[![:space:]]*}"}"   # remove leading whitespace characters
    var="${var%"${var##*[![:space:]]}"}"   # remove trailing whitespace characters
    echo "$var"
}


FILE_COUNT="$(cat ${DEST_DIR}/$DBLIST | wc -l)"
cd ${DEST_DIR}
for FILE in $(cat $DBLIST)
do
 STARTED_COUNT=$((${STARTED_COUNT}+1))
 if [ ${RUN_COUNT} -le ${NUM_SESS} ]
 then
   sftp -Cq USER@${SERVER}:${FILE} ${DEST_DIR}/. >/dev/null 2>/dev/null &
   echo "`date` - Transferring file ${FILE} to ${DEST_DIR} - ${STARTED_COUNT}/$(trim ${FILE_COUNT})"
   sleep 5
 fi
 echo "\n"

 RUN_COUNT=$(ps -ef | grep " ${$} " | grep sftp | grep -v grep | wc -l)

 while [ ${RUN_COUNT} -ge ${NUM_SESS} ]
 do
  RUN_COUNT=$(ps -ef | grep " ${$} " | grep sftp | grep -v grep | wc -l)
  echo "`date` - $(trim ${RUN_COUNT}) transfer processes running"
  echo "`date` - Amount of GB transferred `du -sg ${DEST_DIR}`\n"
  sleep 60
 done
done

while [ $(ps -ef | grep " ${$} " | grep sftp | grep -v grep | wc -l) -gt 0  ]
do
 RUN_COUNT=$(ps -ef | grep " ${$} " | grep sftp | grep -v grep | wc -l)
 echo "`date` - $(trim ${RUN_COUNT}) transfer processes running"
 echo "`date` - Amount of GB transferred - `du -sg ${DEST_DIR}`\n"
 sleep 60
done
echo "`date` - Transfered completed"