What is Oracle APEX Quick SQL?

If you’ve browsed Oracle APEX’s features, you must have seen Quick SQL in the SQL Workshop. But have you ever used it? Do you know the shortcut? If not, have a look at this post to better understand this amazing tool.

What is Quick SQL?

According to the Oracle Documentation:

Quickly develop a script for simple tables and views.

Quick SQL provides a quick way to generate the SQL required to create a relational data model from an indented text document.

Quick SQL is designed to reduce the time and effort required to create SQL tables, triggers, and index structures. This tool is not designed to be a replacement for data modeling. Instead, Quick SQL is simply a quick way to develop a script for simple tables and views. Once you have generated the SQL, you can then tweak and expand it.

From this quote, you can see that it is not designed to replace traditional SQL development; it is just a tool to help speed up the development of simple routines that sometimes take longer than they should.

Where can I test it?

If you have Oracle APEX, you can open it in SQL Workshop -> Utilities -> Quick SQL.

If you don’t have it, then you should go get it, it’s free 🙂. But if you’re really in a hurry, then try https://apex.oracle.com/quicksql

Basic Syntax

As you’ll see, Quick SQL is easy to use: you just add some text, indentation, and shorthands for the things you want. Take a look at the example below.
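
To give you an idea, here is a minimal sketch of the kind of input and output involved (the column names are mine, and the exact DDL varies with your APEX version and Quick SQL settings):

-- Quick SQL input (typed on the left side):
-- employees
--     name vc50 /nn
--     date_of_birth
--     salary num

-- Generated script (shown on the right side), approximately:
create table employees (
    id             number generated by default on null as identity
                   constraint employees_id_pk primary key,
    name           varchar2(50 char) not null,
    date_of_birth  date,
    salary         number
);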

On the left side is what I wrote and on the right side is the actual script being generated. Can you see from this simple example how much writing Quick SQL did for me? I only wrote the table name, then indented to write the columns with some shorthands for things like data types and not null. And when I wrote “date_of_birth” I didn’t have to write anything: the data type was inferred from the name. And the primary key came as a bonus.

By pressing Enter or “Generate SQL”, the right side keeps updating, so I knew what was happening the whole time.

Let’s add a bit more

OK, that may have been a bit too easy. Let’s add:

  • check constraints
  • a child table
  • audit columns (auditing in the trigger)

These simple additions made it difficult to fit the right-side content next to the input, so I’m adding it below.
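
A sketch of the input (directive names as listed in the Quick SQL Help; double-check them for your version) and part of what it generates:

-- Quick SQL input:
--
-- # auditcols: true
-- departments
--     name vc50 /nn
--     status vc10 /check ACTIVE, INACTIVE
--     employees
--         name vc50 /nn
--         date_of_birth

-- Part of the generated script, approximately:
create table departments (
    id          number generated by default on null as identity
                constraint departments_id_pk primary key,
    name        varchar2(50 char) not null,
    status      varchar2(10 char)
                constraint departments_status_ck
                check (status in ('ACTIVE','INACTIVE')),
    created     date not null,
    created_by  varchar2(255 char) not null,
    updated     date not null,
    updated_by  varchar2(255 char) not null
);
-- ...plus an employees child table with a department_id foreign key
-- referencing departments, and a trigger per table filling the audit columns.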

Where can I learn more?

The good news is that Oracle provides a lot of content to study.

The bad news is that, as far as I know, only Oracle has content about it.

Quick SQL Documentation: https://docs.oracle.com/en/database/oracle/application-express/20.2//aeutl/using-quick-sql.html#GUID-21EE36C2-F814-48C0-90EA-7D464E9014FD

Shorthands and data types: I don’t have a link for that, I have a path. With Quick SQL open, click on Help and you’ll get all the data types, table directives, and column directives that you can use (check the image below).

One final tip

Take a look at the sample files, you won’t believe how much you’ll learn from them.

That’s it, happy studying and good luck!

The power of Dynamic Actions in APEX

In this post I would like to talk about a few configuration items in Dynamic Actions that could reduce your need to code basic interactions even further.

Selection Type: jQuery Selector and JavaScript Expression

  • Do you have multiple elements that need the same action?
  • Are your page’s elements not accessible by the items, buttons, regions or columns selection type?
  • Does your page region contain plain HTML that you want to use with Dynamic Actions?

If you said yes, then I highly suggest you take a look below:

jQuery Selector:

The easiest of the two. You just need to add the selector that will capture the element(s) you need.

Don’t know how to use a selector? Here are a few tips:

  • If your APEX item has a static ID with the value element-x, you can write #element-x (yes, hash included)
  • If you want to create a global action that will do or check something, you can write just body

Selecting elements is a powerful technique; to understand it better, I recommend reading https://learn.jquery.com/using-jquery-core/selecting-elements/.

JavaScript Expression:

Alright, let’s say that your search criteria are so complex that selectors won’t do it, or maybe you have to validate something in the elements beforehand. If that is the case, you can fall back to the JavaScript Expression, which allows you to grab DOM elements, apply your logic, and return the processed elements to be selected (for example: collect candidate nodes, filter them by a business rule, and return the resulting array).


Event Scope

The event scope is an awesome tool that allows us to do dynamic binding, which, to the best of my knowledge, removes the need to add events directly into the HTML code, because the binding keeps working even after pagination, refresh, newly added items, etc.

I am 99% sure that this comes from jQuery’s ‘on’ with a delegated selector. Anyway, here are some tips:

  • By selecting it, the Static Container (jQuery Selector) field becomes available. Please make sure it points to an HTML element that DOES NOT get updated via AJAX. Usually you are safe with regions (give them a static ID) and, as a last resort, the <main> or <body> tags. But always try the region first.
  • IMPORTANT: in the ‘when’ section, if you are using a jQuery Selector as well, make sure it does not include the container itself.

Client-side vs Server-side Conditions

I’ve seen this confusion with many clients: they are not the same! The only thing they have in common is the name.

Client-side Condition: the equivalent of adding an IF clause to your Dynamic Action, which allows you to better leverage the low-code side of APEX. When adding a condition, you can route the result to the TRUE or FALSE actions.

Server-side Condition: the server-side condition does the same thing for DAs that it does for other components: if it evaluates to true, the server renders the component and sends it to the user’s machine; otherwise it won’t. There is no TRUE or FALSE branch here, and the actions won’t work because the Dynamic Action doesn’t exist to be bound to the element.

Conclusion

As you can see, with APEX you probably don’t need to add external JavaScript or bind events directly on HTML tags, which helps keep your low-code application clean of mundane tasks.

Oracle APEX List of Values and Popup Lov

Most people don’t know it, but Oracle APEX List of Values (LoV) are very flexible components that allow us to perform a series of actions using a centralized and reusable data source.

In this post, I’ll be briefly talking about some of the amazing functionalities this tool provides us.

Multi column view

After creating the LoV, you can scroll down to Additional Columns and add any of the columns available in your original select. These columns will become available in the Popup LoV when the user clicks it to find an item.
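
For illustration, a hypothetical LoV source query (table and column names are made up): the first two columns feed Display Value and Return Value, and the remaining ones are what you can expose under Additional Columns:

select ename  as display_value,
       empno  as return_value,
       job,    -- extra column: can be made visible/searchable in the Popup LoV
       deptno  -- extra column: can also be used to fill other page items
  from emp
 order by ename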

The nice thing about this feature is that you can toggle whether each column is visible or searchable, and even add a format mask.

And don’t worry: if you use the same LoV for Select List items, it will still get its configuration from the Display / Return Value fields.

Subscription

If you have one or just a few apps in your workspace, you have probably never used a subscription. But this is a great way to have a “Master List App” that all other apps listen to for updates.

The interesting thing to note is that the subscription needs to be refreshed by the publisher or the subscriber; it doesn’t happen automatically. For this reason, it allows you to test changes before letting the code propagate to the other applications.

JavaScript Initialization

For those who know JavaScript, you can modify the initialization object and return it with a new configuration.

Although Oracle APEX has dropped jQuery UI, it still uses the same OPTIONS object, which you can learn more about here: https://api.jqueryui.com/dialog/

Additional Outputs

Going back to my first tip, where I mentioned multiple columns: in addition to displaying them, you can set the value of multiple page items using those columns. The interface here is a bit too compact, but the pattern is column name : page item, another column : another page item (for instance, something like JOB:P2_JOB,DEPTNO:P2_DEPTNO, with hypothetical item names).

Conclusion

As you can see, the List of Values and Popup LoV are two powerful features that allow us to develop quickly without having to worry too much about standard list components (and some not-so-standard ones as well). I could probably write much more on the subject, but that will probably stay for another post 😉

Moving APEX Applications Repository

Hello,
Most likely you landed here because you need to migrate APEX applications/workspaces from one database to another, correct? You are in the right place!

We’ll use the APEXExport utility to this end.

Here you have a quick summary of the steps to use the tool, assuming:

  • The source APEX instance is at least 4.2.4.
  • The target instance must be 4.2.4 or higher.

Also, be aware that the APEX installation itself (the APEX and FLOWS_FILES schemas) cannot be exported in this manner, or in any other manner.
So APEX itself must already exist on the target; what we’ll do is migrate the workspaces from one installation to another.

To Export:

1. Use the database export utilities (Data Pump or Legacy Export; be aware of the limitations of each) to generate a dumpfile with all the DB objects and data that your APEX applications need to run.
This will normally be the objects in the schemas that your APEX workspaces depend upon.
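
For example, with Data Pump (schema, directory, and file names here are placeholders):

expdp system schemas=MY_APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=apex_app_schema.dmp logfile=apex_app_schema.log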

2. Run the APEXExport twice as follows:

2.1 First, run it using “-expWorkspace” to export all workspaces (this will generate a w*.sql script for each workspace):

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

2.2 Now run it using “-instance”, which will generate an f*.sql script for every application and shared component:

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Note that the workspace export should export all of the shared components from the workspaces.
Note that this does not mention RESTful services, but if you are using the APEXExport from 4.2.4 or higher, they will be included.
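
Side note: if the java calls above fail with java.lang.NoClassDefFoundError, it is usually a classpath problem; APEXExport is normally run from the apex/utilities directory of the installation media with the Oracle JDBC driver on the classpath. A sketch, with paths that are assumptions to adjust to your environment:

cd /path/to/apex/utilities
export CLASSPATH=.:${ORACLE_HOME}/jdbc/lib/ojdbc6.jar
java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace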

To Import:

1. Import the dumpfile generated for the regular database schemas your APEX applications use.
2. Import the workspaces via sqlplus as per:
2.1 connect sys / as sysdba
2.2 alter session set current_schema = APEX_040200;
2.3 run the scripts to create the workspaces

@<script_generated>.sql

This will create the workspaces with the same workspace IDs as the source DB.
This also prevents the need to modify the workspace ID contained in each of the application exports.

3. From the same session as above, accomplish the import of each of the application exports.

SQL> @<application_export_1>.sql
SQL> @<application_export_2>.sql
[...]
SQL> @<application_export_n>.sql

I hope it helps!

Automating APEX Applications Backup

Hi all,

As you might be aware (and as I already posted here), to export APEX applications, the APEXExport tool should be used.
Here we’ll also see some alternatives to accomplish that.

Plan A) Using the same approach discussed in the previous post for exporting all applications in an instance:

That said, for any backup script, 3 parts need to be backed up:

1. With the database export utilities (Data Pump or Legacy Export; be aware of the limitations of each), dump everything your APEX applications need to run:

expdp matheusdba schemas=MY_APP_SCHEMA directory=DIR_BKP dumpfile=APEX_APP_SCHEMA.dmp logfile=APEX_APP_SCHEMA.log

2. Run the APEXExport using “-expWorkspace” to export all workspaces (This will generate a w*.sql script for each workspace)

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -expWorkspace

3. Run the APEXExport using “-instance”, which will generate an f*.sql script for every application and shared component:

java oracle.apex.APEXExport -db localhost:1521:MYDB -user system -password systems_password -instance

Plan B) Take separate Backups for each Application or Workspace

Here is the interesting part: you can also take backups by application ID or workspace ID, with the following:

1. Exporting by ApplicationID:

APEXExport -db hostname:listenerport:dbservicename -user <user> -password <password> -applicationid 31500

2. Exporting by WorkspaceID:

APEXExport -db hostname:listenerport:dbservicename -user <user> -password <password> -workspaceid 9999

But how do you get the workspace IDs? You can use the following SQL statement in SQL Workshop:

select wwv_flow_api.get_security_group_id from dual;
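
Note that this returns the security group ID (workspace ID) of the workspace you are currently connected to, so you would run it once per workspace. If you have access to the APEX dictionary views, a hedged alternative to list them all at once (requires sufficient privileges):

select workspace, workspace_id
  from apex_workspaces
 order by workspace;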

Now you have all the basic exporting steps; it’s up to you to build the script that fits you best.

I hope it helps!

Additional notes:

  • The workspace export should export all of the shared components from the workspaces.
  • This does not mention RESTful services, but if you are using the APEXExport from 4.2.4 or higher, they will be included.

Additional Reference:

  • APEXExport Gives: Java.lang.NoClassDefFoundError: Oracle/apex/APEXExport (Doc ID 2265534.1)

Retrieve the SQL Server Version from a Backup File

Have you ever needed to retrieve the SQL Server version that was used to create a backup file?

Well, if for some weird reason that happens to you, the following SQL can help you. It won’t restore the database; it’ll just retrieve some basic info about it.

With that, you’ll have the DatabaseVersion (the internal version number) the backup came from. You can also grab some useful information like the Server Name, Creation Date, and more.

RESTORE HEADERONLY FROM DISK = N'b:\backup\data_backup.bak'

Below we have a table of versions x internal numbers.

Version           Internal Number Version   Compat. Level
SQL Server 2019   895 – 904                 150
SQL Server 2017   868 – 869                 140
SQL Server 2016   852                       130
SQL Server 2014   782                       120
SQL Server 2012   706                       110
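
And since a backup can only be restored on a server whose internal version is equal to or higher than the backup’s, it’s worth checking where your server stands. A quick sketch using the standard catalog views:

-- Internal database version and compatibility level of each database,
-- to compare against the DatabaseVersion from RESTORE HEADERONLY.
SELECT name,
       DATABASEPROPERTYEX(name, 'Version') AS internal_version,
       compatibility_level
FROM   sys.databases;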

I hope it helps!

Microsoft Ignite – Certification Voucher

Hey folks,

Do you enjoy learning new things? Between September 22 and September 24, Microsoft will be hosting their yearly Microsoft Ignite event. Of course, it’ll be online this time and free of charge :).

This year, you can earn a certification voucher, and that’s awesome. You’ll be able to choose one certification from a specific list. Check the list of available certifications and the Terms and Conditions by accessing the following link: https://docs.microsoft.com/pt-br/learn/certifications/microsoft-ignite-cloud-skills-challenge-2020-free-certification-exam.

Apache Airflow Schedule: The scheduler does not appear to be running. Last heartbeat was received % seconds ago.

Hello everyone,

Are you facing the same?

Well, after some tasks were opened to investigate an Apache Airflow test environment, I decided to check the Apache Airflow configuration files to try to find something that could cause this error. I noticed that every time the error happens, the Apache Airflow console shows a message like this:

The scheduler does not appear to be running. Last heartbeat was received 14 seconds ago.

The DAGs list may not update, and new tasks will not be scheduled.

In general, we see this message when the environment doesn’t have resources available to execute a DAG. But this case was different: CPU usage was at 2%, memory usage at 50%, no swap, no disk at 100% usage. I checked the DAG logs from the last hours and there were no errors. In the airflow.cfg file, I also checked the database connection parameters, task memory, and parallelism settings. Nothing wrong. Long story short: everything was fine!

I then searched for the message in the Apache Airflow Git repository and found a very similar bug: AIRFLOW-1156 BugFix: Unpausing a DAG with catchup=False creates an extra DAG run. In summary, it seems this situation happens when the parameter catchup_by_default is set to False in the airflow.cfg file.

This parameter tells Apache Airflow to ignore past execution dates and start the schedule from now. To confirm the case, I checked with change management whether there had been any change in this environment. To my surprise, that very parameter had been changed one month earlier.

I then changed the Apache Airflow configuration file, setting the parameter catchup_by_default back to True. The environment was released to the developer team to check that everything was alright. One week later, no issues have been reported.
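
For reference, the entry in airflow.cfg looks like this (in the versions I’ve worked with, it sits under the [scheduler] section; check yours):

[scheduler]
catchup_by_default = True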

Conclusion?

This issue showed us that the development environment is a no man’s land. The change management process exists alone, without an approval process to support it. That missing approval process led us to a 4-hour outage and 2 teams unable to work.

I hope you enjoy it!

And please be responsible on your environments!

Apache Airflow Rest API

Hello everyone,

Today I’ll talk about one way to use Apache Airflow: its REST API.

I frequently have customers asking about Apache Airflow’s integration with their own applications. “How can I execute a job from my application?” or “how can I get my job status in my dashboard?” are good examples of the questions I receive the most.

I’ll use the following question from a customer to show this great feature in Apache Airflow:

“I would like to call one specific job orchestrated in my Apache Airflow environment from my application, is it possible?”

Quick answer: “Yes, all you need to do is call the Airflow DAG using the REST API...”

Details:

The simplest way to show how to achieve this is by using curl to call my Apache Airflow environment. I have a DAG that runs a bash operator. Quick example:

curl -X POST \
  http://localhost:8080/api/experimental/dags/my_bash_operator/dag_runs \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{"conf":"{\"key\":\"value\"}"}'

The curl execution returns the execution date ID; you can use this ID to get the execution status, like this:

curl -X GET http://localhost:8080/api/experimental/dags/my_bash_operator/dag_runs/2020-04-05T00:26:35

{"state":"running"}

This command can also return other statuses, such as {"state":"failed"} or {"state":"success"}.

I hope you enjoy it!

Orphan ASM File Cleanup Script

Hi all,

So I was asked by a client to run a check on ASM for orphan files, as they frequently create/drop databases on this environment, it being a development env.

Also, lots of databases share the same data diskgroup, so I had to work this out for all databases, including databases that possibly no longer exist.

Some basic approaches I raised:

1) Locating uncataloged files in ASM, per database.
– Source: https://oraganism.wordpress.com/2012/09/09/orphaned-files-in-asm/
– This approach assumes that files in ASM not cataloged in any database are the orphaned ones, which is a fair assumption.
– But I understand files can be cataloged yet unmounted, which would break this approach.

2) Listing files in ASM but not in the database (v$datafile, v$datafile_copy, v$controlfile, v$tempfile, v$logfile), per database.
– Source: https://oracledba.blogspot.com/2018/11/orphaned-files-in-asm.html
– This seems a fair assumption, but it would need to be run from each database.
– There is no guarantee it works properly in all cases.
– It is also not clear whether PDB files are included.
– There is another similar one: https://dbaliveblog.wordpress.com/asm-orphaned-file-identification-script/
– Also this one: https://anjo.pt/wp/keyword-oracle/2013/02/26/find-orphan-asm-files/

3) MOS: Query That Can Be Used to Find Orphaned Datafiles on a 12c ASM Instance (Doc ID 2228573.1)
– From MOS, it seems the most recommended approach.
– Attention point: PDB$SEED may not be shown, as per PDB$SEED Datafiles Not Appear In CDB_DATA_FILES (Doc ID 1940806.1).
— On 12.1.0.2, it is recommended to use “EXCLUDE_SEED_CDB_VIEW”; to be checked whether it can be set at session level.

I downloaded and ran the script from the MOS note Script to report the list of files stored in ASM and CURRENTLY NOT OPENED (Doc ID 552082.1) on the environment, but the results didn’t look correct.

After a while, I ended up building my own script based on all the mentioned approaches, and it worked very well.

Once it was approved, I dropped all the listed files, freeing several TBs of space, with no database affected. So I’d assume it is correct and would really recommend it to you.

So, here is what I used:

SQL to Check ASM Space per Database:

set pages 350 timing on
col gname form a10
col dbname form a10
col file_type form a16
break on gname skip 2 on dbname skip 1
compute sum label total_db of gb on dbname
compute sum label total_diskg of gb on gname  
SELECT
    gname,
    dbname,
    file_type,
    round(SUM(space)/1024/1024) mb,
    round(SUM(space)/1024/1024/1024) gb,
    COUNT(*) "#FILES"
FROM
    (
        SELECT
            gname,
            regexp_substr(full_alias_path, '[[:alnum:]_]*',1,4) dbname,
            file_type,
            space,
            aname,
            system_created,
            alias_directory
        FROM
            (
                SELECT
                    concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path,
                    system_created,
                    alias_directory,
                    file_type,
                    space,
                    level,
                    gname,
                    aname
                FROM
                    (
                        SELECT
                            b.name            gname,
                            a.parent_index    pindex,
                            a.name            aname,
                            a.reference_index rindex ,
                            a.system_created,
                            a.alias_directory,
                            c.type file_type,
                            c.space
                        FROM
                            v$asm_alias a,
                            v$asm_diskgroup b,
                            v$asm_file c
                        WHERE
                            a.group_number = b.group_number
                        AND a.group_number = c.group_number(+)
                        AND a.file_number = c.file_number(+)
                        AND a.file_incarnation = c.incarnation(+) ) START WITH (mod(pindex, power(2, 24))) = 0
                AND rindex IN
                    (
                        SELECT
                            a.reference_index
                        FROM
                            v$asm_alias a,
                            v$asm_diskgroup b
                        WHERE
                            a.group_number = b.group_number
                        AND (
                                mod(a.parent_index, power(2, 24))) = 0
                    ) CONNECT BY prior rindex = pindex )
        WHERE
            NOT file_type IS NULL
            and system_created = 'Y' )
GROUP BY
    gname,
    dbname,
    file_type
ORDER BY
    gname,
    dbname,
    file_type
/

Expected Output:

SQL> @asm_sizebydb

GNAME	   DBNAME     FILE_TYPE 	       MB	  GB	 #FILES
---------- ---------- ---------------- ---------- ---------- ----------
DATAC1	   DATABSE1   CONTROLFILE	     2316	   2	      1
		      DATAFILE		  7620756	7442	     49
		      DATAGUARDCONFIG	       16	   0	      2
		      ONLINELOG 	    82536	  81	     14
		      PARAMETERFILE		8	   0	      1
	   **********				  ----------
	   total_db					7525

	   DATABSE2   CONTROLFILE	      492	   0	      1
		      DATAFILE		  3081604	3009	     47
		      ONLINELOG 	      416	   0	      4
		      PARAMETERFILE	       16	   0	      2
		      PASSWORD			0	   0	      2
		      TEMPFILE		    83372	  81	      3
	   **********				  ----------
	   total_db					3090

	   DATABSE3   CONTROLFILE	      588	   1	      1
		      DATAFILE		  1430712	1397	      8
		      DATAGUARDCONFIG	       16	   0	      2
		      ONLINELOG 	   147816	 144	     18
		      PARAMETERFILE		8	   0	      1
	   **********				  ----------
	   total_db					1542
[...]

**********					  ----------
total_disk					       76868

SQL To list Orphan files per Database:

SET VERIFY OFF
SET LINESIZE 200
SET SERVEROUTPUT ON
SET PAGESIZE 50000

DECLARE
   cmd   CLOB;
BEGIN
   FOR c IN (SELECT name Diskgroup
               FROM V$ASM_DISKGROUP)
   LOOP
      FOR l
         IN (SELECT 'rm ' || files files
               FROM
                    (SELECT '+' || c.Diskgroup || files files, TYPE
                       FROM (    SELECT UPPER
                                        (
                                           SYS_CONNECT_BY_PATH (aa.name, '/')
                                        )
                                           files
                                      , aa.reference_index
                                      , b.TYPE
                                   FROM (SELECT file_number
                                              , alias_directory
                                              , name
                                              , reference_index
                                              , parent_index
                                           FROM v$asm_alias) aa
                                      , (SELECT parent_index
                                           FROM (SELECT parent_index
                                                   FROM v$asm_alias
                                                  WHERE     group_number =
                                                               (SELECT group_number
                                                                  FROM v$asm_diskgroup
                                                                 WHERE name =
                                                                          c.Diskgroup)
                                                        AND alias_index = 0)) a
                                      , (SELECT file_number, TYPE
                                           FROM (SELECT file_number, TYPE
                                                   FROM v$asm_file
                                                  WHERE group_number =
                                                           (SELECT group_number
                                                              FROM v$asm_diskgroup
                                                             WHERE name =
                                                                      c.Diskgroup)))
                                        b
                                  WHERE     aa.file_number = b.file_number(+)
                                        AND aa.alias_directory = 'N'
                                        AND b.TYPE IN
                                               ('DATAFILE'
                                              , 'ONLINELOG'
                                              , 'CONTROLFILE'
                                              , 'TEMPFILE')
                             START WITH aa.PARENT_INDEX = a.parent_index
                             CONNECT BY PRIOR aa.reference_index =
                                           aa.parent_index)
                      WHERE SUBSTR
                            (
                               files
                             , INSTR (files, '/', 1, 1)
                             ,   INSTR (files, '/', 1, 2)
                               - INSTR (files, '/', 1, 1)
                               + 1
                            ) =
                               (SELECT dbname
                                  FROM (SELECT    '/'
                                               || UPPER (db_unique_name)
                                               || '/'
                                                  dbname
                                          FROM v$database))
                     MINUS
                     (SELECT UPPER (name) files, 'DATAFILE' TYPE
                        FROM v$datafile
                      UNION ALL
                      SELECT UPPER (name) files, 'TEMPFILE' TYPE
                        FROM v$tempfile
                      UNION ALL
                      SELECT UPPER (name) files, 'CONTROLFILE' TYPE
                        FROM v$controlfile
                       WHERE name LIKE '+' || c.Diskgroup || '%'
                      UNION ALL
                      SELECT UPPER (name), 'CONTROLFILE' TYPE
                        FROM v$datafile_copy
                       WHERE deleted = 'NO'
                      UNION ALL
                      SELECT UPPER (MEMBER) files, 'ONLINELOG' TYPE
                        FROM v$logfile
                       WHERE MEMBER LIKE '+' || c.Diskgroup || '%')))
      LOOP
         DBMS_OUTPUT.put_line (l.files);
      END LOOP;
   END LOOP;
END;
/

Expected Output:

rm +DATA/XPTODB/CONTROLFILE/CURRENT.4928.955985765
rm +DATA/XPTODB/CONTROLFILE/CURRENT.4934.955986589
rm +DATA/XPTODB/CONTROLFILE/CURRENT.4962.955998825
rm +DATA/XPTODB/CONTROLFILE/CURRENT.5063.956480113
rm +DATA/XPTODB/CONTROLFILE/CURRENT.6374.955984145
rm +DATA/XPTODB/CONTROLFILE/CURRENT.7547.955968953
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4936.955985803
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4966.955998847
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7540.955968995
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7574.955984177
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4937.955985803
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.4967.955998847
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7542.955968995
rm +DATA/XPTODB/DATAFILE/TBSEXEMPLE.7558.955984177
rm +DATA/XPTODB/DATAFILE/SYSAUX.4935.955986599
rm +DATA/XPTODB/DATAFILE/SYSAUX.4963.955998847
rm +DATA/XPTODB/DATAFILE/SYSAUX.6286.955984161
rm +DATA/XPTODB/DATAFILE/SYSAUX.7544.955968963
rm +DATA/XPTODB/DATAFILE/SYSTEM.4930.955986599
rm +DATA/XPTODB/DATAFILE/SYSTEM.4964.955998847
rm +DATA/XPTODB/DATAFILE/SYSTEM.7536.955968965

To run this for all databases on the server (RAC databases), assuming the PL/SQL block above is saved as script.sql:

export ORAENV_ASK=NO
for DBSID in `ps -ef | grep ora_pmon | grep -v grep | awk -F_ '{ print $3}'`
do
echo "######" ${DBSID}
export ORACLE_SID=${DBSID}
. oraenv
# Feed SQL*Plus from a heredoc so the loop runs unattended:
sqlplus -s / as sysdba <<EOF
@script.sql
exit
EOF
done

Hope it helps you!