Hi All!
Let’s take some time to review these?
They are usually very important when analysing query performance, especially in 10046 traces. I separated the ones that were included in 11g, to have them as a subset. Here it goes:
New 11g SQL transformations (CBQT, CNT, JE, JPPD)
The Oracle cost-based optimizer (CBO) is constantly being improved in its ability to rewrite and transform sub-optimal SQL queries. In general, the CBO works by replacing operations with equivalent but cheaper ones when possible, changing clauses or regrouping to take advantage of indexes, eliminating subqueries by replacing them with joins and NOT NULL tests, and so on. In 11g, new SQL transformations were implemented to continue improving this process. They are listed below, with a short illustrative sketch after the list:
Cost-Based Query Transformation (CBQT): This is one of the core mechanisms of the CBO; it tries different transformed and untransformed variants of the query and runs them through the costing algorithm. Improvements were made in 11g, especially related to rewriting NOT IN clauses into NOT EXISTS.
Count Transformation (CNT): If every row has a value in the counted column (it is NOT NULL), the CBO can simply count the number of rows; there is no need to actually retrieve the column value. This transformation changes count(col) into count(*).
Join Elimination (JE): Join elimination is a query transformation of the Oracle optimizer that removes joins which are not required in a query. Introduced in 10g, join elimination got several improvements and new mechanisms in 11g, and even more in 12c.
Join Predicate PushDown (JPPD): Allows a view to be joined with an index-based nested-loop join method, which may provide a more optimal alternative. The following types of views support predicate pushdown: UNION ALL/UNION views, outer-joined views, anti-joined views, semi-joined views, DISTINCT views and GROUP BY views.
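To make these more concrete, here is a short sketch against a hypothetical EMP/DEPT-style schema (table and column names are illustrative only, not from any particular database) showing the kind of statement each transformation targets:

-- CBQT: a NOT IN that 11g can cost and execute as a NOT EXISTS (null-aware anti join)
SELECT e.empno, e.ename
FROM   emp e
WHERE  e.deptno NOT IN (SELECT d.deptno FROM closed_depts d);
-- may be evaluated as if it were written:
SELECT e.empno, e.ename
FROM   emp e
WHERE  NOT EXISTS (SELECT 1 FROM closed_depts d WHERE d.deptno = e.deptno);

-- CNT: if EMPNO is declared NOT NULL, count(empno) is rewritten as count(*)
SELECT COUNT(empno) FROM emp;

-- JE: with a validated foreign key from EMP.DEPTNO to DEPT.DEPTNO, the join below
-- references no DEPT columns, so the optimizer can eliminate DEPT entirely
SELECT e.ename
FROM   emp e JOIN dept d ON d.deptno = e.deptno;

-- JPPD: the join predicate on DEPTNO can be pushed inside the GROUP BY view,
-- enabling an index-based nested-loop join instead of instantiating the whole view
SELECT d.dname, v.total_sal
FROM   dept d,
       (SELECT deptno, SUM(sal) AS total_sal FROM emp GROUP BY deptno) v
WHERE  v.deptno = d.deptno;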
I was engaged on a report request to gather the top SQL by elapsed time using Statspack. I got those, then I was asked to do it again the following week, and the week after that, and you can see where this is going. So I created a script that gives me the report so I would not have to do it manually ever again 🙂
Usage: long_run_sql.sh [-h ] [ -r ] [-o ] [-d ]
Where:
If no parameters are given, the defaults are 5 minutes for currently running SQLs, 120 minutes for the history threshold and 7 days for the history period
-r = Set the time in minutes for the current running SQLs
-o = Specifies the minutes to be used for the long running SQLs History
-d = set the time period wanted for the execution history, default is 7 days
-h = Shows this help message
It might still have some bugs, but it's good enough to share here 🙂
There are some parameters at the top that you need to change to suit your environment; you also need Statspack working and to adjust the sqlplus connection line.
Some tweaks will be required, as no two environments are ever 100% alike, but it's still worth the work.
#!/bin/bash
#---------------------------------------------------------------------------
# Creates a report, using statspack, of the long running sqls from database
#
# History:
# Feb-21-2018 - Elisson Almeida - Created.
#---------------------------------------------------------------------------
#usage
### Environment setup
### Global variables
DIR=/home/oracle/scripts
LOG=/home/oracle/logs
DATE=$(date +%Y%m%d_%H%M%S)
NAME=long_running_sql_report
OUTPUT_FILE=${LOG}/${NAME}_${DATE}.log
ERROR_FILE=${LOG}/${NAME}.err
TMP_FILE=$DIR/$$.tmp   # temporary file named after the script PID
CONFIG_EMAIL_TO=
PATH=
ORACLE_SID=
ORACLE_BASE=
ORACLE_HOME=
# Set TNS_ADMIN if needed, otherwise leave it commented out
#TNS_ADMIN=
RUNNING=5
HISTORY=120
Help()
{
echo "Usage: long_run_sql.sh [-h ] [ -r ] [-o ] [-d ] "
echo "Where:"
echo " If no parameters are used 120 minutes and 7 days will be used as default for History, 5 minutes for current running"
echo " -r = Set the time in minutes for the current running SQLs"
echo " -o = Specifies the minutes to be used for the long running SQLs History"
echo " -d = set the time period wanted for the execution history, default is 7 days"
echo " -h = Shows this help message"
}
### Main script section
RUNNING=5
HISTORY=120
DAY=7
while getopts :r:o:d:h option
do
case ${option} in
h) Help
exit ;;
r) RUNNING=${OPTARG} ;;
o) HISTORY=${OPTARG} ;;
d) DAY=${OPTARG} ;;
:)
echo "Error: -${OPTARG} requires an argument."
Help
exit;;
*) echo "Error: -${OPTARG} is not a valid argument."
Help
exit;;
esac
done
#Queries
SELECT_LONG_TODAY="
set lines 250
PROMPT LONG RUNNING SQLs FOR THE LAST 24 HOURS
col \"Last Active Time\" format a15
select * from (
select --min(s.first_load_time) \"First Time\"
max(LAST_ACTIVE_TIME) \"Last Active Time\"
,round(sum(s.elapsed_time)/greatest(sum(s.executions),1)/1000000/60,2) \"Elapsed Time per Exec (min)\"
,s.sql_id \"SQL ID\"
--,plan_hash_value \"Plan Hash Value\"
,sum(s.executions) \"Total Execs\"
--,count(distinct s.plan_hash_value) \"Plans Count\"
,sum(s.buffer_gets) \"Total Buffer Gets\"
,sum(s.cpu_time) \"Total CPU\"
,round(sum(s.disk_reads)/greatest(sum(s.executions),1),2) \"Disk Reads per Exec\"
,round(sum(s.buffer_gets)/greatest(sum(s.executions),1),2) \"Buffer Gets per Exec\"
,round(sum(s.cpu_time)/greatest(sum(s.executions),1)/1000000,2) \"CPU Time per Exec\"
,s.sql_text \"SQL Text\"
from gv\$sql s
where s.elapsed_time/greatest(s.executions,1)/1000000/60>${RUNNING}
and s.LAST_ACTIVE_TIME >= trunc(sysdate)
--and s.LAST_ACTIVE_TIME >= sysdate-2/24
-- group by trunc(s.LAST_ACTIVE_TIME), s.sql_id, s.plan_hash_value
group by s.sql_id, s.plan_hash_value,s.sql_text
order by 2 desc
) ;
"
SELECT_LONG_HISTORY="
set lines 250
PROMPT LONG RUNNING SQLs HISTORY
select * from (
select to_char(stime,'yyyymmdd') \"Day\"
,round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) \"Elapsed Time per Exec (min)\"
,sql_id \"SQL ID\"
/* ,OLD_HASH_VALUE ,sql_id */
,sum(EXECS) \"Total Executions\"
,sum(BUF_GETS) \"Total Buffer Gets\"
,sum(cpu_time) \"Total CPU\"
--,sum(ELAPS_TIME) \"Total Elapsed Time (min)\"
,round(sum(DISK_RDS)/greatest(sum(EXECS),1),2) \"Disk Reads per Exec\"
,round(sum(BUF_GETS)/greatest(sum(EXECS),1),2) \"Buffer Gets per Exec\"
,round(sum(cpu_time)/greatest(sum(EXECS),1)/1000000,2) \"CPU Time per Exec\"
-- ,row_number() over (partition by to_char(stime,'yyyymmdd') order by sum(cpu_time) desc) cpu_rank
from (select ssnapid, stime, sql_id, OLD_HASH_VALUE, TEXT_SUBSET,
case when EXECUTIONS < nvl(lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id),0) then EXECUTIONS
else EXECUTIONS - lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id) END EXECS,
case when DISK_READS < nvl(lag(DISK_READS) over (partition by sql_id order by sq.snap_id),0) then DISK_READS
else DISK_READS - lag(DISK_READS) over (partition by sql_id order by sq.snap_id) END DISK_RDS,
case when BUFFER_GETS < nvl(lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id),0) then BUFFER_GETS
else BUFFER_GETS - lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id) END BUF_GETS,
case when ROWS_PROCESSED < nvl(lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id),0) then ROWS_PROCESSED
else ROWS_PROCESSED - lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id) END ROWSP,
case when CPU_TIME < nvl(lag(CPU_TIME) over (partition by sql_id order by sq.snap_id),0) then CPU_TIME
else CPU_TIME - lag(CPU_TIME) over (partition by sql_id order by sq.snap_id) END CPU_TIME,
case when ELAPSED_TIME < nvl(lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id),0) then ELAPSED_TIME
else ELAPSED_TIME - lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id) END ELAPS_TIME
from STATS\$SQL_SUMMARY sq,
(SELECT iv.start_snap_id as ssnapid, iv.end_snap_id as esnapid, iv.start_snap_time as stime, iv.end_snap_time as etime
FROM (SELECT lag(dbid) over (order by dbid, instance_number, snap_id) AS start_dbid, dbid AS end_dbid,
lag(snap_id) over (order by dbid, instance_number, snap_id) AS start_snap_id, snap_id AS end_snap_id,
lag(instance_number) over (order by dbid, instance_number, snap_id) AS start_inst_nr, instance_number AS end_inst_nr,
lag(snap_time) over (order by dbid, instance_number, snap_id) AS start_snap_time, snap_time AS end_snap_time,
lag(startup_time) over (order by dbid, instance_number, snap_id) AS start_startup_time, startup_time AS end_startup_time
FROM perfstat.stats\$snapshot where snap_time > trunc(sysdate-${DAY})
/* where snap_time between to_date('04/12/2017 00:00','dd/mm/yyyy hh24:mi')
and to_date('04/12/2017 23:31','dd/mm/yyyy hh24:mi') */
) iv
WHERE iv.start_snap_id IS NOT NULL
AND iv.start_dbid=iv.end_dbid
AND iv.start_inst_nr=iv.end_inst_nr
AND iv.start_startup_time=iv.end_startup_time
) i
where i.esnapid = sq.SNAP_ID
)
where EXECS is not null and stime > trunc(sysdate-${DAY})
--and ELAPS_TIME/greatest(execs,2)/1000000/60>${HISTORY}
group by to_char(stime,'yyyymmdd'), OLD_HASH_VALUE,sql_id
having round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) > ${HISTORY}
order by 1, 8 desc,2,3
);
"
## Create the report
sqlplus -s /@${ORACLE_SID}<< __eof__ > ${OUTPUT_FILE}
SET MARKUP HTML ON SPOOL ON
SET FEEDBACK OFF
alter session set nls_date_format='HH24:MI DD-MON-YY';
spool ${LOG}/long_run_report.html
${SELECT_LONG_TODAY}
PROMPT
PROMPT
${SELECT_LONG_HISTORY}
__eof__
#Send report to email
#cat long_run_report.html | mail -s "$(echo "Daily Long running SQL Report for ${ORACLE_SID}\nContent-Type: text/html")" ${CONFIG_EMAIL_TO}
(
echo "To: ${CONFIG_EMAIL_TO}";
echo "Subject: Daily Long runing SQL Report for ${ORACLE_SID}";
echo "Content-Type: text/html";
echo "MIME-Version: 1.0";
echo "Parameters used: ";
echo "";
echo "Currentlty running sql ${RUNNING} minutes ";
echo "";
echo "Long running SQLs history running for ${HISTORY} minutes ";
echo ""
echo "SQL history time span ${DAY} days"
cat ${LOG}/long_run_report.html;
) | sendmail -t
#Remove report files older than 30 days from the log directory
find ${LOG} -name "${NAME}_*" -mtime +30 -exec rm {} \;
Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite and we frequently find some that can really add value to our solutions.
So I decided to make a series of posts with really quick notes about each one of them.
You can see all posts in this series on my posts page, along with some others.
Ready? Here it goes:
Cross-Session PL/SQL Function Result Cache
Since 11gR1 we have a simple way to boost the performance of PL/SQL functions by saving the results of function calls for specific combinations of input parameters in the SGA: the cross-session PL/SQL function result cache.
The results can be reused by any session calling the same function with the same parameters. This can result in a significant performance boost when functions are called for each row in a SQL query, or within a loop in PL/SQL.
Ok, but how to do this? It’s as simple as adding the RESULT_CACHE clause:
CREATE OR REPLACE FUNCTION procedure_example (p_in IN NUMBER)
RETURN NUMBER
RESULT_CACHE
IS BEGIN
  RETURN p_in * 2; -- placeholder body; any deterministic logic works here
END;
The RELIES_ON clause may be set in 11gR1, but it is unnecessary from 11.2 onwards, as Oracle automatically tracks dependencies and invalidates the cached results when necessary.
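A quick way to see the cache in action (assuming the function above has been created) is to call it more than once and check the result cache statistics:

SELECT procedure_example(10) FROM dual;  -- first call executes the function body
SELECT procedure_example(10) FROM dual;  -- repeated calls are answered from the result cache

-- Cache hits show up as 'Find Count' in the result cache statistics:
SELECT name, value
FROM   v$result_cache_statistics
WHERE  name IN ('Create Count Success', 'Find Count');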
1. Change Retention Period Of AWR Warehouse Repository Database
The retention of the AWR data on the Repository Database can be changed as follows:
<OMS_HOME>/bin>./emcli awrwh_reconfigure -retention=<New retention period (in years)>
For example:
[oracle@oem13c oms]$ emcli awrwh_reconfigure -retention=5
2. Change Staging Location Of AWR Dump Files
For the AWR Warehouse, the target database by default creates dump files in the home directory. So after adding the target to the AWR Warehouse, we need to reconfigure it through the OEM CLI to change the dump files directory, as described in the MOS note referenced below.
Reference:
EM13c: How To Change Retention Period Of AWR Warehouse Repository Database In 13.2 OEM Cloud Control (Doc ID 2247437.1)
EM13c: How To Change Staging Location Of Dump Files On AWR Warehouse Repository Database In 13.2 OEM Cloud Control (Doc ID 2247439.1)
EM13c: How To Change Upload Interval Of SnapShots In AWR Warehouse Repository Database In 13.2 OEM Cloud Control (Doc ID 2247438.1)
Hi All!
Ready? Here it goes:
Extended Statistics
One of the most anticipated features of Oracle 11g is the set of improvements to the dbms_stats package, specifically the ability to aid complex queries by providing extended statistics to the cost-based optimizer (CBO).
The 11g extended statistics are intended to improve the optimizer's cardinality estimates for combined columns and for columns that are modified by a built-in or user-defined function. In Oracle 10g, dynamic sampling can be used to provide inter-table cardinality estimates, but dynamic sampling has important limitations. The 11g create_extended_stats in dbms_stats relieves much of the problem of sub-optimal table join orders by allowing extended statistics on correlated columns.
One of the expectations is to avoid using the ORDERED hint, one of the most popular SQL tuning hints, used to specify that the tables be joined together in the same order that they appear in the FROM clause.
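As a minimal sketch (hypothetical GREPORA.CUSTOMERS table with correlated CITY and STATE columns), creating a column-group extension and regathering stats would look something like this:

-- Create the column group extension; the function returns the generated extension name
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'GREPORA',
         tabname   => 'CUSTOMERS',
         extension => '(CITY, STATE)') AS extension_name
FROM dual;

-- Regather table statistics so the new column group gets its own statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS('GREPORA', 'CUSTOMERS', method_opt => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS (CITY, STATE) SIZE AUTO');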
This feature can be controlled by the hidden parameter _optimizer_enable_extended_stats, in case of SQL performance regression.
We all know the SAR (System Activity Report); however, sometimes it's difficult to visualize a large amount of data or even extract some long-term meaningful information.
How wonderful would it be to have a graphical visualization of this data? Well, it's pretty simple using KSAR.
KSAR is a BSD-licensed, Java-based application that creates graphs of all parameters from the data collected by the Unix sar utilities; the graphs can be exported to PDF, JPG, PNG, CSV, TXT and other formats.
The project code is here. The latest version is KSar2-0.0.4.
See below an I/O graph for the month of December, generated from a database server, as an example:
To use it, the first thing is to have SAR data. To get it we basically have four options:
A. Collect it from the current server.
B. Extract it from another server using a direct SSH connection.
C. Use a generated SAR file.
D. Run the Java tool from the client server.
Personally, I prefer option C, in order to avoid putting any code on client servers and also to work in the least intrusive way possible.
I also don't use option B because we usually don't have a direct connection to the client server, only jumpboxes or similar.
There is a third reason: when choosing option A or B, only the current day's data is collected automatically, but when using C you can include all the data you need; it only needs to be available on the server.
For reference regarding Option D, please check this link.
By the way, some other useful information about SAR:
1. SAR collection jobs can be checked in /etc/cron.d/sysstat
2. SAR retention can be checked/adjusted in /etc/sysconfig/sysstat
Ok, now how to generate the SAR Files?
Using command: sar -A
Example:
[root@grepora-srvr ~]# cd /var/log/sa/
[root@grepora-srvr sa]# ls -lrt |tail -10
total 207080
-rw-r--r-- 1 root root 3337236 Dec 24 23:50 sa24
-rw-r--r-- 1 root root 3756100 Dec 24 23:53 sar24
-rw-r--r-- 1 root root 3337236 Dec 25 23:50 sa25
-rw-r--r-- 1 root root 3756113 Dec 25 23:53 sar25
-rw-r--r-- 1 root root 3337236 Dec 26 23:50 sa26
-rw-r--r-- 1 root root 3756104 Dec 26 23:53 sar26
-rw-r--r-- 1 root root 3337236 Dec 27 23:50 sa27
-rw-r--r-- 1 root root 3756096 Dec 27 23:53 sar27
-rw-r--r-- 1 root root 3337236 Dec 28 23:50 sa28
-rw-r--r-- 1 root root 3756100 Dec 28 23:53 sar28
-rw-r--r-- 1 root root 2317668 Dec 29 16:30 sa29
[root@grepora-srvr sa]# sar -A -f sa29 > sa29.txt
[root@grepora-srvr sa]# cat sa29.txt |head -10
Linux 3.8.13-118.4.2.el6uek.x86_64 (grepora-srvr) 12/29/2017 _x86_64_ (40 CPU)
12:00:01 AM CPU %usr %nice %sys %iowait %steal %irq %soft %guest %idle
12:10:01 AM all 97.74 0.00 1.71 0.01 0.00 0.00 0.52 0.00 0.02
12:10:01 AM 0 96.46 0.00 2.59 0.02 0.00 0.00 0.92 0.00 0.01
12:10:01 AM 1 98.55 0.00 1.24 0.01 0.00 0.00 0.20 0.00 0.00
12:10:01 AM 2 97.83 0.00 2.04 0.01 0.00 0.00 0.11 0.00 0.02
12:10:01 AM 3 98.44 0.00 1.41 0.01 0.00 0.00 0.14 0.00 0.01
12:10:01 AM 4 98.28 0.00 1.65 0.00 0.00 0.00 0.06 0.00 0.01
12:10:01 AM 5 98.27 0.00 1.70 0.00 0.00 0.00 0.02 0.00 0.00
[root@grepora-srvr sa]#
You can copy this file from the client server to your own server or PC and import it using the KSAR interface. It's pretty intuitive and easy to use.
But how to generate all available days or a set of specific days in past?
Here is a script I use for this:
### All Days of SAR
DT=$(ls /var/log/sa/sa[0-9][0-9] | tr '\n' ' ' | sed 's/\/var\/log\/sa\/sa/ /g')
## Explicit Days
#DT="07 08 09"
#DT="12"
# Today
#DT=`date +"%d"`
>/tmp/sar-$(hostname)-multiple.txt
for i in $DT; do
LC_ALL=C sar -A -f /var/log/sa/sa$i >> /tmp/sar-$(hostname)-multiple.txt
done
ls -l /tmp/sar-$(hostname)-multiple.txt
After this you can copy the generated file to your PC and generate the same report.
Hi all!
Some time ago I faced sessions waiting on the DFS Lock Handle wait event during an RMAN backup on a standby database. By the way, running backups on a standby is a very interesting approach to avoid running backups on the primary, so all primary nodes can be fully dedicated to the application.
It turned out that, in my situation, there was a conflict with the apply process. The fix? Quite simple: cancel the apply process, run the backup, and restart the apply again. In my case, using DG Broker:
dgmgrl
connect /
show configuration
show database '<standby_db_unique_name>'
edit database '<standby_db_unique_name>' set state='APPLY-OFF';
Once the backup completes, re-enable the apply:
edit database '<standby_db_unique_name>' set state='APPLY-ON';
Hello all!
This is a very interesting one… what about the database taking too long or even showing locks that don't really exist?
I faced some weird situations related to the size of DBMS_LOCK_ALLOCATED.
After some research, it seems the root cause is described in MOS note DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6).
The issue is:
– DBMS_LOCK_ALLOCATED is the table that keeps all lock handles in the database.
– This table keeps growing even though many of the locks are probably no longer used.
– Upon checking the EXPIRED column, it reveals that the locks’ expiration date is a year or more in the future.
Besides the “storage space issue” mentioned in the MOS doc, there is also the fact that we only have 1073741823 lock handles available. And what if this ‘not releasing’ causes us to run out of handles? Unlikely, but possible.
A big table can also cause bad performance on DBMS_LOCK.REQUEST, which is the basic mechanism for requesting locks (passing the lock handle, lock mode, timeout and release instructions).
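If you suspect you are hitting this, a quick check against the SYS table (a hedged sketch, assuming you have access to it) shows how many handles are allocated and how far in the future the expirations go:

SELECT COUNT(*)        AS total_handles,
       MIN(expiration) AS oldest_expiration,
       MAX(expiration) AS newest_expiration
FROM   sys.dbms_lock_allocated;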
Hello all,
Having issues purging a SQL cursor in a 10g database?
DBMS_SHARED_POOL.PURGE is available from 10.2.0.4 on, but according to “DBMS_SHARED_POOL.PURGE Is Not Working On 10.2.0.4 (Doc ID 751876.1)”, it is event-protected.
This means you need to set event 5614566 to be able to use the purge.
Example:
sys@PRODB10> select count(*) from v$sqlarea where sql_id='gbaq7x7myqxjw';
COUNT(*)
----------
1
sys@PRODB10> exec dbms_shared_pool.purge('00000003B0898118,1807156054','C')
PL/SQL procedure successfully completed.
sys@PRODB10> select count(*) from v$sqlarea where sql_id='gbaq7x7myqxjw';
COUNT(*)
----------
1
sys@PRODB10> alter session set events '5614566 trace name context forever';
Session altered.
sys@PRODB10> exec dbms_shared_pool.purge('00000003B0898118,1807156054','C')
PL/SQL procedure successfully completed.
sys@PRODB10> select count(*) from v$sqlarea where sql_id='gbaq7x7myqxjw';
COUNT(*)
----------
0
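Once you are done purging, the event can be switched back off in the same session (the syntax mirrors the 'forever' form above):

sys@PRODB10> alter session set events '5614566 trace name context off';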
Hello All,
I have a client that used to export and import dictionary statistics using a stats table. However, after upgrading the database to 12c, it started to raise the errors below when exporting data to a stats table (created in 11g) or even when upgrading that stats table.
SQL> exec dbms_stats.upgrade_stat_table('GREPORA','MYSTATSTABLE');
DECLARE
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
ORA-06512: at "SYS.DBMS_STATS", line 40183
ORA-06512: at line 28
SQL> EXEC DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS (stattab => 'MYSTATSTABLE',statid => 'TEST_MATHEUS',statown => 'GREPORA' );
BEGIN DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS (stattab => 'MYSTATSTABLE',statid => 'TEST_MATHEUS',statown => 'GREPORA' ); END;
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
ORA-06512: at "SYS.DBMS_STATS", line 37085
ORA-06512: at line 1
I reviewed several references (all very interesting, by the way, I recommend you take a look):
– ORA-20002 on Importing Statistics using DBMS_STATS Procedures (Doc ID 740345.1)
– Datapump Export Fails With Ora-00932 (Doc ID 1300803.1)
– Error “ORA-00932 – Inconsistent Datatypes: Expected BINARY Got NUMBER” When Executing the Primavera Gather-Statistics Scripts on an Oracle 12c Database (Doc ID 2111114.1)
– https://oracle-base.com/articles/12c/concurrent-statistics-collection-12cr1
– http://www.morganslibrary.org/reference/pkgs/dbms_stats.html
– How To Export and Import Dictionary Statistics (Doc ID 1450820.1)
– Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Doc ID 798257.1)
But I found no reference to this specific issue, not even in MOS.
After some tests, I realized that maybe the column order of the stats table had changed between the 11g and 12c implementations. Bingo! The same columns and datatypes are in place, but in a different order.
This is not documented in MOS nor on independent blogs, but it seems the EXPORT% procedures in DBMS_STATS reference the stats table in their internal code without specifying the column order.
As we know, it is a good development practice to specify column names when performing any operation. Shame on you, Oracle!
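In this case, a possible workaround (a sketch only, assuming the statistics can simply be re-exported) is to recreate the stats table under 12c, so its column order matches what DBMS_STATS expects, and then export again:

BEGIN
  -- Drop the stats table that was created under 11g (names from the example above)
  DBMS_STATS.DROP_STAT_TABLE(ownname => 'GREPORA', stattab => 'MYSTATSTABLE');
  -- Recreate it under 12c so the internal column order matches this release
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'GREPORA', stattab => 'MYSTATSTABLE');
  -- Re-export into the freshly created table
  DBMS_STATS.EXPORT_FIXED_OBJECTS_STATS(stattab => 'MYSTATSTABLE',
                                        statid  => 'TEST_MATHEUS',
                                        statown => 'GREPORA');
END;
/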