I was working on a report request where I needed to pull the TOP SQL by elapsed time using Statspack. I got the numbers, then I was asked for them again the following week, and the week after that, and you can see where this is going. So I wrote a script that builds the report for me, and I never have to do it manually again 🙂
Usage: long_run_sql.sh [-h] [-r <minutes>] [-o <minutes>] [-d <days>]
Where:
If no parameters are used, the defaults are 120 minutes and 7 days for the history, and 5 minutes for currently running SQLs
-r = Sets the threshold in minutes for currently running SQLs
-o = Sets the threshold in minutes for the long running SQLs history
-d = Sets the time period in days for the execution history (default 7 days)
-h = Shows this help message
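For example, a hypothetical run that flags SQLs currently running for more than 10 minutes, history entries averaging over 60 minutes per execution, and a 14-day look-back (the values here are only illustrative) would be:

./long_run_sql.sh -r 10 -o 60 -d 14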
It might still have some bugs, but it works well enough to share here 🙂
There are a few parameters at the top of the script that you need to change to suit your environment; you also need Statspack working and will have to adjust the sqlplus connection line. Some tweaks will always be required, as no two environments are ever 100% alike, but it is still worth the work.
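The report itself is generated through sqlplus using an external password store (wallet) style connection, sqlplus -s /@${ORACLE_SID}. If you do not use a wallet, one possible variant of that line (the user name below is purely hypothetical, adjust it to your environment) is:

# As used in the script (Oracle wallet / external password store)
sqlplus -s /@${ORACLE_SID}
# Possible alternative without a wallet (hypothetical user and password)
sqlplus -s report_user/report_password@${ORACLE_SID}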
#!/bin/bash
#---------------------------------------------------------------------------
# Creates a report, using statspack, of the long running sqls from database
#
# History:
# Feb-21-2018 - Elisson Almeida - Created.
#---------------------------------------------------------------------------
#usage
### Environment setup
### Global variables
DIR=/home/oracle/scripts
LOG=/home/oracle/logs
DATE=$(date +%Y%m%d_%H%M%S)
NAME=long_running_sql_report
OUTPUT_FILE=${LOG}/${NAME}_${DATE}.log
ERROR_FILE=${LOG}/${NAME}.err
TMP_FILE=$DIR/$$.tmp   # $$ expands to the script PID, giving a unique temp file name
CONFIG_EMAIL_TO=
PATH=
ORACLE_SID=
ORACLE_BASE=
ORACLE_HOME=
# TNS_ADMIN - uncomment and set if needed
#TNS_ADMIN=
Help()
{
echo "Usage: long_run_sql.sh [-h ] [ -r ] [-o ] [-d ] "
echo "Where:"
echo " If no parameters are used 120 minutes and 7 days will be used as default for History, 5 minutes for current running"
echo " -r = Set the time in minutes for the current running SQLs"
echo " -o = Specifies the minutes to be used for the long running SQLs History"
echo " -d = set the time period wanted for the execution history, default is 7 days"
echo " -h = Shows this help message"
}
### Main script section
# Default thresholds (can be overridden with -r, -o and -d)
RUNNING=5
HISTORY=120
DAY=7
while getopts :r:o:d:h option
do
    case ${option} in
        h) Help
           exit ;;
        r) RUNNING=${OPTARG} ;;
        o) HISTORY=${OPTARG} ;;
        d) DAY=${OPTARG} ;;
        :) echo "Error: -${OPTARG} requires an argument."
           Help
           exit ;;
        *) echo "Error: -${OPTARG} is not a valid argument."
           Help
           exit ;;
    esac
done
#Queries
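# SELECT_LONG_TODAY: aggregates gv$sql per SQL_ID/plan for statements active since midnight
# and keeps those averaging more than ${RUNNING} minutes of elapsed time per execution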
SELECT_LONG_TODAY="
set lines 250
PROMPT LONG RUNNING SQLs FOR THE LAST 24 HOURS
col \"Last Active Time\" format a15
select * from (
select --min(s.first_load_time) \"First Time\"
max(LAST_ACTIVE_TIME) \"Last Active Time\"
,round(sum(s.elapsed_time)/greatest(sum(s.executions),1)/1000000/60,2) \"Elapsed Time per Exec (min)\"
,s.sql_id \"SQL ID\"
--,plan_hash_value \"Plan Hash Value\"
,sum(s.executions) \"Total Execs\"
--,count(distinct s.plan_hash_value) \"Plans Count\"
,sum(s.buffer_gets) \"Total Buffer Gets\"
,sum(s.cpu_time) \"Total CPU\"
,round(sum(s.disk_reads)/greatest(sum(s.executions),1),2) \"Disk Reads per Exec\"
,round(sum(s.buffer_gets)/greatest(sum(s.executions),1),2) \"Buffer Gets per Exec\"
,round(sum(s.cpu_time)/greatest(sum(s.executions),1)/1000000,2) \"CPU Time per Exec\"
,s.sql_text \"SQL Text\"
from gv\$sql s
where s.elapsed_time/greatest(s.executions,1)/1000000/60 > ${RUNNING} -- average elapsed time per exec, in minutes
and s.LAST_ACTIVE_TIME >= trunc(sysdate)
--and s.LAST_ACTIVE_TIME >= sysdate-2/24
-- group by trunc(s.LAST_ACTIVE_TIME), s.sql_id, s.plan_hash_value
group by s.sql_id, s.plan_hash_value,s.sql_text
order by 2 desc
) ;
"
SELECT_LONG_HISTORY="
set lines 250
PROMPT LONG RUNNING SQLs HISTORY
select * from (
select to_char(stime,'yyyymmdd') \"Day\"
,round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) \"Elapsed Time per Exec (min)\"
,sql_id \"SQL ID\"
/* ,OLD_HASH_VALUE ,sql_id */
,sum(EXECS) \"Total Executions\"
,sum(BUF_GETS) \"Total Buffer Gets\"
,sum(cpu_time) \"Total CPU\"
--,sum(ELAPS_TIME) \"Total Elapsed Time (min)\"
,round(sum(DISK_RDS)/greatest(sum(EXECS),1),2) \"Disk Reads per Exec\"
,round(sum(BUF_GETS)/greatest(sum(EXECS),1),2) \"Buffer Gets per Exec\"
,round(sum(cpu_time)/greatest(sum(EXECS),1)/1000000,2) \"CPU Time per Exec\"
-- ,row_number() over (partition by to_char(stime,'yyyymmdd') order by sum(cpu_time) desc) cpu_rank
from (select ssnapid, stime, sql_id, OLD_HASH_VALUE, TEXT_SUBSET,
case when EXECUTIONS < nvl(lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id),0) then EXECUTIONS
else EXECUTIONS - lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id) END EXECS,
case when DISK_READS < nvl(lag(DISK_READS) over (partition by sql_id order by sq.snap_id),0) then DISK_READS
else DISK_READS - lag(DISK_READS) over (partition by sql_id order by sq.snap_id) END DISK_RDS,
case when BUFFER_GETS < nvl(lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id),0) then BUFFER_GETS
else BUFFER_GETS - lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id) END BUF_GETS,
case when ROWS_PROCESSED < nvl(lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id),0) then ROWS_PROCESSED
else ROWS_PROCESSED - lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id) END ROWSP,
case when CPU_TIME < nvl(lag(CPU_TIME) over (partition by sql_id order by sq.snap_id),0) then CPU_TIME
else CPU_TIME - lag(CPU_TIME) over (partition by sql_id order by sq.snap_id) END CPU_TIME,
case when ELAPSED_TIME < nvl(lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id),0) then ELAPSED_TIME
else ELAPSED_TIME - lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id) END ELAPS_TIME
from STATS\$SQL_SUMMARY sq,
     (SELECT iv.start_snap_id as ssnapid, iv.end_snap_id as esnapid, iv.start_snap_time as stime, iv.end_snap_time as etime
        FROM (SELECT lag(dbid) over (order by dbid, instance_number, snap_id) AS start_dbid, dbid AS end_dbid,
                     lag(snap_id) over (order by dbid, instance_number, snap_id) AS start_snap_id, snap_id AS end_snap_id,
                     lag(instance_number) over (order by dbid, instance_number, snap_id) AS start_inst_nr, instance_number AS end_inst_nr,
                     lag(snap_time) over (order by dbid, instance_number, snap_id) AS start_snap_time, snap_time AS end_snap_time,
                     lag(startup_time) over (order by dbid, instance_number, snap_id) AS start_startup_time, startup_time AS end_startup_time
                FROM perfstat.stats\$snapshot
               where snap_time > trunc(sysdate-${DAY})
/* where snap_time between to_date('04/12/2017 00:00','dd/mm/yyyy hh24:mi')
and to_date('04/12/2017 23:31','dd/mm/yyyy hh24:mi') */
) iv
WHERE iv.start_snap_id IS NOT NULL
AND iv.start_dbid=iv.end_dbid
AND iv.start_inst_nr=iv.end_inst_nr
AND iv.start_startup_time=iv.end_startup_time
) i
where i.esnapid = sq.SNAP_ID
)
where EXECS is not null and stime > trunc(sysdate-${DAY})
--and ELAPS_TIME/greatest(execs,2)/1000000/60>${HISTORY}
group by to_char(stime,'yyyymmdd'), OLD_HASH_VALUE,sql_id
having round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) > ${HISTORY}
order by 1, 8 desc,2,3
);
"
## Create the report
sqlplus -s /@${ORACLE_SID}<< __eof__ > ${OUTPUT_FILE}
SET MARKUP HTML ON SPOOL ON
SET FEEDBACK OFF
alter session set nls_date_format='HH24:MI DD-MON-YY';
spool ${LOG}/long_run_report.html
${SELECT_LONG_TODAY}
PROMPT
PROMPT
${SELECT_LONG_HISTORY}
__eof__
#Send report to email
#cat long_run_report.html | mail -s "$(echo "Daily Long runing SQL Report for ${ORACLE_SID}\nContent-Type: text/html")" ${CONFIG_EMAIL_TO}
(
echo "To: ${CONFIG_EMAIL_TO}";
echo "Subject: Daily Long runing SQL Report for ${ORACLE_SID}";
echo "Content-Type: text/html";
echo "MIME-Version: 1.0";
echo "Parameters used: ";
echo "";
echo "Currentlty running sql ${RUNNING} minutes ";
echo "";
echo "Long running SQLs history running for ${HISTORY} minutes ";
echo ""
echo "SQL history time span ${DAY} days"
cat ${LOG}/long_run_report.html;
) | sendmail -t
#Remove report log files older than 30 days from the file system
find ${LOG} -name "${NAME}_*" -mtime +30 -exec rm {} \;
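Since the whole point was to never run this by hand again, I schedule it from cron. A hypothetical weekly entry (the path and schedule are just examples, adjust them to your environment) could look like:

# Run the long running SQL report every Monday at 07:00 (example schedule and path)
0 7 * * 1 /home/oracle/scripts/long_run_sql.sh -r 5 -o 120 -d 7 >> /home/oracle/logs/long_run_sql_cron.log 2>&1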