Hi All!
I was reviewing some features in Oracle and, basically, every time I review them I find something new. Oracle Database's feature set seems nearly infinite, and we frequently find features that can really add value to our solutions.
Here is a quick summary I did for a client regarding the new semantic hints in 11g.
11g New Semantic Hints
In database version 11.2.0.1, Oracle introduced three new hints: IGNORE_ROW_ON_DUPKEY_INDEX, CHANGE_DUPKEY_ERROR_INDEX and RETRY_ON_ROW_CHANGE. They serve different purposes, but all of them change how specific DML situations are handled; the first two, in particular, deal with ORA-00001 unique key violations. An important note is that, unlike almost all other hints, these hints have a semantic effect: they change the actual behavior, not just the optimization, of the SQL. The Oracle documentation says about them:
“The CHANGE_DUPKEY_ERROR_INDEX, IGNORE_ROW_ON_DUPKEY_INDEX, and RETRY_ON_ROW_CHANGE hints are unlike other hints in that they have a semantic effect. The general philosophy explained in “Hints” does not apply for these three hints.”
Some specifics about each one:
IGNORE_ROW_ON_DUPKEY_INDEX: “When a unique key violation is encountered, a row-level rollback occurs and execution resumes with the next input row.”
CHANGE_DUPKEY_ERROR_INDEX: “The CHANGE_DUPKEY_ERROR_INDEX hint provides a mechanism to unambiguously identify a unique key violation for a specified set of columns or for a specified index. When a unique key violation occurs for the specified index, an ORA-38911 error is reported instead of an ORA-00001.”
RETRY_ON_ROW_CHANGE: “When you specify this hint, the operation is retried when the ORA_ROWSCN for one or more rows in the set has changed from the time the set of rows to be modified is determined to the time the block is actually modified.”
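Just to illustrate the first one, here is a minimal sketch of IGNORE_ROW_ON_DUPKEY_INDEX; table T and its primary key are made up for the example:
create table t (id number primary key, val varchar2(10));
insert into t values (1, 'A');
commit;
-- Without the hint the statement below would fail with ORA-00001, because id = 1
-- already exists; with the hint that row is skipped and the other rows are inserted.
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(t(id)) */ into t
select level, 'X' from dual connect by level <= 3;
commit;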
The use of these hints, however, is recommended only for very specific purposes and should be avoided when possible, not only because of some related bugs, but also because they add complexity to well-known operations such as bulk inserts.
Another important hint is the RESULT_CACHE hint, mentioned some months ago in the Result Cache post.
Did you know that since 11g the REGEXP_INSTR and REGEXP_SUBSTR functions include a new SUBEXPR parameter that limits the pattern match to a specific subexpression in the search pattern?
Also, a new function, REGEXP_COUNT, returns the number of times the search pattern appears in the source string.
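A few quick, illustrative examples (the input strings are made up for this sketch):
-- REGEXP_COUNT: how many times the pattern appears in the source string
select regexp_count('10.20.30.40', '\.') as dots from dual;                                          -- 3
-- REGEXP_SUBSTR with the SUBEXPR parameter (6th argument): return only the second subexpression
select regexp_substr('2017-12-04', '(\d{4})-(\d{2})-(\d{2})', 1, 1, null, 2) as mm from dual;        -- 12
-- REGEXP_INSTR with the SUBEXPR parameter (7th argument): position where the third subexpression starts
select regexp_instr('2017-12-04', '(\d{4})-(\d{2})-(\d{2})', 1, 1, 0, null, 3) as dd_pos from dual;  -- 9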
The UNION operator is great for merging the results of multiple queries that return similar rowsets: essentially, each query is executed separately and the results are merged into a single result set. Oracle 11g Release 2 comes with an improvement in the SQL optimizer for UNION ALL performance: the new join factorization transformation (JFT). The join factorization transformation applies only to UNION ALL queries.
The following example shows how the optimizer improves the performance of UNION ALL by dynamically rewriting a UNION ALL query into a more efficient form using an in-line view:
Original Query:
select t1.c1, t2.c2
from t1, t2, t3
where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
union all
select t1.c1, t2.c2
from t1, t2, t4
where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3;
Re-Written Query:
select t1.c1, VW_JF_1.item_2
from t1, (select t2.c1 item_1, t2.c2 item_2
from t2, t3
where t2.c2 = t3.c2 and t2.c2 = 2
union all
select t2.c1 item_1, t2.c2 item_2
from t2, t4
where t2.c3 = t4.c3) VW_JF_1
where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1;
Hi All!
Let's take some time to review these?
They are usually very important when analyzing query performance, especially in 10046 traces. I separated the ones introduced in 11g, to keep them as a subset. Here it goes:
New 11g SQL transformations (CBQT, CNT, JE, JPPD)
The Oracle cost-based optimizer (CBO) is constantly being improved in its ability to rewrite and transform sub-optimal SQL queries. In general, the CBO works by replacing operations with equivalent but more optimal ones when possible, changing clauses or regrouping to take advantage of indexes, eliminating subqueries by replacing them with joins and NOT NULL tests, and so on. In 11g, new SQL transformations were implemented to keep improving this process. They are:
Cost-Based Query Transformation (CBQT): one of the core mechanisms in the CBO, it tries different transformed and untransformed variants of the query and runs them through the costing algorithm. Improvements were made in 11g, especially related to transforming NOT IN clauses into NOT EXISTS.
Count Transformation (CNT): if all rows have a value in the counted column (it is NOT NULL), the CBO can simply count the number of rows; there is no need to actually retrieve the column value. This transformation changes count(col) to count(*).
Join Elimination (JE): a query transformation of the Oracle optimizer that removes joins that are not required by the query (see the sketch after this list). Introduced in 10g, join elimination received several improvements and new mechanisms in 11g, and even more in 12c.
Join Predicate Push-Down (JPPD): allows a view to be joined using an index-based nested-loop join method, which may provide a better alternative. The following types of views support join predicate push-down: UNION ALL/UNION views, outer-joined views, anti-joined views, semi-joined views, DISTINCT views and GROUP BY views.
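As an illustration of join elimination, here is a minimal sketch; the EMP and DEPT tables below are made up for the example:
create table dept (dept_id number primary key, dname varchar2(30));
create table emp  (emp_id  number primary key,
                   ename   varchar2(30),
                   dept_id number not null references dept);
-- No column from DEPT is selected and the validated foreign key guarantees the
-- join neither adds nor removes rows, so the CBO can drop DEPT from the plan.
select e.emp_id, e.ename
from   emp e
       join dept d on d.dept_id = e.dept_id;
-- The execution plan typically shows only a full scan of EMP.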
I was engaged on a report request where I needed to get the top SQL by elapsed time using Statspack. I got the numbers, then I was asked for them again the following week, and the week after that, and you can see where this is going. So I created a script that generates the report, so I would never have to do it manually again 🙂
Usage: long_run_sql.sh [-h] [-r <minutes>] [-o <minutes>] [-d <days>]
Where:
If no parameters are given, the defaults are 5 minutes for currently running SQLs, 120 minutes for the history threshold and 7 days for the history period
-r = Sets the threshold, in minutes, for the currently running SQLs
-o = Sets the threshold, in minutes, for the long running SQL history
-d = Sets the time period for the execution history; the default is 7 days
-h = Shows this help message
It might still have some bugs, but it is good enough to share here 🙂
There are some parameters at the top that you need to change to suit your environment; you also need Statspack working and you need to adjust the sqlplus connection line.
Some tweaks may be required, as no two environments are ever 100% alike, but it is still worth the work.
#!/bin/bash
#---------------------------------------------------------------------------
# Creates a report, using statspack, of the long running sqls from database
#
# History:
# Feb-21-2018 - Elisson Almeida - Created.
#---------------------------------------------------------------------------
#usage
### Environment setup
### Global variables
DIR=/home/oracle/scripts
LOG=/home/oracle/logs
DATE=$(date +%Y%m%d_%H%M%S)
NAME=long_running_sql_report
OUTPUT_FILE=${LOG}/${NAME}_${DATE}.log
ERROR_FILE=${LOG}/${NAME}.err
TMP_FILE=$DIR/$$.tmp
CONFIG_EMAIL_TO=
PATH=
ORACLE_SID=
ORACLE_BASE=
ORACLE_HOME=
#tns-admin if needed otherwise comment out
#TNS_ADMIN
Help()
{
echo "Usage: long_run_sql.sh [-h ] [ -r ] [-o ] [-d ] "
echo "Where:"
echo " If no parameters are used 120 minutes and 7 days will be used as default for History, 5 minutes for current running"
echo " -r = Set the time in minutes for the current running SQLs"
echo " -o = Specifies the minutes to be used for the long running SQLs History"
echo " -d = set the time period wanted for the execution history, default is 7 days"
echo " -h = Shows this help message"
}
### Main script section
RUNNING=5
HISTORY=120
DAY=7
while getopts :r:o:d:h option
do
case ${option} in
h) Help
exit ;;
r) RUNNING=${OPTARG} ;;
o) HISTORY=${OPTARG} ;;
d) DAY=${OPTARG} ;;
:)
echo "Error: -${OPTARG} requires an argument."
Help
exit;;
*) echo "Error: -${OPTARG} is not a valid argument."
Help
exit;;
esac
done
#Queries
SELECT_LONG_TODAY="
set lines 250
PROMPT LONG RUNNING SQLs FOR THE LAST 24 HOURS
col \"Last Active Time\" format a15
select * from (
select --min(s.first_load_time) \"First Time\"
max(LAST_ACTIVE_TIME) \"Last Active Time\"
,round(sum(s.elapsed_time)/greatest(sum(s.executions),1)/1000000/60,2) \"Elapsed Time per Exec (min)\"
,s.sql_id \"SQL ID\"
--,plan_hash_value \"Plan Hash Value\"
,sum(s.executions) \"Total Execs\"
--,count(distinct s.plan_hash_value) \"Plans Count\"
,sum(s.buffer_gets) \"Total Buffer Gets\"
,sum(s.cpu_time) \"Total CPU\"
,round(sum(s.disk_reads)/greatest(sum(s.executions),1),2) \"Disk Reads per Exec\"
,round(sum(s.buffer_gets)/greatest(sum(s.executions),1),2) \"Buffer Gets per Exec\"
,round(sum(s.cpu_time)/greatest(sum(s.executions),1)/1000000,2) \"CPU Time per Exec\"
,s.sql_text \"SQL Text\"
from gv\$sql s
where s.elapsed_time/greatest(s.executions,1)/1000000/60>${RUNNING}
and s.LAST_ACTIVE_TIME >= trunc(sysdate)
--and s.LAST_ACTIVE_TIME >= sysdate-2/24
-- group by trunc(s.LAST_ACTIVE_TIME), s.sql_id, s.plan_hash_value
group by s.sql_id, s.plan_hash_value,s.sql_text
order by 2 desc
) ;
"
SELECT_LONG_HISTORY="
set lines 250
PROMPT LONG RUNNING SQLs HISTORY
select * from (
select to_char(stime,'yyyymmdd') \"Day\"
,round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) \"Elapsed Time per Exec (min)\"
,sql_id \"SQL ID\"
/* ,OLD_HASH_VALUE ,sql_id */
,sum(EXECS) \"Total Executions\"
,sum(BUF_GETS) \"Total Buffer Gets\"
,sum(cpu_time) \"Total CPU\"
--,sum(ELAPS_TIME) \"Total Elapsed Time (min)\"
,round(sum(DISK_RDS)/greatest(sum(EXECS),1),2) \"Disk Reads per Exec\"
,round(sum(BUF_GETS)/greatest(sum(EXECS),1),2) \"Buffer Gets per Exec\"
,round(sum(cpu_time)/greatest(sum(EXECS),1)/1000000,2) \"CPU Time per Exec\"
-- ,row_number() over (partition by to_char(stime,'yyyymmdd') order by sum(cpu_time) desc) cpu_rank
from (select ssnapid, stime, sql_id, OLD_HASH_VALUE, TEXT_SUBSET,
case when EXECUTIONS < nvl(lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id),0) then EXECUTIONS
else EXECUTIONS - lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id) END EXECS,
case when DISK_READS < nvl(lag(DISK_READS) over (partition by sql_id order by sq.snap_id),0) then DISK_READS
else DISK_READS - lag(DISK_READS) over (partition by sql_id order by sq.snap_id) END DISK_RDS,
case when BUFFER_GETS < nvl(lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id),0) then BUFFER_GETS
else BUFFER_GETS - lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id) END BUF_GETS,
case when ROWS_PROCESSED < nvl(lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id),0) then ROWS_PROCESSED
else ROWS_PROCESSED - lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id) END ROWSP,
case when CPU_TIME < nvl(lag(CPU_TIME) over (partition by sql_id order by sq.snap_id),0) then CPU_TIME
else CPU_TIME - lag(CPU_TIME) over (partition by sql_id order by sq.snap_id) END CPU_TIME,
case when ELAPSED_TIME < nvl(lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id),0) then ELAPSED_TIME
else ELAPSED_TIME - lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id) END ELAPS_TIME
from STATS\$SQL_SUMMARY sq,
     (SELECT iv.start_snap_id as ssnapid, iv.end_snap_id as esnapid,
             iv.start_snap_time as stime, iv.end_snap_time as etime
      FROM (SELECT lag(dbid) over (order by dbid, instance_number, snap_id) AS start_dbid,
                   dbid AS end_dbid,
                   lag(snap_id) over (order by dbid, instance_number, snap_id) AS start_snap_id,
                   snap_id AS end_snap_id,
                   lag(instance_number) over (order by dbid, instance_number, snap_id) AS start_inst_nr,
                   instance_number AS end_inst_nr,
                   lag(snap_time) over (order by dbid, instance_number, snap_id) AS start_snap_time,
                   snap_time AS end_snap_time,
                   lag(startup_time) over (order by dbid, instance_number, snap_id) AS start_startup_time,
                   startup_time AS end_startup_time
            FROM perfstat.stats\$snapshot
            where snap_time > trunc(sysdate-${DAY})
/* where snap_time between to_date('04/12/2017 00:00','dd/mm/yyyy hh24:mi')
and to_date('04/12/2017 23:31','dd/mm/yyyy hh24:mi') */
) iv
WHERE iv.start_snap_id IS NOT NULL
AND iv.start_dbid=iv.end_dbid
AND iv.start_inst_nr=iv.end_inst_nr
AND iv.start_startup_time=iv.end_startup_time
) i
where i.esnapid = sq.SNAP_ID
)
where EXECS is not null and stime > trunc(sysdate-${DAY})
--and ELAPS_TIME/greatest(execs,2)/1000000/60>${HISTORY}
group by to_char(stime,'yyyymmdd'), OLD_HASH_VALUE,sql_id
having round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) > ${HISTORY}
order by 1, 8 desc,2,3
);
"
## Create the report
sqlplus -s /@${ORACLE_SID}<< __eof__ > ${OUTPUT_FILE}
SET MARKUP HTML ON SPOOL ON
SET FEEDBACK OFF
alter session set nls_date_format='HH24:MI DD-MON-YY';
spool ${LOG}/long_run_report.html
${SELECT_LONG_TODAY}
PROMPT
PROMPT
${SELECT_LONG_HISTORY}
__eof__
#Send report to email
#cat long_run_report.html | mail -s "$(echo "Daily Long running SQL Report for ${ORACLE_SID}\nContent-Type: text/html")" ${CONFIG_EMAIL_TO}
(
echo "To: ${CONFIG_EMAIL_TO}";
echo "Subject: Daily Long runing SQL Report for ${ORACLE_SID}";
echo "Content-Type: text/html";
echo "MIME-Version: 1.0";
echo "Parameters used: ";
echo "";
echo "Currentlty running sql ${RUNNING} minutes ";
echo "";
echo "Long running SQLs history running for ${HISTORY} minutes ";
echo ""
echo "SQL history time span ${DAY} days"
cat ${LOG}/long_run_report.html;
) | sendmail -t
#Removing report and log files older than 30 days from the file system
find ${LOG} -name "${NAME}_*" -mtime +30 -exec rm {} \;
I think LISTAGG is very useful, but I don't see it in use that often… More about it:
LISTAGG was introduced in Oracle 11g Release 2 as an aggregate (and analytic) function to make life easier for the PL/SQL developer, especially when aggregating strings. The nice thing about this function is that it also allows us to order the elements in the concatenated list. If you are using 11g Release 2 or later, you should use this function for string aggregation.
If WM_CONCAT is being used for this purpose, be aware that WM_CONCAT is an undocumented function and as such is not supported by Oracle for user applications (MOS Note ID 1336219.1). Also, WM_CONCAT has been removed from 12c onward, as it was completely replaced by LISTAGG.
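A minimal sketch, assuming the classic SCOTT.EMP demo table is available:
select deptno,
       listagg(ename, ', ') within group (order by ename) as employees
from   scott.emp
group  by deptno;
The WITHIN GROUP (ORDER BY ...) clause controls the order of the elements inside each concatenated list.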
Hi All!
I was reviewing some features in Oracle and, basically, every time I review them I find something new. Oracle Database's feature set seems nearly infinite, and we frequently find features that can really add value to our solutions.
So I decided to make a series of posts with really quick notes about each one of them.
You can see all the posts in this series on my posts page.
Ready? Here it goes:
Pivot and Unpivot Operators
Pivoting tables is now possible in 11g through the PIVOT clause. The PIVOT operator takes data in separate rows, aggregates it and converts it into columns. Prior to 11g, this could be accomplished using DECODE combined with an aggregate function such as SUM.
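For instance, a minimal sketch assuming the classic SCOTT.EMP demo table is available (prior to 11g, the same result would need one DECODE/SUM expression per column):
select *
from   (select deptno, job, sal from scott.emp)
pivot  (sum(sal) for job in ('CLERK' as clerk, 'MANAGER' as manager, 'ANALYST' as analyst))
order  by deptno;
Each value listed in the IN clause becomes a column in the output, holding the aggregated SAL for that job.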
Also, adding the XML keyword to the PIVOT operator allows us to convert the generated pivot results to XML format. It also makes PIVOT a little more flexible, making it possible to replace the hard-coded IN clause with a subquery, or with the ANY keyword.
In the same way, the UNPIVOT operator converts column-based data into separate rows. Some important considerations about this feature:
Column names are required. These can be set to any name not currently in the driving table.
The columns to be unpivoted must be named in the IN clause.
By default the EXCLUDE NULLS clause is used. To override the default behavior use the INCLUDE NULLS clause.
The following query shows the inclusion of the INCLUDE NULLS clause.
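A minimal sketch, assuming a QUARTERLY_SALES table with one amount column per quarter (all names here are illustrative):
select product_id, quarter, amount
from   quarterly_sales
unpivot include nulls
       (amount for quarter in (q1_amt as 'Q1', q2_amt as 'Q2', q3_amt as 'Q3', q4_amt as 'Q4'));
With INCLUDE NULLS, quarters whose amount column is NULL still produce a row; with the default EXCLUDE NULLS they would be dropped.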
Hello readers! My name is Bruno Kremer, and this is the first post of a series in which I will be talking about how we can identify the top segments of a database.
Introduction
It's well known that we can create automated tasks to collect and save the space used/allocated by database objects, such as saving snapshots of the DBA_SEGMENTS view. But what if this is your first contact with a specific database and you need to identify the top segments, estimate their growth rate, check the history of allocated space, or even perform some kind of capacity planning? There are some alternatives to answer these questions, but in this post I will share the starting point. Please feel free to customize the scripts to your own needs.
Checking the top sized segments
select
s.owner,
s.segment_name,
s.segment_type,
round(sum(nvl(s.bytes,0))/1024/1024) size_mb
from dba_segments s
where s.bytes > 1048576 -- higher than 1MB
group by s.owner, s.segment_name, s.segment_type
order by size_mb desc
fetch first &TOP rows only;
Input values: &TOP – limit the number of rows returned.
Filters you might want to use: and s.segment_type in ('&OBJECT_TYPE') – ‘TABLE’, ‘TABLE PARTITION’, ‘INDEX’…
Note: the scripts used in this series were tested on 12.1.0.2 databases. Some of them use the FETCH FIRST clause to limit the number of rows returned; if you are using older versions of Oracle Database, you can still do it the old-fashioned way with ROWNUM.
Example:
select * from (
select s.owner, s.segment_name, s.segment_type, round(sum(nvl(s.bytes,0))/1024/1024) size_mb
from dba_segments s
where s.bytes > 1048576 -- higher than 1MB
group by s.owner, s.segment_name, s.segment_type
order by size_mb desc
) where rownum <= &TOP;
Now that you have an idea of the size of the largest database segments, you might want to check the top growing segments… In upcoming posts we will talk about how we can use the AWR data dictionary views and some DBMS_SPACE procedures to estimate space usage history and find the top growing segments.