Hi All!
Let’s take some time to review these transformations.
They are usually very important when analysing query performance, especially on 10046 traces. I separated out the ones introduced in 11g, to have them as a subset. Here it goes:
New 11g SQL transformations (CBQT, CNT, JE, JPPD)
The Oracle Cost-Based Optimizer (CBO) is constantly being improved in its ability to rewrite and transform sub-optimal SQL queries. In general, the CBO works by replacing operations with equivalent, more optimal ones when possible: changing clauses or regrouping to take advantage of indexes, eliminating subqueries by replacing them with joins and NOT NULL tests, and similar. In 11g, new SQL transformations were implemented to continue improving this process. They are:
Cost-Based Query Transformation (CBQT): This is one of the core mechanisms in the CBO: it tries different transformed and untransformed variants of the query and runs them through the costing algorithm. Improvements were made in 11g, especially related to transforming NOT IN clauses into NOT EXISTS.
Count Transformation (CNT): If all rows have a value in the counted column (it is NOT NULL), the CBO can simply count the number of rows; there is no need to actually retrieve the column value. This transformation changes count(col) to count(*).
Join Elimination (JE): Join elimination is a query transformation of the Oracle optimizer to remove joins that are not required in a query. Released in 10g, Join Elimination got several improvements and new mechanisms in 11g, and even more in 12c.
Join Predicate PushDown (JPPD): Allows a view to be joined with an index-based nested-loop join method, which may provide a more optimal alternative. The following types of views support predicate pushdown: UNION ALL/UNION view, Outer-joined view, Anti-joined view, Semi-joined view, DISTINCT view and GROUP-BY view.
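As a quick illustration of the CNT case, consider a hypothetical table t whose column col is declared NOT NULL (table and column names are made up just for this sketch); the two queries below are then equivalent, and the CBO can rewrite the first into the second:

```sql
-- hypothetical table, just to illustrate the transformation
CREATE TABLE t (col NUMBER NOT NULL);

-- what the application submits:
SELECT COUNT(col) FROM t;

-- what the CBO can transform it into (no need to read the column value at all):
SELECT COUNT(*) FROM t;
```

If col were nullable, the two counts could differ (COUNT(col) skips NULLs), so the transformation would not apply.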
The standby state was APPLY-ON, but when I checked the standby for the MRP process, I found none running!
sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 4 17:05:12 2019
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
17:05:12 SYS@STANDBY AS SYSDBA> SELECT PROCESS, STATUS FROM v$managed_standby;
PROCESS STATUS
--------- ------------
ARCH CLOSING
ARCH CLOSING
ARCH CLOSING
ARCH CLOSING
RFS IDLE
RFS IDLE
RFS IDLE
RFS WRITING
8 rows selected.
Elapsed: 00:00:00.01
Checking further, I found that the database had been bounced while the MRP was running; that was the only explanation I could see for the difference reported by the DG Broker. Also, I could not find any error in the DG Broker log ($ORACLE_HOME/rdbms/log/drc*.log).
After setting the state to apply-off and then to apply-on, the issue was resolved.
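For reference, the state change can be done via DGMGRL as below (the database name is taken from the prompt above; adjust it to your configuration):

```
DGMGRL> edit database 'STANDBY' set state='APPLY-OFF';
Succeeded.
DGMGRL> edit database 'STANDBY' set state='APPLY-ON';
Succeeded.
```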
So, as a rollout strategy, we created a new OEM 13c to decommission a 12c one. However, during the tests, we noticed the Memory Utilization metric was a lot different between 12c and 13c. Why?
It happens that Memory Utilization is calculated differently between 12c and 13c, and it also seems 13c is more accurate, as per MOS note The Host Memory Utilization Percentage Calculation in Enterprise Manager Cloud Control (Doc ID 1908853.1).
Well, those who are familiar with memory usage computations in the operating system might get confused when examining the memory usage metric data from Enterprise Manager 12c and 13c Cloud Control. Metrics such as Memory Utilization (%) do not have an equivalent in the OS, although OS data is used in their derivation.
This is the formula used by Enterprise Manager 12.1.0.3 for Linux Memory Utilization (%), for example:
Memory Utilization (%) = 100 * (activeMem / realMem)
* Here, activeMem is Active Memory (Active in /proc/meminfo), and realMem is Total Memory (MemTotal).
Comparing this against MemFree, which is not a valid comparison, might give the impression that utilization is not being accurately represented.
Also, the “OEM 13c value” was already collected in OEM 12c, but under the metric name “Used Logical Memory”. Basically, “Memory Utilization” in 12c uses “activeMem”, while 13c uses “realMem-(freeMem+Buffers+Cached)”.
The formula in place in 13c is exactly the one introduced by the fix described in MOS EM 13c: Incorrect Memory Utilization Reported for Linux Hosts in Enterprise Manager 13.1.0.0.0 Cloud Control (Doc ID 2144976.1).
So, one of the most important things we need to do when setting up a new server, or checking its capacity, is to see how much memory Oracle is using.
When checking capacity, there are some practical things that always help me get a fast glimpse of the system:
When opening topas (on AIX) and hitting M, you will see the memory breakdown.
In the memory section you will see 3 categories: INUSE, FREE and FILECACHE. There you may see what is being used for what, but there is not much granularity.
Likewise, you only get a high-level view of usage. So here comes the question:
How do you prove that you have a memory shortage?
On Linux, I often use vmstat, looking for the si and so columns (swap in and swap out) to be equal to 0; with the free command, the swap row should also show no or very low swap being used.
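A minimal sketch of that check: parse the si/so columns (7 and 8) of vmstat output with awk. The sample lines below are canned data so the snippet runs anywhere; on a live server you would pipe `vmstat 5 3` into the same awk instead.

```shell
# Flag swap activity from vmstat-style output (canned sample; feed `vmstat 5 3` in practice).
printf '%s\n' \
  'procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----' \
  ' r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st' \
  ' 1  0      0 812340  20480 409600    0    0     5    12  100  200  2  1 96  1  0' |
awk 'NR > 2 { if ($7 + $8 > 0) print "swapping"; else print "no swap activity" }'
```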
To check a specific process’ memory usage (RSS), I often use ps along with other commands to calculate the memory for a specific process id, as below:
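A hedged sketch of that calculation, summing the RSS column (in KB) reported by ps. It is demoed here against the current shell PID ($$) so it runs anywhere; in practice you would pass the PID(s) you care about (for example, everything matching ora_):

```shell
# Sum the RSS column (KB) returned by ps for the given PID(s); $$ is just a demo PID.
ps -o rss= -p $$ | awk '{ total += $1 } END { printf "%d KB\n", total }'
```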
RSS is resident memory, but when it comes to shared memory like the Oracle SGA, the methods above can be misleading – not to say wrong – because as Oracle memory is shared, we may see double counting in the results. I sometimes use pmap to check a process’ memory as well, when available:
/home/oracle> pmap 35796
35796: ora_lg00_ORC1
total 0K
But still, when checking at a server-wide scope, do you want to keep doing manual work and lots of math? I don’t think so. 🙂
That’s why, when I came across smem, it made my life a lot easier. It is a Python script which gives you a nice breakdown of memory usage, without the misleading double counting.
You can see the commands and processes and their memory:
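For instance, something along these lines (assuming the smem flags -t for a totals row, -k for human-readable sizes and -P for a process-name filter are available in your version; the ora_ pattern is just an example filter):

```
# per-process view with a totals row, human-readable sizes, Oracle background processes only
smem -t -k -P "ora_"
```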
[root@greporasrv ~]# mysql sbtest -B -e 'show tables'|sed ':a;N;$!ba;s/\n/ engine=innodb; \n alter table /g'
Tables_in_sbtest engine=innodb;
alter table sbtest1 engine=innodb;
alter table sbtest10 engine=innodb;
alter table sbtest2 engine=innodb;
alter table sbtest3 engine=innodb;
alter table sbtest4 engine=innodb;
alter table sbtest5 engine=innodb;
alter table sbtest6 engine=innodb;
alter table sbtest7 engine=innodb;
alter table sbtest8 engine=innodb;
alter table sbtest9 engine=innodb;
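To see just the sed mechanics of the one-liner above, here it is applied to canned input mimicking the mysql -B output (so it runs without a MySQL server). Note that the first line carries the column header, which you would discard before running the generated DDL, and in my test the final table comes out without the engine=innodb suffix, so double-check the tail of the output:

```shell
# Join lines so each table name becomes an "alter table ... engine=innodb;" statement.
printf 'Tables_in_sbtest\nsbtest1\nsbtest2\n' |
sed ':a;N;$!ba;s/\n/ engine=innodb; \nalter table /g'
```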
So, now with more use of resources like Cloud servers, more and more silent installations are being done, right? Myself, I do it in silent mode whenever I can.
But what if I need to export X? Plus, what if I need to export it from the oracle user, but I can only log in as ec2-user, as usual?
Here is the process for that:
1) Connect to AWS EC2 instance
[user@securehost ~]$ ssh -X ec2-user@ipaddress
Last login: Fri Dec 7 14:41:41 2018 from grepora.srv.com
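From here, the usual pattern to pass the X credentials on to the oracle user is something like this (a sketch: the display number :10 and the magic cookie value are placeholders you would take from your own session):

```
# 2) Still as ec2-user, list the X authorization cookie for the forwarded display
xauth list $DISPLAY
# e.g.: ipaddress/unix:10  MIT-MAGIC-COOKIE-1  abcd1234abcd1234abcd1234abcd1234

# 3) Switch to oracle, add the same cookie and point DISPLAY at the forwarded display
sudo su - oracle
xauth add ipaddress/unix:10 MIT-MAGIC-COOKIE-1 abcd1234abcd1234abcd1234abcd1234
export DISPLAY=localhost:10.0

# 4) Test with a small X client (e.g. xclock or xdpyinfo) before launching the installer
```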
I was engaged on a report request where I needed to gather the TOP SQL by elapsed time, using Statspack. I got those, and then I was asked to do it again the following week, and the next; you can see where this is going. So I created a script which gives us the report, and I will not have to do it manually ever again 🙂
Usage: long_run_sql.sh [-h] [-r <minutes>] [-o <minutes>] [-d <days>]
Where:
If no parameters are used, 120 minutes and 7 days will be used as defaults for the history, and 5 minutes for currently running SQLs
-r = Set the time in minutes for the current running SQLs
-o = Specifies the minutes to be used for the long running SQLs History
-d = set the time period wanted for the execution history, default is 7 days
-h = Shows this help message
It might still have some bugs, but it is well enough to share here 🙂
There are some parameters you need to change at the top to suit your environment; of course, you must have Statspack working, and change the sqlplus connection line.
Some tweaks are always required, as no 2 environments are 100% alike. But it is still worth the work.
#!/bin/bash
#---------------------------------------------------------------------------
# Creates a report, using statspack, of the long running sqls from database
#
# History:
# Feb-21-2018 - Elisson Almeida - Created.
#---------------------------------------------------------------------------
#usage
### Environment setup
### Global variables
DIR=/home/oracle/scripts
LOG=/home/oracle/logs
DATE=$(date +%Y%m%d_%H%M%S)
NAME=long_running_sql_report
OUTPUT_FILE=${LOG}/${NAME}_${DATE}.log
ERROR_FILE=${LOG}/${NAME}.err
TMP_FILE=$DIR/$$.tmp
CONFIG_EMAIL_TO=
PATH=
ORACLE_SID=
ORACLE_BASE=
ORACLE_HOME=
#tns-admin if needed otherwise comment out
#TNS_ADMIN
RUNNING=5
HISTORY=120
Help()
{
echo "Usage: long_run_sql.sh [-h] [-r <minutes>] [-o <minutes>] [-d <days>]"
echo "Where:"
echo " If no parameters are used 120 minutes and 7 days will be used as default for History, 5 minutes for current running"
echo " -r = Set the time in minutes for the current running SQLs"
echo " -o = Specifies the minutes to be used for the long running SQLs History"
echo " -d = set the time period wanted for the execution history, default is 7 days"
echo " -h = Shows this help message"
}
### Main script section
RUNNING=5
HISTORY=120
DAY=7
while getopts :r:o:d:h option
do
case ${option} in
h) Help
exit ;;
r) RUNNING=${OPTARG} ;;
o) HISTORY=${OPTARG} ;;
d) DAY=${OPTARG} ;;
:)
echo "Error: -${OPTARG} requires an argument."
Help
exit;;
*) echo "Error: -${OPTARG} is not a valid argument."
Help
exit;;
esac
done
#Queries
SELECT_LONG_TODAY="
set lines 250
PROMPT LONG RUNNING SQLs FOR THE LAST 24 HOURS
col \"Last Active Time\" format a15
select * from (
select --min(s.first_load_time) \"First Time\"
max(LAST_ACTIVE_TIME) \"Last Active Time\"
,round(sum(s.elapsed_time)/greatest(sum(s.executions),1)/1000000/60,2) \"Elapsed Time per Exec (min)\"
,s.sql_id \"SQL ID\"
--,plan_hash_value \"Plan Hash Value\"
,sum(s.executions) \"Total Execs\"
--,count(distinct s.plan_hash_value) \"Plans Count\"
,sum(s.buffer_gets) \"Total Buffer Gets\"
,sum(s.cpu_time) \"Total CPU\"
,round(sum(s.disk_reads)/greatest(sum(s.executions),1),2) \"Disk Reads per Exec\"
,round(sum(s.buffer_gets)/greatest(sum(s.executions),1),2) \"Buffer Gets per Exec\"
,round(sum(s.cpu_time)/greatest(sum(s.executions),1)/1000000,2) \"CPU Time per Exec\"
,s.sql_text \"SQL Text\"
from gv\$sql s
where s.elapsed_time/greatest(s.executions,1)/1000000>${RUNNING}
and s.LAST_ACTIVE_TIME >= trunc(sysdate)
--and s.LAST_ACTIVE_TIME >= sysdate-2/24
-- group by trunc(s.LAST_ACTIVE_TIME), s.sql_id, s.plan_hash_value
group by s.sql_id, s.plan_hash_value,s.sql_text
order by 2 desc
) ;
"
SELECT_LONG_HISTORY="
set lines 250
PROMPT LONG RUNNING SQLs HISTORY
select * from (
select to_char(stime,'yyyymmdd') \"Day\"
,round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) \"Elapsed Time per Exec (min)\"
,sql_id \"SQL ID\"
/* ,OLD_HASH_VALUE ,sql_id */
,sum(EXECS) \"Total Executions\"
,sum(BUF_GETS) \"Total Buffer Gets\"
,sum(cpu_time) \"Total CPU\"
--,sum(ELAPS_TIME) \"Total Elapsed Time (min)\"
,round(sum(DISK_RDS)/greatest(sum(EXECS),1),2) \"Disk Reads per Exec\"
,round(sum(BUF_GETS)/greatest(sum(EXECS),1),2) \"Buffer Gets per Exec\"
,round(sum(cpu_time)/greatest(sum(EXECS),1)/1000000,2) \"CPU Time per Exec\"
-- ,row_number() over (partition by to_char(stime,'yyyymmdd') order by sum(cpu_time) desc) cpu_rank
from (select ssnapid, stime, sql_id, OLD_HASH_VALUE, TEXT_SUBSET,
case when EXECUTIONS < nvl(lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id),0) then EXECUTIONS
else EXECUTIONS - lag(EXECUTIONS) over (partition by sql_id order by sq.snap_id) END EXECS,
case when DISK_READS < nvl(lag(DISK_READS) over (partition by sql_id order by sq.snap_id),0) then DISK_READS
else DISK_READS - lag(DISK_READS) over (partition by sql_id order by sq.snap_id) END DISK_RDS,
case when BUFFER_GETS < nvl(lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id),0) then BUFFER_GETS
else BUFFER_GETS - lag(BUFFER_GETS) over (partition by sql_id order by sq.snap_id) END BUF_GETS,
case when ROWS_PROCESSED < nvl(lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id),0) then ROWS_PROCESSED
else ROWS_PROCESSED - lag(ROWS_PROCESSED) over (partition by sql_id order by sq.snap_id) END ROWSP,
case when CPU_TIME < nvl(lag(CPU_TIME) over (partition by sql_id order by sq.snap_id),0) then CPU_TIME
else CPU_TIME - lag(CPU_TIME) over (partition by sql_id order by sq.snap_id) END CPU_TIME,
case when ELAPSED_TIME < nvl(lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id),0) then ELAPSED_TIME else ELAPSED_TIME - lag(ELAPSED_TIME) over (partition by sql_id order by sq.snap_id) END ELAPS_TIME from STATS\$SQL_SUMMARY sq, (SELECT iv.start_snap_id as ssnapid, iv.end_snap_id as esnapid, iv.start_snap_time as stime, iv.end_snap_time as etime FROM (SELECT lag(dbid) over (order by dbid, instance_number, snap_id) AS start_dbid, dbid AS end_dbid, lag(snap_id) over (order by dbid, instance_number, snap_id) AS start_snap_id, snap_id AS end_snap_id, lag(instance_number) over (order by dbid, instance_number, snap_id) AS start_inst_nr, instance_number AS end_inst_nr, lag(snap_time) over (order by dbid, instance_number, snap_id) AS start_snap_time, snap_time AS end_snap_time, lag(startup_time) over (order by dbid, instance_number, snap_id) AS start_startup_time, startup_time AS end_startup_time FROM perfstat.stats\$snapshot where snap_time > trunc(sysdate-${DAY})
/* where snap_time between to_date('04/12/2017 00:00','dd/mm/yyyy hh24:mi')
and to_date('04/12/2017 23:31','dd/mm/yyyy hh24:mi') */
) iv
WHERE iv.start_snap_id IS NOT NULL
AND iv.start_dbid=iv.end_dbid
AND iv.start_inst_nr=iv.end_inst_nr
AND iv.start_startup_time=iv.end_startup_time
) i
where i.esnapid = sq.SNAP_ID
)
where EXECS is not null and stime > trunc(sysdate-7)
--and ELAPS_TIME/greatest(execs,2)/1000000/60>${HISTORY}
group by to_char(stime,'yyyymmdd'), OLD_HASH_VALUE,sql_id
having round(sum(ELAPS_TIME)/greatest(sum(EXECS),1)/1000000/60,1) > ${HISTORY}
order by 1, 8 desc,2,3
);
"
## Create the report
sqlplus -s /@${ORACLE_SID}<< __eof__ > ${OUTPUT_FILE}
SET MARKUP HTML ON SPOOL ON
SET FEEDBACK OFF
alter session set nls_date_format='HH24:MI DD-MON-YY';
spool ${LOG}/long_run_report.html
${SELECT_LONG_TODAY}
PROMPT
PROMPT
${SELECT_LONG_HISTORY}
__eof__
#Send report to email
#cat long_run_report.html | mail -s "$(echo "Daily Long runing SQL Report for ${ORACLE_SID}\nContent-Type: text/html")" ${CONFIG_EMAIL_TO}
(
echo "To: ${CONFIG_EMAIL_TO}";
echo "Subject: Daily Long running SQL Report for ${ORACLE_SID}";
echo "Content-Type: text/html";
echo "MIME-Version: 1.0";
echo "";
echo "Parameters used: ";
echo "";
echo "Currently running SQL threshold: ${RUNNING} minutes ";
echo "";
echo "Long running SQL history threshold: ${HISTORY} minutes ";
echo ""
echo "SQL history time span ${DAY} days"
cat ${LOG}/long_run_report.html;
) | sendmail -t
#Removing old report files from the file system
find ${LOG} -name "${NAME}_*" -mtime +30 -exec rm {} \;
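Once the variables at the top are filled in, an invocation looks like this (the threshold values are just examples):

```
# current-running threshold of 10 min, history threshold of 60 min, 14 days of snapshots
./long_run_sql.sh -r 10 -o 60 -d 14
```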
Hi all,
To manage Oracle trace files the way to go is ADRCI. You can see this post from Matheus if you have not read it yet.
In the last part of this post there is a small piece of bash code to configure this for all databases and listeners running on a server.
You could add:
adrci exec="set home $1;purge -age 10080 -type ALERT";
In this case the age parameter is in minutes, but you would still be required to run it periodically, which could mean yet another script in crontab to be managed.
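To cover every home at once, a small loop can generate one purge command per ADR home. The sketch below only prints the commands (a dry run) so you can review them before executing; the two home paths are made-up examples, and in practice you would take the list from `adrci exec="show homes"`:

```shell
# Dry run: print one adrci purge command per (example) ADR home; remove the echo to execute.
for H in diag/rdbms/orcl/orcl diag/tnslsnr/greporasrv/listener; do
  echo "adrci exec=\"set home ${H}; purge -age 10080 -type ALERT\""
done
```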
So the solution that I found best, as it leverages an existing tool, is called logrotate.
Logrotate is a Unix/Linux program that helps, as its name says, rotate any log file you need. You just need to create a configuration file and place it in /etc/logrotate.d on most Linux distributions.
But when you have a server with several databases, listeners and more, it starts to get a bit tedious and time consuming to create these manually.
In this post on the Pythian Blog you will find how to create them, but it does not handle the listener.log nor the log.xml, so I added this piece here:
DEST=/etc/logrotate.d   # where the generated logrotate config files go (adjust as needed)
for L in `\ps -ef | grep tnslsnr | grep -v grep | sed s'/-.*$//g' | awk '{print $NF}'`
do
OUT=${DEST}/"logrotate_lsnr_"${L}   # distinct name, so the xml loop below does not overwrite it
LSRN_LOG=`lsnrctl status ${L} | grep "Listener Log File" | awk '{print $NF}'`
echo ${LSRN_LOG%.*}".log*" " {" > ${OUT}
cat << ! >> ${OUT}
daily
rotate 1
compress
notifempty
}
!
echo ${OUT} has been generated
done
for L in `\ps -ef | grep tnslsnr | grep -v grep | sed s'/-.*$//g' | awk '{print $NF}'`
do
OUT=${DEST}/"logrotate_xml_"${L}
LSRN_LOG=`lsnrctl status ${L} | grep "Listener Log File" | awk '{print $NF}'`
echo ${LSRN_LOG%.*}"*.xml" " {" > ${OUT}
cat << ! >> ${OUT}
daily
rotate 1
compress
}
!
echo ${OUT} has been generated
done
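After generating the files, you can dry-run a configuration with logrotate’s debug flag before trusting it to cron (the file name here is an example produced by the loop above):

```
logrotate -d /etc/logrotate.d/logrotate_xml_LISTENER
```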
Using logrotate really helps in managing the Oracle-related log files which are not handled by ADRCI.
Hey all,
As usual, a client reached out with this issue:
OMSPatcher finds that previous patching session is not yet completed.
Please refer log file "/u01/app/oracle/middleware/cfgtoollogs/omspatcher/28018178/omspatcher_2018-07-09_23-44-58PM_deploy.log"
for the previous session and execute the script "/u01/app/oracle/middleware/.omspatcher_storage/oms_session/scripts_2018-07-09_23-44-39PM/run_script_singleoms_resume.sh" to complete the previous session. OMSPatcher can proceed to execute new operations only if previous session is completed successfully.
Interesting, right?
This means a patch execution in July failed and it wasn’t noticed.
What to do? Point is, the error itself already says what needs to be done.
You may just want to do it properly. How? Here is a quick Action Plan:
ZERO) Check the deploy log to understand the root cause of the failure of the previous patch, and fix it.
In my case?
Not all required components were down. A simple “stop oms” stops only the OMS managed server, JVMD engine, and HTTP server, but leaves Node Manager and Administration Server running. However, “stop oms -all” stops all Enterprise Manager processes, including Administration Server, OMS, HTTP Server, Node Manager, Management Server, JVMD engine, and Oracle BI Publisher (if it is configured on the host). This was the fix.
Step-by-Step:
1. Blackout targets to avoid unwanted pages.
– On OEM: Enterprise–>Monitoring–>Blackouts
2. Shutdown OMS and AGENT
cd $AGENT_HOME/bin
./emctl stop agent
cd $OMS_HOME/bin
./emctl stop oms -all
3. Resume patching (with the provided command)
(in my case):
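In this case, that meant running the resume script pointed out by the error message itself:

```
sh /u01/app/oracle/middleware/.omspatcher_storage/oms_session/scripts_2018-07-09_23-44-39PM/run_script_singleoms_resume.sh
```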