Script: Map ASM Disks to Physical Devices

Hey all!
So, I had to map a couple of ASM disks to their physical devices. The mapping is not direct, though, which means some manual work.

To save myself from this, I found a great post by Alejandro Vargas, with a very nice script to make this mapping easier.

However, it was written for RHEL/OEL 6 and older, and I'm on OEL 7, so I made some small changes to adapt it.

Anyway, I decided to share it, as this is a great script to have handy. 🙂

# Alejandro’s script (RHEL/OEL 6 and older):

#!/bin/ksh
# For each ASM disk, parse the [major, minor] pair reported by querydisk
# (the first number inside the brackets is the major, the second the minor)
# and match it against the device list in /dev.
for i in `/etc/init.d/oracleasm listdisks`
do
v_asmdisk=`/etc/init.d/oracleasm querydisk $i | awk '{print $2}'`
v_major=`/etc/init.d/oracleasm querydisk $i | awk -F'[' '{print $2}' | awk -F']' '{print $1}' | awk '{print $1}'`
v_minor=`/etc/init.d/oracleasm querydisk $i | awk -F'[' '{print $2}' | awk -F']' '{print $1}' | awk '{print $2}'`
v_device=`ls -la /dev | grep $v_major | grep $v_minor | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_major $v_minor]"
done

# Adjustments by Matheus (RHEL/OEL7):

#!/bin/ksh
# Same logic for OEL7: oracleasm now lives in /usr/sbin, querydisk needs -d,
# and the major/minor pair comes comma-separated.
for i in `/usr/sbin/oracleasm listdisks`
do
v_asmdisk=`/usr/sbin/oracleasm querydisk -d $i | awk '{print $2}'`
v_major=`/usr/sbin/oracleasm querydisk -d $i | awk -F'[' '{print $2}' | awk -F']' '{print $1}' | awk -F, '{print $1}'`
v_minor=`/usr/sbin/oracleasm querydisk -d $i | awk -F'[' '{print $2}' | awk -F']' '{print $1}' | awk -F, '{print $2}'`
v_device=`ls -la /dev | grep $v_major | grep $v_minor | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_major $v_minor]"
done

# Example of execution:

[root@greporasrv]$ for i in `/usr/sbin/oracleasm listdisks`
> do
> v_asmdisk=`/usr/sbin/oracleasm querydisk -d $i | awk '{print $2}'`
> v_major=`/usr/sbin/oracleasm querydisk -d $i | awk -F'[' '{print $2}' | awk -F']' '{print $1}' | awk -F, '{print $1}'`
> v_minor=`/usr/sbin/oracleasm querydisk -d $i | awk -F'[' '{print $2}' | awk -F']' '{print $1}' | awk -F, '{print $2}'`
> v_device=`ls -la /dev | grep $v_major | grep $v_minor | awk '{print $10}'`
> echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_major $v_minor]"
> done
ASM disk "DATA01" based on /dev/sdg1 [8 97]
ASM disk "DATA02" based on /dev/sdh1 [8 113]
ASM disk "DATA03" based on /dev/sdi1 [8 129]
ASM disk "DATA04" based on /dev/sdj1 [8 145]
ASM disk "FRA01" based on /dev/sdk1 [8 161]

Hope you enjoy it like I did.
Cheers!

/bin/rm: cannot execute [Argument list too long]

Hey all!

Just a quick but useful thing today. How many times have you run into this?

/bin/rm: cannot execute [Argument list too long]

Ok, so, first thing: is it related to Oracle logs? If so, you may want to use ADRCI. Check this post for more info: ADRCI Retention Policy and Ad-Hoc Purge Script for all Bases.

If not, you can solve it using find combined with rm. Ok, but what if you want to keep the most recent files?

Some examples for you, removing audit files:

# Remove files older than 1 day (based on modification time):

find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -mtime +1 -exec rm {} \;

# Remove files older than 1 hour (based on change time):

find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -cmin +60 -exec rm {} \;
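
This works because the shell never expands the whole file list into a single command line: find streams the names and -exec runs rm once per file. For huge directories, ending -exec with + instead of \; batches many files into each rm call and is much faster:

# Same cleanup, batching arguments instead of running one rm per file
find /oracle/greporadb/admin/greporadb/adump -name "*.aud" -mtime +1 -exec rm {} +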


Tips to Improve your Oracle DB Shellscripting!

There are some small tricks to speed up your shellscripting for Oracle DBs. I'm pretty sure you can do almost magic with just these:

# To get all OS users with an Oracle instance on the server:

ps -ef | grep "[o]ra_pmon" | awk -F_ '{print $1}' | cut -f1 -d" " | sort -u

# To get all DB SIDs on the server:

ps -ef | grep "[o]ra_pmon" | awk -F_ '{print $3}'

# To get all DB names on the server (the SID minus its trailing instance number):

ps -ef | grep "[o]ra_pmon" | awk -F_ '{print substr($3, 1, length($3)-1)}'

# To get the Oracle Home from /etc/oratab for a database (replace dbname with your own):

grep "^dbname:" /etc/oratab | cut -d: -f2 -s

# Looping over any of the lists from the commands above (same pattern for DBUSER, DBNAME or DBSID):

for DBNAME in `<command_from_list_above>`; do [...]; done

# Example:
Take the DB instance and Oracle Home for each DBName under each OS user:

for DBUSER in `ps -ef | grep "[o]ra_pmon" | awk -F_ '{print $1}' | cut -f1 -d" " | sort -u`
do
  for DBNAME in `ps -ef | grep "[o]ra_pmon" | grep ${DBUSER} | awk -F_ '{print substr($3, 1, length($3)-1)}'`
  do
    DBSID=`ps -ef | grep "[o]ra_pmon" | grep -i ${DBNAME^^} | awk -F_ '{print $3}'`
    DBOH=`grep "^${DBNAME,,}:" /etc/oratab | cut -d: -f2 -s`
  done
done

# Variable upper case or lower case (bash 4+):

For uppercase: ${DBNAME^^}
For lowercase: ${DBNAME,,}

# Putting Colors on your messages:

RED='\033[0;31m'
GREEN='\033[0;32m'
PURPLE='\033[0;35m'
NC='\033[0m' # No Color
echo -e "Status: ${GREEN}All Good!${NC}";
echo -e "Status: ${RED}ERROR!${NC}"
echo -e "Status: ${PURPLE}Do you like messages in purple?${NC}"

# Conditional: is there any line with a specific content?

if [ `grep "text_I_want" myfile.log | wc -l` -gt 0 ]; then
echo -e "Count greater than zero: ${GREEN}Line exists!${NC}";
else echo -e "Count equals zero: ${RED}No line with this text...${NC}"
fi;
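
Since grep already returns a nonzero exit code when nothing matches, the same test can be written without counting at all:

if grep -q "text_I_want" myfile.log; then
  echo -e "${GREEN}Line exists!${NC}"
else
  echo -e "${RED}No line with this text...${NC}"
fi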

# Some other related posts:
Shellscript & Oracle: Run Script for a List of Databases
Shellscript: Using eval and SQLPlus
Recursive string change
Grepping Entries from Alert.log
Grepping Alert by Day
Shellscript: Master Blaster KB!

PS: Thanks to Jared Still for suggesting improvements and saving some greps (grep ora_pmon | grep -v grep -> grep [o]ra_pmon). That's a nice tip. Thanks! 😀

Anything I forgot to put here? Tell me with a comment!
Cheers!

Disk Space is not Released After Deleting Files

When deleting a large file, the deletion succeeds but the filesystem usage does not reflect the change.
The files were deleted, but only restarting the JVM or Java process holding them open would release the disk space.
This usually occurs with log files.

The lsof command shows the files held open in the system.
For example: lsof | grep deleted

java 15138 oracle 2959r REG 253,3 5875027 131163 /logs/soa_domain/WLS1_SOA1/WLS1_SOA1.out03422 (deleted)
java 15138 oracle 3054r REG 253,3 10480928 131166 /logs/soa_domain/WLS1_SOA1/WLS1_SOA1-diagnostic-81.log (deleted)
java 15138 oracle 3062r REG 253,3 10479417 131200 /logs/soa_domain/WLS1_SOA1/WLS1_SOA1-diagnostic-82.log (deleted)

The command output shows the PID, owner, file descriptor (FD), size and file name.
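
A handy variation, assuming your lsof build supports it: +L1 lists only files with a link count below one, i.e. open-but-deleted files, which you can then sort by the size column:

# Open-but-deleted files, largest first (SIZE/OFF is column 7)
lsof +L1 | sort -k7 -nr | head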

Workaround:
If you can't restart the process, it is possible to de-allocate the space consumed by an in-use file by forcing the system to truncate it:
$ echo > /proc/<pid>/fd/<fd_number>

Be careful not to truncate unwanted files.

In my case:
$ echo > /proc/15138/fd/2959
$ echo > /proc/15138/fd/3054
$ echo > /proc/15138/fd/3062
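
If there are many descriptors to clean, a small sketch like the one below truncates every deleted-but-open file of one PID (the PID is the one from the example above). Review the lsof listing first; this is destructive and assumes nothing still needs those file contents:

PID=15138
for fd in /proc/$PID/fd/*; do
    # /proc symlinks to unlinked files carry a "(deleted)" suffix
    if readlink "$fd" | grep -q '(deleted)'; then
        echo "Truncating $fd -> $(readlink "$fd")"
        : > "$fd"
    fi
done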

That’s all for today.

Shellscript & Oracle: Run Script for a List of Databases

Hey all!
Quick tip/script for today: how do you quickly run the same script against a list of databases?

In my case the user/password is the same in all databases, but if that's not your case, you can write a similar awk command to retrieve users and passwords from a file.

The list:

$: cat /tmp/dbs.cfg
db01
db02
db03
testdb

The script:

#!/bin/ksh
for DBSID in ${*:-$(awk -F: '!/^#/ {print $1}' /tmp/dbs.cfg)}
do
    print "
        connect user/password@${DBSID}
        @script_to_run.sql
        exit " |
    sqlplus /nolog >> /tmp/output_test.log
done
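
Note the ${*:-...} expansion: positional arguments, when present, take precedence over the list in /tmp/dbs.cfg. So a call could look like this (the script name is illustrative):

# All databases from /tmp/dbs.cfg:
ksh run_for_dbs.sh
# Only the ones given as arguments:
ksh run_for_dbs.sh db01 testdb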

Hope it helps!
Cheers!

Scheduler Job for OS Audit Trail Cleanup Routine

Hello all!
As you all know, most OS files generated by Oracle, like traces and logs, can be cleared/managed using ADRCI. However, more than once I have seen a filesystem get full of audit trail files.
In general, I see DBAs and companies implementing shell scripts to housekeep those files, usually some variation of the find with rm I posted some weeks ago. However, we have a very good "official" solution for this, using DBMS_AUDIT_MGMT.

Basically, we can create Scheduler Jobs in our databases to keep track of this. The advantages are several: the job cleans up on both nodes, all Oracle audit metadata is kept correctly updated, you may receive OEM notifications on failure if you monitor your jobs, etc.
You can see more details in MOS Note 731908.1 New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information.

How did I do it?

1. Initial cleanup to clear files older than a week.
2. Set the last archive timestamp to a week ago.
3. Created the purge job, purging files older than a week.

As per the example below.
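
A minimal PL/SQL sketch of those steps for the OS audit trail follows. The interval values and the job name are illustrative, and on RAC the SET_LAST_ARCHIVE_TIMESTAMP call also accepts a rac_instance_number for the OS/XML trails; check the MOS note for the options that fit your environment. Note that the last archive timestamp must be set before the initial cleanup can honor it:

-- Enable the cleanup infrastructure (interval in hours)
BEGIN
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    default_cleanup_interval => 24);
END;
/

-- Set the last archive timestamp to a week ago
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP - INTERVAL '7' DAY);
END;
/

-- Initial cleanup: purge everything older than that timestamp
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    use_last_arch_timestamp => TRUE);
END;
/

-- Recurring purge job honoring the last archive timestamp (interval in hours)
BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'OS_AUDIT_PURGE_JOB',
    use_last_arch_timestamp    => TRUE);
END;
/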


Change display settings on Linux with Disper

From time to time I change the Linux distro on my laptop, or just do a fresh install, and once in a while I have random problems with external displays. It can be something really "simple" like not detecting the external monitor, or something crazy like the image below.

[Screenshot of the bug]

As you can see in the image, Mint detected the display but mirrored it in a crazy way that works and doesn't work at the same time. If you've tried everything and it still doesn't work, or you just want to skip the whole job of configuring complex text files, give Disper a try. Download the latest version, extract it to any folder, and run make install from the extracted folder.

make install

After that, you can start using it… There are a few options that will serve you well.

disper -e # extend your display
disper -c # clone your display
disper -s # only your external display


Am I in RHEL or OEL?

OK, it's your first time accessing a client's environment, and you have this question?

# RHEL:

[grepora-rhel-server]$ rpm -qf /etc/redhat-release 
redhat-release-server-6Server-6.9.0.4.el6.x86_64

# OEL:

[grepora-oel-server]$ rpm -qf /etc/redhat-release
enterprise-release-5-0.0.2

You may also have some other options like:

[grepora-fedora-server]$ rpm -qf /etc/redhat-release 
fedora-release-20-3.noarch
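
On newer OEL versions there is also a dedicated /etc/oracle-release file; assuming it is present, the check looks something like this (release string illustrative):

[grepora-oel-server]$ cat /etc/oracle-release
Oracle Linux Server release 7.4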

Hope you enjoy it!
Cheers!

Recover Standby Using an Incremental Backup

Hey all,
I know there are a lot of posts about this on the internet, but I'm writing my own anyway. The main reason is that the others have lots of steps and outputs, and I miss simplicity. Hope you enjoy this one as well.

But why?
Sometimes the standby database fails for some reason – for example, after NOLOGGING actions in the primary database – and it becomes necessary to recover the datafiles from an incremental backup, as the archived logs will not contain the required information.

The recovery of the standby is relatively simple.

1. Take a cumulative incremental backup on the primary

Example

Either take a level 1 backup

backup as compressed backupset incremental level 1 cumulative database filesperset 15;

Or get the SCN to recover from, using the lower of the values returned by the following SQLs on the standby:

select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil =d.file# and d.enabled != 'READ ONLY';
SELECT CURRENT_SCN FROM V$DATABASE;

2. Run the backup using the SCN

backup as compressed backupset incremental from scn &scn database;

3. Create a copy of controlfile for Standby

alter database create standby controlfile as '/tmp/newctfl.ctl';

4. Copy incremental backup files and controlfile to standby
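
Example (hostname and paths illustrative, assuming SSH access to the standby; the targets match the restore and catalog steps below):

scp /tmp/rmanbackup/* standby-host:/tmp/rmanbackup/
scp /tmp/newctfl.ctl standby-host:/tmp/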

5. Restore copied Controlfile

Example

SQL> shutdown abort; 
SQL> startup nomount; 
RMAN> restore controlfile from '/tmp/newctfl.ctl'; 
SQL> alter database mount;

6. Catalog the backup files in the standby controlfile

Example

catalog start with '/tmp/rmanbackup/';

7. Recover the database with no redo

Example

recover database noredo;

8. Start the managed recovery

Example

alter database recover managed standby database using current logfile disconnect from session;
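
To confirm the standby is applying changes again, a quick check on the standby:

select process, status, sequence# from v$managed_standby;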

This is also an easy alternative to the much more complex procedure described in MOS note Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes (Doc ID 958181.1).

Hope you enjoy!
Cheers!

KSar: Generating Graphs from SAR Reports

We all know the SAR (System Activity Report), but sometimes it's difficult to visualize a large amount of data or to extract some long-term meaningful information from it.
How wonderful would it be to have a graphical visualization of this data? Well, it's pretty simple using KSar.

KSar is a BSD-licensed, Java-based application that creates graphs of all parameters from the data collected by Unix sar utilities; the graphs can be exported to PDF, JPG, PNG, CSV, TXT and other formats.
The project code is here. The latest version is KSar2-0.0.4.

See below, as an example, an I/O graph for the month of December, generated from a database server:

[I/O graph: GrepOra-srv.jpg]

To use it, the first thing is to have the SAR data. To get it, we have basically four options:
A. Collect it from the current server.
B. Extract it from another server using a direct SSH connection.
C. Use a generated SAR file.
D. Run the Java tool from a client server.

Personally, I prefer option C, in order to avoid putting any code on client servers and to work in the least intrusive way possible.
I also don't use option B because we usually don't have a direct connection to the client server, but rather jumpboxes or similar.
There is a third reason: when choosing option A or B, only the current day's data is collected automatically, while with C you can include all the data you need. It only needs to be available on the server.

For reference regarding Option D, please check this link.

By the way, some other useful information about SAR:
1. SAR collection jobs can be checked in /etc/cron.d/sysstat.
2. SAR retention can be checked/adjusted in /etc/sysconfig/sysstat.

Ok, now how do we generate the SAR files?
Using the command: sar -A

Example:

[root@grepora-srvr ~]# cd /var/log/sa/
[root@grepora-srvr sa]# ls -lrt |tail -10
total 207080
-rw-r--r-- 1 root root 3337236 Dec 24 23:50 sa24
-rw-r--r-- 1 root root 3756100 Dec 24 23:53 sar24
-rw-r--r-- 1 root root 3337236 Dec 25 23:50 sa25
-rw-r--r-- 1 root root 3756113 Dec 25 23:53 sar25
-rw-r--r-- 1 root root 3337236 Dec 26 23:50 sa26
-rw-r--r-- 1 root root 3756104 Dec 26 23:53 sar26
-rw-r--r-- 1 root root 3337236 Dec 27 23:50 sa27
-rw-r--r-- 1 root root 3756096 Dec 27 23:53 sar27
-rw-r--r-- 1 root root 3337236 Dec 28 23:50 sa28
-rw-r--r-- 1 root root 3756100 Dec 28 23:53 sar28
-rw-r--r-- 1 root root 2317668 Dec 29 16:30 sa29
[root@grepora-srvr sa]# sar -A -f sa29 > sa29.txt
[root@grepora-srvr sa]# cat sa29.txt |head -10
Linux 3.8.13-118.4.2.el6uek.x86_64 (grepora-srvr) 12/29/2017 _x86_64_ (40 CPU)
12:00:01 AM CPU %usr %nice %sys %iowait %steal %irq %soft %guest %idle
12:10:01 AM all 97.74 0.00 1.71 0.01 0.00 0.00 0.52 0.00 0.02
12:10:01 AM 0 96.46 0.00 2.59 0.02 0.00 0.00 0.92 0.00 0.01
12:10:01 AM 1 98.55 0.00 1.24 0.01 0.00 0.00 0.20 0.00 0.00
12:10:01 AM 2 97.83 0.00 2.04 0.01 0.00 0.00 0.11 0.00 0.02
12:10:01 AM 3 98.44 0.00 1.41 0.01 0.00 0.00 0.14 0.00 0.01
12:10:01 AM 4 98.28 0.00 1.65 0.00 0.00 0.00 0.06 0.00 0.01
12:10:01 AM 5 98.27 0.00 1.70 0.00 0.00 0.00 0.02 0.00 0.00
[root@grepora-srvr sa]#

With this file in hand, you can copy it from the client server to your machine and import it using the KSar interface. It's pretty intuitive and easy to use.

But how do you generate a file with all available days, or a set of specific days in the past?
Here is a script I use for this:

### All days of SAR available in /var/log/sa
DT=$(ls /var/log/sa/sa[0-9][0-9] | tr '\n' ' ' | sed 's/\/var\/log\/sa\/sa/ /g')
## Explicit days
#DT="07 08 09"
#DT="12"
## Today only
#DT=`date +"%d"`
# Truncate the output file, then append each day's full report
>/tmp/sar-$(hostname)-multiple.txt
for i in $DT; do
  LC_ALL=C sar -A -f /var/log/sa/sa$i >> /tmp/sar-$(hostname)-multiple.txt
done
ls -l /tmp/sar-$(hostname)-multiple.txt

After this, you can copy the generated file to your PC and generate the same report.

Hope you enjoy it!

Cheers!
Matheus.