Verifying topology of Exadata

Hey all!
Some time ago I needed to check the topology of a client’s Exadata due to a network issue, and I made a very useful note. Sharing it with you now. 😀

This and other cool commands can be found here: Network Diagnostics information for Oracle Database Machine Environments (Doc ID 1053498.1)

# /opt/oracle.SupportTools/ibdiagtools/verify-topology -t quarterrack

Newer versions don’t require the -t option.
In the case of a half rack, -t halfrack should be used.
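For example, on a half rack:

# /opt/oracle.SupportTools/ibdiagtools/verify-topology -t halfrack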

OK, but how do you know which rack type you have? You can get it from here:

[root@greporaexa onecommand]# grep -i MACHINETYPES databasemachine.xml
<MACHINETYPES>X4-2 Eighth Rack HC 4TB</MACHINETYPES>

Hope it helps! 🙂

Scheduler Job for OS Audit Trail Cleanup Routine

Hello all!
As you all know, most OS files generated by Oracle, like traces and logs, can be cleared/managed using ADRCI. However, more than once I have seen a filesystem get full of audit trail files.
In general, I see DBAs and companies implementing shell scripts to housekeep those files, usually some variation of the find with rm I posted some weeks ago. However, we have a very good “official” solution for this, using DBMS_AUDIT_MGMT.

Basically, we can create Scheduler Jobs in our databases to keep track of this. The advantages are several: the job cleans on both nodes, it keeps all Oracle audit metadata correctly updated, you may receive OEM notifications on failure if you monitor jobs, etc.
You can see more details in MOS Note 731908.1: New Feature DBMS_AUDIT_MGMT to Manage and Purge Audit Information.

How did I do it?

1. Ran an initial cleanup to clear files older than a week.
2. Set the last archive timestamp to a week ago.
3. Created the purge job, purging files older than that.

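A minimal PL/SQL sketch of those three steps (assuming the OS audit trail; the 7-day window and job name are illustrative, see the MOS note for all options):

BEGIN
  -- 1. Initial cleanup setup for the OS audit trail
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    default_cleanup_interval => 24 /* hours */);
  -- 2. Only files older than this timestamp become eligible for purge
  --    (on RAC, the OS trail may need rac_instance_number set per node)
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP - INTERVAL '7' DAY);
  -- 3. Scheduler job purging everything older than the last archive timestamp
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    audit_trail_purge_interval => 24 /* hours */,
    audit_trail_purge_name     => 'OS_AUDIT_PURGE_JOB',
    use_last_arch_timestamp    => TRUE);
END;
/

Note the timestamp is set only once here; to keep a rolling one-week window, it has to be advanced periodically, for example with a small scheduler job calling SET_LAST_ARCHIVE_TIMESTAMP.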

Change display settings on linux with Disper

From time to time, I change the Linux distro on my laptop or just do a fresh install on it. And once in a while, I have random problems with external displays. It can be something really “simple” like not detecting the external monitor, or something crazy like the image below.

screenshot of the bug

As you can see in the image, Mint detected the display but mirrored it in a crazy way that works and doesn’t work at the same time. If you’ve tried everything and it still doesn’t work, or you just want to skip the whole job of configuring complex text files, give Disper a try. Download the latest version, extract it to any folder, and run make install from the extracted folder:

cd disper-x.y.z  # illustrative: the folder extracted from the downloaded archive
make install

After that, you can start using it… There are a few options that will serve you well.

disper -e #extend your display
disper -c #clone your display
disper -s #only your external display

Database in Cloud: Quickest Procedure to Clone a Schema

Hey all!
So, we all know that operating with files/dump files can be tricky when using DBaaS in a public cloud. In some situations, like the Amazon RDS service, we simply don’t have access to the OS.

In this scenario, how can we quickly clone a schema in the database? Using IMPDP with a Database Link.

Also note that when working in AWS environments, avoiding dump files when dealing with expdp/impdp is also encouraged to save IOPS on the local disks (IOPS are capped based on machine type).

This is, of course, also valid for on-premises environments with limited space for dump files.

For this to work, we only need to create a database link pointing to the same database.
Also, of course, it is mandatory to use remap_schema, with optional clause remap_tablespace.

In the following example the link name is loop, and we proceed with the impdp from a jumpbox with an Oracle client and TNS configuration pointing to the RDS database.

1. Creating Database Link:

SQL> create database link loop connect to my_sysdba identified by "***" using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=service-name)))';

Database link created.

SQL> select * from dual@loop;

D
-
X

2. Running IMPDP:

[oracle@jumpbox ~]$ impdp schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log

Import: Release 11.2.0.4.0 - Production on Wed Feb 7 21:03:54 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Starting "SYS"."SYS_IMPORT_SCHEMA_02": /******** AS SYSDBA schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.481 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NEW_SCHEMA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
...
. . imported "NEW_SCHEMA"."TABLE1" 0 rows
. . imported "NEW_SCHEMA"."TABLE2" 0 rows
. . imported "NEW_SCHEMA"."TABLE3" 0 rows
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Wed Feb 7 21:08:52 2018 elapsed 0 00:04:54

Hope it helps,
Cheers!

Am I in RHEL or OEL?

OK, first time accessing a client, and you have this question?

# RHEL:

[grepora-rhel-server]$ rpm -qf /etc/redhat-release 
redhat-release-server-6Server-6.9.0.4.el6.x86_64

# OEL:

[grepora-oel-server]$ rpm -qf /etc/redhat-release
enterprise-release-5-0.0.2

You may also have some other options like:

[grepora-fedora-server]$ rpm -qf /etc/redhat-release 
fedora-release-20-3.noarch
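If you just want a quick look at the release strings, printing all the release files also works (file names vary between distros, hence the wildcard):

[grepora-any-server]$ cat /etc/*-release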

Hope you enjoy it!
Cheers!

Recover Standby Using an Incremental Backup

Hey all,
I know there are a lot of posts about this on the internet, but I’m writing mine anyway. The main reason is that the others have lots of steps and outputs, and I miss simplicity. Hope you enjoy this one as well.

But why?
Sometimes the standby database fails for some reason – for example when doing NOLOGGING actions in the primary database – and then it can be necessary to recover the datafiles from an incremental backup, as the archived logs will not contain the information needed.
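A quick way to check whether the primary has generated unrecoverable (NOLOGGING) changes, not part of the original procedure but a handy sanity check, is to query v$datafile on the primary:

select file#, unrecoverable_change#, unrecoverable_time from v$datafile where unrecoverable_change# > 0;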

The recovery of the standby is relatively simple.

1. Take a cumulative incremental backup on the primary

Example

Either take a level 1 backup

backup as compressed backupset incremental level 1 cumulative database filesperset 15;

Or get the SCN to recover from on the standby using the following SQL:

select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil = d.file# and d.enabled != 'READ ONLY';

2. Run the backup using the SCN

backup as compressed backupset incremental from scn &scn database;

3. Create a copy of controlfile for Standby

alter database create standby controlfile as '/tmp/newctfl.ctl';

4. Copy incremental backup files and controlfile to standby
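Example (hostname illustrative; assuming the backup pieces were written to /tmp/rmanbackup/ on the primary):

scp /tmp/newctfl.ctl oracle@standby-host:/tmp/newctfl.ctl
scp /tmp/rmanbackup/* oracle@standby-host:/tmp/rmanbackup/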

5. Restore copied Controlfile

Example

SQL> shutdown abort; 
SQL> startup nomount; 
RMAN> restore controlfile from '/tmp/newctfl.ctl'; 
SQL> alter database mount;

6. Catalog the backup files in the standby controlfile

Example

catalog start with '/tmp/rmanbackup/';

7. Recover the database with no redo

Example

recover database noredo;

8. Start the managed recovery

Example

alter database recover managed standby database using current logfile disconnect from session;

This is also an easy alternative to the much more complex procedure described in MOS note Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes (Doc ID 958181.1).

Hope you enjoy!
Cheers!

KSar: Generating Graphs from SAR Reports

We all know SAR (System Activity Report); however, sometimes it’s difficult to visualize a large amount of data or even extract some long-term meaningful information.
How wonderful would it be to have a graphical visualization of this data? Well, it’s pretty simple using KSAR.

KSAR is a BSD-licensed, Java-based application that creates graphs of all parameters from the data collected by Unix sar utilities; the graphs can be exported to PDF, JPG, PNG, CSV, TXT and other formats.
The project code is here. The latest version is KSar2-0.0.4.
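Once downloaded, it runs like any Java application (the jar file name varies by release, so adjust it accordingly):

java -jar ksar.jar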

See below, as an example, an I/O graph for the month of December generated from a database server:

GrepOra-srv.jpg

To use it, the first thing is to have SAR data. To get it, we basically have four options:
A. Collect it from the current server.
B. Extract it from another server using a direct SSH connection.
C. Use a generated SAR file.
D. Run the Java tool from the client server.

Personally, I prefer option C, in order to avoid putting any code on client servers and to work in the least intrusive way possible.
I also don’t use option B because we usually don’t have a direct connection to the client server, only jumpboxes or similar.
There is a third reason: when choosing option A or B, only the current day’s data is collected automatically, but when using C you can include all the data you need; it only has to be available on the server.

For reference regarding Option D, please check this link.

By the way, some other useful information about SAR:
1. SAR collection jobs can be checked in /etc/cron.d/sysstat
2. SAR retention can be checked/adjusted in /etc/sysconfig/sysstat (see the example below)
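For example, the retention period in days is controlled by the HISTORY variable (the value below is illustrative):

[root@grepora-srvr ~]# grep HISTORY /etc/sysconfig/sysstat
HISTORY=28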

OK, now how do we generate the SAR files?
Using the command: sar -A

Example:

[root@grepora-srvr ~]# cd /var/log/sa/
[root@grepora-srvr sa]# ls -lrt |tail -10
total 207080
-rw-r--r-- 1 root root 3337236 Dec 24 23:50 sa24
-rw-r--r-- 1 root root 3756100 Dec 24 23:53 sar24
-rw-r--r-- 1 root root 3337236 Dec 25 23:50 sa25
-rw-r--r-- 1 root root 3756113 Dec 25 23:53 sar25
-rw-r--r-- 1 root root 3337236 Dec 26 23:50 sa26
-rw-r--r-- 1 root root 3756104 Dec 26 23:53 sar26
-rw-r--r-- 1 root root 3337236 Dec 27 23:50 sa27
-rw-r--r-- 1 root root 3756096 Dec 27 23:53 sar27
-rw-r--r-- 1 root root 3337236 Dec 28 23:50 sa28
-rw-r--r-- 1 root root 3756100 Dec 28 23:53 sar28
-rw-r--r-- 1 root root 2317668 Dec 29 16:30 sa29
[root@grepora-srvr sa]# sar -A -f sa29 > sa29.txt
[root@grepora-srvr sa]# cat sa29.txt |head -10
Linux 3.8.13-118.4.2.el6uek.x86_64 (grepora-srvr) 12/29/2017 _x86_64_ (40 CPU)
12:00:01 AM CPU %usr %nice %sys %iowait %steal %irq %soft %guest %idle
12:10:01 AM all 97.74 0.00 1.71 0.01 0.00 0.00 0.52 0.00 0.02
12:10:01 AM 0 96.46 0.00 2.59 0.02 0.00 0.00 0.92 0.00 0.01
12:10:01 AM 1 98.55 0.00 1.24 0.01 0.00 0.00 0.20 0.00 0.00
12:10:01 AM 2 97.83 0.00 2.04 0.01 0.00 0.00 0.11 0.00 0.02
12:10:01 AM 3 98.44 0.00 1.41 0.01 0.00 0.00 0.14 0.00 0.01
12:10:01 AM 4 98.28 0.00 1.65 0.00 0.00 0.00 0.06 0.00 0.01
12:10:01 AM 5 98.27 0.00 1.70 0.00 0.00 0.00 0.02 0.00 0.00
[root@grepora-srvr sa]#

With this file, you can copy it from the client server to your machine and import it using the KSAR interface. It’s pretty intuitive and easy to use.

But how do you generate all available days, or a set of specific days in the past?
Here is a script I use for this:

### All Days of SAR
DT=$(ls /var/log/sa/sa[0-9][0-9] | tr '\n' ' ' | sed 's/\/var\/log\/sa\/sa/ /g')
## Explicit Days
#DT="07 08 09"
#DT="12"
# Today
#DT=`date +"%d"`
# Truncate/create the output file, then append each selected day
>/tmp/sar-$(hostname)-multiple.txt
for i in $DT; do
  LC_ALL=C sar -A -f /var/log/sa/sa$i >> /tmp/sar-$(hostname)-multiple.txt
done
ls -l /tmp/sar-$(hostname)-multiple.txt

After this, you can copy the generated file to your PC and generate the same report.
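For example (reusing the hostname from the listing above; adjust the path to your environment):

scp root@grepora-srvr:/tmp/sar-grepora-srvr-multiple.txt .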

Hope you enjoy it!

Cheers!
Matheus.