KSar: Generating Graphs from SAR Reports

We all know SAR (System Activity Report); however, sometimes it's difficult to visualize a large amount of data or to extract meaningful long-term information from it.
How wonderful would it be to have a graphical visualization of this data? Well, it's pretty simple using KSAR.

KSAR is a BSD-licensed, Java-based application that creates graphs of all parameters from the data collected by the Unix sar utilities, and the graphs can be exported to PDF, JPG, PNG, CSV, TXT and other formats.
The project code is here. The latest version is KSar2-0.0.4.

As an example, see below an I/O graph for the month of December, generated from a database server:

[Image: GrepOra-srv.jpg, I/O graph from the database server]

To use it, the first thing is to have the SAR data. To get it, we basically have 4 options:
A. Collect it from the current server.
B. Extract it from another server using a direct SSH connection.
C. Use a generated SAR file.
D. Run the Java tool from the client server.

Personally, I prefer option C, to avoid putting any code on client servers and to be as unintrusive as possible.
I also don't use option B because we usually don't have a direct connection to the client server, only jumpboxes or similar.
There is a third reason: when choosing option A or B, only the current day's data is collected automatically, while with option C you can include all the data you need; it just has to be available on the server.

For reference regarding Option D, please check this link.

By the way, some other useful information about SAR:
1. The SAR collection jobs can be checked in /etc/cron.d/sysstat.
2. The SAR retention can be checked/adjusted in /etc/sysconfig/sysstat (see the example below).
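
For reference, on a typical RHEL/OL 6 server those two files look something like this (the exact values vary between sysstat versions, so take it only as an illustration):

# /etc/cron.d/sysstat -- collection jobs
*/10 * * * * root /usr/lib64/sa/sa1 1 1
53 23 * * * root /usr/lib64/sa/sa2 -A

# /etc/sysconfig/sysstat -- retention, in days of history to keep
HISTORY=28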

OK, now how do we generate the SAR files?
Using the command: sar -A

Example:

[root@grepora-srvr ~]# cd /var/log/sa/
[root@grepora-srvr sa]# ls -lrt |tail -10
total 207080
-rw-r--r-- 1 root root 3337236 Dec 24 23:50 sa24
-rw-r--r-- 1 root root 3756100 Dec 24 23:53 sar24
-rw-r--r-- 1 root root 3337236 Dec 25 23:50 sa25
-rw-r--r-- 1 root root 3756113 Dec 25 23:53 sar25
-rw-r--r-- 1 root root 3337236 Dec 26 23:50 sa26
-rw-r--r-- 1 root root 3756104 Dec 26 23:53 sar26
-rw-r--r-- 1 root root 3337236 Dec 27 23:50 sa27
-rw-r--r-- 1 root root 3756096 Dec 27 23:53 sar27
-rw-r--r-- 1 root root 3337236 Dec 28 23:50 sa28
-rw-r--r-- 1 root root 3756100 Dec 28 23:53 sar28
-rw-r--r-- 1 root root 2317668 Dec 29 16:30 sa29
[root@grepora-srvr sa]# sar -A -f sa29 > sa29.txt
[root@grepora-srvr sa]# cat sa29.txt |head -10
Linux 3.8.13-118.4.2.el6uek.x86_64 (grepora-srvr) 12/29/2017 _x86_64_ (40 CPU)
12:00:01 AM CPU %usr %nice %sys %iowait %steal %irq %soft %guest %idle
12:10:01 AM all 97.74 0.00 1.71 0.01 0.00 0.00 0.52 0.00 0.02
12:10:01 AM 0 96.46 0.00 2.59 0.02 0.00 0.00 0.92 0.00 0.01
12:10:01 AM 1 98.55 0.00 1.24 0.01 0.00 0.00 0.20 0.00 0.00
12:10:01 AM 2 97.83 0.00 2.04 0.01 0.00 0.00 0.11 0.00 0.02
12:10:01 AM 3 98.44 0.00 1.41 0.01 0.00 0.00 0.14 0.00 0.01
12:10:01 AM 4 98.28 0.00 1.65 0.00 0.00 0.00 0.06 0.00 0.01
12:10:01 AM 5 98.27 0.00 1.70 0.00 0.00 0.00 0.02 0.00 0.00
[root@grepora-srvr sa]#

Once you have this file, copy it from the client server to your machine and import it using the KSAR interface. It's pretty intuitive and easy to use.
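
A minimal sketch of that workflow, assuming the jar is called kSar.jar (the actual file name depends on the version you downloaded) and reusing the hypothetical host/path from the example above:

# copy the generated text file from the client server to your machine
scp root@grepora-srvr:/var/log/sa/sa29.txt .
# launch KSAR and import the file through the GUI
java -jar kSar.jar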

But how do we generate a file with all available days, or a specific set of days in the past?
Here is a script I use for this:

### All days of SAR available on the server
DT=$(ls /var/log/sa/sa[0-9][0-9] | tr '\n' ' ' | sed 's/\/var\/log\/sa\/sa/ /g')
## Explicit days
#DT="07 08 09"
#DT="12"
## Today only
#DT=$(date +"%d")
# Start with an empty output file
>/tmp/sar-$(hostname)-multiple.txt
# Append the full SAR report for each selected day
for i in $DT; do
  LC_ALL=C sar -A -f /var/log/sa/sa$i >> /tmp/sar-$(hostname)-multiple.txt
done
ls -l /tmp/sar-$(hostname)-multiple.txt

After this, you can copy the generated file to your PC and generate the same report.

Hope you enjoy it!

Cheers!
Matheus.


Oracle ACE Program

Hello all!
I'm glad to share that I've been nominated as an Oracle ACE Associate!
Here is the link to my profile in the ACE Directory.

But what is an Oracle ACE?

The Oracle ACE Program is designed to recognize and reward members of the Oracle Technology and Applications communities for their contributions. These individuals are technically proficient and willingly share their knowledge and experiences through presentations, blog posts, articles, social media posts, community forum support and more.

So, there are 3 levels of (active) Oracle ACEs:
ACE Associate: The entry level, where I am right now.
ACE: Proficient Community Advocate.
ACE Director: Expert Community Advocate.

I was nominated in 3 areas of expertise: Database Management & Performance, Cloud, and MySQL.

As additional info: it seems I'm the youngest ACE in Brazil, and I'm also the only Brazilian ACE with the Cloud expertise.

So, from now on, you'll see the following logo as a badge in our right menu:

[Image: Oracle ACE Associate badge]

Thank you for all your support and for keeping me motivated to share and learn together all this time!

Cheers!


DFS Lock Handle During RMAN Backup on Standby Database

Hi all!
Some time ago I found sessions waiting on the DFS lock handle event during an RMAN backup on a standby database. By the way, running backups on a standby is a very interesting approach to offload them from the primary, so all primary nodes can stay fully dedicated to the application.

It turned out that, in my situation, the contention was with the apply (managed recovery) process. The fix? Quite simple: stop the apply, run the backup, and then restart the apply. In my case, using the Data Guard Broker (replace <standby_db_unique_name> with your standby's DB_UNIQUE_NAME):

dgmgrl
connect /
show configuration
show database '<standby_db_unique_name>'
edit database '<standby_db_unique_name>' set state = 'APPLY-OFF';

-- Run the backup here (see the RMAN sketch below)

edit database '<standby_db_unique_name>' set state = 'APPLY-ON';
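
The backup itself is just a regular RMAN backup taken on the standby host. A minimal sketch, assuming OS authentication and default channels (adjust the options to your environment):

$ rman target /
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;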


Hope it helps!
Cheers!

Formatting DBMS_METADATA.GET_DDL Output on SQL*Plus

Hey all!
Issues formatting the output of DBMS_METADATA.GET_DDL on SQL*Plus? Me too!

Check below the best way to format it. Add to your scripts!

set long 20000 longchunksize 20000 pagesize 0 linesize 1000 feedback off verify off trimspool on
column ddl format a1000

begin
   dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'SQLTERMINATOR', true);
   dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'PRETTY', true);
end;
/
-- Now run your GET_DDL call, for example:
select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') as ddl from dual;
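
With those settings in place you can also spool the DDL straight to a file; for example, to extract the DDL of all tables in your schema:

spool my_tables_ddl.sql
select dbms_metadata.get_ddl('TABLE', table_name) as ddl from user_tables;
spool off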

Hope you enjoy!
Cheers!

11g Feature: Fine Grained Dependency Tracking (FGDT)

Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to make a series of posts with really quick notes about each one of them.
You can see all the posts in this series on my posts page, along with some others.

Ready? Here it goes:

11g Feature: Fine Grained Dependency Tracking (FGDT)

In previous versions, object dependencies were managed at the object level, so altering an object automatically invalidated all of its dependent objects.
Oracle 11g has more granular dependency management, so only changes that directly affect a dependent object cause an invalidation.

This is not a feature under the control of developers or DBAs, but I think it's important to mention here, since it has a significant impact on development and deployment processes.
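
A quick way to see it in action is a sketch like the one below (made-up object names; on 10g the ALTER TABLE would invalidate the procedure, while on 11g and later it stays valid because the procedure does not reference the new column):

create table fgdt_demo (a number, b number);

create or replace procedure fgdt_proc is
  v number;
begin
  select a into v from fgdt_demo where rownum = 1;
end;
/

select object_name, status from user_objects where object_name = 'FGDT_PROC';
-- FGDT_PROC   VALID

alter table fgdt_demo add (c number);

select object_name, status from user_objects where object_name = 'FGDT_PROC';
-- FGDT_PROC   VALID   (on 10g it would now show INVALID)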

More details can be found here: http://www.orafaq.com/node/2683

Finding Trace Files Being Written Right Now!

Hey!
I was not sure about the title for this post, but I bet everyone, at least once, has needed to know which file is being modified at this exact moment on a filesystem/server.

Some days ago I noticed something was filling up my filesystem. I cleared a few gigabytes of logs from the Diag Home, but the space went back to 100% very quickly. What was consuming the space?
Easy:

1. Create a new file.

$ touch a.log

2. Find everything newer than this file.

$ find . -newer a.log

Here you go!
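
Another handy variation, if you already suspect Oracle trace files, is to look only at recently modified .trc files under the diag destination (the path below is just an example; use your own diagnostic_dest):

# .trc files modified in the last 5 minutes, largest first
find /u01/app/oracle/diag -type f -name '*.trc' -mmin -5 -exec ls -lhS {} +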

In my situation, after finding this, I noticed there was a session hitting a bug and generating thousands of messages in a trace file.
I killed the session, kept part of the messages for analysis, and cleared the file. Issue solved.

Hope it helps!

11g Feature: Fine-Grained Access Control (FGAC) on Network Services

Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to make a series of posts with really quick notes about each one of them.
You can see all the posts in this series on my posts page, along with some others.

Ready? Here it goes:

Fine-Grained Access Control (FGAC) on Network Services

Oracle supplies PL/SQL utility packages such as UTL_TCP, UTL_SMTP, UTL_MAIL, UTL_HTTP and UTL_INADDR to access network services. In 11g, Oracle enhanced the security around them:
rather than granting execute privileges on these packages to PUBLIC, you now need to create an Access Control List (ACL) in order to use them. Some ACL-related data dictionary views are DBA_NETWORK_ACLS and [DBA|USER]_NETWORK_ACL_PRIVILEGES.

> To create an ACL:

SQL> begin
  DBMS_NETWORK_ACL_ADMIN.create_acl(
    acl         => 'example.xml',
    description => 'Example ACL',
    principal   => 'EXAMPLE',
    is_grant    => TRUE,
    privilege   => 'connect');
  commit;
end;
/

> Once the ACL is created, additional users or privileges can be added using the DBMS_NETWORK_ACL_ADMIN.add_privilege procedure:

SQL> BEGIN
  DBMS_NETWORK_ACL_ADMIN.add_privilege ( 
    acl         =>  'example.xml', 
    principal   => 'SCOTT',
    is_grant    => FALSE, 
    privilege   => 'connect', 
    position    => NULL, 
    start_date  => NULL,
    end_date    => NULL);
  COMMIT;
END;
/

* DBMS_NETWORK_ACL_ADMIN.delete_privilege can be used to drop privileges and DBMS_NETWORK_ACL_ADMIN.drop_acl to drop the ACL.
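
A minimal cleanup sketch using those two calls (same hypothetical ACL and principal as above):

SQL> BEGIN
  DBMS_NETWORK_ACL_ADMIN.delete_privilege(
    acl       => 'example.xml',
    principal => 'SCOTT');
  DBMS_NETWORK_ACL_ADMIN.drop_acl(
    acl => 'example.xml');
  COMMIT;
END;
/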

> To assign the ACL to a network host:

SQL> begin
  DBMS_NETWORK_ACL_ADMIN.assign_acl(
    acl  => 'example.xml',
    host => 'grepora');
  commit;
end;
/
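
To double-check what ended up granted, you can query the dictionary views mentioned above, for example:

SQL> select acl, host, lower_port, upper_port from dba_network_acls;
SQL> select acl, principal, privilege, is_grant from dba_network_acl_privileges;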

See you next week!