12c New Feature: Unused Privileges

Did you know 12c now has a mechanism to examine which privileges are actually being used by a user, a module, or the database as a whole, as opposed to merely the privileges granted?
This is a great security improvement: it lets you grant users precisely the privileges they need, without granting too much.

Here’s how it works:

1. First you have to set up a capture process using the new package DBMS_PRIVILEGE_CAPTURE.

# Example:

BEGIN 
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE( 
        name         => 'all_priv_analysis_pol', 
        description  => 'database-wide policy to analyze all privileges', 
        type         => DBMS_PRIVILEGE_CAPTURE.G_DATABASE); 
END;
/

This creates the policy database-wide, but the capture still has to be enabled before anything is recorded, as shown below. Once enabled, let it run for a bit.
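
A minimal sketch of enabling it, reusing the policy name from the example above:

BEGIN
DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(
        name         => 'all_priv_analysis_pol');
END;
/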

2. After letting the capture run, you can generate the results:

# Example

BEGIN
-- the policy must be disabled before its results can be generated
DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(
        name         => 'all_priv_analysis_pol');
DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(
        name         => 'all_priv_analysis_pol');
END;
/

And then examine the new views:

DBA_USED_PRIVS 
DBA_UNUSED_PRIVS 
DBA_USED_OBJPRIVS 
DBA_UNUSED_OBJPRIVS
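
For example, to see only what was flagged as unused by this policy, a quick sketch (the CAPTURE column is assumed here to hold the policy name):

select *
from dba_unused_privs
where capture = 'all_priv_analysis_pol';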

3. To clean up, drop the capture (it must already be disabled, which we did in step 2):

BEGIN 
DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE( 
        name         => 'all_priv_analysis_pol'); 
END;
/

So now you can follow the best practice of least privilege, granting users only what they actually use.
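
Based on what shows up as unused, you can then revoke it. A hypothetical example (user and privilege invented purely for illustration):

revoke create any table from app_user;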

Cheers!

Exacheck: The bundle patch version installed does not match the bundle patch version registered in the database

Hi all!
So, running Exacheck on a recently created database, I found this error:

 FAIL => The bundle patch version installed does not match the bundle patch version registered in the database: [host]:[sid],...

This means that a bundle patch was applied with OPatch to the OH, but its sqlpatch part was not applied to this database. It happens because Exacheck tries to match the patch info stored in the oraInventory with the patch info stored in dba_registry_sqlpatch.
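
To see what the database itself has registered, you can query dba_registry_sqlpatch directly. A quick sketch:

select patch_id, action, status, description
from dba_registry_sqlpatch
order by action_time;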

Also note that in some situations running datapatch may require the database to be in upgrade mode. If you are patching Exadata, which is generally a RAC-based environment, you need to set cluster_database=false and have at least 1 job_queue_processes before starting the database with the startup upgrade command. This should be described in the readme of the related patch.
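
A minimal sketch of that sequence on a RAC database (the parameter values and exact ordering here are assumptions; the patch readme is the authority):

-- with the environment pointing to the target database, on one instance:
alter system set cluster_database=false scope=spfile sid='*';
alter system set job_queue_processes=10 scope=spfile sid='*';  -- example value; it just needs to be >= 1
-- stop the database on all nodes (e.g. via srvctl), then on one node:
startup upgrade
-- now run $ORACLE_HOME/OPatch/datapatch -verbose from the OS, and afterwards revert:
alter system set cluster_database=true scope=spfile sid='*';
shutdown immediate
-- finally, start the database normally on all nodes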

When checking for this, I found a really interesting validation script here. As per:

# get the database bundle patch ID installed in the Oracle Home (ignoring the JavaVM patch)
opatch_bp=$($ORACLE_HOME/OPatch/opatch lspatches 2>/dev/null|grep -iwv javavm|grep -wi database|head -1|awk -F';' '{print $1}')
# check whether that patch ID is registered as successfully applied in dba_registry_sqlpatch
database_bp_status=$(echo -e "set heading off feedback off timing off \n select STATUS from dba_registry_sqlpatch where PATCH_ID = $opatch_bp;"|$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" | sed -e '/^ *$/d')
if [ "$database_bp_status" == "SUCCESS" ]
then
      echo "SUCCESS: Bundle patch installed in the database matches the software home and is installed successfully."
else
      echo "FAILURE: Bundle patch installed in the database does not match the software home, or is installed with errors."
fi

To fix it, just set the environment variables to the correct database, go to $ORACLE_HOME/OPatch and run:

Continue reading

Trace on Standby: Tracing MRP Process

Hey all!
Recently I got the following error when trying to start a trace on a standby. The idea was to trace the MRP process to check on performance, but this is what happened when trying to use DBMS_MONITOR.session_trace_enable:

oracle:standby-srvr /tmp: sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 19 14:07:56 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> exec DBMS_MONITOR.session_trace_enable(session_id =>1970, serial_num=>55126, waits=>TRUE, binds=>TRUE);
BEGIN DBMS_MONITOR.session_trace_enable(session_id =>1970, serial_num=>55126, waits=>TRUE, binds=>TRUE); END;

*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_MONITOR.SESSION_TRACE_ENABLE' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

OMG!
Don’t panic: since the standby is mounted and not open, the data dictionary is not available and DBMS_MONITOR cannot be resolved, but you can accomplish the same using oradebug:

-- In my case:
SQL> select process,pid from V$managed_standby where process like '%MRP%';

PROCESS PID
--------- ------------------------
MRP0 19564

SQL> oradebug setospid 19564
Oracle pid: 105, Unix process pid: 19564, image: oracle@standby-srvr (MRP0)
SQL> oradebug unlimit
Statement processed.
SQL> oradebug Event 10046 trace name context forever, level 12
Statement processed.
SQL>

You can apply the same to any process/SID you need.

In case you want to get the SPID from a SID:

col machine format a30
col process format 999999
select p.spid,b.sid, p.pid
from v$session b, v$process p
where b.paddr=p.addr
and sid=&sid
/

Ok, and how do you stop it then?
Easy:

SQL> oradebug setospid 19564
SQL> oradebug Event 10046 trace name context off
SQL> oradebug tracefile_name

Enjoy!

Recover Standby Using an Incremental Backup

Hey all,
I know there are a lot of posts about this on the internet, but I’m writing my own anyway. The main reason is that the others have lots of steps and outputs, and I miss simplicity. Hope you enjoy this one as well.

But why?
Sometimes the standby database breaks for some reason (for example, after NOLOGGING operations in the primary database), and then it can be necessary to recover the datafiles from an incremental backup, as the archived logs will not contain the required information.
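
To confirm that NOLOGGING operations are the cause, one quick check (a sketch, run on the primary) is the unrecoverable change information in v$datafile:

select file#, unrecoverable_change#, unrecoverable_time
from v$datafile
where unrecoverable_change# > 0;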

The recovery of the standby is relatively simple.

1. Take a cumulative incremental backup on the primary

Example

Either take a level 1 backup

backup as compressed backupset incremental level 1 cumulative database filesperset 15;

Or get the scn to recover from on the standby using the following SQL

select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil =d.file# and d.enabled != 'READ ONLY';

2. If you went with the SCN approach, run the backup on the primary using that SCN

backup as compressed backupset incremental from scn &scn database;

3. Create a standby controlfile on the primary

alter database create standby controlfile as '/tmp/newctfl.ctl';

4. Copy incremental backup files and controlfile to standby

5. Restore copied Controlfile

Example

SQL> shutdown abort; 
SQL> startup nomount; 
RMAN> restore controlfile from '/tmp/newctfl.ctl'; 
SQL> alter database mount;

6. Catalog the backup files in the standby controlfile

Example

catalog start with '/tmp/rmanbackup/';

7. Recover the database with no redo

Example

recover database noredo;

8. Start the managed recovery

Example

alter database recover managed standby database using current logfile disconnect from session;
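
After that, it is worth confirming that the MRP is back and applying redo. A quick check (a sketch):

select process, status, sequence#
from v$managed_standby
where process like 'MRP%';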

This is also an easy alternative to the much more complex procedure described in MOS note Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes (Doc ID 958181.1).

Hope you enjoy!
Cheers!

CRS Not Starting after Removing OS User: How to Workaround and How to Solve!

Hello all!
Turns out that a few days ago a client reached out to me because his CRSD was simply not starting. Like this:

[root@proddb proddb]$ ./crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'proddb'
CRS-2676: Start of 'ora.crsd' on 'proddb' succeeded

[root@proddb proddb]$ ps -ef |grep crsd
root 19217 13424 0 11:53 pts/0 00:00:00 grep crsd

After some investigation, I found the following:

2017-01-24 14:00:06.859: [ CRSSEC][1690195712]{1:51052:2} Exception: OwnerEntry construction failed to retrieve user id by name with ACL string: owner:jacknobody:rwx and error: 1
2017-01-24 14:00:06.912: [ CRSSEC][1690195712]{1:51052:2} Exception: ACL entry creation failed for: owner:jacknobody:rwx

Hmmm, seems some CRS resources are owned by “Jack Nobody”… Turns out this user was removed from the OS:

[root@proddb proddb]$ cat /etc/passwd |grep jacknobody
[root@proddb proddb]$ 

What to do now?

Continue reading

EMagent 12c with high number of threads causing “su: cannot set user id: Resource temporarily unavailable”

Hello all,
Yeah, these days I got some errors, and when validating the server I found the following:

su: cannot set user id: Resource temporarily unavailable

As you can imagine, in order to fix the issue, I adjusted /etc/security/limits.conf, increasing the oracle user's nproc limits to:

oracle           soft    nproc           4047
oracle           hard    nproc           20384

Ok, turns out that after a while I got the same errors again…
After some investigation, I found that the EM Agent process had 5020 threads!
Take a look:

Continue reading

How To Delete Archive Logs From Downstream Database

After deploying GoldenGate with the downstream database option, are the archived logs on the downstream database not being cleaned up? All database transactions are flowing gracefully, but the archives (on the downstream) are not being deleted?

Are there any long-running transactions registered on the GG Extract?
Check this:

GGSCI (dbcloud) 3> send ETL01 showtrans

Sending SHOWTRANS request to EXTRACT ETL01 ...

Oldest redo log files necessary to restart Extract are:
------------------------------------------------------------
XID:                  14.6.3931092
Items:                1
Extract:              ETL01
Redo Thread:          1
Start Time:           2017-00-00:00:00:00
SCN:                  1682.4049305132 (7228184297004)
Redo Seq:             8612
Redo RBA:             20965491728
Status:               Running
------------------------------------------------------------
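
On the downstream database you can also check how far the capture has advanced, since archived logs only become purgeable once their SCN range falls below this checkpoint. A quick sketch:

select capture_name, capture_type, required_checkpoint_scn
from dba_capture;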

If you are in the same situation, do this:

It should select (and delete) ONLY the archived logs that have already been read by GoldenGate.

set serveroutput on size unlimited
set line 1000
set trimsp on
set feed off
set pages 5000
set pagesize 0
spool [[ some_dir ]]/delete_archives__dowstream_goldengate.sh

-- SR 3-14409179111 - Golden Gate Configuration, How To Delete Archive Logs From Downstream Database (Doc ID 2011174.1)
SELECT 'rm ' || r.NAME
-- case when (r.next_scn > c.required_checkpoint_scn) then 'NO' else 'YES' end purgable
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
  and r.source_database = c.source_database
  and r.next_scn < (select min(required_checkpoint_scn) from dba_capture where capture_type='DOWNSTREAM')
  and purgeable='YES'
order by modified_time;
spool off
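
Before running the generated script, a quick sanity check of what the downstream database still considers non-purgeable may help (a sketch):

select purgeable, count(*)
from dba_registered_archived_log
group by purgeable;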

Schedule the output in an automation tool (crontab / a database job / Windows job scheduler, whatever):