Identifying the top segments

Hello readers! My name is Bruno Kremer, this is the first post in a series, and I will be talking about how we can identify the top segments of a database.

Introduction

It's well known that we can create automated tasks to collect and save the space used/allocated by database objects, such as saving snapshots of the DBA_SEGMENTS view. But what if this is your first contact with a specific database and you need to identify the top segments, estimate their growth ratio, check the history of allocated space, or even perform some kind of capacity planning? There are some alternatives to answer these questions, but in this post I will share the starting point. Please feel free to customize the scripts to your own needs.
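
Just for reference, here is a minimal sketch of the kind of snapshot routine mentioned above. The table name, job name and schedule are my own choices for illustration, not something from this post; run it as a DBA user and adapt it to your own standards.

-- History table with the same shape as the columns we want to keep
CREATE TABLE seg_size_hist AS
SELECT sysdate AS snap_date, owner, segment_name, segment_type, tablespace_name, bytes
  FROM dba_segments
 WHERE 1 = 0;

-- Daily snapshot job
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'SEG_SIZE_SNAPSHOT',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          INSERT INTO seg_size_hist
                          SELECT sysdate, owner, segment_name, segment_type, tablespace_name, bytes
                            FROM dba_segments;
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/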

Checking the top sized segments


select
s.owner,
s.segment_name,
s.segment_type,
round(sum(nvl(s.bytes,0))/1024/1024) size_mb
from dba_segments s
where s.bytes > 1048576 -- higher than 1MB
group by s.owner, s.segment_name, s.segment_type
order by size_mb desc
fetch first &TOP rows only;

Input values: &TOP – limit the number of rows returned.

Filters you might want to use: and s.segment_type in ('&OBJECT_TYPE') – ‘TABLE’, ‘TABLE PARTITION’, ‘INDEX’…


Note: the scripts used in this series were tested on 12.1.0.2 databases. Some of them use the "FETCH FIRST" clause to limit the number of rows returned, but if you are using older versions of Oracle Database you can still use the old-fashioned ROWNUM approach.

Example:


select * from (
select s.owner, s.segment_name, s.segment_type, round(sum(nvl(s.bytes,0))/1024/1024) size_mb
from dba_segments s
where s.bytes > 1048576 -- higher than 1MB
group by s.owner, s.segment_name, s.segment_type
order by size_mb desc
) where rownum <= &TOP;

Now that you have an idea of the size of the largest database segments, you might want to check the top growing segments… In upcoming posts we will talk about how we can use the AWR data dictionary views and some DBMS_SPACE procedures to estimate space usage history and identify the top growing segments.
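
As a preview, here is a minimal sketch of such a query against the AWR segment statistics. It assumes AWR data is available and licensed (Diagnostics Pack); the snapshot window (last 30 days here) is just an example:

select o.owner,
       o.object_name,
       o.object_type,
       round(sum(s.space_allocated_delta)/1024/1024) growth_mb
  from dba_hist_seg_stat s,
       dba_hist_seg_stat_obj o,
       dba_hist_snapshot sn
 where s.dbid = o.dbid
   and s.obj# = o.obj#
   and s.dataobj# = o.dataobj#
   and s.snap_id = sn.snap_id
   and s.dbid = sn.dbid
   and s.instance_number = sn.instance_number
   and sn.begin_interval_time > sysdate - 30 -- example window: last 30 days
 group by o.owner, o.object_name, o.object_type
having sum(s.space_allocated_delta) > 0
 order by growth_mb desc
 fetch first &TOP rows only;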

Oracle Security: Immediate Protection for JVM Exploits (CVE-2018-3110)

Hello all!

Now that CVE-2018-3110 is a hot topic, I think this is a pretty interesting subject to cover.

So, we all know the JVM is consistently one of the components with the most CVEs for Oracle Databases. Basically because you can create Java objects in the database (which I think is an abomination :D) and run that code there, usually with some tricks to escalate privileges to DBA, to the PDB, to the CDB, to the host and to other CDBs…

The problem is that (before 18c) OJVM PSU patches are not RAC rolling installable, which means you will need a maintenance window to apply fixes for this component. Quite bad, huh? And what if you discover a vulnerability and your PSU window is only a month or so away?

Well, we have a solution 🙂
It is well described in MOS Oracle Recommended Patches — “Oracle JavaVM Component Database PSU and Update” (OJVM PSU and OJVM Update) Patches (Doc ID 1929745.1), under the name “Mitigation Patch”.

It basically consists of installing Patch 19721304: SCRIPT TO LOCK DOWN JAVA DEVELOPMENT, which is a rolling patch and allows you to block the creation of any new Java objects. This assumes, of course, that exploits work by creating new Java objects in the database (as most Java CVEs do). Also, this patch doesn't have any version requirement (after 9i).

Having the patch, however, doesn't mean you are automatically protected against any vulnerability; it means you can protect yourself temporarily by disabling the creation of new Java objects with "exec dbms_java_dev.disable;" at any time.

The Mitigation Patch does not remove Java objects or block any Java execution; it only disables the creation of new Java objects, so if the exploit is already planted, it's not a viable solution. The Mitigation Patch can be used in any scenario where the PSU or the proper JVM fixes cannot be applied at the moment but you still want some protection against JVM vulnerabilities.

Now, before downloading the patch, first check whether it's already installed in your Oracle Home, as it's part of some Bundle Patches, like "Database Bundle Patch : 12.1.0.2.180417 (27338029)".
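
With the patch (or a Bundle Patch that includes it) in place, a minimal usage sketch looks like this:

-- Lock down: block the creation of new Java objects (existing Java code keeps running)
SQL> exec dbms_java_dev.disable;

-- After the proper OJVM PSU is applied in the maintenance window, re-enable Java development
SQL> exec dbms_java_dev.enable;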

Important Note: The intent is for this to be a "workaround" while the PSU is planned. It was not built to be a definitive solution. The idea is just to block the creation of new Java objects until the fix is applied in the properly planned maintenance window.

Hope it helps!

WHEN OTHERS -> NULL: Hiding your PL/SQL Errors?

Are you using WHEN OTHERS -> NULL to hide your PLSQL errors?
Don’t be so sure…

WHEN OTHERS exception handlers that do nothing and don't re-raise errors using RAISE or RAISE_APPLICATION_ERROR can often hide code failures that result in hard-to-identify bugs.

To avoid this, a new PL/SQL compiler warning was added in 11g to identify this kind of situation. Check the example below:

SQL> ALTER SESSION SET plsql_warnings = 'enable:all';

Session altered.

SQL> CREATE OR REPLACE PROCEDURE warning_test AS
  2  BEGIN
  3    RAISE_APPLICATION_ERROR(-20000, 'This is an Exception!');
  4  EXCEPTION
  5    WHEN OTHERS THEN
  6      NULL;
  7  END;
  8  /

SP2-0804: Procedure created with compilation warnings

SQL> SHOW ERRORS
Errors for PROCEDURE WARNING_TEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
5/8      PLW-06009: procedure "WARNING_TEST" OTHERS handler does not end in
         RAISE or RAISE_APPLICATION_ERROR

Nice, right?!
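
For completeness, one way to clear the warning is to do whatever handling you need and then re-raise. A minimal sketch based on the same procedure:

CREATE OR REPLACE PROCEDURE warning_test AS
BEGIN
  RAISE_APPLICATION_ERROR(-20000, 'This is an Exception!');
EXCEPTION
  WHEN OTHERS THEN
    -- log/handle whatever you need here, then propagate the error
    RAISE;
END;
/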

There are also some other warning improvements, like:

  • New NO_DATA_NEEDED predefined exception (ORA-06548): for parallel access and pipelined table functions, it is raised when the caller of a pipelined function does not need any more rows to be produced by it.

Warnings:

  • Severe
    – 5018 – omitted optional AUTHID clause
    – 5019 – deprecated language element
    – 5020 – parameter name must be identified
  • Informative
    – 6016 – native code generation turned off (size/time)
    – 6017 – operation will raise an exception
    – 6018 – an infinity or NaN value computed or used

Cheers!

Cross-Session PL/SQL Function Result Cache

Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. Oracle Database's features seem nearly infinite, and we frequently find one that can really add value to our solutions.

So I decided to make a series of posts with really quick notes about each one of them.
You can see all the posts in this series on my posts page, along with some others.

Ready? Here it goes:

Cross-Session PL/SQL Function Result Cache

Since 11gR1 we have a simple way to boost the performance of PL/SQL functions by saving the results of function calls for specific combinations of input parameters in the SGA: the cross-session PL/SQL function result cache.

The results can be reused by any session calling the same function with the same parameters. This can result in a significant performance boost when functions are called for each row in a SQL query, or within a loop in PL/SQL.

Ok, but how to do this? It’s as simple as adding the RESULT_CACHE clause:

CREATE OR REPLACE FUNCTION procedure_example (p_in IN NUMBER)
  RETURN NUMBER
  RESULT_CACHE
IS
BEGIN
  RETURN p_in * p_in; END;  -- placeholder logic; results are cached per distinct p_in

The RELIES_ON clause may be set in 11gR1, but it is unnecessary from 11.2 onwards, as the database automatically tracks dependencies and invalidates the cached results when necessary.
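
A minimal usage sketch, assuming the function above is compiled: call it a couple of times with the same input and look for the cached result in V$RESULT_CACHE_OBJECTS.

SELECT procedure_example(10) FROM dual;
SELECT procedure_example(10) FROM dual;  -- same input: can be served from the cache

SELECT type, status, name
  FROM v$result_cache_objects
 WHERE name LIKE '%PROCEDURE_EXAMPLE%';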

Nice, right?
Cheers!

Moving Audit Trails (AUD$ and FGA_LOG$) to a Different Tablespace

Hi all!
There is one thing that is always true for any version of the Oracle Database: the audit trail simply keeps growing and growing.

Some important facts in this regard:
– It’s based on activity, not the amount of data changed.
– The default location of the database audit trail is the AUD$ table, in the SYSTEM tablespace.
– As you know, if this tablespace gets filled up there could be serious consequences.
– The same concern is true for FGA_LOG$, which is located in the SYSAUX tablespace.

Ok, so if you could move these two tables to specific tablespaces, the root problem would be solved, right?

From 11gR2 onwards you can move some or all of the audit trails to other tablespaces of your choice. How?

To move the AUD$:

begin
dbms_audit_mgmt.set_audit_trail_location(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_location_value => 'AUDIT_TBS');
end;
/

To move the FGA_LOG$:

begin
dbms_audit_mgmt.set_audit_trail_location(
audit_trail_type => dbms_audit_mgmt.audit_trail_fga_std,
audit_trail_location_value => 'AUDIT_TBS');
end;
/
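
To confirm the relocation, a quick dictionary check (a simple sketch, using the AUDIT_TBS tablespace from the examples above):

SELECT owner, table_name, tablespace_name
  FROM dba_tables
 WHERE table_name IN ('AUD$', 'FGA_LOG$');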

 

There is only one important thing to keep in mind: this tablespace needs to be online when the database is open. Otherwise, if any auditable operation is executed, you'll get an ORA-02002.

This is recommendable if you are setting up auditing for your database. It is also something Oracle checks in scripts like Exachk.

Hope it helps!
Cheers!

ORA-600 [kwqitnmphe:ltbagi]

Hi all,

So, some time ago I started receiving the internal error "ORA-600 [kwqitnmphe:ltbagi]" from a client's database. Everything was up and running fine, but an ORA-600 is always an ORA-600.

Investigating the issue, I found it can be related to several bugs, as per MOS ORA-600 [kwqitnmphe:ltbagi] (Doc ID 1346009.1):

Bug – Fixed In – Description
17831758 – 12.1.0.2, 12.2.0.1 – ORA-600 [kwqitnmphe:ltbagi] in Qnnn background process
20987661 – 12.2.0.1 – QMON slave processes reporting ORA-600 [kwqitnmphe:ltbagi]
18591240 – 11.2.0.4.BP17, 12.1.0.2, 12.2.0.1 – ORA-600 [kwqitnmphe:ltbagi] is seen immediately after ORA-1089
18536720 – 12.1.0.2, 12.2.0.1 – ORA-600 [kwqitnmphe:ltbagi] processing History IOT in AQ
16204151 – 12.1.0.2, 12.2.0.1 – ORA-600 [kwqitnmphe:ltbagi] when subscriber is dropped pending enqueue/dequeue
12423122 – 11.2.0.3, 12.1.0.1 – ORA-600 [kwqitnmphe:ltbagi] when scheduler uses AQ

In my situation it was a match for the QMON slave processes issue, the only one not yet fixed in 12.1 (my DB is 12.1, bad luck?), as per MOS Bug 20987661 – QMON slave processes reporting ORA-600 [kwqitnmphe:ltbagi] (Doc ID 20987661.8).

It is fixed in 12.2.0.1. For 12.1, we have a temporary workaround:

As sysdba:

  DECLARE
   po dbms_aqadm.aq$_purge_options_t;
  BEGIN
     po.block := FALSE;
     DBMS_AQADM.PURGE_QUEUE_TABLE(
       queue_table     => 'SYS.SYS$SERVICE_METRICS_TAB',
       purge_condition => NULL,
       purge_options   => po);
  END;
  /
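
To get a feeling of whether the purge is helping, a quick sketch is to count the messages sitting in the queue table before and after running it:

SELECT COUNT(*) FROM SYS.SYS$SERVICE_METRICS_TAB;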

Hope it helps!
Cheers!

IPv6 Formatting for JDBC and SQLPlus

Hey all!
Seems new, right? But it's been available since 11gR2.
No need to explain what IPv6 is, right? Any questions, go here.

In summary, the only thing you need to do is enclose the IPv6 address in square brackets, like this:

For Easy Connect on IPV4:

SQL> connect user/pass@172.23.10.40:1521/GREPORADB
Connected.

 

For Easy Connect on IPV6:

SQL> connect user/pass@[1:95e05a:g0d:da7a:2007]:1521/GREPORADB
Connected.

For JDBC (thin) IPV4:

url="jdbc:oracle:thin:@(DESCRIPTION=
(LOAD_BALANCE=on) (ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=172.23.10.40) (PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=172.23.10.41)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

For JDBC (OCI) IPV4:

url="jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=
(PROTOCOL=TCP)(HOST=172.23.10.40)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

For JDBC (thin) IPV6:

url="jdbc:oracle:thin:@(DESCRIPTION=
(LOAD_BALANCE=on) (ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=[1:95e05a:g0d:da7a:2007]) (PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=[1:95e05a:g0d:da7a:2006])(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

For JDBC (OCI) IPV6:

url="jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=
(PROTOCOL=TCP)(HOST=[1:95e05a:g0d:da7a:2007])(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

As you can imagine, the same applies to your TNSNAMES entries.

Also, according to this, it can be used even for your LISTENER:

LISTENER =
 (DESCRIPTION_LIST =
  (DESCRIPTION =
   (ADDRESS =
    (PROTOCOL = TCP)
    (HOST = [1:95e05a:g0d:da7a:2007])(PORT =1521))
  )
 )

Cheers!

ORA-02158: invalid CREATE INDEX option while using redef_table on Oracle 12cR1

Hey all,
I'm working on a table redefinition project to migrate existing tables and indexes to new compressed tablespaces. As the customer asked for near-zero downtime for their data, we decided to go with DBMS_REDEFINITION.

Simple right? Well… I sure hoped so.
I'm preparing a series of posts about it, but before that I would like to share some hands-on issues, and that the magic of the new REDEF_TABLE is not that great yet, at least on 12cR1/12.1.0.2.
Prior to 12c, when you needed to redefine a table you would use DBMS_REDEFINITION and its 6 steps:

0 – Manually create an interim table to receive the data, with the same structure as the original table
1 – DBMS_REDEFINITION.can_redef_table
2 – DBMS_REDEFINITION.start_redef_table
3 – DBMS_REDEFINITION.sync_interim_table
4 – DBMS_REDEFINITION.copy_table_dependents
5 – DBMS_REDEFINITION.finish_redef_table

And sometimes you would need to use DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT to help with some issues, but if everything was good you would only need to do the steps above.
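
For reference, a minimal sketch of that flow, using the USER1.TEST1 names that appear later in this post (the interim table name and the defaults are illustrative, and REGISTER_DEPENDENT_OBJECT is omitted):

-- 0) interim table USER1.TEST1_INTERIM created manually with the desired structure
DECLARE
  v_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.can_redef_table(uname => 'USER1', tname => 'TEST1',
                                    options_flag => DBMS_REDEFINITION.cons_use_pk);
  DBMS_REDEFINITION.start_redef_table(uname => 'USER1', orig_table => 'TEST1',
                                      int_table => 'TEST1_INTERIM');
  DBMS_REDEFINITION.sync_interim_table(uname => 'USER1', orig_table => 'TEST1',
                                       int_table => 'TEST1_INTERIM');
  DBMS_REDEFINITION.copy_table_dependents(uname => 'USER1', orig_table => 'TEST1',
                                          int_table => 'TEST1_INTERIM',
                                          copy_indexes => DBMS_REDEFINITION.cons_orig_params,
                                          num_errors => v_errors);
  DBMS_REDEFINITION.finish_redef_table(uname => 'USER1', orig_table => 'TEST1',
                                       int_table => 'TEST1_INTERIM');
END;
/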

There are a few issues with that approach, and by that I mean BUGS 🙂 so you need to watch your back and do good prep work.

In 12c you have a new procedure called DBMS_REDEFINITION.REDEF_TABLE that bundles all 6 steps into one single procedure call, with its upsides and downsides.
For me, the downside is that we can't monitor the procedure, since nothing gets recorded in DBA_REDEFINITION_ERRORS anymore.
By working as a transaction, either everything works or it is rolled back (or it should be, but I will leave that for another post).

So the only way to know what is being done is to trace the session that is doing the redefinition. And that's what I needed to do to see what was going on with a strange situation.

This is what was happening: I first tried DBMS_REDEFINITION.REDEF_TABLE and it raised ORA-02158: invalid CREATE INDEX option, but when I used the 6 steps mentioned above (can_redef_table, start_redef_table, etc.) it worked without issues.

That bugged me so I traced the session.

SQL> exec dbms_monitor.session_trace_enable(binds=>true,waits=>true);

SQL> BEGIN DBMS_REDEFINITION.REDEF_TABLE(uname => 'USER1',
tname => 'TEST1',
table_compression_type => 'COMPRESS FOR OLTP', 
table_part_tablespace => 'DATA_COMP', 
index_tablespace => 'DATA_COMP', 
index_key_compression_type   => 'COMPRESS ADVANCED LOW', 
lob_compression_type => 'COMPRESS HIGH', 
lob_tablespace => 'DATA_COMP', 
lob_store_as => 'SECUREFILE');
END;
/
BEGIN
*
ERROR at line 1:
ORA-02158: invalid CREATE INDEX option
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3385
ORA-06512: at line 2

And this was the create index statement that was in the trace:

PARSE ERROR #140737488293344:len=421 dep=1 uid=0 oct=9 lid=0 tim=3977494319094 err=2158
CREATE INDEX "USER1"."TMP$$_INDEX0" ON "USER1"."REDEF$_16752430_0" ("VISIT_NO")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS COMPRESS ADVANCED LOWNOLOGGING
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "DATA_COMP"

Can you see the issue?
Well, neither could I; a colleague read the trace again and found a silly bug.

Here it is. The CREATE INDEX had this:

COMPRESS ADVANCED LOWNOLOGGING

Instead of this:

COMPRESS ADVANCED LOW NOLOGGING

A silly space was missing and was causing the entire redefinition to fail!

I could not find any reference on MOS, but that was it: a missing space prevented the use of REDEF_TABLE and caused me to lose some hours on it.

Hope this saves you some time in your troubleshooting. I will be posting about other strange situations that I found using REDEF_TABLE on Oracle 12cR1.

See you next post!

Elisson Almeida

After Patch: MRP0: Background Media Recovery terminated with error 10485

Ok,
I had that some time ago after applying Patch 27475598 – Oracle JavaVM Component 11.2.0.4.180417 Database PSU.
Why? Well, this patch is not RAC Rolling Installable and also not Data Guard Standby-First Installable.

This means there is downtime for this patch, no escape.

I had to do the following (skipping all the standard opatch steps, you can see those in the README):

  • Stop DG Replication:
dgmgrl /
show configuration
show database mydg
edit database 'mydg' set state='apply-off';
show database mydg
  • Run postinstall.sql in upgrade mode with only 1 instance on (disable RAC):
cd $ORACLE_HOME/sqlpatch/27475598
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> alter system set cluster_database=false scope=spfile;
SQL> SHUTDOWN
SQL> STARTUP UPGRADE
SQL> @postinstall.sql
SQL> alter system set cluster_database=true scope=spfile;
SQL> SHUTDOWN
SQL> STARTUP

Ok, all good, seems all fine.

But now when starting my DG replication:

dgmgrl /
show configuration
show database mydg
edit database 'mydg' set state='apply-on';
show database mydg

What I see is:

DGMGRL> show database mydg

Database - mydg

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       41 minutes 53 seconds (computed 1 second ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    myprod

  Database Error(s):
    ORA-16766: Redo Apply is stopped

Database Status:
ERROR

DGMGRL>

And on Database Alert Log:

MRP0: Background Media Recovery terminated with error 10485
Errors in file /u01/app/oracle/diag/rdbms/axwest/greporaprod/trace/greporaprod_pr00_42628.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Recovery interrupted!
MRP0: Background Media Recovery process shutdown (greporaprod)

Well, in my case it happened because I use Active Data Guard, with the standby open read-only. The solution? Start your DG in mount mode to apply the patch redo!

This is well described as per MOS: MRP process getting terminated with error ORA-10485 (Doc ID 1618485.1).

After it is back in sync, you can simply open it read-only again.
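
For reference, a minimal sketch of the sequence, assuming a single-instance standby named mydg as in the examples above:

DGMGRL> edit database 'mydg' set state='apply-off';

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

DGMGRL> edit database 'mydg' set state='apply-on';
DGMGRL> show database mydg   -- wait until the Apply Lag is gone

DGMGRL> edit database 'mydg' set state='apply-off';
SQL> ALTER DATABASE OPEN READ ONLY;
DGMGRL> edit database 'mydg' set state='apply-on';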

Hope it helps!

Oracle Open World SP 2018: Review and Downloads

Hello Everyone!

So, this is a quick post just to share that (almost) all presentations from Oracle Open World SP are available, for free, to download here.

Of course, the presentations don't replace the experience of being there, discussing and seeing first-hand the thoughts and insights from such great specialists. So, if you can, please arrange to attend next year!

About the event?
Well, as usual we had a lot of Oracle marketing, as expected. But it was great to see some well-known specialists talking about their own products. Not only sales people, but people who really work with the products. The big problem: so many parallel sessions I'd like to attend! lol

I know you'd like something specific, so let me share my top 10 sessions from this year. I really recommend you download those presentations from the link above and look up the speaker profiles for more information about them.

# OOW18 Top 10 by Matheus:
– Making Graph Databases Fun Again With Java (Otavio Santana)
– Redução de custos com Backups utilizando Oracle Storage Services [Reducing backup costs with Oracle Storage Services] (Raphael Campelo)
– Por dentro da cabeça de um hacker de banco de dados [Inside the mind of a database hacker] (Antonio Santos)
– Novidades do Oracle 18c que você não pode perder [Oracle 18c news you can't miss] (Luciano Verissimo)
– Nova regulamentação GDPR: proteção de dados e as implicações para os negócios [The new GDPR regulation: data protection and the implications for business] (Angelo Neto)
– Da aquisição ao consumo de dados: informação é poder para tomar decisões [From data acquisition to data consumption: information is power for decision making] (Felipe Rasche, João Barros Neto)
– Confie no seu pipeline: Teste automaticamente um aplicativo Java de ponta a ponta [Trust your pipeline: automatically test a Java application end to end] (Elias Nogueira)
– Oracle Autonomous Cloud Platform: Inovações mais Rápidas, Baratas e Seguras Utilizando IA [Oracle Autonomous Cloud Platform: faster, cheaper and more secure innovation using AI] (Siddhartha Agarwal)
– SQL Tuning 101 (Alex Zaballa)
– Autonomous: O primeiro Banco de Dados Autônomo do mundo [Autonomous: the world's first autonomous database] (Andrew Mendelsohn)

Hope you enjoy it, see you there next year!