Purge SYSAUX Tablespace

Is your SYSAUX bigger than the rest of the database?
It's not uncommon in "old", usually badly administered databases. A few database configurations can cause this situation.

The general recommendation is to review the statistics and report retention settings of the database and its objects.
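
If you want to see what is actually consuming SYSAUX and what the current stats history retention is before purging anything, these documented views and APIs help (the 14-day retention below is just an example value):

select occupant_name, space_usage_kbytes
  from v$sysaux_occupants
 order by space_usage_kbytes desc;

select dbms_stats.get_stats_history_retention from dual;

-- Optionally, reduce the retention (in days) so the history doesn't grow back:
exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(14);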

But if you need to clean it up now, what to do?
1) PURGE_STATS: it's recommended to execute it in smaller steps; otherwise the undo tablespace will be blown up.
2) Oracle sometimes builds new extents for the SYSAUX stats tables in other tablespaces; they will be moved back to the SYSAUX tablespace.
3) The index rebuilds will decrease the size of the indexes, which are mostly larger than the raw data.
4) The indexes are partly function-based, so the order in which the rebuilds are done matters; otherwise you have to re-execute these steps again and again.

Getting practical, here is what I used:

-- Purge the stats history in decreasing windows to keep undo usage under control
exec DBMS_STATS.PURGE_STATS(SYSDATE-180);
exec DBMS_STATS.PURGE_STATS(SYSDATE-160);
exec DBMS_STATS.PURGE_STATS(SYSDATE-140);
exec DBMS_STATS.PURGE_STATS(SYSDATE-120);
exec DBMS_STATS.PURGE_STATS(SYSDATE-100);
exec DBMS_STATS.PURGE_STATS(SYSDATE-80);
exec DBMS_STATS.PURGE_STATS(SYSDATE-60);
exec DBMS_STATS.PURGE_STATS(SYSDATE-40);
exec DBMS_STATS.PURGE_STATS(SYSDATE-20);
exec DBMS_STATS.PURGE_STATS(SYSDATE-7);
-- Move the stats history tables back into SYSAUX (this also reclaims fragmented extents)
alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_IND_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTHEAD_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_AUX_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_OPR move tablespace sysaux;
alter table WRH$_OPTIMIZER_ENV move tablespace sysaux;
-- Rebuild the indexes, which are left UNUSABLE by the table moves; keep this order
Alter index SYS.I_WRI$_OPTSTAT_IND_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_IND_OBJ#_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_HH_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_TAB_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_OPR_STIME rebuild TABLESPACE SYSAUX;

Matheus.

CREATE MVIEW: ORA-12014: table does not contain a primary key constraint

OK, so you are trying to create a materialized view involving a database link and hit an ORA-12014, right?

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND
AS SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB;
 *
ERROR at line 1:
ORA-12014: table 'TABLE' does not contain a primary key constraint
SQL>

It bit me some time ago, but it's not complicated to work around. Just try:

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND AS
select * from (SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB);

It works because the inline view makes it a complex materialized view, so Oracle no longer requires the primary key it would use for the default REFRESH WITH PRIMARY KEY. An alternative is to use an MV log + WITH ROWID on the REMOTE_DB side:

-- Note: an MV log cannot be given a name; Oracle creates it as MLOG$_TABLE
CREATE MATERIALIZED VIEW LOG ON TABLE WITH ROWID;

And

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND WITH ROWID
AS SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB;

PS: Make sure the username used in the REMOTE_DB database link has select privileges on the MV log. On the source db, issue:

SELECT LOG_TABLE FROM DBA_MVIEW_LOGS WHERE LOG_OWNER='OWNER' AND MASTER = 'TABLE';

This will give you the MV log table name (MLOG$_TABLE, in this example). On the target side, test the access:

SELECT * FROM MLOG$_TABLE@remote_db;
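
If that select fails, a grant on the source side solves it. A minimal sketch, assuming the log table found above is MLOG$_TABLE and the database link connects as a (hypothetical) user named LINK_USER:

-- On the source database:
GRANT SELECT ON OWNER.MLOG$_TABLE TO LINK_USER;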

See ya!
Matheus.

Shellscript: Using eval and SQLPlus

I have always liked bash programming, and sometimes I need to set bash variables using information from Oracle tables.

To achieve that I use the solution below, which I explain in detail afterwards.

# SQLPlus should return a string with all Bash commands
$ORACLE_HOME/bin/sqlplus -S -L -R 3 / as sysdba > /tmp/sqlplus.log <<-EOF
   SET PAGES 0 FEEDBACK OFF TIMING OFF VERIFY OFF LINES 1000
   SELECT 'OK:DBNAME='||UPPER(D.NAME)||'; INST_NAME='||I.INSTANCE_NAME AS STR
     FROM V\$DATABASE D, V\$INSTANCE I;
EOF

# Now test if sqlplus exited fine, and check if the result string starts with the OK keyword
if [ $? -eq 0 ] && [ "$( cat /tmp/sqlplus.log | head -n 1 | cut -d: -f1 )" == "OK" ]; then
   sed -i 's/OK://g' /tmp/sqlplus.log
   while read r; do eval "$r"; done </tmp/sqlplus.log
else
   echo "Failed to search local instance $ORACLE_SID"
   return 2
fi

In the first part, I call sqlplus with a SELECT that returns a string containing valid bash commands to set all the variables I need. In this example, sqlplus returns the database name and the instance name:

      OK:DBNAME=xpto; INST_NAME=xpto_1;

The second part exists only for consistency checks. It verifies that the result string starts with the "OK" keyword. If all went fine, it executes the result string using the bash command eval.

eval – That's where the magic happens!

The eval command can be used to evaluate (and execute) an ordinary string, using the current bash context and environment. That is different from running your commands in a subshell.

The source code below reads sqlplus.log and executes every line using eval:

while read line; do eval "$line"; done </tmp/sqlplus.log

Cassiano.

OGG-0352: Invalid character for character set UTF-8 was found while performing character validation of source column

Almost a month without a post!
My bad, December is always a crazy time for DBAs, right?

This post's title error happens when the character sets differ between the databases involved in a GoldenGate replication, and it occurs only with character columns (CHAR, VARCHAR, VARCHAR2): even when the char length is the same, the data length will be different (as I explained here). Take a look at this example:

sourcedb> select table_name,column_name,data_length,char_length from dba_tab_cols where column_name='NAME' order by 1,2,3;

TABLE_NAME    COLUMN_NAME     DATA_LENGTH CHAR_LENGTH
------------- --------------- ----------- -----------
TABLE_EXAMPLE NAME            25          25

destdb> select table_name,column_name,data_length,char_length from dba_tab_cols where column_name='NAME' order by 1,2,3;

TABLE_NAME    COLUMN_NAME     DATA_LENGTH CHAR_LENGTH
------------- --------------- ----------- -----------
TABLE_EXAMPLE NAME            100         25

There are basically two solutions:
1) Change one of those charsets.
2) Add the "SOURCECHARSET PASSTHRU" clause to the replicat parameter file.

I usually prefer the second option, just because it's less intrusive than the first one.
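
For reference, here is a minimal sketch of where the clause goes in the replicat parameter file (the replicat name, credentials, and mapping below are illustrative, not from the real case):

REPLICAT REP1
USERID ogg_admin, PASSWORD *****
SOURCECHARSET PASSTHRU
MAP SCHEMA.TABLE_EXAMPLE, TARGET SCHEMA.TABLE_EXAMPLE;

With PASSTHRU, the replicat skips character set conversion and applies the column data exactly as it was captured.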


See ya!
Matheus.

Script: Copy Large Table Through DBLink

To understand the situation:

Task: migrate a large 11.1.0.6 database to a 12c multitenant database with minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump over a database link. There are no LONG columns.

Problem 1: the database was veeery slow, a perfect match for Bug 7722575 - DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: none.
Solution: upgrade to 11.2 (no way).
Other things: yes, I tried changing cursor sharing, changing the estimate from blocks to statistics, and everything else documented. It didn't work.

OK doke! Let's use the traditional exp/imp tools (with some staging area), right?
Problem 2: ORA-12899 on import, related to multibyte vs. single-byte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
:)

Done? Not for all of them. For some tables, this error happened:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? "Solution: Use Datapump!" 😛

Well, well… I realized I was on my own…
OK, so let's create table as select through the database link. For most of them, fine…
But one of the remaining tables, for example, has 700 million rows (350GB of compressed, non-partitioned data).
Just remember that operations through a database link exclude parallel options (they are always serial).

The solution was to do a MacGyver, this way:
1) Create an aux table (on the source database):

alter session force parallel query parallel 12;
create table SCHEMA_OWNER.AUX_ROWID(ROW_ID,NUM) as select rowid, rownum from SCHEMA_OWNER.TABLE;
alter session disable parallel query;

* This table will be used to break the source table into chunks.

2) Script run_chunck.sql, to copy each chunk of data:

DECLARE
  counter number;
  CURSOR cur_data is
    select row_id from (
      select row_id, num from SCHEMA_OWNER.AUX_ROWID@SOURCEDB order by num)
    where num >= &1
      and num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      insert into SCHEMA_OWNER.TABLE
        select * from SCHEMA_OWNER.TABLE@SOURCEDB where rowid = x.row_id;
      if counter = 1000 then -- commit every 1000 rows
        commit;
        counter := 0;
      end if;
    EXCEPTION
      when OTHERS then
        dbms_output.put_line('Error ROW_ID: '||x.row_id||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;

3) Run it from a BAT or SH file like the one below (my example was made as a BAT, with "chunks" of 50 million rows, and commits every 1k rows, as defined in item 2):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053
:: the last upper bound came from a count(*) on the source table


Watching the inserts running…

targetdb> @sess
User: MATHEUS

USERNAME          EVENT                        SQL_ID
----------------  ---------------------------  -------------
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  gt3mq5ct7mt6r
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink  6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from client
MATHEUS_BOESING   SQL*Net message from client
MATHEUS_BOESING   SQL*Net message to client    c7a5tcc3a84k6

After a while (26 hours), the copy was successfully concluded. :)
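
Just to be sure, here is a quick sanity check comparing row counts on both sides (using the same illustrative names as above):

select (select count(*) from SCHEMA_OWNER.TABLE)          as local_rows,
       (select count(*) from SCHEMA_OWNER.TABLE@SOURCEDB) as remote_rows
from dual;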

Matheus.

ORA-27369: job of type EXECUTABLE failed with exit code: Operation not permitted

This error happens when running an external script through the Oracle Scheduler. The solution is to fix the ownership and permissions of the external job binaries:

# extjob runs the external jobs and must be setuid root
chown root $ORACLE_HOME/bin/extjob
chmod 4750 $ORACLE_HOME/bin/extjob

# externaljob.ora defines the run user; root-owned, group-readable
chown root $ORACLE_HOME/rdbms/admin/externaljob.ora
chmod 640 $ORACLE_HOME/rdbms/admin/externaljob.ora

# jssu (scheduler security utility) also needs setuid root
chown root $ORACLE_HOME/bin/jssu
chmod 4750 $ORACLE_HOME/bin/jssu
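
After that, a quick way to retest is to run a throwaway EXECUTABLE job (a sketch; the job name and script path below are hypothetical):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'TEST_EXTJOB',
    job_type   => 'EXECUTABLE',
    job_action => '/tmp/test.sh',
    enabled    => TRUE);
END;
/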


Have a nice week!
Matheus.

Creating RMAN Backup Catalog

It may sound repetitive, but it's always good to have notes about it 🙂

1. Create the schema for the catalog on the CatalogDB:
-- Create the user
create user RMAN_MYDB identified by &PASS;
-- Grant/Revoke role privileges
grant recovery_catalog_owner to RMAN_MYDB;
-- Grant/Revoke system privileges
grant create session to RMAN_MYDB;

2. Create catalog and register database:

-- From the target database, connect RMAN to the catalog
RMAN> connect catalog rman_mydb/password@catdb.grepora.net:1521/catalogdb

connected to recovery catalog database

RMAN> CREATE CATALOG;

recovery catalog created

RMAN> REGISTER DATABASE;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
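
To double-check the registration, you can query the catalog from the catalog schema (RC_DATABASE is one of the standard recovery catalog views):

SQL> select name, dbid from rc_database;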

Well done!
Matheus.

EXP-00079 – Data Protected

A quick one: I began to hit this problem on 12c backup catalog schemas. The reason is that, as of 12c, all the catalog information is protected by policies (VPD). The error:

EXP-00079: Data in table "&TABLE" is protected. Conventional path may only be exporting partial table.
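
To confirm the VPD policies are really there, you can list them with a quick check (replace RMAN_MYDB with your catalog schema):

SELECT object_name, policy_name
  FROM dba_policies
 WHERE object_owner = 'RMAN_MYDB';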

The solution:

catalogdb> GRANT exempt access policy TO &exp_user;
Grant succeeded.

Hugs!
Matheus.

VKTM Hang – High CPU Usage

Today a database (RHEL 6, single instance, 11.2.0.4) suddenly started to "explode" CPU in the VKTM process (100% CPU).
After some minutes completely lost on support.oracle.com (there were just a few notes about binary permissions on Solaris), I decided to do a MacGyver by myself. 🙂

In Oracle's words: "VKTM acts as a time publisher for an Oracle instance. VKTM publishes two sets of time: a wall clock time using a seconds interval and a higher resolution time (which is not wall clock time) for interval measurements. The VKTM timer service centralizes time tracking and offloads multiple timer calls from other clients."

So this was my solution:

SQL> alter system set "_high_priority_processes"='LMS*' scope=spfile;
System altered.

And restart the database, of course.
This way, VKTM is no longer a "high priority" process. The problem was solved. 🙂

Another possibility is to disable VKTM entirely (through the undocumented boolean parameter "_disable_vktm"). But I wanted to keep it running and change as little as possible of the database configuration, just reducing its priority.
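
By the way, if you want to check the current value of these hidden parameters before changing anything, the classic X$ query works (run as SYSDBA):

SELECT i.ksppinm name, v.ksppstvl value
  FROM x$ksppi i, x$ksppcv v
 WHERE i.indx = v.indx
   AND i.ksppinm IN ('_high_priority_processes', '_disable_vktm');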

KB:
Master Note: Troubleshooting Oracle Background Processes (Doc ID 1509616.1)
Great post about hidden parameters: http://oracleinaction.com/undocumented-params-11g/
Official one: http://www.orafaq.com/parms/index.htm

Hugs!
Matheus.

Charsets: Single-Byte vs Multibyte Encoding Scheme Issue

Sad story:

IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column "SCHEMA"."TABLE"."COLUMN" (actual: 61, maximum: 60)

To understand: this happens when the export/import is made between different charsets, usually when the destination is a multibyte superset and the source is a single-byte charset. The reason is that the less specific the charset is, the more bytes are used to represent a character (c-ç, a-ã, o-õ-ô, for example). This way, the columns that use byte length semantics will be sized differently between these databases.
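
You can see the effect directly by comparing LENGTH (characters) with LENGTHB (bytes). A quick demo, assuming an AL32UTF8 database:

-- In AL32UTF8, 'ç' and 'ã' take 2 bytes each, so this returns 4 and 6:
SELECT LENGTH('ação') AS chars, LENGTHB('ação') AS bytes FROM DUAL;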

Of course, the more specific a charset configuration is, the better for performance (especially for sequential reads): the database needs to work with fewer bytes in the datablocks for the same tuples, to explain it in a simple way. On the other hand, this is a quite specific configuration. Performance issues are mostly related to simpler tunings (SQL access plan, indexing, statistics, or solution architecture) than to this kind of detail. But it's worth mentioning if you're working on a database that's already well tuned…

For more information, I recommend this (recent) documentation: https://docs.oracle.com/database/121/NLSPG/ch2charset.htm. Please invest your time in understanding the relation between "Single-Byte Encoding Schemes" and "Multibyte Encoding Schemes" in this doc.

The following image illustrates, in a simple way, the different byte usage needed to address more characters (a characteristic of supersets):

[Image: single-byte vs. multibyte character encoding]

Ok, doke!
And the solution is…

Let's summarize the problem first: the character columns (CHAR, VARCHAR2) use more bytes to represent the same characters. So, wherever the source column was filled up to its maximum length, it "explodes" the column length in the destination database with the multibyte encoding scheme.
For consideration: I'm not using Data Pump (expdp/impdp, or impdp with network_link) just because it's a legacy system with LONG columns, and Data Pump doesn't support this "deprecated" data type.
So, my solution for this specific problem, occurring during a migration, was to change the length semantics of the character columns from "byte" to "char". This way, the metric used is the number of characters rather than the byte size. Here is my "kludge" for you:

-- char_length holds the number of characters, so the generated DDL
-- is right no matter which side (source or destination) you run this on
select 'ALTER TABLE '||owner||'.'||TABLE_NAME||' MODIFY '||COLUMN_NAME||' CHAR('||char_length||' CHAR);'
from dba_tab_cols where DATA_TYPE='CHAR' and owner='&SCHEMA'
union all
select 'ALTER TABLE '||owner||'.'||TABLE_NAME||' MODIFY '||COLUMN_NAME||' VARCHAR2('||char_length||' CHAR);'
from dba_tab_cols
where DATA_TYPE='VARCHAR2' and owner='&SCHEMA';


And it works!
Hugs and see ya!
Matheus.