ORA-10456: cannot open standby database; media recovery session may be in progress

A quick Data Guard tip!

# Error

SQL> ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE OPEN READ ONLY
*
ERROR at line 1:
ORA-10456: cannot open standby database; media recovery session may be in progress

# Solution

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> ALTER DATABASE OPEN READ ONLY;
Database altered.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
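
If you want to double-check before or after, a quick sketch to see whether the MRP session is there (no rows means managed recovery is not running):

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';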

See ya! 🙂
Matheus.

EXP Missing Tables on 11.2

Did an exp and some tables are missing, right? The database is 11.2+? The missing tables have no rows in the source database, right? Bingo!
This happens because Oracle implemented a space-saving feature in 11.2 called Deferred Segment Creation.

This feature basically means the first segment of a table is only allocated when the first row is inserted. It was implemented because Oracle realized it's not rare to find databases with lots of tables that have never had a single row.

The situation occurs because the EXP client uses dba_segments as its index for exporting, and this feature means no segment gets allocated. For Oracle, it's not a problem, since the recommendation is to use Data Pump (EXPDP/IMPDP).
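
To see it in practice, a quick sketch (the owner and table names below are just illustrative):

create table scott.never_used_tab (id number);
-- SEGMENT_CREATED stays 'NO' until the first row is inserted:
select segment_created from dba_tables where owner='SCOTT' and table_name='NEVER_USED_TAB';
-- And no segment exists yet, so the classic EXP simply skips the table:
select count(*) from dba_segments where owner='SCOTT' and segment_name='NEVER_USED_TAB';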

But (there is always a “but”), let’s suppose you have to export the file to a location that is not accessible through a directory object and there is no local space, or your table has a LONG column (yes, it’s deprecated, I know… but let’s suppose this is a legacy system…). Then, you can do the following:

1) For all tables that have no rows, allocate an extent:
alter table owner.table_name allocate extent;

To generate the script:

select 'alter table '||owner||'.'||table_name||' allocate extent;' from all_tables where num_rows=0;

2) Export using the clause VERSION=11.1 or lower on EXP. 🙂

More about Deferred Segment Creation: https://oracle-base.com/articles/11g/segment-creation-on-demand-11gr2

Hope it helped.
See ya!
Matheus.

ORA-04091: Table is Mutating, Trigger/Function may not see it

No!
This is not a super-table nor an x-table (X-Men joke, this was awful, I know… I’m sorry).

ORA-04091: Table "TABLE NAME" is Mutating, Trigger/Function may not see it
ORA-06512: at "TRC_INSERT_TABLE", line 14
ORA-04088: error during execution of trigger 'TRC_INSERT_TABLE'

Very interesting. But not hard to understand. The cause is that the trigger (or a user-defined PL/SQL function referenced in the statement) attempted to look at (or modify) a table that was in the middle of being modified by the statement which fired it.

In other words, you’re trying to read data that you are modifying. That obviously causes an inconsistency, which is the reason for this error. The data is “mutating”. But the error could be less annoying, right? Oracle and its jokes…

The solution is to rewrite the trigger so it does not reference the table, or, in some situations, you can use an autonomous transaction to make the trigger’s work independent. It can be done using the clause PRAGMA AUTONOMOUS_TRANSACTION.
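
Just as a minimal sketch of the autonomous transaction approach (all names here are hypothetical, and keep in mind the autonomous transaction will not see the uncommitted rows of the statement that fired the trigger):

CREATE OR REPLACE TRIGGER trc_insert_table
AFTER INSERT ON my_table
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_total NUMBER;
BEGIN
  -- Reading my_table here no longer raises ORA-04091, because it runs in an independent transaction
  SELECT COUNT(*) INTO v_total FROM my_table;
  INSERT INTO my_table_audit (info) VALUES ('Rows already committed: '||v_total);
  COMMIT; -- an autonomous transaction must end with COMMIT or ROLLBACK
END;
/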

This FAQ can be useful to you: http://www.orafaq.com/wiki/Autonomous_transaction

Matheus.

ORA-00600: internal error code, arguments: [ktecgetsh-inc], [2]

Alert showing:

Errors in file /oracle/diag/rdbms/mydb/mydb/trace/mydb_smon_6024.trc (incident=9666):
ORA-00600: internal error code, arguments: [ktecgetsh-inc], [2], [], [], [], [], [], [], [], [], [], []

This is a non-fatal internal error that happened while SMON was doing a temporary segment drop. My SMON encountered 9 out of a maximum of 100 non-fatal internal errors.

So, set event 10061 (which stops SMON from cleaning up temporary segments):

alter system set event="10061 trace name context forever, level 10" scope=spfile;

Restart the database and then reset the event:

alter system reset event scope=both sid='*';

See ya!
Matheus.

CREATE MVIEW: ORA-12014: table does not contain a primary key constraint

Ok, you are trying to create a materialized view involving a database link and got an ORA-12014, right?

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND
AS SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB;
 *
ERROR at line 1:
ORA-12014: table 'TABLE' does not contain a primary key constraint
SQL>

It got me some time ago. But it’s not complicated to work around; just try:

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND AS
select * from (SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB);

An alternative is to create an MV log WITH ROWID on the REMOTE_DB side and refresh the MV WITH ROWID:

CREATE MATERIALIZED VIEW LOG ON TABLE WITH ROWID;

And

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND WITH ROWID
AS SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB;

PS: Make sure the username used in the remote_db database link has select privileges on the MV log. On the source DB, issue:

SELECT LOG_TABLE FROM DBA_MVIEW_LOGS WHERE LOG_OWNER='OWNER' AND MASTER = 'TABLE';

This will give you the MV log table name (for master table TABLE it will be something like MLOG$_TABLE). On the target side, issue:

SELECT * FROM MLOG$_TABLE@remote_db;
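
If that select fails, grant it on the source DB; just a sketch with placeholder names (the log table is the one returned by the query above, and the grantee is whatever user the database link connects as):

GRANT SELECT ON OWNER.MLOG$_TABLE TO DBLINK_USER;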

See ya!
Matheus.

OGG-0352: Invalid character for character set UTF-8 was found while performing character validation of source column

Almost a month without a post!
My bad, December is always a crazy time for DBAs, right?

This post’s title error happens because the charsets differ between the databases used in replication through GoldenGate, and it occurs only with alphanumeric columns (CHAR, VARCHAR, VARCHAR2): even though the char length is the same, the data length will be different (like I explained here). Take a look at this example:

sourcedb> select table_name,column_name,data_length,char_length from dba_tab_cols where column_name='NAME' order by 1,2,3;
 
TABLE_NAME    COLUMN_NAME     DATA_LENGTH CHAR_LENGTH
------------- --------------- ----------- -----------
TABLE_EXAMPLE NAME            25          25
destdb> select table_name,column_name,data_length,char_length from dba_tab_cols where column_name='NAME' order by 1,2,3;

TABLE_NAME    COLUMN_NAME     DATA_LENGTH CHAR_LENGTH
------------- --------------- ----------- -----------
TABLE_EXAMPLE NAME            100         25

There are basically two solutions:
1) Change one of the charsets.
2) Add the “SOURCECHARSET PASSTHRU” parameter to the replicat file (see the sketch below).

I usually prefer the second option, just because it’s less intrusive than number 1.
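
For reference, a minimal replicat parameter file sketch with the parameter in place (the process, user, schema and table names below are just placeholders):

REPLICAT REP1
USERID ogg_user, PASSWORD *****
SOURCECHARSET PASSTHRU
MAP SRC_SCHEMA.TABLE_EXAMPLE, TARGET TGT_SCHEMA.TABLE_EXAMPLE;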

 

See ya!
Matheus.

Script: Copy Large Table Through DBLink

To understand the situation:

Task: Need to migrate a large 11.1.0.6 database to a 12c multitenant database with minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump with a database link. There are no LONG columns.

Problem 1: The database was veeery slow, a perfect match for Bug 7722575 - DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: None.
Solution: Upgrade to 11.2. (No way.)
Other things: Yes, I tried changing cursor sharing, switching the estimate from blocks to statistics and everything documented. It didn’t work.

Okey doke! Let’s use the traditional exp/imp tools (with a migration area), right?
Problem 2: ORA-12899 on import, related to multibyte vs. single-byte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
:)

Done? Not quite. For some tables, this error popped up:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? “Solution: Use Data Pump!” 😛

Well, well… I realized I was on my own…
Ok, so let’s do a create table as select through the database link. For most of them, fine…
But, for example, one of the missing tables has 700 million rows (350GB of compressed, non-partitioned data).
Just remember that a DBLink rules out parallel options (it’s always serial).

The solution was to do a MacGyver, this way:
1) Creating an aux table (source database):

alter session force parallel query parallel 12;
create table SCHEMA_OWNER.AUX_ROWID(ROW_ID,NUM) as select rowid, rownum from SCHEMA_OWNER.TABLE;
alter session disable parallel query;

* This table will be used to break the table into chunks.
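
To know the upper bound for the last chunk (used in the BAT on step 3), just check the highest sequence generated, which should match the count(*) of the source table:

select max(num) from SCHEMA_OWNER.AUX_ROWID;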

2) Script run_chunck.sql to run each chunk of data:

DECLARE
  counter NUMBER;
  CURSOR cur_data IS
    SELECT row_id FROM (
      SELECT row_id, num FROM SCHEMA_OWNER.AUX_ROWID@SOURCEDB ORDER BY num)
    WHERE num >= &1
      AND num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      INSERT INTO SCHEMA_OWNER.TABLE SELECT * FROM SCHEMA_OWNER.TABLE@SOURCEDB WHERE rowid = x.row_id;
      IF counter = 1000 THEN -- commit every 1000 rows
        COMMIT;
        counter := 0;
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        dbms_output.put_line('Error ROW_ID: '||x.row_id||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;

3) Run it from a BAT or SH like the one below (my example was made for a BAT, with “chunks” of 50 million rows, and commits every 1k rows, as defined in item 2):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
REM 702060053 = count(*) from the table
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053

Watching the inserts running…

targetdb>@sess
User:MATHEUS
USERNAME EVENT SQL_ID
---------- ---------- -------------------------
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink gt3mq5ct7mt6r
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from client
MATHEUS_BOESING SQL*Net message from client
MATHEUS_BOESING SQL*Net message to client c7a5tcc3a84k6

After a while (26 hours) the copy was successfully concluded. :)

Matheus.

ORA-27369: job of type EXECUTABLE failed with exit code: Operation not permitted

This happens when running an external script through the scheduler. The solution:

chown root $ORACLE_HOME/bin/extjob
chmod 4750 $ORACLE_HOME/bin/extjob

chown root $ORACLE_HOME/rdbms/admin/externaljob.ora
chmod 640 $ORACLE_HOME/rdbms/admin/externaljob.ora

chown root $ORACLE_HOME/bin/jssu
chmod 4750 $ORACLE_HOME/bin/jssu
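
To double-check, something like this should show extjob and jssu owned by root with the setuid bit (-rwsr-x---) and externaljob.ora owned by root with -rw-r-----:

ls -l $ORACLE_HOME/bin/extjob $ORACLE_HOME/bin/jssu $ORACLE_HOME/rdbms/admin/externaljob.ora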

 

Have a nice week!
Matheus.

EXP-00079 – Data Protected

A quick one: I began to get this error on 12c backup catalog schemas. The reason is that, now, all the information is protected by policies (VPD). The error:

EXP-00079: Data in table "&TABLE" is protected. Conventional path may only be exporting partial table.
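
To see which policies are protecting the table, you can check DBA_POLICIES (just a sketch):

SELECT object_owner, object_name, policy_name FROM dba_policies WHERE object_name = '&TABLE';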

The solution:

catalogdb> GRANT exempt access policy TO &exp_user;
Grant succeeded.

Hugs!
Matheus.

VKTM Hang – High CPU Usage

Today a database (RHEL 6, single instance, 11.2.0.4) suddenly started to “explode” CPU on the VKTM process (100% CPU).
After some minutes completely lost on support.oracle.com (there were just a few notes about binary permissions on Solaris), I decided to do a MacGyver of my own. 🙂

In Oracle’s words: “VKTM acts as a time publisher for an Oracle instance. VKTM publishes two sets of time: a wall clock time using a seconds interval and a higher resolution time (which is not wall clock time) for interval measurements. The VKTM timer service centralizes time tracking and offloads multiple timer calls from other clients.”

This way, my solution:

SQL> alter system set "_high_priority_processes"='LMS*' scope=spfile;
System altered.

And restart the database, of course.
So, VKTM is no longer a “priority” process. The problem was solved. 🙂
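
To double-check the parameter value after the restart, the usual hidden parameter query works (run as SYS; just a sketch):

select i.ksppinm name, cv.ksppstvl value
from x$ksppi i, x$ksppcv cv
where i.indx = cv.indx
and i.ksppinm = '_high_priority_processes';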

Another possibility is to disable VKTM (with the undocumented boolean parameter “_disable_vktm”). But I wanted to keep it running, changing as little as possible of the database configuration, just reducing its priority.
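
If you do decide to go that way, it would be something like this (it’s an undocumented parameter, so test it first, and it also needs a restart):

SQL> alter system set "_disable_vktm"=true scope=spfile;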

KB:
Master Note: Troubleshooting Oracle Background Processes (Doc ID 1509616.1)
Great post about hidden parameters: http://oracleinaction.com/undocumented-params-11g/
Official one: http://www.orafaq.com/parms/index.htm

Hugs!
Matheus.