Segment Creation on Demand or Deferred Segment Creation

Hi All!
I was reviewing some Oracle features and, basically, every single time I review them I find something new. Oracle Database’s features seem nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to make a series of posts with really quick notes about each one of them.
You can see all posts in this series, and some others, on my posts page.

Ready? Here it goes:

Segment Creation on Demand or Deferred Segment Creation

I also talked about it in the post EXP Missing Tables on 11.2.

Segment creation on demand, or deferred segment creation as it is also known, is a space saving feature of Oracle Database 11g Release 2. When non-partitioned tables are created, none of the associated segments (table, implicit index and LOB segments) are created until rows are inserted into the table. For systems with lots of empty tables, this can represent a large space saving. This functionality can be controlled by the DEFERRED_SEGMENT_CREATION initialization parameter, which is set to TRUE by default.
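If you want to see it happening, here is a minimal sketch (the table name demo_deferred is just an example, and it assumes an 11.2+ database with the parameter at its default):

create table demo_deferred (id number, txt varchar2(30));
-- No segment allocated yet:
select segment_name from user_segments where segment_name = 'DEMO_DEFERRED';
-- no rows selected
insert into demo_deferred values (1, 'first row');
commit;
-- Now the segment shows up:
select segment_name from user_segments where segment_name = 'DEMO_DEFERRED';

You can also force the old behavior for a single table with the SEGMENT CREATION IMMEDIATE clause of CREATE TABLE.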


Export/Backup directly to Zip using MKNOD!

We have all faced that situation where we have to make a logical backup/export and don’t have enough space for it, right?
We know the export usually compresses a lot with zip/gzip… Wouldn’t it be great if we could export directly to a compressed file?

This situation became much more common with Data Pump, which requires a directory accessible by the database server. If you have no way to create a mount point or any other staging area, this can help…

## BKP with MKNOD
BKP_DEST=/just/example
DATE=`date +%Y%m%d%H%M`
cd $BKP_DEST
mknod bkp_$DATE.dmp p
gzip < bkp_$DATE.dmp > bkp_$DATE.dmp.gz &   ## gzip reads from the named pipe and writes the compressed dump
### Uncomment and Adjust one of:
## MySQL:
#mysqldump -u $user -p$password $database > bkp_$DATE.dmp
## Oracle (Datapump or EXP)
## Note: DIRECTORY_EXAMPLE must point to $BKP_DEST (where the pipe was created); the compression itself is done by gzip on the pipe
expdp \"/ as sysdba\" dumpfile=bkp_$DATE.dmp full=y directory=DIRECTORY_EXAMPLE logfile=log_bkpzipped.log
#expdp $user/$password dumpfile=bkp_$DATE.dmp full=y directory=DIRECTORY_EXAMPLE logfile=log_bkpzipped.log
#exp \"/ as sysdba\" file=bkp_$DATE.dmp log=log_bkpzipped.log compress=y [tables=owner.table,..] [owner=schema1,..] [...]
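And to restore from the gzipped dump later, the same trick works the other way around. Here is a sketch assuming the classic exp/imp pair (the pipe name restore_pipe.dmp is just an example):

cd $BKP_DEST
mknod restore_pipe.dmp p
gunzip < bkp_$DATE.dmp.gz > restore_pipe.dmp &
imp \"/ as sysdba\" file=restore_pipe.dmp log=log_restore.log full=y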

Hugs!
Matheus.

EXP Missing Tables on 11.2

Made an exp and some table is missing, right? The database is 11.2+? The missing tables have no rows in the source database, right? Bingo!
This happens because Oracle implemented a space-saving feature in 11.2 called Deferred Segment Creation.

This feature basically means the first segment of a table is only allocated when the first row is inserted. It was implemented because Oracle realized it is not rare to find databases with lots of tables that have never had a single row.

The situation occurs because the EXP client uses dba_segments as its index for exporting, and this feature means no segment gets allocated for the empty tables. For Oracle, it’s not a problem, considering the use of Data Pump (EXPDP/IMPDP).

But (there is always a “but”), let’s suppose you have to export the file to a different location not accessible by a directory object and with no local space, or your table has a LONG column (yes, it’s deprecated, I know… but let’s suppose this is a legacy system…). Then, you can do:

1) For all tables that have no rows, allocate an extent:
alter table owner.table_name allocate extent;

To generate the commands, use the script:

select 'alter table '||owner||'.'||table_name||' allocate extent;' from all_tables where num_rows=0;

2) Export using the clause VERSION=11.1 or lower on EXP. 🙂
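By the way, if you prefer not to rely on num_rows (it depends on the last statistics gathering), 11.2 also exposes the SEGMENT_CREATED column, so a variation of the same generator could be:

select 'alter table '||owner||'.'||table_name||' allocate extent;'
from dba_tables
where segment_created = 'NO';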

More about Deferred Segment Creation: https://oracle-base.com/articles/11g/segment-creation-on-demand-11gr2

Hope it helped.
See ya!
Matheus.

Script: Copy Large Table Through DBLink

To understand the situation:

Task: Migrate a large 11.1.0.6 database to a 12c Multitenant Database with minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump over a database link. There are no LONG columns.

Problem 1: The database was veeery slow, a perfect match to Bug 7722575 - DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: None.
Solution: Upgrade to 11.2. (No way).
Other things: Yes, I tried changing cursor sharing, the estimate from blocks to statistics and everything else documented. It didn’t work.

Ok doke! Let’s use traditional exp/imp tools (with some migration area), right?
Problem 2: ORA-12899 on import, related to multibyte vs. single-byte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
:)

Done? Not for all of them. For some tables, this error happened:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? “Solution: Use Datapump!” 😛

Well, well… I realized I was on my own…
Ok, so let’s create table as select through the database link. For most tables, ok…
But, for example, one of the missing tables has 700 million rows (350GB of compressed, non-partitioned data).
Just remember that a DBLink excludes parallel options (it is always serial).

The solution was to do a MacGyver, this way:
1) Create an aux table (source database):

alter session force parallel query parallel 12;
create table SCHEMA_OWNER.AUX_ROWID(ROW_ID,NUM) as select rowid, rownum from SCHEMA_OWNER.TABLE;
alter session disable parallel query;

* This table will be used to break the source table into chunks (one way to derive the chunk boundaries is sketched right below).
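For example, one way to derive the chunk boundaries used in step 3 (just a sketch, assuming 14 chunks like in my BAT) is to let NTILE split the NUM range:

select min(num) as from_num, max(num) as to_num
from (select num, ntile(14) over (order by num) as bucket
      from SCHEMA_OWNER.AUX_ROWID)
group by bucket
order by 1;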

2) Script run_chunck.sql to run each chunk of data:

set serveroutput on
DECLARE
  counter NUMBER;
  CURSOR cur_data IS
    select row_id from (
      select row_id, num from SCHEMA_OWNER.AUX_ROWID@SOURCEDB order by num)
    where num >= &1
      and num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      insert into SCHEMA_OWNER.TABLE select * from SCHEMA_OWNER.TABLE@SOURCEDB where rowid = x.row_id;
      if counter = 1000 then -- commit every 1000 rows
        commit;
        counter := 0;
      end if;
    EXCEPTION
      when OTHERS then
        dbms_output.put_line('Error ROW_ID: '||x.row_id||' '||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;

3) Run it from a BAT or SH like this (my example was made as a BAT, with “chunks” of 50 million rows and commits every 1k rows, as defined in item 2; a rough SH equivalent is sketched right after it):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053
REM 702060053 = count(*) from the source table
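And since I mentioned SH, here is a rough shell equivalent of the BAT above (just a sketch: it assumes $user, $pass and $db are already set, the same 50-million-row chunk size, and 702060053 coming from count(*) on the source table):

#!/bin/bash
CHUNK=50000000        # rows per chunk
TOTAL=702060053       # count(*) from the source table
START=1
while [ "$START" -le "$TOTAL" ]; do
  END=$((START + CHUNK - 1))
  # one background sqlplus session per chunk, like the START lines in the BAT
  sqlplus "$user/$pass@$db" @run_chunck.sql "$START" "$END" &
  START=$((END + 1))
done
wait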

 

Watching the inserts running…

targetdb>@sess
User:MATHEUS
USERNAME EVENT SQL_ID
---------- ---------- -------------------------
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink gt3mq5ct7mt6r
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from dblink 6qc1hsnkkfhnw
MATHEUS_BOESING SQL*Net message from client
MATHEUS_BOESING SQL*Net message from client
MATHEUS_BOESING SQL*Net message to client c7a5tcc3a84k6

After a while (26 hours, actually) the copy was successfully concluded. :)

Matheus.

EXP-00079 – Data Protected

A quick one: I began to have this problem on 12c backup catalog schemas. The reason is that, nowadays, all the information is protected by policies (VPD). The error:

EXP-00079: Data in table "&TABLE" is protected. Conventional path may only be exporting partial table.

The solution:

catalogdb> GRANT exempt access policy TO &exp_user;
Grant succeeded.
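If you want to check which tables are actually protected before granting anything, DBA_POLICIES shows it (a quick sketch; &CATALOG_OWNER is just a placeholder for the catalog schema):

select object_owner, object_name, policy_name
from dba_policies
where object_owner = '&CATALOG_OWNER';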

Hugs!
Matheus.

Charsets: Single-Byte vs Multibyte Encoding Scheme Issue

Sad story:

IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column "SCHEMA"."TABLE"."COLUMN" (actual: 61, maximum: 60)

To understand: it happens when the export/import is made between different charsets, usually when the destination is a multibyte superset and the source is a single-byte charset. The reason is that the less specific the charset is, the more bytes are used to represent a character (c-ç, a-ã, o-õ-ô, for example). This way, the columns that use byte length semantics end up sized differently between these databases.

Of course, the more specific the charset configuration is, the better it is for performance (especially for sequential reads), because the database needs to work with fewer bytes in datasets/data blocks for the same tuples, to explain it in a simple way. On the other hand, this is a quite specific configuration, and performance issues are usually better addressed by simpler tunings (SQL access plan, indexing, statistics or solution architecture) than by this kind of detail. But it’s worth mentioning if you’re working on a database that is already well tuned…

For more information, I recommend this (recent) documentation: https://docs.oracle.com/database/121/NLSPG/ch2charset.htm. Please invest your time in understanding the relation between “Single-Byte Encoding Schemes” and “Multibyte Encoding Schemes” in this doc.

The following image illustrates, in a simple way, the difference in the number of bytes used to address more characters (a characteristic of supersets):

[Image: single-byte vs. multibyte character encoding]

Ok, doke!
And the solution is…

Let’s summarize the problem first: the character columns (CHAR, VARCHAR2) use more bytes to represent the same characters. So, where the source column was filled up to its maximum length, the column length “explodes” in the destination database with a multibyte encoding scheme.
For consideration, I’m not using Data Pump (expdp/impdp, or impdp with network_link) just because it’s a legacy system with LONG columns, and this “deprecated” data type doesn’t play well with Data Pump in this scenario.
So, my solution for this one-off problem occurring during a migration was to change the length semantics of the character columns from “byte” to “char”. This way, the metric used is the number of characters rather than the byte size. Here is my “kludge” for you:

select 'ALTER TABLE '||owner||'.'||TABLE_NAME||' MODIFY '||COLUMN_NAME||' CHAR('||data_length||' CHAR);'
from dba_tab_cols
where DATA_TYPE='CHAR' and owner='&SCHEMA'
union all
select 'ALTER TABLE '||owner||'.'||TABLE_NAME||' MODIFY '||COLUMN_NAME||' VARCHAR2('||data_length||' CHAR);'
from dba_tab_cols
where DATA_TYPE='VARCHAR2' and owner='&SCHEMA';
-- Tip: add "and char_used = 'B'" to both branches to skip columns already using CHAR semantics.
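Another option, in case you pre-create the tables in the destination before importing, is to change the default length semantics at session level, so every character column created from then on uses CHAR semantics (a sketch; it does not change columns that already exist):

alter session set nls_length_semantics = CHAR;
-- run the CREATE TABLE scripts here, then import the data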

 

And it works!
Hugs and see ya!
Matheus.