Service Detected on OEM but not in SRVCTL or SERVICE_NAMES Parameter?

Okay, it happens.
It happened to me after moving a database from one cluster to another. The service was registered with SRVCTL in the old cluster but was no longer needed, so it was not registered in the new cluster.
But OEM insists on listing, for example, "service3" as offline. The problem is that you cannot remove it with SRVCTL, because it was never registered there, right? See the example below:

Listing services:
srvdatabase1:/home/oracle>srvctl status service -d systemdb
Service service1_systemdb is running on nodes: srvdatabase1
Service service2 is running on nodes: srvdatabase1
Service service2_systemdb is running on nodes: srvdatabase1

In the service_names parameter:
srvdatabase1:/home/oracle>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 8 15:21:00 2015
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> show parameters service;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      service2,test,systemdb

And the offline alarm goes to "service3"?
The easiest fix:

SQL> exec dbms_service.DELETE_SERVICE('service3');
PL/SQL procedure successfully completed.
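
By the way, to confirm the stale service lives only in the data dictionary (and that it's gone after the delete), a quick check helps; just a sketch, using the same example name:

SQL> SELECT name FROM dba_services WHERE name = 'service3';

If it shows up there but not in SRVCTL, DBMS_SERVICE is the right tool; once deleted, OEM should stop alarming on it.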

Matheus.

Ordering Sequences over RAC – Hang on ‘DFS lock handle’

Hi all!
What's up?
I had a fun weekend, so there are some things to write about. 🙂

This post is just to share an experience with the event 'DFS lock handle', related to sequence ordering across the cluster nodes.

I started a process to adjust PK IDs, due to an application number limitation (32-bit int -> 4294967296), aiming to move older entries and "release" some IDs.
As a legacy of database unification, we have a lot of tables with the same name in different schemas, which use the same sequence for PK ID generation.
The readjustment involves selecting from a sequence and is run in many parallel sqlplus calls, to optimize the time and fit the business maintenance window.

When I started, I just used the global service name (dedicated connections) and the SCAN listener. The result was connections distributed over the 5 nodes of the cluster. Bad idea.
At first, I suspected concurrency on the sequence across different nodes (which can occur if the per-node caches are too small), based on a few XA transaction bugs involving this event.

By the way, if you're facing this hang with XA transactions, please take a look at "High rdbms ipc reply and DFS lock handle in 11gR2 RAC With XA Enabled Application (Doc ID 1361615.1)".
It can be solved by setting "_clusterwide_global_transactions" to FALSE.
Additionally, it's recommendable to read the Best Practices for Using XA with RAC.

Take a look at the blocking sessions across the cluster nodes:

proddb4> @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------ ---------- ---------- ----------------- -------------- ----------- ---------------- -----------------
9386 147 4 DFS lock handle 9zr9vpvmkqzkv VALID 10968 4
9499 179 4 DFS lock handle fqk9y9q7u2d5c VALID 11082 4
8821 153 4 DFS lock handle 2jd84taf7krh2 VALID 13902 4
22442 1155 3 DFS lock handle 8ycpfxq2jthq3 VALID 9067 3
9860 1339 3 DFS lock handle 2jmzv23ug9kth VALID 10299 3
9772 1529 3 DFS lock handle 802kn9htah6pt VALID 22442 3
22543 1673 5 DFS lock handle 6tgvwkt6cqngk VALID 3074 5
22307 135 5 DFS lock handle 5b3zgqgq7bbdz VALID 3665 5
21010 91 5 DFS lock handle gkmycubvn9aa3 VALID 3546 5
9508 1459 3 DFS lock handle 7cw6bcjsf8xf2 VALID 10387 3
10299 4669 3 DFS lock handle 7y2tnuckh37wp VALID 11795 3
121 139 5 DFS lock handle 6tgvwkt6cqngk VALID 3310 5
596 113 5 DFS lock handle 8yqbzu29shvnm VALID 2603 5
360 113 5 DFS lock handle dv49pafm9z8zy VALID 596 5
10740 3177 3 DFS lock handle c6q65hnq0ju7x VALID 11707 3
9838 181 4 DFS lock handle aqa7afq2upkuq VALID 9386 4
714 77 5 DFS lock handle ft8xzyzhycpn2 VALID 360 5
9951 147 4 DFS lock handle 697mts944db7y VALID 9725 4
950 109 5 DFS lock handle cd2gsz5rb2qw9 VALID 3192 5
10387 1529 3 DFS lock handle 2tqnrbh0x60dp VALID 12238 3
10064 143 4 DFS lock handle d833wg4u9cfyb VALID 10649 1
833 1503 5 DFS lock handle 7ynbg2t4taxha VALID 2366 5
10649 53 1 DFS lock handle 0sgzmj1tbx4rh VALID 10737 1
2249 149 5 DFS lock handle aa6jr8ugxaz4z VALID 833 5
9612 175 4 DFS lock handle d2nrr4gtdjq9b VALID 9499 4
10825 57 1 DFS lock handle acmyc4sw7zzc2 VALID 10649 1
2603 1415 5 DFS lock handle fg47vs5wa8zq8 VALID 22307 5
2485 65 5 DFS lock handle 702x9zwtfktu6 VALID 714 5
10737 55 1 DFS lock handle bthxrpmz0ug63 VALID 12148 3

Okey doke, let's cancel the sessions and rerun the process in just one node (by SID). That should solve the small-cache hang across the cluster, without needing to modify the sequence, right?

Beeep. Wrong:

proddb4> @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------ ---------- ---------- --------------- ------------- ----------- ---------------- -----------------
2494 53953 4 DFS lock handle fc3cam368zsp6 UNKNOWN
6561 32113 4 DFS lock handle f618p0hd4xsy0 UNKNOWN
9269 111 4 DFS lock handle fkn8hxbsfkfnz UNKNOWN
9047 175 4 DFS lock handle fqk9y9q7u2d5c VALID 8931 4
459 12605 4 DFS lock handle 5b3zgqgq7bbdz VALID 9271 4
1929 305 4 DFS lock handle 6tgvwkt6cqngk VALID 8026 4
7349 1013 4 DFS lock handle 802kn9htah6pt UNKNOWN
7800 175 4 DFS lock handle 0hc1bmqj1fp4f UNKNOWN
21475 17349 4 DFS lock handle cfh3r4sq788vu VALID 9042 4
8026 641 4 DFS lock handle 6tgvwkt6cqngk VALID 459 4
14919 59 4 DFS lock handle gkmycubvn9aa3 VALID 15373 4
15032 2267 4 DFS lock handle 9zr9vpvmkqzkv VALID 7688 4
15145 2411 4 DFS lock handle ddkqx4xttc9s9 UNKNOWN
15373 1657 4 DFS lock handle 2jd84taf7krh2 VALID 15713 4
8934 157 4 DFS lock handle 8ycpfxq2jthq3 VALID 1929 4
15826 551 4 DFS lock handle d8dhmr2sx08xq VALID 9612 4
15713 3357 4 DFS lock handle 2jmzv23ug9kth VALID 10177 4
8821 155 4 DFS lock handle 9fpmw9cwak21s UNKNOWN
16050 7007 4 DFS lock handle 4t5qkth35r2um VALID 8705 4
2042 1269 4 DFS lock handle 7cw6bcjsf8xf2 UNKNOWN

What the hell!
Let's take a look at one of the SQLs to find the sequence…

proddb4> @sqlid 6tgvwkt6cqngk
UPDATE TABLE_XPTO SET RECNO = SEQ_OWNER.SEQ_NAME.NEXTVAL WHERE ROWID=:B1

And what about the sequence configuration?

proddb4> @getddl sequence SEQ_OWNER SEQ_NAME
create sequence SEQ_OWNER.SEQ_NAME
minvalue 1
maxvalue 9999999999
start with 85669803
increment by 1
cache 120000
cycle
order;

ORDER!
Man, of course. It creates extra synchronization across the nodes just to keep the sequence values in order, as explained in this post by Christo Kutrovsky.
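
By the way, to spot other sequences with the same setting across the database, something like this does the trick (a sketch):

SQL> SELECT sequence_owner, sequence_name, cache_size FROM dba_sequences WHERE order_flag = 'Y';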

In my situation, during the business maintenance window, ordering is not an important constraint. So, let's disable it:

proddb4> alter sequence SEQ_OWNER.SEQ_NAME noorder;
Sequence altered.

Then, TAADÃÃ!

proddb4> @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
----- ---------- ------- ------------------------ ------------- ----------- ---------------- -----------------
15145 2411 4 library cache: mutex X ddkqx4xttc9s9 UNKNOWN
15032 2267 4 library cache: mutex X 9zr9vpvmkqzkv UNKNOWN
14919 59 4 library cache: mutex X gkmycubvn9aa3 NOT IN WAIT
9269 111 4 library cache: mutex X fkn8hxbsfkfnz UNKNOWN
9047 175 4 library cache: mutex X fqk9y9q7u2d5c UNKNOWN
8934 157 4 library cache: mutex X 8ycpfxq2jthq3 UNKNOWN
8821 155 4 library cache: mutex X 9fpmw9cwak21s UNKNOWN
8026 641 4 library cache: mutex X 6tgvwkt6cqngk UNKNOWN
7800 175 4 library cache: mutex X 0hc1bmqj1fp4f UNKNOWN
7349 1013 4 library cache: mutex X 802kn9htah6pt UNKNOWN
2042 1269 4 library cache: mutex X 7cw6bcjsf8xf2 UNKNOWN
9160 1205 4 library cache: mutex X 6tgvwkt6cqngk NOT IN WAIT
9042 293 4 library cache: mutex X 4jc1u6n2qx94z UNKNOWN
10177 5611 4 library cache: mutex X c6q65hnq0ju7x UNKNOWN
9271 235 4 library cache: mutex X d2nrr4gtdjq9b NOT IN WAIT
9951 1191 4 library cache: mutex X bkdhxhhqdbqb9 UNKNOWN
8931 291 4 library cache: mutex X 697mts944db7y UNKNOWN
9838 1315 4 library cache: mutex X 6q6r7ht1hnctg NOT IN WAIT
8818 325 4 library cache: mutex X 2tqnrbh0x60dp UNKNOWN

Of course we're having some 'mutex X' waits, but it's a lot better than the DFS lock, and the process just "goes". 🙂

After it's done, to keep the original configuration, let's enable the ordering again:

proddb4> alter sequence SEQ_OWNER.SEQ_NAME order;
Sequence altered.

Matheus.

ASM: Adding disk “_DROPPED%” FORCE

Okey doke,
First let me make it clear: adding a disk with FORCE should be avoided, mainly because of all the rebalancing involved. The best choice, if you have "time", is to just put the disks back online, like:

1) ALTER DISKGROUP <dg_name> ONLINE DISK <disk_name>; or
2) ALTER DISKGROUP <dg_name> ONLINE DISKS IN FAILGROUP <failgroup_name>; or
3) ALTER DISKGROUP <dg_name> ONLINE ALL;
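
If the disks were only offline (not force-dropped), in my case that would have been something like (a sketch):

SQL> ALTER DISKGROUP DGDATA ONLINE DISKS IN FAILGROUP FGAUX;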

But this post is about adding the dropped disks back to the diskgroup.
To understand my situation, let's imagine you lost contact with one of your two storage sites… In this example, represented by the failgroup FGAUX. You would see the disks like this:

SQL> select name,failgroup,state from v$asm_disk a where state <> 'NORMAL';

NAME FAILGROUP STATE
------------------------------ ------------------------------ --------
_DROPPED_0000_DGDATA FGAUX FORCING
_DROPPED_0001_DGDATA FGAUX FORCING
_DROPPED_0002_DGDATA FGAUX FORCING

So, you can recognize your disks by the name pattern (DGDATA0xx are FGMAIN and DGDATA1xx are FGAUX, the problematic one). You can do something like:

[root@database-host ~]# /etc/init.d/oracleasm listdisks |grep DGDATA
DGDATA001
DGDATA002
DGDATA003
DGDATA101
DGDATA102
DGDATA103
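
If you'd rather double-check the candidate disks from the ASM side instead of oracleasm, a query like this helps (a sketch; the ORCL: prefix assumes ASMLib paths, as in this environment):

SQL> SELECT path, header_status FROM v$asm_disk WHERE path LIKE 'ORCL:DGDATA1%';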

Now, let's do the simple part… 🙂

SQL> ALTER DISKGROUP DGDATA ADD
FAILGROUP FGAUX
DISK
'ORCL:DGDATA101' name DGDATA101 FORCE,
'ORCL:DGDATA102' name DGDATA102 FORCE,
'ORCL:DGDATA103' name DGDATA103 FORCE;

Diskgroup altered.

SQL> ALTER DISKGROUP DGDATA rebalance power 8;

Diskgroup altered.

While waiting for the rebalance, let's follow the operation and check the disks in the DG:

SQL> select * from v$asm_operation where group_number=(select group_number from v$asm_diskgroup where name='DGDATA');

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
3 REBAL WAIT 8
SQL> select name,failgroup,state from v$asm_disk a where group_number=(select group_number from v$asm_diskgroup where name='DGDATA');

NAME FAILGROUP STATE
------------------------------ ------------------------------ --------
_DROPPED_0000_DGDATA FGAUX FORCING
_DROPPED_0001_DGDATA FGAUX FORCING
_DROPPED_0002_DGDATA FGAUX FORCING
DGDATA101 FGAUX NORMAL
DGDATA102 FGAUX NORMAL
DGDATA103 FGAUX NORMAL
DGDATA001 FGMAIN NORMAL
DGDATA002 FGMAIN NORMAL
DGDATA003 FGMAIN NORMAL

And, when the rebalance ends, the situation will be OK:

SQL> select * from v$asm_operation where group_number=(select group_number from v$asm_diskgroup where name='DGDATA');

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
3 REBAL RUN 8 8 629 19087 10143 1

SQL> select * from v$asm_operation where group_number=(select group_number from v$asm_diskgroup where name='DGDATA');

no rows selected

SQL> select name,failgroup,state from v$asm_disk a where group_number=(select group_number from v$asm_diskgroup where name='DGDATA');

NAME FAILGROUP STATE
------------------------------ ------------------------------ --------
DGDATA101 FGAUX NORMAL
DGDATA102 FGAUX NORMAL
DGDATA103 FGAUX NORMAL
DGDATA001 FGMAIN NORMAL
DGDATA002 FGMAIN NORMAL
DGDATA003 FGMAIN NORMAL

OK? Easy! 😀

Matheus.

ORA-10456: cannot open standby database; media recovery session may be in progress

Easy, easy… Take a look:

# Error

db2database2p:>srvctl status database -d database
Instance database1 is running on node db1database1p
Instance database2 is not running on node db2database2p
db2database2p:>srvctl start instance -d database -i database2
PRCR-1013 : Failed to start resource ora.database.db
PRCR-1064 : Failed to start resource ora.database.db on node db2database2p
CRS-5017: The resource action "ora.database.db start" encountered the following error:
ORA-10456: cannot open standby database; media recovery session may be in progress
. For details refer to "(:CLSN00107:)" in "/grid/product/11.2.0/log/db2database2p/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.database.db' on 'db2database2p' failed.

# Solution

db2database2p:>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Thu Jun 4 20:27:46 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area 1.1224E+11 bytes
Fixed Size 2234920 bytes
Variable Size 6.1472E+10 bytes
Database Buffers 5.0466E+10 bytes
Redo Buffers 299741184 bytes
Database mounted.
ORA-10456: cannot open standby database; media recovery session may be in progress

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> ALTER DATABASE OPEN READ ONLY;
Database altered.
SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
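
By the way, if this is an Active Data Guard standby and you want redo apply running again while it's open read only, the usual next step is to restart the managed recovery; a hedged sketch:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;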

db2database2p:>srvctl status database -d database
Instance database1 is running on node db1database1p
Instance database2 is running on node db2database2p

Matheus.

Grepping Entries from Alert.log

Hey hey,
One more MacGyver by me! Haha
Again, to find some information in the alert log. This time, I want to count and list all occurrences of an action in the alert. To achieve this, I made the script below.


The functionality is just a little more complex than the script from the last post, but still quite simple. Take a look:

Parameters:
PAR1: Name of the alert file (the main alert.log).
PAR2: Searched token.
PAR3: Start day, in the format "Mon dd" or just "Mon". See the examples below.
PAR4: Start year (4 digits).
PAR5: [optional] End day, in the format "Mon dd" or just "Mon". The default value is "until now".
PAR6: [optional] End year (4 digits). The default value is "until now". If you use PAR5, you have to use PAR6.
PAR7: [optional] List all entries and when they happened. To use this parameter, you must also use PAR5 and PAR6.

Examples (looking for service reconfigurations):
Ex1: sh grep_entries_alert.sh alert_xxdb_1.log "services=" "Apr 12" 2015
(Search between April 12 and now, and count the entries.)
Ex2: sh grep_entries_alert.sh alert_xxdb_1.log "services=" "Apr 01" 2015 "May 30" 2015
(Search between April 01 and May 30, and count the entries.)
Ex3: sh grep_entries_alert.sh alert_xxdb_1.log "services=" "Apr 01" 2015 "May 30" 2015 LIST
(Search between April 01 and May 30, count the entries, and list them all…)

# Script grep_entries_alert.sh
if [ $# -lt 6 ]; then
FIN=`cat $1 |wc -l`
else FIN=`cat $1 |grep -n "$5" |grep "$6$" |head -n 1 |cut -d':' -f1`
fi
BEG=`cat $1 |grep -n "$3" |grep "$4$" |head -n 1 |cut -d':' -f1`
NMB=`expr $FIN - $BEG`
ENTR=`cat $1 |head -n $FIN |tail -$NMB |grep "$2" |wc -l`
echo Number of Entries: $ENTR >log.log
if [ $# -ge 7 ]; then  # list the entries only when PAR7 is given
echo ------- Complete List Of Entries and When ---------- >>log.log
rm -f aux.sh
for line in `cat $1 |head -n $FIN |tail -$NMB |grep -n "$2" |cut -d':' -f1`;do
LR=`expr $line + $BEG` # The "real" line number, adding back the displacement
DAT=`expr $LR - 1`     # The date line of the entry (one line above)
echo awk \'NR==$DAT\' $1 >>aux.sh # Print the two lines just calculated
echo awk \'NR==$LR\' $1 >>aux.sh  # through aux.sh
done;
sh aux.sh >>log.log
fi
cat log.log

It’s not beautiful. But it works! 🙂

After that, here's the new blog sponsor: MacGyver!
(Hahahaha)

Matheus.

Grepping Alert by Day

Hi all,
For that moment when your alert log is very big and the OS doesn't "work very well with it" (in my case, on AIX), I jerry-rigged the shell script below. It puts into a new log just the entries of a selected day.


The call can be made with two or three parameters, this way:

Parameters:
PAR1: Name of the alert file (the main alert.log).
PAR2: Day you want, in the format "Mon dd". See the examples below.
PAR3: [optional] Desired year. The default is the current year, but it's useful especially around the "new year" period…

Examples:
Ex1: sh grep_day.sh alert_xxdb_1.log "Apr 12"
Ex2: sh grep_day.sh alert_xxdb_1.log "Apr 12" 2014

Generated files:
dalert_2015Apr12.log
dalert_2014Apr12.log

# Script grep_day.sh
if [ $# -lt 3 ]; then
YEAR=`date +"%Y"`
else YEAR=$3
fi
DATEFORMAT=`echo $2|cut -d' ' -f1`""`echo $2|cut -d' ' -f2`
BEG=`cat $1 |grep -n "$2" |grep $YEAR |head -1 |cut -d':' -f1`
FIN=`cat $1 |grep -n "$2" |grep $YEAR |tail -1 |cut -d':' -f1`
NMB=`expr $FIN - $BEG`
cat $1 |head -$FIN |tail -$NMB > dalert_$YEAR$DATEFORMAT.log

Believe me! It can be useful… haha

See ya!

Matheus.

Leap Second and Impact for Oracle Database

Don't know what this is? Oh boy, I suggest you take a look…

It may sound a little crazy, but it's about a one-second adjustment to keep universal time aligned with atomic time. Something like that. To understand it, take a look at:
http://www.meinberg.de/english/info/leap-second.htm
http://en.wikipedia.org/wiki/Coordinated_Universal_Time
http://en.wikipedia.org/wiki/International_Atomic_Time
http://www.britannica.com/EBchecked/topic/136395/Coordinated-Universal-Time
http://www.britannica.com/EBchecked/topic/290686/International-Atomic-Time

Okey doke!
But what about the Oracle Database adjustment? Good news: nothing to do! 😀

In Oracle's words: "The Oracle RDBMS needs no patches and has no problem with the leap second changes on OS level."

But, attention!
If your application uses timestamp or sysdate, check how the adjustment is applied at OS level. If it presents a "60" second, it can result in ORA-01852, since 60 seconds is an illegal value for the date or timestamp datatype.
(Insert leap seconds into a timestamp column fails with ORA-01852 (Doc ID 1553906.1))
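
You can reproduce it yourself; a sketch (assuming an English NLS date language):

SQL> SELECT TO_TIMESTAMP('30-JUN-2015 23:59:60','DD-MON-YYYY HH24:MI:SS') FROM dual;
-- Raises ORA-01852, because 60 is not a valid value for the seconds field.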

Other possibilities are documented in these notes:
NTP leap second event causing Oracle Clusterware node reboot (Doc ID 759143.1)
(Oracle VM and RHEL 4.4 to 6.2): Leap Second Hang – CPU Can Be Seen at 100% (Doc ID 1472421.1)
(OEM on Linux): Enterprise Manager Management Agent or OMS CPU Use Is Excessive near Leap Second Additions on Linux (Doc ID 1472651.1)

So, pay attention! 🙂

Here are other Oracle notes that I recommend taking a look at:
Leap seconds (extra second in a year) and impact on the Oracle database. (Doc ID 730795.1)
Leap Second Time Adjustment (e.g. on June 30, 2015 at 23:59:59 UTC) and Its Impact on Exadata Database Machine (Doc ID 1986986.1)
How Leap Second Affects The OS Clock on Linux and Oracle VM (Doc ID 1453523.1)
What Leap Second Affects Occur In Tuxedo? (Doc ID 1461363.1)
Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1)
Leap Second Handling in Solaris – NTPv3 and NTPv4 (Doc ID 1019692.1)
Strftime(3c) Does Not Show The Leap Second As 23:59:60 (Doc ID 1444354.1)
Any Effect of Leap Seconds to MessageQ? (Doc ID 1461606.1)

Matheus.

Unplug/Plug PDB between different Clusters

Everyone tests, writes, and shows how to move pluggable databases between containers (CDBs) in the same cluster, but only a few write about moving pluggable databases between different clusters, with isolated storage. So, let's do that:

OBS: To make it easy to understand, this post is about the migration of a pluggable database (BACENDB) from a cluster named ORAGRID12C and a container database named INFRACDB to the cluster CLBBGER12, into the container CDBBGER.

1. Access the container INFRACDB (cluster ORAGRID12C) and list the PDBs:

2. Shutdown BACENDB:
(Of course it didn't work with a normal shutdown. I don't know what I was thinking… haha)

3. Unplug BACENDB (PDB) to XML (must be done from the pluggable, as you see…)
4. Create an ACFS (180G) to use as a "migration area", mounted on "/migration/" in the ORAGRID12C cluster:

5. Copy the datafiles and tempfiles to "/migration" through ASMCMD cp:

6. Export the ACFS and mount it as NFS on the destination (CLBBGER12):

7. Create (plug) the pluggable on the new cluster (CDBBGER), using "MOVE" FILE_NAME_CONVERT to send the files to the diskgroup +DGCDBBGER:

7.1 What does it look like in the alert.log?

7.2 How about the datafiles?

7.3 Checking the database via remote SQL*Plus:

8. Create the services as needed:

9. Drop the pluggable from INFRACDB.
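
Putting the key steps in commands, the flow looks roughly like this (a sketch using the names above; the XML and mount paths are illustrative):

-- On INFRACDB (ORAGRID12C):
SQL> ALTER PLUGGABLE DATABASE bacendb CLOSE IMMEDIATE INSTANCES=ALL;
SQL> ALTER PLUGGABLE DATABASE bacendb UNPLUG INTO '/migration/bacendb.xml';

-- On CDBBGER (CLBBGER12), after copying the files and mounting the NFS:
SQL> CREATE PLUGGABLE DATABASE bacendb USING '/migration/bacendb.xml'
  2  MOVE FILE_NAME_CONVERT = ('/migration/','+DGCDBBGER/');

-- Back on INFRACDB, once everything is checked:
SQL> DROP PLUGGABLE DATABASE bacendb INCLUDING DATAFILES;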

That's okay? Of course, there are a few other ways to copy the files from one infrastructure to another, like scp rather than mount.nfs, RMAN copy, or other possibilities…

By the way, one of the restrictions of pluggable migration is that both sides must use the same endian format. But it's possible to use RMAN CONVERT PLATFORM and convert the datafiles to a filesystem, isn't it?
So, I guess it's not a hard limitation. I must test it and write another post… haha

About this post: this link helped, but, again, it doesn't mention a different cluster/infra/storage.

Matheus.

Oracle Convert Number into Days, Hours, Minutes

There's a little trick…
Today I had to convert a "number" of minutes into hours:minutes format. Something like converting 570 minutes into hours:minutes. As you know, 570/60 is "9.5", and it should be presented as "9:30".

Let's use 86399 seconds (23:59:59) as an example:

I began testing “to_char(to_date)” functions:
boesing@db>select to_char(to_date(86399,'sssss'),'hh24:mi:ss') formated from dual;

FORMATED
--------
23:59:59

OK, it works. But it uses "seconds past midnight" (sssss), so it only works between 0 and 86399:

boesing@db> select to_char(to_date(86400,'sssss'),'hh24:mi:ss') from dual;
select to_char(to_date(86400,'sssss'),'hh24:mi:ss') from dual
*
ERROR at line 1:
ORA-01853: seconds in day must be between 0 and 86399

The problem remains: how to handle other inputs, like minutes (570 minutes -> 9:30), for example?
The best way I solved it was:

--- Seconds into hours:minutes:seconds
--- If you comment out the first "TO_CHAR" line, it works for minutes into hours:minutes too.
--- Replace vlr with your value or column:
select
TO_CHAR(TRUNC(vlr/3600),'FM9900') || ':' || -- hours
TO_CHAR(TRUNC(MOD(vlr,3600)/60),'FM00') || ':' || -- minutes
TO_CHAR(MOD(vlr,60),'FM00') -- seconds
from dual;

It always works. 🙂

boesing@db>select
2 TO_CHAR(TRUNC(86399/3600),'FM9900') || ':' || -- hours
3 TO_CHAR(TRUNC(MOD(86399,3600)/60),'FM00') || ':' || -- minutes
4 TO_CHAR(MOD(86399,60),'FM00') -- second
5 from dual;

TO_CHAR(TRUNC
-------------
23:59:59

boesing@db>select
2 TO_CHAR(TRUNC(570/3600),'FM9900') || ':' || -- hours
3 TO_CHAR(TRUNC(MOD(570,3600)/60),'FM00') || ':' || -- minutes
4 TO_CHAR(MOD(570,60),'FM00') -- seconds
5 from dual;

TO_CHAR(TRUNC
-------------
00:09:30

boesing@db>select
2 TO_CHAR(TRUNC(MOD(570,3600)/60),'FM00') || ':' || -- hours
3 TO_CHAR(MOD(570,60),'FM00') -- minutes
4 from dual;

TO_CHAR
-------
09:30
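
Another option worth mentioning is going through an interval type; a sketch (the default display is verbose, so some string formatting is still needed):

boesing@db>select numtodsinterval(570,'minute') from dual;
-- Returns an INTERVAL DAY TO SECOND, shown as something like +000000000 09:30:00.000000000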

Any better way? Leave a comment. Thanks!

Matheus.

ORA-24777: use of non-migratable database link not allowed

Today I ran into this error again and, as usual, I had already forgotten what the solution was. So here goes my note. 🙂
I had to dig into memory more than anything else, since all the links on the first page of Google had only the error stack, but no solution. Which is actually quite simple:

The error occurs when a session using a shared connection (XA) uses a DBLink with a dedicated connection. Using XA over JDBC is a datasource configuration, if you're using WebLogic, JBoss, Tomcat, or another container with Java support.
The solution I've adopted so far consists of setting the DBLink "host" (or the TNS entry used by the DBLink in the source database's tnsnames file) to use the "shared" connection type for the remote connection.

Ex. before: (DESCRIPTION= (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=<host>)(PORT=<port>))) (CONNECT_DATA= (SERVICE_NAME=<service>) (SERVER=DEDICATED)))
Ex. after: (DESCRIPTION= (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=<host>)(PORT=<port>))) (CONNECT_DATA= (SERVICE_NAME=<service>) (SERVER=SHARED)))

According to stackoverflow, it can also be solved by creating the database link as shared.
Ex: CREATE SHARED DATABASE LINK <link_name> CONNECT TO bob IDENTIFIED BY "MyNewPasswd1" AUTHENTICATED BY jim IDENTIFIED BY "JimsPass23" USING 'DB01';

Actually, the second solution seems much more elegant to me. But I only realized it existed after having already solved the problem with the first approach.
If it happens again, I'll test the second way and comment here. I promise. 😉

Ah!
I guess there's no need to mention that the DBLink's target database must be listening on a service with XA support, right?
That's not the subject of this post, so I won't go into details. But basically you need to set the following parameters to non-zero values: max_shared_servers, shared_servers, max_dispatchers, and dispatchers; and register the dispatcher service with the listener. Google it, that part is a piece of cake!
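
Just for reference, a hedged sketch of what that setup might look like on the target database (the values are illustrative; size them for your workload):

SQL> ALTER SYSTEM SET shared_servers=5 SCOPE=BOTH;
SQL> ALTER SYSTEM SET max_shared_servers=20 SCOPE=BOTH;
SQL> ALTER SYSTEM SET dispatchers='(PROTOCOL=TCP)(DISPATCHERS=2)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET max_dispatchers=10 SCOPE=BOTH;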

Cheers!
Matheus.