Error: Starting ACFS in RHEL 6 (Can’t exec “/usr/bin/lsb_release”)

Quick tip:

# Error:
[root@db1gridserver1 bin]# ./acfsload start -s
Can't exec "/usr/bin/lsb_release": No such file or directory at /grid/product/11.2.0/lib/osds_acfslib.pm line 511.
Use of uninitialized value $LSB_RELEASE in split at /grid/product/11.2.0/lib/osds_acfslib.pm line 516.

# Solution:
[root@db1gridserver1 bin]# yum install redhat-lsb-core-4.0

Note: Bug 17359415 – Linux: Configuring ACFS reports that cannot execute ‘/usr/bin/lsb_release’ (Doc ID 17359415.8)

Matheus.

Getting Oracle Parameters: Hidden and Unhidden

Today’s post is a quick post!
Very quick post! Very, very quick post!
But it’s a helpful post!

Connected as sys with sysdba:

select x.ksppinm name,
ksppdesc description,
y.kspftctxvl value,
y.kspftctxdf isdefault,
decode(bitand(y.kspftctxvf, 7), 1, 'MODIFIED', 4, 'SYSTEM_MOD', 'FALSE') ismod,
decode(bitand(y.kspftctxvf, 2), 2, 'TRUE', 'FALSE') isadj
from sys.x$ksppi x, sys.x$ksppcv2 y
where x.inst_id = userenv('Instance')
and y.inst_id = userenv('Instance')
and x.indx + 1 = y.kspftctxpn
order by name;
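
If you are looking for one parameter in particular, just add a filter on the name. A quick sketch (the parameter name here is only an example):

select x.ksppinm name,
ksppdesc description,
y.kspftctxvl value,
y.kspftctxdf isdefault
from sys.x$ksppi x, sys.x$ksppcv2 y
where x.inst_id = userenv('Instance')
and y.inst_id = userenv('Instance')
and x.indx + 1 = y.kspftctxpn
and x.ksppinm like '%db_file_multiblock_read_count%' -- matches both the hidden and the regular parameter
order by name;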

Matheus.

Application Hangs: resmgr:become active

Application APP hangs on the resmgr:become active event. There is a resource plan in place with a specific consumer group for this application. What is wrong, and how do we fix it?

Here I presume you know what Resource Manager and a resource plan are and, of course, what they exist for. You also need to know that this event is related to the number of active sessions hitting the limit of the consumer group in the resource plan.

Before anything else, check whether this is acceptable behavior for the application. Then, see which consumer group the sessions waiting on this event belong to. Is there another application in this same group behaving unacceptably? Yes? So, fix it.
No? Consider adjusting the resource plan, switching the application to a new group, or, like in my case, remapping the application APP to the right group… ¬¬

To make it clear: in my case the mapping was missing, so the schema MYAPP (application APP) fell into OTHER_GROUPS, where we usually set minimal limits:

SID SERIAL# INST_ID USERNAME RESOURCE_CONSUMER_GROUP EVENT
----- ---------- ---------- ------------------------------ -----------
492 29459 2 MYAPP OTHER_GROUPS resmgr:become active
1102 19145 2 MYAPP OTHER_GROUPS resmgr:become active
955 33161 2 MYAPP OTHER_GROUPS resmgr:become active
1084 33839 2 MYAPP OTHER_GROUPS db file sequential read
MYDB> show parameters resource_manager_plan
NAME TYPE VALUE
--------------------- ------ --------------
resource_manager_plan string MYDB_PLAN
MYDB> select group_or_subplan, active_sess_pool_p1, cpu_p1, cpu_p2, cpu_p3, cpu_p4 from DBA_RSRC_PLAN_DIRECTIVES where plan = '&plano';
Enter value for plano: MYDB_PLAN
GROUP_OR_SUBPLAN ACTIVE_SESS_POOL_P1 CPU_P1 CPU_P2 CPU_P3 CPU_P4
------------------------------ ------------------- ---------- ---------- ---------- ----------
BATCH_GROUP 60 0 10 0 0
SYS_GROUP 80 0 0 0
APP_PLAN 20 0 30 0 0
OTHER_GROUPS 20 0 20 0 0
GGATE_GROUP 0 10 0 0
PAYTRUE_GROUP 40 0 30 0 0
DBA_GROUP 20 0 0 0

You can configure the mapping by user like this:

BEGIN
DBMS_RESOURCE_MANAGER.clear_pending_area;
DBMS_RESOURCE_MANAGER.create_pending_area;
DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
attribute => DBMS_RESOURCE_MANAGER.oracle_user,
-- DBMS_RESOURCE_MANAGER.service_name (or a lot of possibilities. Google it!)
value => 'MYAPP',
consumer_group => 'APP_PLAN');
DBMS_RESOURCE_MANAGER.validate_pending_area;
DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

To switch the sessions that are already connected, you can generate the commands like this:

SELECT 'EXEC DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS ('''||SID||''','''||SERIAL#||''',''APP_PLAN'');' FROM V$SESSION where username='MYAPP'
and RESOURCE_CONSUMER_GROUP='OTHER_GROUPS';
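
After the submit, a couple of sanity checks help: one on the mapping itself, and one on the group of the sessions already connected. A simple sketch:

select attribute, value, consumer_group
from dba_rsrc_group_mappings
order by consumer_group, attribute;

select username, resource_consumer_group, count(*)
from gv$session
where username = 'MYAPP'
group by username, resource_consumer_group;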

Remember that creating a resource plan without making the mappings is a bit pointless… 😛

Matheus.

Application Looping Until It Locks a Row with the NOWAIT Clause

Yesterday I handled an interesting situation:
A batch process was stuck on the “SQL*Net message from client” event, but the last_call_et was always 1 or 0. Seems OK, just some client-side contention to send the commands to the DBMS, right? Nope.

It was caused by a loop in the application code “waiting” for a row lock, but without any wait event on the database side (something like “select * from table for update nowait” being retried over and over). Take a look at how it was identified below.

First, the session with no SQL_ID, no wait events and last_call_et=0, belonging to the “BATCH_PROCESS” user:

proddb2> @sid
Sid:9796
Inst:
LAST_CALL_ET SQL_ID   EVENT STATUS SID SERIAL# INST_ID USERNAME
------------ ------- ------------- ---------- ------------------------
0 SQL*Net message from client INACTIVE 9796 45117 2 BATCH_PROCESS
proddb2> @trace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.

As you can see, having no idea of what was happening, I started a trace. The trace was stuck on this:

*** 2015-06-15 14:03:25.755
WAIT #4574470448: nam='SQL*Net message from client' ela=993072 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833326636999
CLOSE #4574470448:c=10,e=15,dep=0,type=3,tim=12833326637228
PARSE #4574470448:c=25,e=41,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1139820409,tim=12833326637286
BINDS #4574470448:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=01 fl2=1000000 frm=00 csi=00 siz=24 off=0
kxsbbbfp=110a8d0d8 bln=22 avl=05 flg=05
value=5022011
WAIT #4574470448: nam='gc cr block 2-way' ela= 709 p1=442 p2=5944 p3=8483 obj#=0 tim=12833326638533
WAIT #4574470448: nam='gc cr block 2-way' ela= 541 p1=3 p2=2088264 p3=4367 obj#=0 tim=12833326639352
WAIT #4574470448: nam='gc cr block 2-way' ela= 651 p1=442 p2=5944 p3=8483 obj#=0 tim=12833326641673
WAIT #4574470448: nam='enq: TX - row lock contention' ela= 1093 name|mode=1415053318 usn<obj#=23141074 tim=12833326643029
EXEC #4574470448:c=1776,e=5836,p=0,cr=117,cu=1,mis=0,r=0,dep=0,og=1,plh=1139820409,tim=12833326643150
ERROR #4574470448:err=54 tim=12833326643172
WAIT #4574470448: nam='SQL*Net break/reset to client' ela= 9 driver id=1413697536 break?=1 p3=0 obj#=23141074 tim=12833326643373
WAIT #4574470448: nam='SQL*Net break/reset to client' ela= 503 driver id=1413697536 break?=0 p3=0 obj#=23141074 tim=12833326643891
WAIT #4574470448: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833326643915

AHÁ!
Did you see the “err=54” there? Yes. You know this error:

ORA-00054: Resource busy and acquire with NOWAIT specified

It’s caused by a SELECT FOR UPDATE NOWAIT in the code.
But this select is inside a loop, so the session doesn’t move on until it gets the lock.
(Obviously it could be coded with some better treatment/logic for this loop and its errors, buuuut…)
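
Just to illustrate, a sketch of what a friendlier loop could look like, using the table and column from this case (the real application logic is obviously more complex, and DBMS_LOCK.SLEEP needs an execute grant):

declare
resource_busy exception;
pragma exception_init(resource_busy, -54); -- maps ORA-00054 to a named exception
l_row owner_example.table_xpto%rowtype;
l_locked boolean := false;
begin
for i in 1 .. 10 loop -- bounded retries instead of an endless loop
begin
select * into l_row
from owner_example.table_xpto
where recno = 5022011
for update nowait;
l_locked := true;
exit;
exception
when resource_busy then
dbms_lock.sleep(1); -- back off for a second before trying again
end;
end loop;
if not l_locked then
raise_application_error(-20001, 'row still locked after 10 attempts');
end if;
-- do the real work and commit here
end;
/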

What can we do now?
The easy way is to discover the holding session and kill it.
And sometimes the easy way is the best way. 😉

For that, we use the “obj#” and “value” highlighted in the trace above.
As I know the application, I know that the column used in all the where clauses is RECNO. If you don’t, you’ll need to figure that out first. With this information in mind:

proddb2>select * from dba_objects where object_id='23141074';
OWNER OBJECT_NAME
------------------------------ ----------------
OWNER_EXAMPLE TABLE_XPTO
proddb2> select * from OWNER_EXAMPLE.TABLE_XPTO WHERE recno=5022011;
COL_KEY FSAMED0 FSAMED1 FSMNEG1 FSMNEG2 FSMNEG3 COL_DATE RECNO
------- ---------- ---------- ---------- ---------- ---------- -----
1002974 0 0 -516.8 0 0 15/06/2015 00:00:00 5022011

OK, now I know which row is being held by the other session.
Let’s find out which session is holding it by locking the row myself (but, in my case, without the NOWAIT clause, so my session waits and I have time to find the holder):

proddb5>select * from OWNER_EXAMPLE.TABLE_XPTO WHERE recno=5022011 for update;

In another sqlplus session:

proddb2> @me
INST_ID SID SERIAL# USERNAME EVENT BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------- ---------- ---------- --------------- ----------------------
5 14174 479 MATHEUS_BOESING enq:TX - row lock contention VALID 11006 1
2 4233 12879 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT
1 15410 7697 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT
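
(The @me script, by the way, is little more than a query on gv$session filtered by my own username. Something in this line:)

select inst_id, sid, serial#, username, event,
blocking_session_status, blocking_session, blocking_instance
from gv$session
where username = user
order by inst_id, sid;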

AHÁ again!
The SID 11006. Let’s see who is there:

proddb2> @sid
Sid:11006
Inst:
SQL_ID SEQ# EVENT STATUS SID SERIAL# INST_ID USERNAME
-------------------- ---------- --------------------------------------
9jzm6vn5j06js 24919 enq: TX - row lock contention ACTIVE 11006 44627 1 DBLINK_OTHER_BATCH_SCHEMA

OK, it’s a session from a different batch process, connecting from a remote database over a database link, that is holding this row. As it’s less relevant, let’s kill it! Muahaha!
Then, as you’ll see, my session gets the lock and is now in the middle of a transaction:

proddb1> @kill
***
sid : 11006
serial : 44627
***
System altered.
***
proddb1> @me
INST_ID SID SERIAL# USERNAME EVENT BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------- ---------- ---------- --------------- --------------------
5 14174 479 MATHEUS_BOESING transaction UNKNOWN
2 4332 56037 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT
1 12058 9 MATHEUS_BOESING class slave wait NO HOLDER

To release the locked row to the main process, let’s “suicide” (kill my own session, which is now the one holding the row lock):

proddb5> @kill
***
sid : 14174
serial : 479
***
System altered.
***

After killing all the holding sessions, my BATCH_PROCESS just moved on! 😀
Take a look at the trace (now running OK):

WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531019
FETCH #4576933904:c=45,e=71,p=0,cr=3,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981531062
WAIT #4576933904: nam='SQL*Net message from client' ela= 562 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531654
WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531788
FETCH #4576933904:c=55,e=86,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981531826
WAIT #4576933904: nam='SQL*Net message from client' ela= 715 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981532576
WAIT #4576933904: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981532721
FETCH #4576933904:c=61,e=96,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981532758
WAIT #4576933904: nam='SQL*Net message from client' ela= 600 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981533617
WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981534163
FETCH #4576933904:c=52,e=82,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981534203
WAIT #4576933904: nam='SQL*Net message from client' ela= 517 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981534752

Now, with the problem solved, let’s disable the trace and get back to the other daily tasks… 🙂

proddb2> @untrace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.

I hope it was useful!
If it helped you, leave a comment! 😀

See ya!
Matheus.

High CPU usage by LMS and Node Evictions: Solved by Setting “_high_priority_processes”

Another thing that may help you in environments with highly interdependent applications:

Our environment has a lot of block exchange over the interconnect and, as a consequence, high CPU usage by the Global Cache Service (GCS)/Lock Manager Server (LMS) processes.

So, for every little latency spike on the interconnect interface, we were having a node eviction, with all the impact on the legacy application you can imagine (no GridLink or any other solution to make the relocation ‘transparent’, as is usual with legacy applications) and, of course, the business impact.

Oracle obviously suggested that we reduce block concurrency across the cluster nodes by grouping the applications by affinity. But that’s just not applicable to our environment… 🙁

When nothing seemed to help, the workaround came from here: Top 5 Database and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1).

Here is our change:

boesing@proddb> alter system set "_high_priority_processes"='LMS*|LGWR|VKTM' scope=spfile sid='*';
System altered.
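
To confirm what is set in the spfile (hidden parameters only show up here once they are explicitly set), a quick check like this works, or the x$ksppi query from the “Getting Oracle Parameters” post above:

select sid, name, value
from v$spparameter
where name = '_high_priority_processes';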

No magic, but the problem stopped happening. Since then, we have been getting some warnings about clock synchronization across the cluster nodes in the CRS alerts, like this:

CRS-2409:The clock on host proddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

I believe it happens because VKTM lost priority. But that’s OK: the node evictions have stopped! 😀

Matheus.

VPD: “row cache objects” latch contention

The other day, we found a high occurrence of latch events in our core environment (11.2.0.3.0). The sources are the various “business channel” applications that access objects through VPD. These latch events had been slowly dominating the environment over the last months and raised an “attention alarm” for us.

Then we found the note: Bug 12772404 – Significant “row cache objects” latch contention when using VPD – superseded (Doc ID 12772404.8).

The situation was exactly the same:

“When VPD is used, intense row cache objects latch contention (dc_users) may be caused by an internal Exempt Access Policy privilege check. Rediscovery Information:
VPD is in use
Significant “latch: row cache objects” waits occur
The waits are for the latch covering dc_users”

Take a look at the DC_USERS latches:
[image: dc_users]

And about the workaround:
“There is no direct workaround available.
The following guidelines may help to alleviate the problem:
– Dropping the database roles from the user:
The number of roles granted to a user can increase the row cache look-ups proportionally. When the database needs to check whether a system privilege is granted to a user, it checks if that privilege is granted to any of the user’s roles. Hence, it’s not helpful to do something like “set role A, B, C, D, F …” to recreate its environment for every execution.
– Changing the policy function might be helpful in some cases,
eg: to use CONTEXT dependent policies instead of DYNAMIC policies”
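
Just to illustrate the last suggestion: recreating a policy as CONTEXT_SENSITIVE instead of DYNAMIC would look roughly like this (all schema, table and function names below are hypothetical):

begin
dbms_rls.drop_policy(
object_schema => 'CHANNELAPP',
object_name => 'SOME_TABLE',
policy_name => 'SOME_POLICY');
dbms_rls.add_policy(
object_schema => 'CHANNELAPP',
object_name => 'SOME_TABLE',
policy_name => 'SOME_POLICY',
function_schema => 'SEC_ADMIN',
policy_function => 'FN_SOME_POLICY',
statement_types => 'select,insert,update,delete',
policy_type => dbms_rls.context_sensitive); -- predicate re-evaluated only when the session context changes
end;
/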

Take a look at one example of the waits we were seeing:

boesing@mydb4> /
P1RAW EVENT USERNAME SQL_ID SQL_CHILD_NUMBER LAST_CALL_ET SID SEQ# WAIT_TIME SECOND
--------- ---------------- ------ ------ ---------- ---------- ------
0700011807B50D08 latch: row cache objects CHANNELAPP 4nwvpx8xt3h3m 22 0 1276 59113 0
0700011807B50D08 latch: row cache objects CHANNELAPP fp3mft3usb74w 0 21719 16636 0
0700011807B50D08 latch: row cache objects CHANNELAPP 58pund2p09hgg 0 6774 11061 0
0700011807B50D08 latch: row cache objects OTHER_CHANNELAPP 54a2wfa60rgu1 1 0 8046 12386 0
0700011807B50D08 latch: row cache objects CHANNELAPP 1gwr69wduk9v4 42 0 9454 53927 0
0700011807B50D08 latch: row cache objects OTHER_CHANNELAPP 9pqrqqfzukrq4 68 0 9732 19311 0
0700011807B50D08 latch: row cache objects CHANNELAPP d1bnq8wb0nhrf 0 1 11425 56830 -1
0700011807B50D08 latch: row cache objects CHANNELAPP 32aqdd8cbmc4b 0 11711 39182 0
0700011807B50D08 latch: row cache objects IB_RUN adgnrpwazbfmz 0 12133 3372 0
0700011807B50D08 latch: row cache objects IB_RUN cqmgxvb78q9hy 0 17913 6345 0
0700011807B50D08 latch: row cache objects CHANNELAPP byzm159jbjxaa 0 6 19606 52624 0
0700011807B50D08 latch: row cache objects OTHER_CHANNELAPP 2kbjztd9yzqfm 61 0 20732 28687 0
0700011807B50D08 latch: row cache objects CHANNELAPP 6dvagdabts9nx 19 7 21011 504 0
0700011807B50D08 latch: row cache objects CHANNELAPP 9pqrqqfzukrq4 78 0 21439 19030 0
0700011807B50D08 latch: row cache objects CHANNELAPP gq1avu79h2np3 85 0 3815 33831 -1
boesing@mydb4>SELECT child# FROM v$latch_children WHERE addr= '0700011807B50D08';
CHILD#
----------
8
boesing@mydb4> select s.kqrstcln latch#, s.kqrstcid cache#, kqrsttxt name from x$kqrst s where s.kqrstcln=8;
LATCH# CACHE# NAME
---------- ---------- --------------------------------
8 10 dc_users
8 7 dc_users
8 7 dc_users
8 7 dc_users
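
To keep an eye on how hot dc_users really is, v$rowcache helps too. A simple sketch:

select parameter, gets, getmisses, modifications
from v$rowcache
where parameter = 'dc_users';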

The problem was definitively solved by applying the 11.2.0.4.2 PSU. No more problems after that.
Good luck if this is your situation too.

Hugs!
Matheus.

Adding ASM Disks on RHEL Cluster with Failgroups

# Recognizing the devices as ASM disks in ASMLib (ORACLEASM):

1) On all cluster nodes: /etc/init.d/oracleasm scandisks
[root@db1host1p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@db2host2p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]

2) On one of the cluster nodes only, create the disks:
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA059 /dev/asmdsk/DGDATA059
Marking disk "DGDATA059" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA060 /dev/asmdsk/DGDATA060
Marking disk "DGDATA060" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA061 /dev/asmdsk/DGDATA061
Marking disk "DGDATA061" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA062 /dev/asmdsk/DGDATA062
Marking disk "DGDATA062" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA159 /dev/asmdsk/DGDATA159
Marking disk "DGDATA159" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA160 /dev/asmdsk/DGDATA160
Marking disk "DGDATA160" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA161 /dev/asmdsk/DGDATA161
Marking disk "DGDATA161" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA162 /dev/asmdsk/DGDATA162
Marking disk "DGDATA162" as an ASM disk: [ OK ]

3) On all cluster nodes again: /etc/init.d/oracleasm scandisks
[root@db1host1p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@db2host2p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]

# Adding the Disks to the Diskgroup (sqlplus / as sysasm – ASM instance)
1) Listing the existing failgroups:
SQL> select distinct failgroup from v$asm_disk where group_number in (select group_number from v$asm_diskgroup where name='DGDATA');
FAILGROUP
----------------------------------------------------
FGMASTER
FGAUX

2) Adding the disks (naming them and setting the rebalance power):
SQL> alter diskgroup DGDATA
2 add failgroup FG01 disk
3 'ORCL:DGDATA059' name DGDATA059,
4 'ORCL:DGDATA060' name DGDATA060,
5 'ORCL:DGDATA061' name DGDATA061,
6 'ORCL:DGDATA062' name DGDATA062
7 add failgroup FG02 disk
8 'ORCL:DGDATA159' name DGDATA159,
9 'ORCL:DGDATA160' name DGDATA160,
10 'ORCL:DGDATA161' name DGDATA161,
11 'ORCL:DGDATA162' name DGDATA162
12 rebalance power 10 nowait;
Diskgroup altered

3) Be patient and wait for the rebalance to finish:
SQL> select * from v$asm_operation;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ ----------- ---------- ---------- ---------- -----------
4 REBAL RUN 10 10 191386 540431 1651 211
5 REBAL WAIT 4
SQL> /
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ --------------- ----------------------------------------
4 REBAL RUN 10 10 443438 548118 2345 44
5 REBAL WAIT 4
SQL> /
no rows selected
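
After the rebalance completes, it doesn’t hurt to confirm the new disks landed in the right failgroups. A quick sketch:

select d.name, d.failgroup, d.state, d.total_mb
from v$asm_disk d, v$asm_diskgroup g
where d.group_number = g.group_number
and g.name = 'DGDATA'
order by d.failgroup, d.name;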

Well done! 😀
Matheus.

Manually Mounting ACFS

A server rebooted and I needed to remount the ACFS where the Oracle Home is. About that:
Today’s post: Manually Mounting ACFS
Someday’s post: Kludge: Mounting ACFS Through Shellscript
Another day’s post: Auto Mounting Cluster Services Through Oracle Restart

But first, some useful links:
– ACFS Introduction
– ACFS Advanced
– ACFS Command-Line Utilities

# Manually Mounting ACFS
I checked that my $ORACLE_HOME (mounted on ACFS) was not available to start the database, and that the ACFS services were down. So, let’s go through the whole process:

# Starting ACFS
[root@db1host1p ~]$ $GRID_HOME/bin/acfsload start -s

# Volumes OFFLINE: let’s enable them:
[root@db1host1p ~]$ $GRID_HOME/bin/crsctl stat res -t |grep acfs
ora.dghome.sephome.acfs
ONLINE OFFLINE db1host1p
[root@db1host1p ~]$ su - grid
[grid@db1host1p ~]$ asmcmd
ASMCMD> volinfo -a
Diskgroup Name: DGHOME
Volume Name: LVHOME
Volume Device: /dev/asm/lvhome-270
State: DISABLED
Size (MB): 10240
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /oracle/MYDB
ASMCMD> volenable -a
ASMCMD> volinfo -a
Diskgroup Name: DGHOME
Volume Name: LVHOME
Volume Device: /dev/asm/lvhome-270
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /oracle/MYDB
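
(The same volume information can also be checked with SQL against the ASM instance, if you prefer. A sketch:)

select volume_name, state, usage, mountpath
from v$asm_volume
where volume_name = 'LVHOME';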

[root@db1host1p ~]$ $GRID_HOME/bin/crsctl stat res -t |grep acfs
ora.dghome.sephome.acfs
ONLINE ONLINE db1host1p mounted on /oracle/MYDB
ONLINE ONLINE db2host2p mounted on /oracle/MYDB

# As root, let’s mount it:
[root@db1host1p ~]# mount -t acfs /dev/asm/lvhome-270 /oracle/MYDB

# Then, with the $ORACLE_HOME available:
[oracle@db1host1p ~]$ srvctl start instance -d MYDB -i MYDB001

Matheus.

Rebuild All Indexes of a Partitioned Table

Another quick post!

Since you frequently need to rebuild all indexes of a partitioned table (local and global ones), here is a quick script that makes the task a little bit easier:

begin
-- local indexes
for i in (select p.index_owner owner, p.index_name, p.partition_name
from dba_indexes i, dba_ind_partitions p
where i.owner='&OWNER'
and   i.table_name='&TABLE'
and   i.partitioned='YES'
and   i.visibility='VISIBLE' -- Rebuild only of the visible indexes, to get real effect :)
and   p.index_name=i.index_name
and   p.index_owner=i.owner
order by 1,2) loop
execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild partition '||i.partition_name||' online parallel 12'; -- parallel 12 solves most of the problems
execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- set the degree back to 1 (or to whatever default you use for indexes in your database)
end loop;
-- global indexes
for i in (select i.owner owner, i.index_name
from dba_indexes i
where i.owner='&OWNER'
and   i.table_name='&TABLE'
and   i.partitioned='NO'
and   i.visibility='VISIBLE' -- same comment
order by 1,2) loop
execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild online parallel 12'; -- same
execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- same :)
end loop;
end;
/
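
After the rebuild, a quick check for anything left unusable doesn’t hurt. A sketch:

select owner, index_name, status
from dba_indexes
where owner = '&OWNER'
and table_name = '&TABLE'
and status = 'UNUSABLE'
union all
select p.index_owner, p.index_name || '.' || p.partition_name, p.status
from dba_ind_partitions p, dba_indexes i
where i.owner = '&OWNER'
and i.table_name = '&TABLE'
and p.index_owner = i.owner
and p.index_name = i.index_name
and p.status = 'UNUSABLE';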

I hope this script makes your life easier. Hugs!

Matheus.

How to Prevent Automatic Database Startup

This is a quick post! 😀
About Oracle Restart
Reference to SRVCTL

Ok!
In a nutshell, my notes:

To register the database, if not already registered:
> srvctl add database -d $DBNAME -o $ORACLE_HOME -p $ORACLE_HOME/dbs/spfile.ora -y manual

Once the database is registered, change the management policy for the database to manual:
> srvctl modify database -d $DBNAME -y manual

Matheus.