ASM: Process PPAx holds disks from being dropped

So I was trying to remove some disks that were no longer needed from a cluster: first I dismounted the diskgroups on the other cluster nodes, and from the last one I executed the DROP DISKGROUP DG_NAME INCLUDING CONTENTS. Nothing new, right? But the SA told me that the disks were still being accessed.
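
For reference, the sequence was roughly the following (just a sketch; DG_NAME stands for the diskgroup being removed):

-- On every node except the last one, dismount the diskgroup:
ALTER DISKGROUP DG_NAME DISMOUNT;

-- On the last node, where it is still mounted, drop it:
DROP DISKGROUP DG_NAME INCLUDING CONTENTS;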

I checked with kfod and kfed, and also using asmcmd lsdsk -g --candidate: the disks were there, available to be used in another diskgroup, but they were not part of any diskgroup.
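
The checks looked roughly like this (illustrative only; the diskstring and disk path are examples from this environment):

[oracrs@server01 ~]$ kfod asm_diskstring='/dev/mapper/*' disks=all status=true
[oracrs@server01 ~]$ kfed read /dev/mapper/mp_db5_redos2p1 | grep kfdhdb.hdrsts   # header status should no longer show MEMBER after the drop
[oracrs@server01 ~]$ asmcmd lsdsk -g --candidate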

I tried to check using asmcmd lsod, but it was not returning anything for the disks I was looking for, while with lsof and fuser I was able to see the process that was holding them, which was the ASM process PPA7:
*Note: the disks I was looking for were the db5 ones; lsod only listed disks from the other diskgroups.*

ASMCMD> lsod --process ppa
Instance Process OSPID Path
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_data3p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_redos6p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_redos7p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_data3p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_redos2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_data6p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_redos2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ocrvoting2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ocrvoting5p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_data2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo3p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo4p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_data6p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_flash2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_redos2p1

[oracrs@server01 ~]$ fuser /dev/mapper/mp_db5_redos2p1
/dev/dm-99: 38908
[oracrs@server01 ~]$ lsof /dev/mapper/mp_db5_redos2p1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
asm_ppa7_ 38908 oracrs 270u BLK 253,99 0t0 70663 /dev/mapper/../dm-99

Checking MOS, I found some notes like Database Process Hold Locks Dropped ASM Disk (Doc ID 2159941.1), but they still did not help me, as there was no workaround for the issue.

So basically I needed to kill the process, but could I? It was a production system, and killing ASM processes is not something one should take lightly.

In the documentation, the PPA process is found under the Parallel Query Slave Processes, which made my life easier, right? Not really.

How could I make sure that killing a process would not bring the instance down?

There is a script and blog post from Tanel Poder that also helped me and is worth reading: the query below lists the background processes the instance treats as vital, that is, the ones whose death would bring the whole instance down.

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

SYS@+ASM1>SELECT indx,ksuprpnm,TO_CHAR(ksuprflg,'XXXXXXXXXXXXXXXX')
  2  FROM x$ksupr
  3  WHERE BITAND(ksuprflg,4) = 4 ORDER BY indx
  4  /

INDX KSUPRPNM TO_CHAR(KSUPRFLG,
---------- ------------------------------------------------ -----------------
2 oracle@server01 (PMON) E
3 oracle@server01 (CLMN) E
4 oracle@server01 (PSP0) 6
5 oracle@server01 (VKTM) 6
6 oracle@server01 (GEN0) 6
7 oracle@server01 (MMAN) 6
13 oracle@server01 (PMAN) 6
15 oracle@server01 (LMON) 6
16 oracle@server01 (LMD0) 6
17 oracle@server01 (LMS0) 6
20 oracle@server01 (LCK1) 6
21 oracle@server01 (DBW0) 6
22 oracle@server01 (LGWR) 6
23 oracle@server01 (CKPT) 6
24 oracle@server01 (SMON) 6
25 oracle@server01 (LREG) 6
27 oracle@server01 (RBAL) 6
28 oracle@server01 (GMON) 6
31 oracle@server01 (IMR0) 6
32 oracle@server01 (LCK0) 6
38 oracle@server01 (ASMB) 6
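
To double-check the specific process, one can also map the OS PID back to its ASM process name and look at its flags directly (a sketch; OSPID 38908 is the one found with lsod/fuser above):

SYS@+ASM1>SELECT pid, pname, spid FROM v$process WHERE spid = '38908';

SYS@+ASM1>SELECT indx, ksuprpnm, TO_CHAR(ksuprflg,'XXXXXXXXXXXXXXXX')
  2  FROM x$ksupr
  3  WHERE ksuprpnm LIKE '%PPA%'
  4  /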

Cutting the story short: as my process was not in the query output, I did some other testing and was comfortable enough to kill it and release the disks to the SA team.
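
The final step was something along these lines (a sketch; double-check that the PID still belongs to the PPA7 process before killing anything on a production system):

[oracrs@server01 ~]$ ps -ef | grep 38908      # confirm it is still the asm_ppa7_ process
[oracrs@server01 ~]$ kill 38908               # send SIGTERM; escalate to kill -9 only if it does not exit
[oracrs@server01 ~]$ fuser /dev/mapper/mp_db5_redos2p1   # no output means the disk is finally released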

Hope it helps

Elisson Almeida
