If you’ve ever performed a rolling upgrade from Oracle Grid Infrastructure 12c to 19c in a RAC environment, you may have hit this one: after running rootupgrade.sh on the first node, the ora.asm resource fails to come online, and the upgrade stalls.
Symptoms:
- rootupgrade.sh completes on node 1, but ora.asm fails to start
- The CRS alert log shows errors like:
CRS-2674: Start of 'ora.asm' on '<node1>' failed
- crsctl stat res ora.asm -t shows ora.asm OFFLINE on node 1
- The ASM alert log may show mount errors or listener registration failures
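A quick triage from node 1 can confirm these symptoms. The sketch below assumes a standard Grid Infrastructure layout with `$GRID_HOME` and `$ORACLE_BASE` set for the grid owner, and an ASM instance named `+ASM1` on node 1 — adjust the paths and instance name for your environment.

```shell
# 1. Confirm the ora.asm resource state across the cluster
$GRID_HOME/bin/crsctl stat res ora.asm -t

# 2. Look for CRS-2674 against ora.asm in the node 1 CRS alert log
#    (hostname in the diag path is lowercase on most installs)
grep -n "CRS-2674.*ora.asm" \
  $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/alert.log | tail -5

# 3. Check the tail of the ASM alert log for mount or
#    listener-registration errors (+ASM1 assumed for node 1)
tail -50 $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
```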
Root cause
This issue is documented under Doc ID 2606735.1 and stems from a misalignment between the ASM SPFILE location and the CRS resource configuration during the upgrade transition window. The 19c CRS stack is running on node 1 while the 12c stack is still active on the remaining nodes — creating a brief but critical incompatibility in how ora.asm is registered and started.
Workaround during upgrade:
- Before running rootupgrade.sh on node 1, ensure ASM is using a PFILE rather than an SPFILE stored on an ASM diskgroup, which would require the instance to already be mounted.
- Run the upgrade with the ASM SPFILE temporarily backed up to a local filesystem location.
- After the upgrade on all nodes, restore the SPFILE to ASM.
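The steps above can be sketched as follows. This is a hedged outline, not a verified procedure: the ASM instance name `+ASM1`, the diskgroup `+DATA`, and the local directory `/u01/backup` are all placeholders for your environment, and you should follow the MOS note for the exact sequence.

```shell
# 1. Record where the ASM SPFILE currently lives (from the GPnP profile)
asmcmd spget

# 2. Create a local PFILE copy from the running ASM instance
sqlplus -s / as sysasm <<'EOF'
CREATE PFILE='/u01/backup/init+ASM1.ora' FROM SPFILE;
EOF

# 3. Run rootupgrade.sh on node 1 with ASM started from the local PFILE.

# 4. Only after rootupgrade.sh has completed on ALL nodes,
#    restore the SPFILE to an ASM diskgroup:
sqlplus -s / as sysasm <<'EOF'
CREATE SPFILE='+DATA' FROM PFILE='/u01/backup/init+ASM1.ora';
EOF
```

CREATE SPFILE with a diskgroup target also updates the GPnP profile, so subsequent restarts pick up the restored location.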
Related bugs:
This issue is closely related to Bug 30265357 and Bug 30452852, which I'll cover over the next two weeks. If you're planning a 12c-to-19c Grid Infrastructure upgrade, read all three before you start the maintenance window.
Always test your upgrade procedure in a non-production environment first, and keep My Oracle Support (MOS) Doc ID 2606735.1 bookmarked.
