ASM Fails to Start After rootupgrade.sh on the First Node

If you’ve ever performed a rolling upgrade from Oracle Grid Infrastructure 12c to 19c in a RAC environment, you may have hit this one: after running rootupgrade.sh on the first node, the ora.asm resource fails to come online, and the upgrade stalls.

Symptoms:

  • rootupgrade.sh completes on node 1 but ora.asm fails to start
  • CRS alert log shows errors like: CRS-2674: Start of 'ora.asm' on '<node1>' failed
  • crsctl stat res ora.asm -t shows OFFLINE on node 1
  • ASM alert log may show mount errors or listener registration failures
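If you want to confirm you are hitting this pattern, the symptoms above can be checked from node 1 with standard Grid Infrastructure tooling. The log paths below assume the default ADR layout under $ORACLE_BASE and an example instance name of +ASM1 — adjust both for your environment:

```shell
# Cluster-wide state of the ASM resource (expect OFFLINE on node 1)
crsctl stat res ora.asm -t

# Look for the CRS-2674 start failure in the CRS alert log on node 1
grep CRS-2674 $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/alert.log

# Recent entries from the ASM alert log (instance name is an example)
tail -100 $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
```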

Root cause

This issue is documented under MOS Doc ID 2606735.1 and stems from a misalignment between the ASM SPFILE location and the CRS resource configuration during the upgrade transition window: the 19c CRS stack is running on node 1 while the 12c stack is still active on the remaining nodes, creating a brief but critical incompatibility in how ora.asm is registered and started.
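A quick way to see whether you are exposed is to compare the SPFILE location ASM actually reads with the one registered against the CRS resource; if the SPFILE lives inside an ASM diskgroup, you are in the affected configuration. Run these as the grid owner with the environment set to the active Grid home:

```shell
# Where does ASM read its SPFILE from? (prints the registered path,
# e.g. a location inside an ASM diskgroup)
asmcmd spget

# What does the CRS resource configuration say? The output includes
# the Spfile location registered for the ASM resource.
srvctl config asm
```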

Workaround during upgrade:

  1. Before running rootupgrade.sh on node 1, make sure ASM is not starting from an SPFILE stored inside an ASM diskgroup — reading that SPFILE requires the diskgroup to already be mounted, a chicken-and-egg dependency during the transition. Use a PFILE or an SPFILE on a local filesystem instead.
  2. Run the upgrade with the ASM SPFILE temporarily backed up to a local filesystem location.
  3. After the upgrade on all nodes, restore the SPFILE to ASM.
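The three steps above can be sketched with asmcmd as follows. This is a hedged outline, not the MOS note's exact procedure: the diskgroup name (+DATA), the cluster name, the ASM file numbers, and the local backup paths are all placeholders for your environment:

```shell
# Steps 1-2: copy the SPFILE out of ASM to a local filesystem location
asmcmd spget                      # note the current SPFILE path inside ASM
asmcmd spcopy +DATA/mycluster/ASMPARAMETERFILE/registry.253.1 /u01/app/grid/spfile+ASM.ora

# Alternatively, from SQL*Plus on the ASM instance:
#   SQL> CREATE PFILE='/u01/app/grid/init+ASM1.ora' FROM SPFILE;

# Point ASM at the local copy for the duration of the upgrade
asmcmd spset /u01/app/grid/spfile+ASM.ora

# Step 3: after rootupgrade.sh has completed on every node, copy the SPFILE
# back into ASM and re-register it (-u also updates the GPnP profile)
asmcmd spcopy -u /u01/app/grid/spfile+ASM.ora +DATA/mycluster/spfile+ASM.ora
```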

Related bugs:

This issue is closely related to Bug 30265357 and Bug 30452852, which I’ll cover over the next two weeks. If you’re planning a 12c-to-19c Grid Infrastructure upgrade, read all three posts before you start the maintenance window.

Always test your upgrade procedure in a non-production environment first, and keep My Oracle Support (MOS) Doc ID 2606735.1 bookmarked.
