Are You Missing Your PDBs After ExaCS Patching or OCPU Scaling?

Picture this: you’ve just finished patching your shiny Oracle 19.27 Exadata Cloud@Customer (ExaCS) database or scaled up some OCPUs to boost performance… and then BAM! 💥 — your PDBs are gone, your database won’t start, and DIA0 throws a tantrum like a toddler denied screen time.

But when you try to start the database, Oracle slaps back with errors like these:

DIA0 Critical Database Process As Root: Hang ID 1 blocks 1 sessions
Final blocker is session ID 524 serial# 3363 OSPID 53716 on Instance 1
If resolvable, instance eviction will be attempted by Hang Manager
2025-07-11T06:35:24.821694+00:00

PRCR-1079 : Failed to start resource ora.dbadeeds.db
CRS-2800: Cannot start resource 'ora.datac1.acfsvol01.acfs' as it is already in the INTERMEDIATE state on server 'db1'
CRS-2632: There are no more servers to try to place resource 'ora.dbadeeds.db' on that would satisfy its placement policy

Wait, INTERMEDIATE state? Sounds like the ACFS resource is caught somewhere between the astral plane and /dev/null.
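
You can see the limbo for yourself by asking clusterware about that resource directly (the resource name below is lifted from the error above; yours will differ):

# as root or the grid user on the affected node
crsctl stat res ora.datac1.acfsvol01.acfs

If STATE comes back as INTERMEDIATE instead of ONLINE, the ACFS volume never remounted after the patch or scale operation, and everything sitting on top of it is stuck waiting.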

So, you pop into SQL*Plus, expecting to see your PDBs:

sqlplus "/as sysdba"
SQL> show pdbs;

And Oracle simply whispers back: 
No PDBs. No warnings. Just... silence. 🫥
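
Before you panic, it's worth checking what state the instance actually reached; a minimal sanity check (assuming the instance is at least running) looks like this, and if it never made it past NOMOUNT, v$pdbs simply has no rows to give you:

SQL> select instance_name, status from v$instance;
SQL> select name, open_mode from v$pdbs;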

Root Cause: Buggy Oracle ExaCS images (the versions starting with 25), hit especially after scaling OCPUs.
Oracle's clusterware and ACFS components can get stuck in the INTERMEDIATE state, leaving dependent resources like database instances and ACFS volumes unable to start or mount. On top of that, the Hang Manager doesn't auto-resolve this type of root process block, so you're left in limbo.
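
A quick way to survey the damage (a rough sketch; exact resource names depend on your environment) is to list the clusterware resources and look for anything parked in INTERMEDIATE, then check which ACFS file systems actually made it to a mount point:

# as root or the grid user
crsctl stat res -t
/sbin/acfsutil info fs   # only mounted ACFS file systems show up here

If the volume from the earlier error is missing from acfsutil's output, the database has nowhere to go.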

✅ Action: Disable CRS on each node and reboot the affected VM(s). Just a good old-fashioned clusterware restart.
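
Something like this, node by node (a sketch, not gospel; the database name comes from the errors above, so swap in your own):

# as root on the affected node
crsctl disable crs     # stop clusterware from auto-starting during the reboot
crsctl stop crs -f     # force-stop the stuck stack
reboot

# once the node is back up, still as root
crsctl enable crs
crsctl start crs
crsctl stat res -t     # wait for the ACFS volume to show ONLINE, not INTERMEDIATE

# as the oracle user, bring the database back and greet your PDBs
srvctl start database -d dbadeeds
sqlplus "/as sysdba"
SQL> show pdbs;

Once ora.datac1.acfsvol01.acfs reports ONLINE, the database should start cleanly and show pdbs stops ghosting you.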

Oracle ExaCS gives you Extreme Performance™, but sometimes it delivers Extreme Confusion™ too. Stay spicy, stay sharp — and when in doubt, reboot the node, not your career. 
