So, just a few days ago, while supporting a client, I came across the following case.
A few database creations had initially failed in DBCA due to unrelated issues, but it seems DBCA didn't clean up all the creation steps after the failure and its supposed rollback.
As a consequence, whenever we ran the DBCA GUI, the client kept seeing an old database. When we tried to remove it with DBCA, the removal failed because the database couldn't be brought up (its creation had failed, remember?). OK, we can live with that, right?
Yes, until we reached the point where opatchauto failed with the following:
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/db
"/bin/sh -c 'cd /u01/app/oracle/product/19c/db; ORACLE_HOME=/u01/app/oracle/product/19c/db ORACLE_SID=DB1 /u01/app/oracle/product/19c/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.
The reason? See the complete log of the failing step:
Executing command as oracle: /bin/sh -c 'cd /u01/app/oracle/product/19c/db;ORACLE_HOME=/u01/app/oracle/product/19c/db ORACLE_SID=DB1 /u01/app/oracle/product/19c/db/OPatch/datapatch -verbose'
2020-09-02 16:26:56,362 INFO  com.oracle.glcm.patch.auto.db.product.executor.PatchingStepExecutor - COMMAND Looks like this: /bin/sh -c 'cd /u01/app/oracle/product/19c/db;ORACLE_HOME=/u01/app/oracle/product/19c/db ORACLE_SID=DB1 /u01/app/oracle/product/19c/db/OPatch/datapatch -verbose'
2020-09-02 16:26:57,662 INFO  com.oracle.glcm.patch.auto.db.product.executor.GISystemCall - Is retry required=false
2020-09-02 16:26:57,662 INFO  com.oracle.glcm.patch.auto.db.product.executor.PatchingStepExecutor - status: 1
2020-09-02 16:26:57,662 INFO  com.oracle.glcm.patch.auto.db.product.executor.PatchingStepExecutor - COMMAND EXECUTION FAILURE :
SQL Patching tool version 22.214.171.124.0 Production on Wed Sep 2 16:26:57 2020
Copyright (c) 2012, 2020, Oracle. All rights reserved.

Log file for this invocation: /u01/app/oracle/base/cfgtoollogs/sqlpatch/sqlpatch_25218_2020_09_02_16_26_57/sqlpatch_invocation.log

Connecting to database...
Error: prereq checks failed!
Database connect failed with: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4376
Additional information: 1275019259 (DBD ERROR: OCISessionBegin)
Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/base/cfgtoollogs/sqlpatch/sqlpatch_25218_2020_09_02_16_26_57/sqlpatch_invocation.log
for information on how to resolve the above errors.
SQL Patching tool complete on Wed Sep 2 16:26:57 2020
Clearly, the database is still in place.
As per MOS (ORA-01078 Can Not Delete Database From Dbca on Linux (Doc ID 1559634.1)) -> check the /etc/oratab!
Thing is, the oratab doesn't have a DB1 line. Also, all related files (logs directory structure, password file, init file, etc.) were already wiped out. What else could it be?
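To make those manual checks concrete, here is a minimal sketch. The sample oratab under /tmp and the ORACLE_HOME path are illustrative assumptions, not taken from the case above; on a real server you would inspect /etc/oratab and the actual $ORACLE_HOME/dbs directory.

```shell
# Illustrative sketch only: create a sample oratab so the check runs anywhere.
# On a real server, grep /etc/oratab directly.
cat > /tmp/oratab.sample <<'EOF'
+ASM:/u01/app/19c/grid:N
DB2:/u01/app/oracle/product/19c/db:Y
EOF

# 1) Is there a leftover oratab entry for DB1?
grep '^DB1:' /tmp/oratab.sample \
  || echo "no DB1 entry in oratab"

# 2) Any leftover init or password files for DB1? (hypothetical ORACLE_HOME path)
ls /u01/app/oracle/product/19c/db/dbs/initDB1.ora \
   /u01/app/oracle/product/19c/db/dbs/orapwDB1 2>/dev/null \
  || echo "no parameter/password files for DB1"
```

In this case, both checks came back empty, which is exactly what made the datapatch failure so puzzling.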
Here goes what Oracle seems to have forgotten to tell:
[oracle@PRODB01 dbca]$ srvctl status database -d DB1
Database is not running.

[oracle@DMSDB1PA dbca]$ srvctl config database -d DB1
Database unique name: DB1
Database name: DB1
Oracle home: /u01/app/oracle/product/19c/db
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services:
OSDBA group: oinstall
OSOPER group:
Database instance: DB1
Ohhh, it took me a while to realize: this was a standalone server, and the failed database was still registered with Oracle Restart. Once that was understood, the fix is straightforward:
[oracle@PRODB01 dbca]$ srvctl remove database -d DB1
Remove the database DB1? (y/[n]) y
[oracle@PRODB01 dbca]$
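If you want to double-check afterwards that the stale registration is really gone, a hedged sketch follows. It is guarded so it also runs on machines where srvctl is not in the PATH, and DB1 is simply the name used throughout this case.

```shell
# Sketch: re-check the Oracle Restart registration after the removal.
# Guarded so it degrades gracefully where srvctl is not installed.
if command -v srvctl >/dev/null 2>&1; then
  # After "srvctl remove database -d DB1", this should report that
  # the database configuration no longer exists.
  srvctl config database -d DB1 \
    && echo "DB1 is still registered" \
    || echo "DB1 is no longer registered"
else
  echo "srvctl not available in this environment"
fi
```

With the registration gone, datapatch no longer tries to connect to the phantom DB1 instance, and opatchauto completes its prereq check.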
I hope this saves you a few minutes on MOS, in case you are googling it first.
Or saves you entirely, in case you already gave up on MOS.