GRID upgrade FREEZES – 11g to 12c

Hey guys,
Upgrading is always a critical and delicate operation, but when you get no feedback on the screen it is even harder.

I was working on an upgrade, using the GUI to upgrade the GRID from 11g to 12c. The 11gR2 (11.2.0.4) stack was working without issues and so was ASM (note this point, we will come back to it later on).

When it was time to run rootupgrade.sh, it just got stuck. No matter what, the GRID upgrade to 12c simply FROZE. Checking the logs, the last message was only this:

CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

Looking at the other logs (/u01/app/12.1.0/grid/cfgtoollogs/crsconfig), there were messages related to the OCR, saying it could not get the OCR key with CLUUTIL and to try using OCRDUMP. I checked the OCR with ocrdump and ocrcheck: no issues there either. Also, as I said before, the cluster was working without any issues.
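
For reference, the manual OCR checks were along these lines (a sketch; $GRID_HOME here stands for the existing 11.2 Grid Infrastructure home):

# full check of the OCR (run as root for the complete output)
$GRID_HOME/bin/ocrcheck
# dump the OCR contents to a text file for inspection
$GRID_HOME/bin/ocrdump /tmp/ocrdump.txt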

As I had no error code or anything that would point to a more specific cause, I went for a broad search on Google and MOS. I saw all kinds of things until I found this MOS note: Wrong DiscoveryString /dev/*: rootupgrade.sh/root.sh hangs: Check OCR key using ocrdump (Doc ID 1916106.1)

I checked, and my ASM disk discovery string was set to /dev/*, which did not strike me as an issue since, as I mentioned, it was working… BUT when I changed the discovery string in ASM to /dev/asm-*, the upgrade worked like a charm.
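
For reference, checking and changing the discovery string from the ASM instance goes something like this (a sketch; the /dev/asm-* pattern assumes your ASM devices are named that way, e.g. by udev rules):

-- connected to the +ASM instance as sysasm
show parameter asm_diskstring
alter system set asm_diskstring='/dev/asm-*' scope=both;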

Also, as a side note, there is a MOS note with some best practices for upgrading: How to Upgrade to/Downgrade from Grid Infrastructure 12.1 and Known Issues (Doc ID 1579762.1).

Hope this helps and saves you some time in your troubleshooting.

Élisson Almeida

“TNS-12531: TNS:cannot allocate memory error” – Are you sure, Oracle?

Hey guys!
So, I was working on a server build and everything was running fine until I tried to start the listener. The process hung on “Starting /u01/app/grid/product/12.1.0/grid/bin/tnslsnr: please wait…” and then raised “TNS-12531: TNS:cannot allocate memory”.

Well, first thing, I looked the error up using oerr:

TNS-12531: TNS: cannot allocate memory
Cause: Sufficient memory could not be allocated to perform the desired activity.
Action: Either free some resource for TNS, or add more memory to the machine. For further details, turn on tracing and re-execute the operation.

Should be simple, right? Well, not in this case. The server had plenty of resources and not even the database was up yet, so over 90% of the server memory was free.

I checked all sorts of things until I started looking at the server network configuration.
Looking it up, I found that the listener will also throw this error when the hostname defined on the server is different from what is resolved by the /etc/hosts file.
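
A quick way to compare the two is something like this (a sketch; adjust to your environment and name resolution setup):

# hostname as defined on the server
hostname
# what /etc/hosts and the resolver return for that name
grep -i "$(hostname)" /etc/hosts
getent hosts "$(hostname)"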

Once those matched, voilà, the listener started successfully.

Not the memory, right? Oracle and its tricks…

That kept bugging me, so I found this article, which shows a trace of the listener with a bit more information.

Hope this can save you some minutes on troubleshooting.

Until next time!

Opening New PDB 12.1.0.2 – Warning: PDB altered with errors

Hi,
Some time ago, after creating a pluggable database from seed, it simply would not open. Check it out:

SQL> alter pluggable database mypdb open;

Warning: PDB altered with errors.

When checking the PDB status on v$pdbs, I found it was in restricted mode.
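
The status check looks something like this (a sketch):

select name, open_mode, restricted from v$pdbs;

Checking the alert.log, there was not a single error to help solve the issue, as you can see: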

CREATE PLUGGABLE DATABASE mypdb ADMIN USER PDBADMIN identified by *
Fri Feb 02 16:06:05 2018
APEX_040200.WWV_FLOW_TEMPLATES (JAVASCRIPT_CODE_ONLOAD) - CLOB populated
Fri Feb 02 16:06:37 2018
****************************************************************
Pluggable Database mypdb with pdb id - 7 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
This instance was first to open pluggable database mypdb (container=7)
Database Characterset for mypdb is WE8MSWIN1252
Deleting old file#2 from file$
Deleting old file#4 from file$
Adding new file#108 to file$(old file#2)
Adding new file#109 to file$(old file#4)
Successfully created internal service mypdb at open
ALTER SYSTEM: Flushing buffer cache inst=1 container=7 local
****************************************************************
Post plug operations are now complete.
Pluggable database mypdb with pdb id - 7 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE mypdb ADMIN USER PDBADMIN identified by *
alter pluggable database mypdb open
Fri Feb 02 16:06:52 2018
This instance was first to open pluggable database mypdb (container=7)
Pluggable database mypdb dictionary check beginning
Pluggable Database mypdb Dictionary check complete
Database Characterset for mypdb is WE8MSWIN1252
Fri Feb 02 16:07:07 2018
Opening pdb mypdb (7) with no Resource Manager plan active
Fri Feb 02 16:07:08 2018
Logminer Bld: Build started
Resize operation completed for file# 108, old size 296960K, new size 307200K
Resize operation completed for file# 108, old size 307200K, new size 317440K
Fri Feb 02 16:07:15 2018
Logminer Bld: Done
Pluggable database mypdb opened read write
Completed: alter pluggable database mypdb open

At the end of the day, I discovered this matches MOS Bug 19174942 – After upgrade from 12.1.0.2 new cloned PDB opens with warnings in restricted mode (Doc ID 19174942.8).

To fix the problem, you need to create the missing default tablespace for each affected common user (you can see those in the DBA_USERS view in ROOT). In my case it was the USERS tablespace, which did not exist according to the dba_data_files view in the PDB. It seems it was not created correctly during the creation of the pluggable database.
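
A quick way to cross-check which common users are missing their default tablespace (a sketch, using the mypdb PDB from above):

-- in CDB$ROOT: common users and their default tablespaces
select username, default_tablespace from dba_users where common = 'YES';

-- in the PDB: the tablespaces that actually exist there
alter session set container = mypdb;
select tablespace_name from dba_tablespaces;

So, to fix it, create the missing tablespace: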

create tablespace users datafile '+DGDATA/MYCDB/DATAFILE/user_01.dbf' size 100m;

Then you just have to restart your PDB and it will open without problems:

SYS@MYCDB>alter session set container=CDB$ROOT;

SQL> alter pluggable database mypdb close;

Pluggable database altered.

SQL> alter pluggable database mypdb open;

Pluggable database altered.

See you in the next post!

Scheduler Job Start Time Changed After DST Change

Hi all,
Some time ago a client asked me to check why a job had started running at a different time. This was a week after the DST change. Easy guess, right?

The job was scheduled using a time zone offset (-04:00) instead of a time zone region. With the DST change the correct offset changed as well (to -05:00), but the job schedule did not.
To fix the job time for good, I changed the schedule to use the time zone region the database is in. Check below the investigation and the solution:

SQL> select owner,job_name, repeat_interval, last_start_date, next_run_date from dba_scheduler_jobs where job_name='MYJOB';

OWNER			 JOB_NAME    REPEAT_INTERVAL	LAST_START_DATE                      NEXT_RUN_DATE
---------- ---------- ----------------- -----------------------------------  --------------------
GREPORA		 MYJOB      FREQ=DAILY;			   08-NOV-17 09.00.00.330739 PM -04:00 08-NOV-17 09.00.00.300000 PM -04:00
							  
												  
SQL> SELECT * FROM   dba_scheduler_global_attribute  WHERE  attribute_name = 'DEFAULT_TIMEZONE';

ATTRIBUTE_NAME	  VALUE
----------------- ---
DEFAULT_TIMEZONE  EST5EDT


SQL> SELECT tzabbrev, TZ_OFFSET(tzname), tzname FROM V$TIMEZONE_NAMES 
WHERE tzname IN ('EST', 'EDT') OR tzabbrev IN ('EST', 'EDT') ORDER BY 1,2;

TZABBREV	TZ_OFFS TZNAME
--------- ------- ---------------------------
EDT       -04:00  America/Santo_Domingo
EDT       -05:00  America/Fort_Wayne
EDT       -05:00  America/Grand_Turk
EDT       -05:00  America/Indiana/Indianapolis
EDT       -05:00  America/Indiana/Marengo
EDT       -05:00  America/Indiana/Petersburg
EDT       -05:00  US/Michigan
EDT       -05:00  America/Detroit
EDT       -05:00  US/Eastern
EDT       -05:00  US/East-Indiana
EDT       -05:00  Jamaica
EDT       -05:00  EST5EDT
EDT       -05:00  Canada/Eastern
EDT       -05:00  America/Toronto
EDT       -05:00  America/Thunder_Bay
EDT       -05:00  America/Port-au-Prince
EDT       -05:00  America/Pangnirtung
EDT       -05:00  America/Nipigon
EDT       -05:00  America/New_York
EDT       -05:00  America/Nassau
EDT       -05:00  America/Montreal
EDT       -05:00  America/Louisville
EDT       -05:00  America/Kentucky/Monticello
EDT       -05:00  America/Kentucky/Louisville
EDT       -05:00  America/Jamaica
EDT       -05:00  America/Iqaluit
EDT       -05:00  America/Indiana/Vevay
EDT       -05:00  America/Indiana/Vincennes
EDT       -05:00  America/Indiana/Winamac
EDT       -05:00  America/Indianapolis
EDT       -06:00  America/Indiana/Tell_City
EDT       -06:00  America/Cancun
EST       +10:00  Australia/Queensland
EST       +10:00  Australia/Lindeman
EST       +10:00  Australia/Brisbane
EST       +10:30  Australia/Broken_Hill
EST       +10:30  Australia/Yancowinna
EST       +11:00  Australia/Currie
EST       +11:00  Australia/Canberra
EST       +11:00  Australia/ACT
EST       +11:00  Antarctica/Macquarie
EST       +11:00  Australia/Hobart
EST       +11:00  Australia/LHI
EST       +11:00  Australia/Lord_Howe
EST       +11:00  Australia/Melbourne
EST       +11:00  Australia/NSW
EST       +11:00  Australia/Sydney
EST       +11:00  Australia/Tasmania
EST       +11:00  Australia/Victoria
EST       -04:00  America/Antigua
EST       -04:00  America/Moncton
EST       -04:00  America/Santo_Domingo
EST       -05:00  US/Eastern
EST       -05:00  America/Grand_Turk
EST       -05:00  America/Atikokan
EST       -05:00  Jamaica
EST       -05:00  EST5EDT
EST       -05:00  EST
EST       -05:00  America/Indiana/Indianapolis
EST       -05:00  Canada/Eastern
EST       -05:00  America/Toronto
EST       -05:00  America/Thunder_Bay
EST       -05:00  America/Resolute
EST       -05:00  America/Indiana/Marengo
EST       -05:00  America/Indiana/Petersburg
EST       -05:00  America/Cayman
EST       -05:00  America/Indiana/Vevay
EST       -05:00  America/Indiana/Vincennes
EST       -05:00  America/Indiana/Winamac
EST       -05:00  America/Indianapolis
EST       -05:00  America/Iqaluit
EST       -05:00  America/Jamaica
EST       -05:00  America/Kentucky/Louisville
EST       -05:00  America/Kentucky/Monticello
EST       -05:00  US/Michigan
EST       -05:00  America/Louisville
EST       -05:00  America/Coral_Harbour
EST       -05:00  America/Detroit
EST       -05:00  America/Fort_Wayne
EST       -05:00  America/Montreal
EST       -05:00  America/Nassau
EST       -05:00  America/New_York
EST       -05:00  America/Nipigon
EST       -05:00  America/Panama
EST       -05:00  America/Pangnirtung
EST       -05:00  America/Port-au-Prince
EST       -05:00  US/East-Indiana
EST       -06:00  US/Central
EST       -06:00  CST
EST       -06:00  America/Rankin_Inlet
EST       -06:00  America/Merida
EST       -06:00  America/Menominee
EST       -06:00  America/Managua
EST       -06:00  America/Knox_IN
EST       -06:00  America/Indiana/Tell_City
EST       -06:00  America/Indiana/Knox
EST       -06:00  America/Chicago
EST       -06:00  America/Cancun
EST       -06:00  US/Indiana-Starke
EST       -07:00  America/Cambridge_Bay

100 rows selected.


SQL> BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
   name                 => 'GREPORA.MYJOB',
   attribute            => 'start_date',
   value                => TO_TIMESTAMP_TZ('09-11-2017 21:00:00 EST5EDT' ,'DD-MM-YYYY HH24:MI:SS TZR'));
END;
/

PL/SQL procedure successfully completed.

SQL> select owner,job_name, repeat_interval, last_start_date, next_run_date from dba_scheduler_jobs where job_name='MYJOB';

OWNER			 JOB_NAME    REPEAT_INTERVAL	LAST_START_DATE                  NEXT_RUN_DATE
---------- ---------- ----------------- -------------------------------  --------------------
GREPORA		 MYJOB      FREQ=DAILY;			  08-NOV-17 09.00.00.3 09-NOV-17   09-NOV-17 09.00.00.0

Hope this helps you!
Cheers!

Exacheck: The bundle patch version installed does not match the bundle patch version registered in the database

Hi all!
So, running Exacheck on a recently created database, found this error:

 FAIL => The bundle patch version installed does not match the bundle patch version registered in the database: [host]:[sid],...

This means that a bundle patch with sqlpatch was applied to the Oracle Home but not to this database. It happens because Exacheck tries to match the patch info stored in the oraInventory with the patch info stored in dba_registry_sqlpatch.
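
To see both sides of that comparison manually, something like this works (a sketch):

$ORACLE_HOME/OPatch/opatch lspatches

And in the database:

select patch_id, version, action, status from dba_registry_sqlpatch order by action_time;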

Also note that in some situations running datapatch may require the database to be in upgrade mode, and if you are patching Exadata, which is generally a RAC-based environment, you need to set cluster_database=false and job_queue_processes to at least 1 before starting the database with the startup upgrade command. This should be described in the readme of the related patch.
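
Just as a rough sketch of that flow (always follow the patch readme; MYDB is a placeholder database name):

-- as sysdba, from one instance:
alter system set cluster_database=false scope=spfile sid='*';
alter system set job_queue_processes=1 scope=spfile sid='*';
-- stop all instances (e.g. srvctl stop database -d MYDB), then on one node:
-- startup upgrade
-- run datapatch, set cluster_database back to true, and restart the database normally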

When checking for this, I found a really interesting validation script here. As per:

# Bundle patch installed in the Oracle Home, as reported by OPatch (excluding the JavaVM patch)
opatch_bp=$($ORACLE_HOME/OPatch/opatch lspatches 2>/dev/null|grep -iwv javavm|grep -wi database|head -1|awk -F';' '{print $1}')
# Status of that same patch as registered in dba_registry_sqlpatch
database_bp_status=$(echo -e "set heading off feedback off timing off \n select STATUS from dba_registry_sqlpatch where PATCH_ID = $opatch_bp;"|$ORACLE_HOME/bin/sqlplus -s " / as sysdba" | sed -e '/^ *$/d')
if [ "$database_bp_status" == "SUCCESS" ]
then
      echo "SUCCESS: Bundle patch installed in the database matches the software home and is installed successfully."
else
      echo "FAILURE: Bundle patch installed in the database does not match the software home, or is installed with errors."
fi

To fix it, just set the environment variables to the correct database, go to $ORACLE_HOME/OPatch and run datapatch:

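In practice that is usually something like the sketch below (SID and Oracle Home are placeholders for your environment):

export ORACLE_SID=MYDB1
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
cd $ORACLE_HOME/OPatch
./datapatch -verbose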

Application Named Variables Returning ORA-01008: not all variables bound

Hi all,
I saw this a long time ago, and last week I was involved in a discussion about this same error. So, it deserves a post for future reference. 🙂

Imagine the situation: you are building an application or a module to perform queries on the database. You want to use variables in the query and fill the values from text fields in the application. Sounds easy, and it works fine with SQL Server and others, but the Oracle database keeps returning:

ORA-01008: not all variables bound

What to do?

First, let's clear up the issue: this is not related to the database layer or to the Oracle interpreter/parser. This error happens when a bind variable used in the SQL has no value. The official reasoning is:

Cause: A SQL statement containing substitution variables was executed without all variables bound. All substitution variables must have a substituted value before the SQL statement is executed.

In my case it was ODP.NET with C# (but this applies to other languages as well). Interesting fact:
“The ODP.NET provider from Oracle uses bind by position by default. To change the behavior to bind by name, set the property BindByName to true.”

This means Oracle may be expecting “:1”, “:2” as bind variables, while the application is not setting them that way.
In this case, try setting BindByName to true for the OracleCommand in the application code. Example below:

using (OracleCommand cmd = con.CreateCommand()) {
    ...
    // bind parameters by name instead of by position (":1", ":2")
    cmd.BindByName = true;
    ...
}

Now, try again. 🙂

Hope this helps you!
Cheers!

RAC: ORA-01265: Unable to delete ARCHIVED LOG

Weird?
I found this some time ago. The error was reported because of a race condition in which the 2 instances were trying to delete the same archived log file. This can involve more than 2 instances, depending on your RAC configuration, of course.

This is a quick way to identify whether you are facing the same situation and can safely ignore the error, using the sequence number reported in the error (SEQ) and the last line number reported by grep (LINE):

# Node 1 (replace SEQ with the archived log sequence from the error and LINE with the last line number reported by grep):

grep -n SEQ /oracle/greporadb/diag/rdbms/greporadb/greporadb1/trace/alert_greporadb1.log | tail
tail -n +$((LINE-10)) /oracle/greporadb/diag/rdbms/greporadb/greporadb1/trace/alert_greporadb1.log | head

# Node 2:

grep -n SEQ /oracle/greporadb/diag/rdbms/greporadb/greporadb2/trace/alert_greporadb2.log | tail
tail -n +$((LINE-10)) /oracle/greporadb/diag/rdbms/greporadb/greporadb2/trace/alert_greporadb2.log | head

The result should be the archivelog sequence deleted on one of the nodes and the error reported on the other, at the same timestamp.

Hope it helps!