“TNS-12531: TNS:cannot allocate memory error” – Are you sure, Oracle?

Hey guys!
So, I was working on a server build and everything was running fine until I tried to start the listener. The process hung on “Starting /u01/app/grid/product/12.1.0/grid/bin/tnslsnr: please wait…” and then raised “TNS-12531: TNS:cannot allocate memory”.

Well, first thing, I looked the error up using oerr:

TNS-12531: TNS: cannot allocate memory
Cause: Sufficient memory could not be allocated to perform the desired activity.
Action: Either free some resource for TNS, or add more memory to the machine. For further details, turn on tracing and re-execute the operation.

Should be simple, right? Well, not in this case. The server had plenty of resources and the database was not even up yet, so over 90% of the server memory was free.

I checked all sorts of things until I started looking at the server's network configuration.
Digging around, I found that the listener will also throw this error when the hostname definition is different from what is resolved by the /etc/hosts file.
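
To verify it, just compare the hostname reported by the OS with what /etc/hosts resolves (the hostname and IP below are only an example):

$: hostname
srvdb01

$: grep srvdb01 /etc/hosts
192.168.0.10   srvdb01.localdomain   srvdb01

If the name returned by hostname is not listed there (or resolves to a different address), you hit exactly this error.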

Once those matched, voilà, the listener started successfully.

Not a memory problem at all, right? Oracle and its tricks…

That kept bugging me, so I dug further and found this article, which shows a listener trace with a bit more information.

Hope this can save you some minutes on troubleshooting.

Until next time!

Oracle SecureFiles | In 11.2 extended to Oracle Database File System – DBFS

Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to write a series of posts with really quick notes about each one of them.
You can see all the posts in this series on my posts page, along with some others.

Ready? Here it goes:

Oracle SecureFiles | In 11.2 extended to Oracle Database File System – DBFS

In version 11g, Oracle introduced SecureFiles, a new LOB storage architecture that replaces BASICFILE LOB storage and is faster than Unix files for reads and writes. It brings lots of potential benefits for OLAP analytic workspaces, since the LOBs used to hold AWs have historically been very slow to write. In addition, this storage type supports mechanisms such as deduplication, compression and encryption. Besides that, the lock and concurrency model has been improved for this kind of object. Other improvements worth mentioning are better space management, reduced fragmentation, intelligent pre-fetching, a new network layer, no LOB index contention, no high-water-mark contention, and easier management overall.
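
As a minimal sketch (table and column names are hypothetical, and compression/deduplication require the Advanced Compression option), a LOB can be declared as a SecureFile like this:

-- Hypothetical example: CLOB stored as SECUREFILE with compression and deduplication
create table docs (
  id   number primary key,
  body clob
)
lob (body) store as securefile (
  compress medium
  deduplicate
);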

Continue reading

11g+ Features: Extended Statistics

Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to write a series of posts with really quick notes about each one of them.
You can see all the posts in this series on my posts page, along with some others.

Ready? Here it goes:

Extended Statistics

One of the most anticipated features of Oracle 11g is the set of improvements to the dbms_stats package, specifically the ability to help complex queries by providing extended statistics to the cost-based optimizer (CBO).

The 11g extended statistics are intended to improve the optimizer's cardinality estimates for combined columns and for columns modified by a built-in or user-defined function. In Oracle 10g, dynamic sampling can be used to provide inter-table cardinality estimates, but dynamic sampling has important limitations. The 11g create_extended_stats function in dbms_stats relieves much of the problem of sub-optimal table join orders by allowing extended statistics on correlated columns.

One of the expectations is to avoid using the ORDERED hint, one of the most popular SQL tuning hints, used to specify that the tables be joined in the same order in which they appear in the FROM clause.
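
As a quick illustration (schema, table and column names below are hypothetical), a column group can be created and then gathered like this:

-- Create a column group extension on two correlated columns (hypothetical names)
select dbms_stats.create_extended_stats(
         ownname   => 'SCOTT',
         tabname   => 'SALES',
         extension => '(STATE_ID, COUNTY_ID)') as extension_name
  from dual;

-- Gather table statistics so the optimizer can use the new column group
exec dbms_stats.gather_table_stats('SCOTT', 'SALES', method_opt => 'for all columns size auto');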

This feature can be controlled by the hidden parameter _optimizer_enable_extended_stats, in case of a SQL performance regression.

You can also see more detailed material here:
About improvements on 18c (yeah, lots of things on 12c and 18c).
Oracle 12c: Automatic Column Group Detection in Oracle Database 12c Release 1 (12.1)
Oracle Base: Extended Statistics on 11gR2.

Cheers!

Monitoring Your Oracle Database With Grafana

Hi everybody,

Let’s talk about Dashboarding Oracle Databases with Grafana.

I always felt the need for a graphical monitoring tool for basic database things such as archive generation volume, archivelog backups, state of services, offline disks, diskgroup space, UNDO consumption, TEMP consumption, filesystem space, and the space of every diskgroup across all clusters. OEM just seems too complicated to provide a simple online graphical dashboard for this.

So I developed a data “collector” that sends the data to InfluxDB, and Grafana generates the graphs from there. As simple as that.
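
Just to give an idea of the approach before the actual code (the InfluxDB host, database and measurement names below are made up for illustration), the collector boils down to a shell script that runs a query and posts the result to the InfluxDB HTTP write API in line protocol:

#!/bin/bash
# Illustrative sketch only: collect one metric and push it to InfluxDB.
# The InfluxDB host/database and the measurement name are hypothetical.

# Total ASM diskgroup size in GB, straight from the local instance
VALUE=$(sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pages 0
select round(sum(total_mb)/1024, 2) from v\$asm_diskgroup;
exit
EOF
)
VALUE=$(echo ${VALUE} | tr -d ' ')   # strip sqlplus padding

# Send the data point using the InfluxDB line protocol
curl -s -XPOST 'http://influxdb01:8086/write?db=oramon' \
     --data-binary "asm_total_gb,host=$(hostname) value=${VALUE}"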

Have a look at how it looks:

grafana1

grafana2

Ok, but how did I do it?
Here goes a piece of code:
Continue reading

Shellscript & Oracle: Run Script for a List of Databases

Hey all!
Quick tip/script for today: how do you quickly run the same script against a list of databases?

In my case I have the same user/password in all databases, but if you don't, you can use a similar awk command to retrieve the users and passwords from a file (see the sketch after the script).

The list:

$: cat /tmp/dbs.cfg
db01
db02
db03
testdb

The script:

# Loop over the SIDs given as arguments or, if none, read them from /tmp/dbs.cfg
# (first colon-separated field, skipping commented lines)
for DBSID in ${*:-$(awk -F: '!/^#/ {print $1}' /tmp/dbs.cfg)}
do
    echo "
        connect user/password@${DBSID}
        @script_to_run.sql
        exit" |
    sqlplus /nolog >> /tmp/output_test.log
done
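
In case each database has its own credentials, a variation along these lines could be used instead (the sid:user:password file layout here is just a suggestion):

# Hypothetical layout of /tmp/dbs.cfg: <sid>:<user>:<password>, one per line
while IFS=: read -r DBSID DBUSER DBPASS
do
    # Skip comments and empty lines
    [[ -z "${DBSID}" || "${DBSID}" == \#* ]] && continue
    echo "
        connect ${DBUSER}/${DBPASS}@${DBSID}
        @script_to_run.sql
        exit" |
    sqlplus /nolog >> /tmp/output_test.log
done < /tmp/dbs.cfg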

Hope it helps!
Cheers!

Opening New PDB 12.1.0.2 – Warning: PDB altered with errors

Hi,
Some time ago, after creating a pluggable database from the seed, it simply would not open cleanly. Check it out:

SQL> alter pluggable database mypdb open;

Warning: PDB altered with errors.

When checking the PDB status in v$pdbs, I found it was in restricted mode. Checking the alert.log, there was not a single error to help solve the issue, as you can see:

CREATE PLUGGABLE DATABASE mypdb ADMIN USER PDBADMIN identified by *
Fri Feb 02 16:06:05 2018
APEX_040200.WWV_FLOW_TEMPLATES (JAVASCRIPT_CODE_ONLOAD) - CLOB populated
Fri Feb 02 16:06:37 2018
****************************************************************
Pluggable Database mypdb with pdb id - 7 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
This instance was first to open pluggable database mypdb (container=7)
Database Characterset for mypdb is WE8MSWIN1252
Deleting old file#2 from file$
Deleting old file#4 from file$
Adding new file#108 to file$(old file#2)
Adding new file#109 to file$(old file#4)
Successfully created internal service mypdb at open
ALTER SYSTEM: Flushing buffer cache inst=1 container=7 local
****************************************************************
Post plug operations are now complete.
Pluggable database mypdb with pdb id - 7 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE mypdb ADMIN USER PDBADMIN identified by *
alter pluggable database mypdb open
Fri Feb 02 16:06:52 2018
This instance was first to open pluggable database mypdb (container=7)
Pluggable database mypdb dictionary check beginning
Pluggable Database mypdb Dictionary check complete
Database Characterset for mypdb is WE8MSWIN1252
Fri Feb 02 16:07:07 2018
Opening pdb mypdb (7) with no Resource Manager plan active
Fri Feb 02 16:07:08 2018
Logminer Bld: Build started
Resize operation completed for file# 108, old size 296960K, new size 307200K
Resize operation completed for file# 108, old size 307200K, new size 317440K
Fri Feb 02 16:07:15 2018
Logminer Bld: Done
Pluggable database mypdb opened read write
Completed: alter pluggable database mypdb open

At the end of the day, I discovered this is a match for MOS Bug 19174942 – After upgrade from 12.1.0.2 new cloned PDB opens with warnings in restricted mode (Doc ID 19174942.8).

To fix the problem, you need to create the missing default tablespaces of the common users (you can see them in the DBA_USERS view in the ROOT container). In my case it was the USERS tablespace, which did not exist according to the dba_data_files view in the PDB. It seems it was not created correctly during the creation of the pluggable database. So, to fix it:

create tablespace users datafile '+DGDATA/MYCDB/DATAFILE/user_01.dbf' size 100m;
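
By the way, if you want to list every common user whose default tablespace is missing from the PDB (in case USERS is not the only one), a check along these lines, run inside the affected PDB, can help:

-- Default tablespaces referenced by common users that do not exist in this PDB
select distinct u.default_tablespace
  from dba_users u
 where u.common = 'YES'
   and u.default_tablespace not in
       (select t.tablespace_name from dba_tablespaces t);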

Then you just have to restart your PDB and it will open without problems:

SYS@MYCDB>alter session set container=CDB$ROOT;

SQL> alter pluggable database mypdb close;

Pluggable database altered.

SQL> alter pluggable database mypdb open;

Pluggable database altered.

See you in the next post!

GGATE ABENDED: ORA-00308: Cannot Open Archived Log

Hi all!
Ok, so this is one of the most common issues for GGate administration. How to solve it? Easy thing.

First, let's understand what it means: the redo logs no longer have the required information (assuming an integrated extract) and you have already deleted the archivelogs the extract needs. Why? Probably because you had already backed up those archivelogs and they were no longer needed by the database.

Unfortunately we don't have any kind of archivelog deletion policy to guarantee the extracts have already read them, like we have for Data Guard. So, what can we do?

Restore the missing archivelogs.

But first, let's confirm the errors. Some examples:

ERROR OGG-00446 Oracle GoldenGate Capture for Oracle, ext1.prm: Opening ASM file +ARCH/2_11055_742755632.dbf in DBLOGREADER mode: (308) ORA-00308: cannot open archived log '+ARCH/2_11055_742755632.dbf' ORA-17503.

or

ERROR OGG-01028 Oracle GoldenGate Capture for Oracle,ext1.prm: Getting attributes for ASM file +ARCH/2_86720_716466928.dbf,

SQL : (15056)

ORA-15056: additional error message ORA-15173: entry '2_86720_716466928.dbf' does not exist in directory '/...


SOLUTION:

Restore all archivelogs starting from the recovery checkpoint until the current checkpoint, and restart the extract:
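
Just as a sketch (the extract name, thread and sequence numbers below are illustrative, based on the errors above):

Find the recovery checkpoint (the oldest log still needed) in GGSCI:
GGSCI> INFO EXTRACT ext1, SHOWCH

Restore the missing range for that thread in RMAN:
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 11050 UNTIL SEQUENCE 11060 THREAD 2;

Then restart the extract:
GGSCI> START EXTRACT ext1
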
Continue reading