Pluggable Database in Mount after Restart

Hi all,

So, a client reached out to me to fix the following: after restarting a database, all pluggable databases stay in mount, instead of opening automatically.

Well, this was the quickest fix ever. After getting all pluggable databases the way they should be (open, in this case, but some could stay in mount, depending on the desired configuration):

alter pluggable database all save state;
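
For example, a minimal sequence, assuming you want every PDB to come back open after a restart (run from the root container):

alter pluggable database all open;
alter pluggable database all save state;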

Easy, right?
There are some other good options, like:

alter pluggable database pdb_name save state;
alter pluggable database all except pdb_name1, pdb_name2 save state;

I don’t really have to explain them, right?
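
If you want to double-check what was stored, the DBA_PDB_SAVED_STATES view keeps track of it; a quick query would be something like:

select con_name, instance_name, state from dba_pdb_saved_states;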

Some good references:

40.4.7 Preserving or Discarding the Open Mode of PDBs When the CDB Restarts
Preserve PDB Startup State (12.1.0.2 onward)

Cheers!

ERROR OGG-02636 when creating an Integrated Extract in GoldenGate 12c on a Pluggable Database 12c

While creating an Integrated Extract in GoldenGate 12c on a Pluggable Database 12c, I came across the following error, stating that the required catalog name was not being provided.

ERROR OGG-02636 Oracle GoldenGate Capture for Oracle, ext1.prm: The TABLE specification ‘TABLE table_name’ for the source table ‘table_name’ does not include a catalog name. The database requires a catalog name.

There are two ways to solve this case. The first, although less recommended, is to add the name of the pluggable database (the catalog) before the owner name in the TABLE clauses, for example:

GGSCI (host1.net) 1> edit param ext1

-- Tables
TABLE PDB_NAME.SCHEMA_OWNER.TABLE_NAME;

Not really liking this solution, and after searching for long hours without any other result, our friend Maiquel DC pointed out a parameter that sets the catalog name for all tables in the extract.

Add the following parameter to the extract configuration file:

GGSCI (host1.net) 1> edit param ext1

-- Parameters
SOURCECATALOG PDB_NAME
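
Just for context, a minimal ext1.prm sketch with the parameter in place (the credential alias and trail name below are only examples, not from the original setup):

EXTRACT ext1
USERIDALIAS ogg_admin
EXTTRAIL ./dirdat/e1
SOURCECATALOG PDB_NAME
TABLE SCHEMA_OWNER.TABLE_NAME;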


That's all, folks.
Dieison.

Unplug/Plug PDB between different Clusters

Everyone tests, writes and shows how to move pluggable databases between containers (CDBs) in the same cluster, but not many write or show how to move pluggable databases between different clusters, with isolated storage. So, let's do that:

Note: Just to keep it easy to understand, this post is about the migration of a pluggable database (BACENDB) from a cluster named ORAGRID12C and a container database named INFRACDB to the cluster CLBBGER12, into the container CDBBGER.

1. Access the container INFRACDB (Cluster GRID12C) and list the PDBs:
(screenshot)
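
A sketch of this step (output omitted), connected to the root of INFRACDB:

sqlplus / as sysdba
SQL> show pdbs
SQL> select name, open_mode from v$pdbs;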

2. Shut down BACENDB:
(screenshot)
(Of course, it didn't work with a normal shutdown. I don't know what I was thinking… haha)
(screenshot)
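
One way that does work is closing the PDB with the IMMEDIATE clause from the root container, something like:

alter pluggable database BACENDB close immediate instances=all;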

3. Unplug BACENDB (PDB) into an XML manifest (it has to be done from the root, with the PDB closed, as you can see…):
(screenshot)
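
The unplug itself is a one-liner; a sketch, with an illustrative path for the manifest:

alter pluggable database BACENDB unplug into '/migration/bacendb.xml';
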
4. Create an ACFS (180G) to use as a "migration area", mounted on "/migration/" in the ORAGRID12C cluster:
(screenshot)
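
A rough sketch of how such an ACFS can be created and mounted (diskgroup, volume and device names are illustrative):

# create a 180G ADVM volume
asmcmd volcreate -G DGDATA -s 180G migration
# check the device path assigned to it
asmcmd volinfo -G DGDATA migration
# format it as ACFS and mount it
mkfs -t acfs /dev/asm/migration-123
mkdir -p /migration
mount -t acfs /dev/asm/migration-123 /migration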

5. Copy the datafiles and tempfiles to "/migration" through ASMCMD cp:
(screenshot)
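
Something along these lines, with illustrative ASM paths (the GUID directory is specific to the PDB):

asmcmd
ASMCMD> cp +DGDATA/INFRACDB/<pdb_guid>/DATAFILE/USERS.261.987654321 /migration/
ASMCMD> cp +DGDATA/INFRACDB/<pdb_guid>/TEMPFILE/TEMP.262.987654321 /migration/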

6. Export the ACFS and mount it over NFS on the destination (CLBBGER12):
(screenshot)
(screenshot)
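
Roughly, assuming a plain NFS export (hostnames and export options are examples):

# on a node of ORAGRID12C (source)
echo "/migration clbbger12-node1(rw,no_root_squash)" >> /etc/exports
exportfs -a
# on a node of CLBBGER12 (destination)
mkdir -p /migration
mount -t nfs oragrid12c-node1:/migration /migration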

7. Create (plug in) the pluggable database on the new cluster (CDBBGER), using MOVE with FILE_NAME_CONVERT to send the files to the +DGCDBBGER diskgroup:

(screenshot)
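
A sketch of the plug-in, assuming the manifest and the copied files are visible under /migration on the destination (the exact conversion pair is illustrative):

create pluggable database BACENDB
  using '/migration/bacendb.xml'
  move
  file_name_convert = ('/migration/', '+DGCDBBGER');
alter pluggable database BACENDB open;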

7.1 How does it look in the alert.log?

(screenshot)

7.2 How about the Datafiles?

(screenshot)
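
With the PDB open, a quick way to confirm the new file locations from the root of CDBBGER:

select file_name
  from cdb_data_files
 where con_id = (select con_id from v$pdbs where name = 'BACENDB');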

7.3 Checking the database through remote SQL*Plus:

(screenshot)
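
For example, with EZConnect against the PDB's default service (the SCAN name is an assumption):

sqlplus system@//clbbger12-scan:1521/bacendb
SQL> show con_name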

8. Creating the services as needed:

(screenshot)
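
A sketch with srvctl (the service and instance names are illustrative):

srvctl add service -db CDBBGER -service bacendb_svc -pdb BACENDB -preferred CDBBGER1,CDBBGER2
srvctl start service -db CDBBGER -service bacendb_svc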

9. Dropping the pluggable database from INFRACDB:

(screenshot)
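
Back on INFRACDB, a sketch of the cleanup, assuming the original copy (and its datafiles) really isn't needed anymore:

drop pluggable database BACENDB including datafiles;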

That's it, okay? Of course, there are a few other ways to copy the files from one infrastructure to another, like scp rather than mount.nfs, RMAN copy, or other possibilities…

By the way, one of the restrictions of pluggable database migration is that both sides must use the same endian format. But it's possible to use RMAN CONVERT PLATFORM and convert the datafiles on a filesystem, isn't it?
So, I guess it's not a hard limitation. I have to test it and write another post… haha

About the post: this link helped, but, again, it doesn't mention moving to "another" cluster/infra/storage.

Matheus.