Disk Space is not Released After Deleting Files

When deleting a large file, the file is removed successfully but the filesystem usage does not reflect the change.
The file was deleted, but only restarting the JVM or java process released the disk space.
This usually occurs with log files.

The lsof command shows the files currently open on the system.
For example: lsof |grep deleted

java 15138 oracle 2959r REG 253,3 5875027 131163 /logs/soa_domain/WLS1_SOA1/WLS1_SOA1.out03422 (deleted)
java 15138 oracle 3054r REG 253,3 10480928 131166 /logs/soa_domain/WLS1_SOA1/WLS1_SOA1-diagnostic-81.log (deleted)
java 15138 oracle 3062r REG 253,3 10479417 131200 /logs/soa_domain/WLS1_SOA1/WLS1_SOA1-diagnostic-82.log (deleted)

The command output shows the PID, owner, file descriptor (fd), size and file name.
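If your lsof build supports it, another way to list only files that are open but already unlinked (link count below 1) is:
lsof +L1 |grep java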

Workaround:
If you can't restart the process, it is possible to force the system to de-allocate the space consumed by an in-use file by truncating it through /proc:
$ echo > /proc/<pid>/fd/<fd_number>

Be careful not to truncate unwanted files.

In my case:
$ echo > /proc/15138/fd/2959
$ echo > /proc/15138/fd/3054
$ echo > /proc/15138/fd/3062
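If there are many descriptors to clean up, a loop like the one below should do it (just a sketch, using the PID from the example above; make sure everything lsof reports as deleted for that PID is really safe to truncate):

$ PID=15138
$ for FD in $(lsof -p $PID | awk '/\(deleted\)/ && $4 ~ /^[0-9]/ {gsub(/[^0-9]/,"",$4); print $4}'); do : > /proc/$PID/fd/$FD; done

The ": >" form is equivalent to the "echo >" used above, it just avoids writing a newline into the file.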

That’s all for today.

GGATE ABENDED: ORA-00308: Cannot Open Archived Log

Hi all!
OK, so this is one of the most common issues in GGate administration. How to solve it? It's an easy one.

First let’s understand what it means: the redo logs no longer have the required information (assuming an integrated extract) and the archivelogs the extract needs have already been deleted. Why? Probably because you already backed up those archivelogs and they were no longer needed by the database.

Unfortunately we don’t have any kind of ARCHIVELOG DELETION POLICY to guarantee the extracts have already read them, like we have for Data Guard. So, what can we do?

Restore the missing archivelogs.

But first, let’s confirm the errors. Some examples:

ERROR OGG-00446 Oracle GoldenGate Capture for Oracle, ext1.prm: Opening ASM file +ARCH/2_11055_742755632.dbf in DBLOGREADER mode: (308) ORA-00308: cannot open archived log '+ARCH/2_11055_742755632.dbf' ORA-17503.

or

ERROR OGG-01028 Oracle GoldenGate Capture for Oracle,ext1.prm: Getting attributes for ASM file +ARCH/2_86720_716466928.dbf,

SQL : (15056)

ORA-15056: additional error message ORA-15173: entry '2_86720_716466928.dbf' does not exist in directory '/...


SOLUTION:

Restore all the archived logs from the extract's recovery checkpoint up to the current checkpoint and restart the extract.
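A rough outline, using the extract name and the sequence/thread from the example errors above (take the real starting sequence from the Recovery Checkpoint reported by SHOWCH):

GGSCI> INFO EXTRACT ext1, SHOWCH
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 11055 THREAD 2;
GGSCI> START EXTRACT ext1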

Error JPS-06516: Failed to get credential store in Webcenter Portal

After the NodeManager was wrongly started from the root account, the following error occurs:

<Apr 9, 2018 8:34:41 AM BRT> <Warning> <oracle.jps.credstore> <JPS-01050> <Opening of wallet based credential store failed. Reason java.io.IOException: PKI-02002: Unable to open the wallet. Check password. >
<Apr 9, 2018 8:34:41 AM BRT> <Warning> <oracle.webcenter.framework.service.WebCenterConfig> <WCS-43054> <An error occured while trying to lookup connection Pagelet Producer.
javax.naming.NamingException [Root exception is oracle.adf.share.jndi.ConnectionException: java.lang.IllegalArgumentException: oracle.security.jps.service.keystore.KeyStoreServiceException: JPS-06516: Failed to get credential store.

Verify that the user that owns the managed server has access to all files and directories in the temporary directory, specifically the oracle-dfw* directories.


If there are files and/or directories owned by a user other than the owner of the managed server presenting the problem, back up the existing files and then delete or move them.
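A quick way to check, assuming the default /tmp location and that the managed server runs as the oracle user (adjust both to your environment):

$ ls -ld /tmp/oracle-dfw*
$ find /tmp -maxdepth 1 -name 'oracle-dfw*' ! -user oracle

Anything returned by the second command is owned by someone else (root, in this case) and is a candidate to back up and move away.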

That’s all for today.

Granting OSB Test Console access in OSB 12C

In the WebLogic console, add the user to the ‘IntegrationAdmin’ role:
Security Realms > myrealm > Roles and Policies > Realm Roles > Global Roles > IntegrationAdmin > View Role Conditions


In EM, add the user to the following roles:
Middleware Administrator
Developer
Tester


OGG-02077 Extract encountered a read error in the asynchronous reader thread and is abending: Error code 1343

It’s been a long time since my last post here.
Well, the time has arrived. New GoldenGate runtime errors (lol):

ERROR   OGG-02077  Extract encountered a read error in the asynchronous reader thread and is abending: Error code 1343, error message: 
ORA-01343: LogMiner encountered corruption in the logstream.

OK then, don’t worry.

Check the database alert log for messages about RFS retries. Wait until they stop, then try to restart the GoldenGate Extracts.
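Something along these lines, for example (the alert log path and extract name are only illustrations, adjust to your environment):

$ grep -E 'RFS|ORA-01343' /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log | tail
GGSCI> START EXTRACT ext1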

It appears to be a GG 12.2 bug and it seems to fix itself. There is no published MOS note about it so far.

Hope it solves your Extract errors too… \o/

ERROR OGG-03533: Conversion from character set {???} of source column {???}

Hi.

If you need to replicate data using GoldenGate between different database types, you may get the following error.

” ERROR OGG-03533: Conversion from character set {???} of source column {???} to character set {???} of target column {???} failed because the source column contains a character ‘{?}’ at offset {?}  that is not available in the target character set. “

To replace characters that are not accepted by the target character set, try using the Replicat parameter REPLACEBADCHAR.

In my case, I set up data replication between MS SQL Server and Oracle; these databases use the charsets windows-1252 and US-ASCII, respectively.

” ERROR OGG-03533 Conversion from character set windows-1252 of source column CATEGORY to character set US-ASCII of target column CATEGORY failed because the source column contains a character ‘d3’ at offset 2 that is not available in the target character set. “

I am using the following parameter in the Replicat:
REPLACEBADCHAR ESCAPE
Replication works with no data loss.
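For reference, a minimal Replicat parameter file sketch with the parameter in place (the Replicat name, credential alias and MAP statement are only examples):

REPLICAT rep1
USERIDALIAS ogg_target
REPLACEBADCHAR ESCAPE
MAP dbo.*, TARGET APP.*;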

See you!

ERROR OGG-05290 The Oracle GoldenGate CDC cleanup job is not enabled for database Msql_DB

Hi.

When you try to start the GoldenGate Extract process against an MS SQL Server database, you may receive the following error.

” ERROR OGG-05290 The Oracle GoldenGate CDC cleanup job is not enabled for database Msql_DB Create the Oracle GoldenGate CDC cleanup job prior to starting the capture process. “

To create the cleanup job for GoldenGate on SQL Server, use the .bat script in the GoldenGate home directory.

Command syntax:

ogg_cdc_cleanup_setup.bat createJob [goldengate user] [goldengate password] [database name] [database host] [instance]

Example:

ogg_cdc_cleanup_setup.bat createJob GGATE welcome1 Msql_DB msql-db01.net dbo

 

In some cases, it may return an error, stating that the process already exists.

” Msg 50000, Level 16, State 1, Server msql-db01, Line 34 The specified @name (‘OracleGGCleanup_Msql_DB_Job’) already exists. “

In this case, you may drop and recreate the job: just change “createJob” to “dropJob”, as in the example below.
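For example, reusing the values from above:

ogg_cdc_cleanup_setup.bat dropJob GGATE welcome1 Msql_DB msql-db01.net dbo
ogg_cdc_cleanup_setup.bat createJob GGATE welcome1 Msql_DB msql-db01.net dbo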

The following is the success message for the job creation:

” INFO OGG-05281 Current OGG cleanup Job Settings – Job Name: OracleGGCleanup_Msql_DB_Job, JobSchedRec: , JobSchedFreq: , DatabaseName: Msql_DB, Tranlogoption managecdccleanup: 1, threshold: 500, retention: 4.320. “

Hope this helps!

Downstream database with ORA-00317: file type 0 in header is not log file

Missing an archived log on the Downstream database?

Or is the archived log present in the Downstream archive area but not being read by LogMiner?

Try to run to the hills or read GrepOra posts.

The alert log shows traces like these:

ORA-00317: file type 0 in header is not log file
ORA-00334: archived log: '/oracle/dowstream-archive/2_136361_87643997.dbf'
LOGMINER: Error 317 encountered, failed to read corrupt logfile /oracle/dowstream-archive/2_136361_87643997.dbf
LOGMINER: Encountered error 1291 while adding logfile /oracle/dowstream-archive/2_136361_87643997.dbf to session 1

Restore the missing archived log with RMAN, copy it to the Downstream archive area and register it as a logical logfile. Then wait a while for LogMiner to start providing LCRs from the newly registered archived log to the GoldenGate Integrated Extracts.
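For example, restoring it on the source and pushing it to the Downstream archive area (sequence and thread come from the file name in the alert log; host and paths are only illustrative):

RMAN> RESTORE ARCHIVELOG SEQUENCE 136361 THREAD 2;
$ scp /oracle/source-archive/2_136361_87643997.dbf downstream-host:/oracle/dowstream-archive/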

Register the archived log on the Downstream database as below:

ALTER DATABASE REGISTER LOGFILE '/oracle/dowstream-archive/2_136361_87643997.dbf' FOR 'OGG$CAP_EXT_1';
ALTER DATABASE REGISTER LOGFILE '/oracle/dowstream-archive/2_136361_87643997.dbf' FOR 'OGG$CAP_EXT_2';
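To confirm LogMiner now sees the log for both capture sessions, you can check the DBA_REGISTERED_ARCHIVED_LOG view:

SELECT consumer_name, sequence#, purgeable, name
FROM dba_registered_archived_log
WHERE name LIKE '%2_136361_87643997.dbf';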

 


Weblogic AdminServer fails to start after moving the database repository to a new server or port

In an Oracle Fusion Middleware environment, after migrating the metadata repository database to a new server or a new port, the AdminServer fails to restart.

Error:

Caused By: oracle.security.jps.JpsException: oracle.security.jps.service.policystore.PolicyStoreException: javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services – 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Error Code: 17002
….
####<Oct 18, 2017 8:09:27 PM BRST> <Notice> <WebLogicServer> <osb1grepora1> <AdminServer> <main> <<WLS Kernel>> <> <> <1508364567559> <BEA-000365> <Server state changed to FAILED.>
####<Oct 18, 2017 8:09:27 PM BRST> <Error> <WebLogicServer> <osb1grepora1> <AdminServer> <main> <<WLS Kernel>> <> <> <1508364567559> <BEA-000383> <A critical service failed. The server will shut itself down.>
####<Oct 18, 2017 8:09:27 PM BRST> <Notice> <WebLogicServer> <osb1grepora1> <AdminServer> <main> <<WLS Kernel>> <> <> <1508364567560> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>

Solution:
The files below still hold the physical address of the old configuration:
$DOMAIN_HOME/config/fmwconfig/jps-config.xml
$DOMAIN_HOME/config/fmwconfig/jps-config-jse.xml

Edit both.
Look for the “jdbc.url” entry and change it to the new settings.
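To locate the old entries quickly (the connection string below is only an illustration of what the property looks like once updated):

$ grep -n "jdbc.url" $DOMAIN_HOME/config/fmwconfig/jps-config.xml $DOMAIN_HOME/config/fmwconfig/jps-config-jse.xml

<property name="jdbc.url" value="jdbc:oracle:thin:@//newdbhost:1521/MDSREPO"/>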

That’s all for today.