Relying on Guaranteed Restore Points? Be careful!

Hi all,

Are you relying on Guaranteed Restore Points (GRP) as a fallback plan for your migration or upgrade strategy? Be careful!

While performing a non-Prod upgrade with the AutoUpgrade tool, after completing the upgrade, I wanted to roll it back and go through the process again. This is what happened:

SQL> startup
ORA-29702: error occurred in Cluster Group Service operation

When looking into it, I found this blog post from Mike that I had missed last year: https://mikedietrichde.com/2020/11/13/ora-29702-and-your-instance-does-not-startup-in-the-cluster-anymore/

This means my database was not starting anymore! Oh man, glad that I was still in the testing phase!

This is caused by Bug 31561819 – Incompatible maxmembers at CRSD Level Causing Database Instance Not Able to Start.

As per Mike’s post, “you don’t need to even restore or flashback a database to hit this error. A simple instance in NOMOUNT state leads to the same error. Without even any datafile.”

The bug is fixed in:

  • 19.9.0.0.201020 (Oct 2020) OCW RU
  • 18.12.0.0.201020 (Oct 2020) OCW RU
  • 12.2.0.1.201020 (Oct 2020) OCW RU

That being the case, you should include this patch BEFORE starting any move! Do it right away if you are on these versions!

Also, be aware of the latest change regarding Restore Point propagation in 19c, as per MOS note Automatic Propagate Restore Points from Primary to Standby site in 19C (Doc ID 2463082.1).
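
As a quick sanity check, you can create the GRP and confirm the GUARANTEE flag is set. A minimal sketch (grp_before_upgrade is just an illustrative name; on 19c, given the propagation behavior above, it is worth running the query on the standby too):

-- Create a guaranteed restore point before the change
CREATE RESTORE POINT grp_before_upgrade GUARANTEE FLASHBACK DATABASE;

-- List restore points and confirm which ones are guaranteed
SELECT name, scn, time, guarantee_flashback_database
FROM v$restore_point
ORDER BY scn;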

In my case, the scenario is exactly a 12.1 -> 19c upgrade, so the fix is not even available for the source version (no Extended Support in place). That being the case, we had to think of alternate fallback plans, like a physical standby. But that is a topic for another post.

So for YOU:

  • Apply this patch if you can!
  • If not, be very careful with the fallback plans and, as usual: Test, Test and Test!

See you next post!

Exadata DNS Change – Pitfalls to be avoided

Hi all, it’s been a while but here I am!

There were some changes in the infrastructure at the place I work, and I was asked to do a DNS change on a somewhat old Exadata X5. I had never done one before, so the idea of this post is to help others who might face the issues I had.

The first thing I did was look up the documentation and review the steps. Yes, there are blogs about it, but the documentation helps to get at least a first glance of the situation.

Long story short: Exadata has lots of components and the new DNS should be changed on all of them.

Here is a summary of the steps.

InfiniBand switches

Connect to each switch, switch to the ilom-admin user, and change the DNS:

su - ilom-admin
show /SP/clients/dns
set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3
show /SP/clients/dns


Database nodes

For my image, I only needed to change /etc/resolv.conf; if you have a newer one, you will need to use ipconf. That's why you need to go to the documentation: at least there we hope they will mention the pitfalls (well, keep reading and you will see that was not my case).

I also changed the DNS on each database node's ILOM, running ipmitool from each node:

ipmitool sunoem cli 'show /SP/clients/dns'
ipmitool sunoem cli 'set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3'
ipmitool sunoem cli 'show /SP/clients/dns'


Cell nodes – Here things start to get interesting

For the storage cells, there are some points that need to be taken into consideration:

Increase the ASM disk_repair_time – the goal here is to avoid a full rebalance, as long as you complete the change within that timeframe. If you don't know this attribute: ASM will wait up to the interval specified by DISK_REPAIR_TIME for the disk(s) to come back online. If the disk(s) come back online within this interval, a resync operation occurs, where only the extents that were modified while the disks were offline are written to the disks once they are back online. If the disk(s) do not come back within this interval, ASM initiates a forced drop of the disk(s), which triggers a rebalance.
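
For example, a minimal sketch, assuming a disk group named DATA and an 8.5-hour maintenance window (adjust both to your environment), run from the ASM instance:

-- Check the current repair time for each disk group
SELECT dg.name AS diskgroup, a.value AS disk_repair_time
FROM v$asm_diskgroup dg
JOIN v$asm_attribute a ON a.group_number = dg.group_number
WHERE a.name = 'disk_repair_time';

-- Raise it to cover the expected outage window
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8.5h';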

On each cell node, we need to make sure all disks are OK, inactivate all grid disks, stop all cell services, and use ipconf to change the DNS configuration:

#Check that putting the grid disks offline will not cause a problem for Oracle ASM - the 3rd column should say YES for all disks
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus,asmdeactivationoutcome

#Inactivate all grid disks on the cell - may take a while to complete
cellcli -e ALTER GRIDDISK ALL INACTIVE


#Confirm the grid disks are offline, it should show asmmodestatus=OFFLINE or asmmodestatus=UNUSED, and asmdeactivationoutcome=Yes for all grid disks
cellcli -e LIST GRIDDISK ATTRIBUTES name, asmmodestatus,asmdeactivationoutcome

#Confirm that the disks are offline
cellcli -e LIST GRIDDISK

#Shut down the cell services and ocrvottargetd service
cellcli -e ALTER CELL SHUTDOWN SERVICES ALL
service ocrvottargetd stop # on some images this service does not exist

To execute ipconf the old way, we only need to call it and follow the prompts; but if you have a newer image, you will need to provide its parameters, as shown in the documentation.

The documentation says that after this we could start the cell services back up, but I would recommend validating the DNS before doing that. Why? Because mine did not work, and I could have had a bigger issue with a cell node trying to start its services without DNS.

So, how to test? Use nslookup, dig and curl:

nslookup dns_domain.com
curl -v 192.168.16.1:53
dig another_server_in_the_network


My tests did not work: I was able to ping the DNS servers but not to resolve any name. I had an SR open on MOS, but it did not help much either. As this is a production system, I checked whether the firewall was up on the Linux side, and to my surprise it was.

I tried to manually add rules to iptables, but it did not work. Then I came across this note: Exadata: New DNS server is not accessible after changing using IPCONF (Doc ID 1581417.1).

And there it was: I needed to restart the cellwall service to recreate the iptables rules.

# Restart cellwall service
service cellwall restart
service cellwall status

One final point: check whether ASM started a rebalance or not. If it did, do not bring down another cell node until the rebalance is finished; otherwise you may run into deeper issues.
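
A quick way to check is querying v$asm_operation; no rows returned means no rebalance in progress:

-- A running rebalance shows OPERATION = 'REBAL';
-- EST_MINUTES gives a rough idea of the time remaining
SELECT group_number, operation, state, power, est_minutes
FROM v$asm_operation;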


I hope it helps!

Elisson Almeida

Opatchauto Failing on “CheckActiveFilesAndExecutables” during Prerequisite Check

Hi all,
So, very recently when applying the 2021 January CPU in a client environment, the following happened:

[root@dbserver01 32226239]# $ORACLE_HOME/OPatch/opatchauto apply

OPatchauto session is initiated at Sun Mar 14 03:00:06 2021

System initialization log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchautodb/systemconfig2021-03-14_03-00-08AM.log.

Session log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/opatchauto2021-03-14_03-00-13AM.log
The id for this session is 1J89

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/db
Patch applicability verified successfully on home /u01/app/oracle/product/19c/db


Executing patch validation checks on home /u01/app/oracle/product/19c/db
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/db


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/db
SQL patch applicability verified successfully on home /u01/app/oracle/product/19c/db


Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/grid
Patch applicability verified successfully on home /u01/app/oracle/product/19c/grid


Executing patch validation checks on home /u01/app/oracle/product/19c/grid
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/grid


Preparing to bring down database service on home /u01/app/oracle/product/19c/db
Successfully prepared home /u01/app/oracle/product/19c/db to bring down database service


Bringing down database service on home /u01/app/oracle/product/19c/db
Following database has been stopped and will be restarted later during the session: er1pprd,obiee
Database service successfully brought down on home /u01/app/oracle/product/19c/db


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/oracle/product/19c/grid
Prepatch operation log file location: /u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_03-06-15AM.log
CRS service brought down successfully on home /u01/app/oracle/product/19c/grid


Start applying binary patch on home /u01/app/oracle/product/19c/db
Failed while applying binary patches on home /u01/app/oracle/product/19c/db

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : dbserver01->/u01/app/oracle/product/19c/db Type[sidb]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/db, host: dbserver01.
Command failed: /u01/app/oracle/product/19c/db/OPatch/opatchauto apply /ora02/soft/jan21cpu/32126842/32226239 -oh /u01/app/oracle/product/19c/db -target_type oracle_database -binary -invPtrLoc /u01/app/oracle/product/19c/grid/oraInst.loc -jre /u01/app/oracle/product/19c/grid/OPatch/jre -persistresult /u01/app/oracle/product/19c/db/opatchautocfg/db/sessioninfo/sessionresult_dbserver01_sidb_2.ser -analyzedresult /u01/app/oracle/product/19c/db/opatchautocfg/db/sessioninfo/sessionresult_analyze_dbserver01_sidb_2.ser
Command failure output:
==Following patches FAILED in apply:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_03-17-58AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Mar 14 03:19:25 2021
Time taken to complete the session 19 minutes, 19 seconds

opatchauto failed with error code 42

OK, going by parts, let's see what we have in the referenced log:

[Mar 14, 2021 3:33:32 AM] [INFO] Start fuser command /sbin/fuser /u01/app/oracle/product/19c/grid/bin/expdp at Sat Mar 14 03:33:32 PDT 2021
[Mar 14, 2021 3:33:32 AM] [INFO] Finish fuser command /sbin/fuser /u01/app/oracle/product/19c/grid/bin/expdp at Sat Mar 14 03:33:32 PDT 2021
[Mar 14, 2021 3:33:32 AM] [INFO] Following active executables are not used by opatch process :


Following active executables are used by opatch process :
/u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
[Mar 14, 2021 3:33:32 AM] [INFO] Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:


Following active executables are not used by opatch process :


Following active executables are used by opatch process :
/u01/app/oracle/product/19c/grid/lib/libclntsh.so.19.1
[Mar 14, 2021 3:33:33 AM] [INFO] UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Mar 14, 2021 3:33:33 AM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Mar 14, 2021 3:33:33 AM] [INFO] Finishing UtilSession at Sat Mar 14 03:33:33 PDT 2021
[Mar 14, 2021 3:33:33 AM] [INFO] Log file location: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/oapatch_2021-03-14_03-06-15AM.log

This is an interesting situation.

After some validation, making sure no service was online, the path was writable, and oracle and root had the required privileges and access, I found some relevant Oracle notes:

  • 19c Installation Fails with error “libclntsh.so: file format not recognized; treating as linker script” (Doc ID 2631283.1): Pointing to file corruption
  • While Applying a Weblogic Patch, opatch Fails with “Prerequisite check “CheckActiveFilesAndExecutables” failed” Error (Doc ID 2705809.1): Not a DB note and pointing to other processes using the files.
  • Opatch failure due to “CheckActiveFilesAndExecutables” as Remote registry service holding files (Doc ID 2462952.1): Remote registry holding the binaries.
  • Prerequisite Check “Checkactivefilesandexecutables” Failed (Doc ID 1281644.1): Patch prerequisite missing on 10g.
  • Failed to apply PSU due to CheckActiveFilesAndExecutables check failure (Doc ID 2506432.1): SQLPlus holding the binaries.
  • [OCI]: Database System Patching Failed With Error “DCS-10001:Internal Error Encountered: Failure : Failed To Apply” And Opatch Log Shows “Prerequisite check “CheckActiveFilesAndExecutables” failed” (Doc ID 2687607.1): My case is neither OCI nor RAC.

So, no matches at all.

However, this last note gave me the hints I needed. From Doc ID 2687607.1, for RAC environments:

/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -unlock
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -init
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -prepatch
/u01/app/19.0.0.0/grid/crs/install/rootcrs.sh -postpatch

So, in my case, a Standalone On-Premises Database (and GI):

/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -unlock
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -init
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -prepatch
[ Apply the patch! ]
/ora01/app/oracle/product/19c/grid/crs/install/roothas.sh -postpatch

Check the output:

[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -unlock
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/haunlock__2021-03-14_04-00-35AM.log
2021/03/14 04:01:01 CLSRSC-347: Successfully unlock /u01/app/oracle/product/19c/grid
[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -init
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/roothas_2021-03-14_04-01-09AM.log
[root@dbserver01 jan21cpu]# /u01/app/oracle/product/19c/grid/crs/install/roothas.sh -prepatch
Using configuration parameter file: /u01/app/oracle/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_04-01-16AM.log
2021/03/14 04:01:27 CLSRSC-347: Successfully unlock /u01/app/oracle/product/19c/grid
2021/03/14 04:01:27 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.

And now, resuming OPatchauto:

[root@dbserver01 jan21cpu]# cd 32126842/32226239/
[root@dbserver01 32226239]# $ORACLE_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Sun Mar 14 04:02:07 2021
Session log file is /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/opatchauto2021-03-14_04-02-10AM.log
Resuming existing session with id 1J89

Start applying binary patch on home /u01/app/oracle/product/19c/db
Binary patch applied successfully on home /u01/app/oracle/product/19c/db


Start applying binary patch on home /u01/app/oracle/product/19c/grid

Binary patch applied successfully on home /u01/app/oracle/product/19c/grid


Performing postpatch operations on CRS - starting CRS service on home /u01/app/oracle/product/19c/grid
Postpatch operation log file location: /u01/app/oracle/product/crsdata/dbserver01/crsconfig/hapatch_2021-03-14_04-27-58AM.log
CRS service started successfully on home /u01/app/oracle/product/19c/grid


Preparing home /u01/app/oracle/product/19c/db after database service restarted
No step execution required.........


Trying to apply SQL patch on home /u01/app/oracle/product/19c/db
SQL patch applied successfully on home /u01/app/oracle/product/19c/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:dbserver01
SIDB Home:/u01/app/oracle/product/19c/db
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218663
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /ora02/soft/jan21cpu/32126842/32226239/29340594
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /ora02/soft/jan21cpu/32126842/32226239/32240590
Reason: This patch is not applicable to this specified target type - "oracle_database"


==Following patches were SUCCESSFULLY applied:

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-02-36AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32222571
Log: /u01/app/oracle/product/19c/db/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-02-36AM_1.log


Host:dbserver01
SIHA Home:/u01/app/oracle/product/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /ora02/soft/jan21cpu/32126842/32226239/29340594
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218454
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32218663
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32222571
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

Patch: /ora02/soft/jan21cpu/32126842/32226239/32240590
Log: /u01/app/oracle/product/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2021-03-14_04-11-34AM_1.log

OPatchauto session completed at Sun Mar 14 04:31:48 2021
Time taken to complete the session 29 minutes, 43 seconds

And here is the relevant point: this has been happening to me on several environments and servers over recent weeks, always with the 2021 January CPU.
My guess is that it might have something to do with this CPU's set of binaries or, most likely, with the latest OPatch version:

[oracle@dbserver01 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.24

I hope it helps you as well!

5 Best Practices for Setting Dispatchers for Shared Connections

Hi all,

Here are 5 Best Practices / Tips for setting dispatchers with Shared Server connections:

1. Set local_listener on both instances on the database

alter system set LOCAL_LISTENER='(address=(protocol=tcp)(port=1521)(host=yourhost))' scope=both sid='instance_name';
Ref: Shared Server: Dispatchers Are Not Registered With Listener (Doc ID 465881.1)

2. Dispatchers parameter should be set to utilize the VIP name of the host

alter system set dispatchers='(address=(protocol=tcp)(host=node1-vip))(dispatchers=2)' scope=both sid='instance_name';
Ref: How To Configure Shared Server Dispatchers For RAC Environment (Doc ID 578524.1)

3. Dispatchers count should be set appropriately considering the number of sessions expected to connect to the database

A general rule of thumb is that 1 dispatcher can handle 50 shared server connections with minimal performance impact.
Ref: Shared Server Only: TNS-12518, TNS-12564 and TNS-12602 Errors at Connect Time (Doc ID 1539104.1)
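
To validate that count against the real workload, you can check how busy each dispatcher actually is. A small sketch using v$dispatcher (the threshold you act on is a judgment call, not an official limit):

-- BUSY and IDLE are cumulative times in hundredths of a second;
-- a consistently high busy rate suggests adding dispatchers
SELECT name, status,
       ROUND(busy / NULLIF(busy + idle, 0) * 100, 2) AS busy_pct
FROM v$dispatcher;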

4. Arguments can be used with the dispatchers parameter for closer control of how the shared server sessions are used

SESSIONS – Determines the max sessions allowed for each dispatcher.
CONNECTIONS – The maximum number of network connections to allow for each dispatcher.
Ref: https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DISPATCHERS.html#GUID-DCBCCF94-8A73-4805-9138-412DA413FC7C
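
For instance, combining both attributes with the VIP setup from tip 2 (the values here are purely illustrative):

-- Two dispatchers, each capped at 500 sessions and 1000 network connections
alter system set dispatchers='(address=(protocol=tcp)(host=node1-vip))(dispatchers=2)(sessions=500)(connections=1000)' scope=both sid='instance_name';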

5. The shared_servers parameter can be set to control the total number of shared servers spawned by the database

shared_servers set to 1 –> This will enable shared server sessions on the database.
max_shared_servers –> Specifies the maximum number of shared servers that can run simultaneously.
shared_server_sessions –> Specifies the total number of shared server user sessions that can run simultaneously. Setting this parameter lets you reserve user sessions for dedicated servers.
Ref: Automatic Shared Server Configuration (Doc ID 265931.1)
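
Putting the three together, a hedged example (the values are illustrative only and should be sized for your own workload):

-- Enable shared server with a floor of 1 server process, cap the
-- pool at 20, and keep some sessions available for dedicated connections
alter system set shared_servers=1 scope=both;
alter system set max_shared_servers=20 scope=both;
alter system set shared_server_sessions=200 scope=both;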

See you next post!

Loading “Views” from One Database to Another

Hi all,

This seems to be a simple request, right? Let's go over the regular cases / simple answers:

  • Use DBMS_METADATA.GET_DDL to get the view code: DBMS_METADATA.GET_DDL('VIEW', <VIEW_NAME>, <OWNER>) (see the sketch right after this list)
  • Use Data Pump with INCLUDE=VIEW, or even specifying the list: INCLUDE=VIEW:\"IN (\'VW_EXAMPLE1\', \'VW_EXAMPLE2\')\", or part of the name: INCLUDE=VIEW:\"LIKE \'VW_EXAMPLE%\'\"
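
For the first option, here is a minimal sketch that extracts the DDL for every view of a schema (APP is a hypothetical owner):

-- SQL*Plus settings so the generated DDL is not truncated
SET LONG 100000 PAGESIZE 0 LINESIZE 200

-- Generate the CREATE VIEW statements for one schema
SELECT DBMS_METADATA.GET_DDL('VIEW', view_name, owner)
FROM dba_views
WHERE owner = 'APP';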

However, when supporting a client with this need, I realized it may have some caveats. The options above load the view definition purely, not the “content” of the view from an application perspective.

And here let's make it clear: a view doesn't hold any data, but instead stores a query to retrieve the data from regular tables. We can have underlying tables as support mechanisms in the case of materialized views, but in essence views are transitory structures built from the actual database tables. Now, how to load views into another database, without the source tables, but including the underlying data?

This question has 2 answers:

  • Exporting the views as tables.

In general lines, I'd do it manually: create tables from the views and then export them. This means I would need space to create those tables, which can be a lot, though.

But then I saw we have the VIEWS_AS_TABLES clause in Data Pump, which does exactly that. As a reference: https://docs.oracle.com/database/121/SUTIL/GUID-E4E45E81-5391-43BE-B27D-B763EF79A885.htm#SUTIL3904

However, in this case, the import will bring the views in as tables, not as views, which theoretically resolves the issue from a data perspective but may not be what is required: what if I need the views as views on the destination database?

Well, in this case we need to load all the underlying tables. How to extract their names? The hard way is opening every view's code and listing the tables. The smart way is the second option:

  • Listing the tables referenced by Views from dba_dependencies and exporting them.

select owner as schema_name,
name as view_name,
referenced_owner as referenced_schema_name,
referenced_name,
referenced_type
from sys.dba_dependencies
where type = 'VIEW'
-- excluding some Oracle maintained schemas
and owner not in ('ANONYMOUS','CTXSYS','DBSNMP','EXFSYS', 'LBACSYS',
'MDSYS', 'MGMT_VIEW','OLAPSYS','OWBSYS','ORDPLUGINS', 'ORDSYS','OUTLN',
'SI_INFORMTN_SCHEMA','SYS','SYSMAN','SYSTEM','TSMSYS','WK_TEST','WKSYS',
'WKPROXY','WMSYS','XDB','APEX_040000', 'APEX_PUBLIC_USER','DIP',
'FLOWS_30000','FLOWS_FILES','MDDATA', 'ORACLE_OCM', 'XS$NULL',
'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR', 'PUBLIC')
order by owner, name, referenced_name, referenced_owner, referenced_type;
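
From that result, you still need a distinct table list for the export. A small helper in the same spirit (APP is a hypothetical owner; mind the 4000-byte limit of LISTAGG if the list is huge):

-- Build a quoted, comma-separated table list that can be pasted
-- into a Data Pump INCLUDE=TABLE:"IN (...)" clause
SELECT LISTAGG('''' || referenced_name || '''', ',')
         WITHIN GROUP (ORDER BY referenced_name) AS table_list
FROM (SELECT DISTINCT referenced_name
      FROM sys.dba_dependencies
      WHERE type = 'VIEW'
        AND owner = 'APP'
        AND referenced_type = 'TABLE');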

Aaaand, that's what I actually needed. With the list in place, it's a matter of exporting with Data Pump, including the tables on the list and the views (as mentioned above).

I hope it helps!

ORA-12537: TNS:connection closed – When nothing else works!

Sounds desperate?

Well, when you are sure the TNS, listener (SID_LIST_LISTENER in this case), services, and db_domain are all fine, but you still face this issue when trying to remotely access an open database… what to do?

[oracle@greporasrv admin]$ sqlplus sys/******@MYDB as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Feb 3 17:45:29 2021

Copyright (c) 1982, 2013, Oracle. All rights reserved.

ERROR:
ORA-12537: TNS:connection closed

My friend, here is a quick tip that, if you have been an Oracle DBA for long enough, you know solves several issues, and it may be affecting this case again:

[oracle@greporasrv admin]$ cd $ORACLE_HOME/bin
[oracle@greporasrv bin]$ ls -lrt oracle
-rwxr-x--x. 1 oracle oinstall 242993632 Dec 18 19:20 oracle
[oracle@greporasrv bin]$ chmod 6751 oracle
[oracle@greporasrv bin]$ ls -lrt oracle
-rwsr-s--x. 1 oracle oinstall 242993632 Dec 18 19:20 oracle

The chmod 6751 did it again! (It restores the setuid/setgid bits on the oracle binary, which server processes depend on to run with the right privileges.)

It happened to me because, somehow, these permissions were wrong on my LOCAL host (the one I was trying to use SQL*Plus from).

I hope it helps!


Removing user from SQL Server database

So, have you ever needed to drop a user from a SQL Server database, but got stuck with the following error?

Error: 15138 The database principal owns a schema in the database, and cannot be dropped.

If you don't have access to SSMS to see which schemas or objects the user owns, the following SQL should do the job.

USE [DATABASENAME]
GO
SELECT so.name SCHEMANAME, su.name OWNER
FROM sys.schemas so
INNER JOIN sysusers su ON (so.principal_id = su.uid)
WHERE su.name = 'username'
GO
SELECT so.name OBJECT, su.name OWNER, so.xtype TP
FROM sys.sysobjects so
INNER JOIN sysusers su ON (so.uid = su.uid)
WHERE su.name = 'username'
GO

Once you have the owned objects/schemas, you can change their ownership with the following SQL (schema example):

USE [DATABASENAME]
GO
ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [newowner] /* new owner username */
ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [newowner]
GO

Then you're ready to drop the user, as shown below.
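
A minimal sketch; replace the placeholders with your database and user names:

USE [DATABASENAME]
GO
DROP USER [username]
GO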

Bonus SQL: The following will list the schemas and their owner.

SELECT schema_name(schema_id) as SCHEMANAME,
user_name(s.principal_id) as USERNAME
FROM sys.schemas AS s

Stop installing MySQL 5.6!

From time to time, I get a customer request to configure/optimize a MySQL server, and when the customer has already installed the MySQL Server, usually it's an “old” version. It's OK if your software uses features that only work on some specific version of a database, but keep an eye on its end-of-life support date. And that EOL is coming for MySQL 5.6. Below you can see a table with the MySQL versions and their ending dates.

So, next time you need to configure a new server, keep an eye on the version you’re using.

Version   Ending date
5.6       02/05/2021
5.7       10/21/2023
8.0       04/xx/2026

Failed Logon Delay Causing Performance Issues

The other day, when I got to the office, I was called to check a database that was running slowly. They had implemented a new process there and wanted to make sure it was not impacted.

When I checked, I saw this issue using OEM:


User SYS causing a strange wait event: Failed Logon Delay

Someone had created a process running with the SYS user, but they had not fully configured it, and part of the process was trying to connect with the wrong password.

While they were looking into the configuration files and servers to see where the issue was coming from, I started my own investigation to speed up the process.

First, I had to enable auditing, as it was disabled for unsuccessful login attempts:

SQL> audit session whenever not successful;

Audit succeeded.


Then I was able to see where the failed connections came from; I just needed to look for return code 1017 in sys.aud$, as ORA-1017 is “invalid username/password; logon denied”:

col ntimestamp# for a30 heading "Timestamp"
col userid for a6 heading "Username"
col userhost for a15 heading "Machine"
col spare1 for a10 heading "OS User"
col comment$text for a80 heading "Details"

select ntimestamp#, userid, userhost, spare1, comment$text,returncode from sys.aud$ where returncode=1017 or returncode=28000;
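
Once the culprit is found, you may want to revert the auditing change (assuming auditing of failed logons was originally meant to stay disabled):

SQL> noaudit session whenever not successful;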


Oldie but goldie =)

Hope it helps,

Elisson Almeida

MySQL won’t start [ERROR] Found option without preceding group in config file

Hey folks,

have you ever received a call about a MySQL on Windows that stopped working after someone did something to its my.ini? Then you try to start the service from cmd and get the following error:

mysqld: [ERROR] Found option without preceding group in config file C:\ProgramData\MySQL\MySQL Server 8.0\my.ini at line 1.
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!

Well, for some reason, the editor that was used (no idea which one it was) threw some random bytes at the beginning of the file (likely a UTF-8 BOM). To solve that (on Windows, at least), open the file in Notepad++, go to Encoding > Convert to ANSI, save the file, and start the service again.

What was the weirdest thing that happened to you on a Windows Server?