Managing database changes with Sqitch

Nowadays everybody talks about continuous integration, automated test tools, and stuff like that. But almost every time it’s about testing applications. Let’s talk a little about Sqitch, “a database change manager”, as they call themselves.

Let’s say you’re starting a new project and want to ensure every database change is managed and covered by tests. Just like with an app, you can write tests to ensure everything works, and you can have a “revert” method for each change too. Suppose you are changing a field type and start by creating a backup column (a fallback ;)), but something goes wrong in the meantime and for some reason you can’t do a simple rollback… revert it and be happy.

What if your CI says the application is fine and can be released to the production server, but “a wild bug appears” and you NEED to revert to a previous version of your service? Easy, huh? With Docker and one command line, the service is up and running again. But what about the database changes your team made? Just call the revert and be happy :).

Have I mentioned that you can have multiple connection URIs in the project and apply the changes to different environments with a single command?! Let’s see a few commands to create a simple MySQL database…

Create a folder, initialize Git in it, and initialize the Sqitch project.

mkdir awesomeapp
cd awesomeapp
git init .
sqitch init awesomeapp --uri https://github.com/theory/sqitch-mysql-intro/ --engine mysql
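
Sqitch generates the basic project layout for you; you should end up with something like this (a sketch of the generated structure):

awesomeapp/
  deploy/       # deploy scripts (apply a change)
  revert/       # revert scripts (undo a change)
  verify/       # verify scripts (test a change)
  sqitch.conf   # engine and target configuration
  sqitch.plan   # ordered list of changes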

Now you have a ready-to-go Sqitch structure, so let’s do some basic creates.

sqitch add createcustomeruser -n 'Creation of Customer table'
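
This creates one skeleton script for each of the three phases and registers the change in the plan; the output should be something like:

Created deploy/createcustomeruser.sql
Created revert/createcustomeruser.sql
Created verify/createcustomeruser.sql
Added "createcustomeruser" to sqitch.plan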

Add the CREATE TABLE script to the change’s file inside the “deploy” directory (deploy/createcustomeruser.sql).

CREATE TABLE customer (
id INTEGER PRIMARY KEY,
name VARCHAR(50)
);

Add the verify script, which can be a simple SELECT, and the revert script (guess what the command is); a sketch of both follows. After that, you can run the commands below and see the result.
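
A minimal sketch of the two scripts (the verify query only needs to fail when the table is missing; DROP TABLE is the natural revert for a CREATE TABLE):

-- verify/createcustomeruser.sql
SELECT id, name FROM customer WHERE FALSE;

-- revert/createcustomeruser.sql
DROP TABLE customer;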

sqitch deploy db:mysql://root@/awesomeapp
sqitch verify db:mysql://root@/awesomeapp
sqitch revert db:mysql://root@/awesomeapp
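
About the multiple environments mentioned earlier: you can register named targets once and then deploy by name (a sketch; the host names are hypothetical):

sqitch target add staging db:mysql://root@stg-db01/awesomeapp
sqitch target add production db:mysql://root@prod-db01/awesomeapp
sqitch deploy staging
sqitch deploy production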

This was a quick explanation of Sqitch, so go ahead and read more in their docs and tutorials; they are pretty good.

ERROR OGG-05290 The Oracle GoldenGate CDC cleanup job is not enabled for database Msql_DB

Hi.

When you try to start the GoldenGate Extract process on SQL Server, you may receive the following error:

ERROR OGG-05290 The Oracle GoldenGate CDC cleanup job is not enabled for database Msql_DB. Create the Oracle GoldenGate CDC cleanup job prior to starting the capture process.

To create the cleanup job for GoldenGate on SQL Server, use the ogg_cdc_cleanup_setup.bat script in the GoldenGate home directory.

Command syntax:

ogg_cdc_cleanup_setup.bat createJob [goldengate user] [goldengate password] [database name] [database host] [instance]

Example:

ogg_cdc_cleanup_setup.bat createJob GGATE welcome1 Msql_DB msql-db01.net dbo

In some cases, it may return an error stating that the job already exists:

Msg 50000, Level 16, State 1, Server msql-db01, Line 34 The specified @name (‘OracleGGCleanup_Msql_DB_Job’) already exists.

In this case, you may drop and recreate the job: just change “createJob” to “dropJob”, as shown below.
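
Using the same arguments as in the creation example:

ogg_cdc_cleanup_setup.bat dropJob GGATE welcome1 Msql_DB msql-db01.net dbo
ogg_cdc_cleanup_setup.bat createJob GGATE welcome1 Msql_DB msql-db01.net dbo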

The following is the success message of the job creation:

INFO OGG-05281 Current OGG cleanup Job Settings – Job Name: OracleGGCleanup_Msql_DB_Job, JobSchedRec: , JobSchedFreq: , DatabaseName: Msql_DB, Tranlogoption managecdccleanup: 1, threshold: 500, retention: 4.320.

Hope this helps!

Oracle Heterogeneous Services for PostgreSQL (ODBC Driver): Error (HY000,NativeErr = 1)

Hi all!
Some time ago a client reported issues when integrating Oracle and PostgreSQL. The error:

ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
No query has been executed with that handle;
Could not send Query(connection dead) {HY000,NativeErr = 1}

Breaking down the errors:
Oracle: Connection from ORACLE to a non-Oracle system returned this message
PostgreSQL: {HY000,NativeErr = 1} Could not send Query(connection dead)

Both are generic errors:
– Oracle’s is raised for any error returned by the remote service when using Oracle Heterogeneous Services (the old Database Gateway).
– PostgreSQL’s indicates the connection was terminated.

I also found some other similar errors:
Oracle: ORA-28511: lost RPC connection to heterogeneous remote agent using SID
PostgreSQL: {08S01,NativeErr = 26} Error fetching next row

In summary, the root cause was a firewall configuration terminating the connection. But what I want to share with you is the workaround. 🙂
You know that sometimes, due to certain policies, firewall changes may require formal change requests and procedures, so what you can do on the database side is basically disable the UseDeclareFetch parameter (the default is false) in the ODBC descriptor, as per below.
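
A minimal sketch of the DSN entry (the DSN name, host, and database are hypothetical; the relevant line is UseDeclareFetch):

# /etc/odbc.ini
[PGDSN]
Driver = /usr/lib64/psqlodbcw.so
Servername = pg-host01
Port = 5432
Database = mydb
UseDeclareFetch = 0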


Converting Between SQLServer, Oracle, PostgreSQL, MySQL, Sybase and others…

Hi all!
I was asked to make a conversion from a T-SQL (MSSQL) procedure to PL/pgSQL. Given how boring this task is, the following link helped me:
http://www.sqlines.com/online

I highly recommend it. The site has a commercial solution to convert whole databases, but some code can be converted online for free. 🙂
The conversion doesn’t fix everything, but it does a good part of the work… and all help is helpful…
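
Just to give a flavor of the kind of rewriting involved (a hypothetical snippet, not actual SQLines output):

-- T-SQL (MSSQL)
SELECT TOP 10 name, GETDATE() AS created_at FROM customer;

-- PostgreSQL equivalent
SELECT name, NOW() AS created_at FROM customer LIMIT 10;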


MySQL Network Connections on ‘TIME_WAIT’

Hi all!
Recently I caught a bunch of connections in ‘TIME_WAIT’ on a MySQL server through ‘netstat -antp | grep 3306’…
After some time, we identified this was caused by the environment not using DNS, only fixed IPs (uuugh!)…

As you know, as a security measure MySQL maintains a host cache for established connections. From the MySQL docs:

“For each new client connection, the server uses the client IP address to check whether the client host name is in the host cache. If not, the server attempts to resolve the host name. First, it resolves the IP address to a host name and resolves that host name back to an IP address. Then it compares the result to the original IP address to ensure that they are the same. The server stores information about the result of this operation in the host cache. If the cache is full, the least recently used entry is discarded.”
9.12.6.2 DNS Lookup Optimization and the Host Cache

For this reason, the ‘reverse’ DNS lookup performed for each login was hanging these connections.

The solution?
Right way: Add an A record in DNS for the hosts. Use DNS!
Quick way: Add the mapping for the connecting hosts to /etc/hosts on the database server, avoiding the DNS lookup.
Quicker way: Set the skip-name-resolve option in /etc/my.cnf. It avoids this behavior in the database layer for new connections and solves the problem.
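
A sketch of the two file-based options (host names and IPs are hypothetical):

# Quick way: /etc/hosts on the database server
10.0.0.21  appserver01
10.0.0.22  appserver02

# Quicker way: /etc/my.cnf
[mysqld]
skip-name-resolve
# Note: with skip-name-resolve, grants must reference IP addresses, not host names.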

This is a good post about it (in Portuguese): http://wagnerbianchi.com/blog/?p=831

See ya!
Matheus.

Monitoring MySQL with Nagios – Quick View

Hi all!
As you know, we have some commercial solutions for monitoring/alerting on MySQL, like MySQL Enterprise Monitor or Oracle Grid/Cloud Control.

But since we are using MySQL instead of Oracle Database, we can assume it’s probably a decision based on cost. So, considering open source solutions, we basically have Nagios, Zabbix, OpenNMS…

Thinking of Nagios, in my opinion the cream of the crop is mysql_health_check.pl.
White paper, presentation, and code below:
White Paper
Presentation
Code
A good one by Sheeri Cabral, posted here!

Anyway, with these two we can make lots of magic:

1. check_mysql.pl
– Checks the status of the MySQL server (slow queries, etc.)
– Queries-per-second graph

2. check_db_query.pl
– Allows running SQL queries and setting thresholds for warning and critical. Ex:

check_db_query.pl -d database -q query [-w warn] [-c crit] [-C conn_file] [-p placeholder]

Example of the Nagios call:

define command{
    command_name    check_db_entries
    command_line    /usr/local/bin/perl $USER1$/check_db_query.pl -d "$ARG1$" -q "$ARG2$" $ARG3$
}
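
And a hypothetical service definition using it (host, query, and thresholds are made up for illustration):

define service{
    use                  generic-service
    host_name            dbserver01
    service_description  Pending orders
    check_command        check_db_entries!appdb!SELECT COUNT(*) FROM orders WHERE status='PENDING'!-w 100 -c 500
}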

So now it’s just a matter of writing your queries and implementing your free MySQL monitoring! 🙂
Matheus.

Export/Backup directly to Zip using MKNOD!

We have all faced that situation where we have to make a logical backup/export and don’t have enough space to do it, right?
We know the export usually compresses a lot with zip/gzip… Wouldn’t it be great if we could export directly to a compressed file?

This situation has become much more common because of Data Pump, which requires a directory accessible by the database server. If you have no way to create a mount point or any other area, this can help…

## BKP with MKNOD
BKP_DEST=/just/example
DATE=`date +%Y%m%d%H%M`
cd $BKP_DEST
# Create a named pipe and start gzip reading from it in the background;
# whatever the export writes into the pipe comes out compressed:
mknod bkp_$DATE.dmp p
gzip < bkp_$DATE.dmp > bkp_$DATE.dmp.gz &
### Uncomment and adjust one of:
## MySQL:
#mysqldump -u $user -p$password $database > bkp_$DATE.dmp
## Oracle (Datapump or EXP; DIRECTORY_EXAMPLE must point to $BKP_DEST):
#expdp \"/ as sysdba\" dumpfile=bkp_$DATE.dmp full=y directory=DIRECTORY_EXAMPLE logfile=log_bkpzipped.log
#expdp $user/$password dumpfile=bkp_$DATE.dmp full=y directory=DIRECTORY_EXAMPLE logfile=log_bkpzipped.log
#exp \"/ as sysdba\" file=bkp_$DATE.dmp log=log_bkpzipped.log compress=y [tables=owner.table,..] [owner=schema1,..] [...]

Hugs!
Matheus.

Optimize fragmented tables in MySQL

Table fragmentation happens on MySQL, as you know. Running an OPTIMIZE TABLE solves the issue.
BUT be careful! During the optimize the table stays locked (writing is not possible).
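
For a one-off check, fragmentation can be estimated from Data_free in information_schema (the schema and table names below are hypothetical):

-- List fragmented tables in user schemas
SELECT table_schema, table_name, data_free, data_length
FROM information_schema.tables
WHERE data_free > 0
AND table_schema NOT IN ('information_schema', 'performance_schema', 'mysql');

-- Reclaim the space (remember: it locks the table while running)
OPTIMIZE TABLE mydb.mytable;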

(Image: fragmented table)

So what?
To avoid locking every table, the script below lists and optimizes only the tables that actually have fragmentation (if you want to list but not run, comment out the OPTIMIZE line).

It was very useful to me! 🙂

#!/bin/sh
# Ask for MySQL credentials (password input is not echoed).
echo -n "MySQL username: " ; read username
echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo
# Walk through every database and table, reading the SHOW TABLE STATUS columns.
mysql -u "$username" -p"$password" -NBe "SHOW DATABASES;" | grep -v 'lost+found' | while read database ; do
  mysql -u "$username" -p"$password" -NBe "SHOW TABLE STATUS;" "$database" | while read name engine version rowformat rows avgrowlength datalength maxdatalength indexlength datafree autoincrement createtime updatetime checktime collation checksum createoptions comment ; do
    # Only act when there is free space to reclaim; skip empty data lengths to avoid division by zero.
    if [ "$datafree" -gt 0 ] && [ "$datalength" -gt 0 ] ; then
      fragmentation=$(($datafree * 100 / $datalength))
      echo "$database.$name is $fragmentation% fragmented."
      # Comment out the next line to only list, without optimizing.
      mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
    fi
  done
done

The result will be something like:

MySQL username: root
MySQL password:
...
mysql.db is 12% fragmented.
mysql.db optimize status OK
mysql.user is 9% fragmented.
mysql.user optimize status OK
...

This script was copied in full from this post by Robert de Bock.
Thanks, Robert!

Matheus.