MySQL InnoDB Buffer

Hi all,
Do you have a MySQL server running somewhere? I’ve seen many, many MySQL servers running with the default configuration, often with even the mysql_secure_installation step skipped. Let’s talk about a tip to tune your server.

innodb_buffer_pool_size=134217728

This setting tells the server how much memory InnoDB can use for its buffer pool. The default (in both 5.7 and 8.0) is 128MB, which is generally far less than it could be. I usually set it to about 75% of the server’s total RAM, IF the database is the only thing running on that server. That leaves enough memory to accommodate the OS processes and MySQL itself. You can set this in your my.cnf file.

Be aware this is not a silver bullet. If your server has a lot of RAM, let’s say > 100GB, setting it at 75% still leaves about 25GB free, and that’s way more than the OS needs. So it’s all a matter of your server’s memory size.
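As a concrete sketch, on a dedicated 32GB database server the my.cnf entry could look like this (the 24G figure is just the 75% rule applied to that assumed machine, not a universal recommendation):

```ini
[mysqld]
# ~75% of 32GB total RAM, leaving the rest for the OS and MySQL overhead
innodb_buffer_pool_size = 24G
```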

Make sure you review this setting next time!

MySQL InnoDB Cluster setup

Hey folks, today I would like to show how easy it is to build an HA cluster with MySQL InnoDB Cluster.

First, a few things we must know: you’ll need at least three servers (for the database) to do the job. We’ll need MySQL Shell on every server. Every node needs a two-way connection with every other node (it must be able to connect to the other nodes, and the other nodes must be able to connect to it). You will need a fourth server to work as a router. I’ll assume you already have a well-configured MySQL Server on all three servers. Today I’ll be using version 8.

To make things easier, every server will have an entry for the others in its hosts file. So let’s begin by creating a common user on every server and granting it enough privileges to do the JOB. Be aware that I’m not focusing on security issues here, like allowing the user to access the database from any IP; the better solution would be to restrict it to the servers’ IPs.
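The hosts entries could look like this (the IP addresses here are made up for illustration; use your own):

```
192.168.0.11  mysql1
192.168.0.12  mysql2
192.168.0.13  mysql3
```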

CREATE USER 'syncron'@'%' IDENTIFIED BY 'Abacaxi22';
GRANT ALL PRIVILEGES ON *.* TO 'syncron'@'%' WITH GRANT OPTION;

1) Creating the cluster

Now let’s fire up MySQL Shell on every node and prepare the server to join/create an InnoDB Cluster. If you’ve never used this Shell, give it a try; it’s an awesome tool and it’s really easy to use and learn. We’ll basically use the dba object for all of our needs today. Run dba.configureInstance() on every node and then, on the “main” server, run the following.

You can run the Shell with mysqlsh --uri=syncron@mysql1:3306 and voilà, you’re on server 1.

cluster = dba.createCluster('tasks') // cluster name, take note
dba.getCluster('tasks').status()

After each command, you should see a status report. Note that I’m actually running the commands on a server named mysql1.

Look at the status result: after creating the cluster, we already have one node running, and it’s the R/W node. And look at the ‘statusText’… the cluster is still not fault tolerant.
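For reference, here’s a trimmed sketch of what status() reports at this point (output abridged; exact fields vary by version):

```json
{
  "clusterName": "tasks",
  "defaultReplicaSet": {
    "status": "OK_NO_TOLERANCE",
    "statusText": "Cluster is NOT tolerant to any failures.",
    "topology": {
      "mysql1:3306": { "mode": "R/W", "status": "ONLINE" }
    }
  }
}
```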

2) Adding a node to the cluster

Now let’s connect to the second server. On the second server, instead of connecting to itself, we’ll connect to the mysql1 server, so use the same command as before to start the Shell. After that, let’s add the second instance to the cluster.

cluster = dba.getCluster('tasks') // the cluster name, remember?
cluster.addInstance('syncron@mysql2:3306') // now we use the second server URI

IF everything works as expected, you’ll see a success message as a result.

You can run status() on the cluster again to see how it looks now, but it’s still not fault tolerant. Do the same job on the third server (mysql3 in my environment) and THEN you can get the status and see the magic (or at least the status page of it).

Be aware that the cluster now IS fault tolerant, BUT it can tolerate only one failure. Take a closer look and you’ll see that we have just one node with R/W access; the other two are R/O… and that’s how it works (in this case, at least). How do we handle connections now that we have three different servers running? Easy… use MySQL Router… Now to the fourth server.

3) The Router

That’s the easy step: just run .\mysqlrouter.exe --bootstrap syncron@mysql1:3306 and the config file will be generated for you. That’s just the default configuration; take a look at the manual to achieve better results. The router will give you two ports to handle connections, one with R/W access and the other with R/O. It’ll handle the “reconnect” for your application if needed.
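For reference, the bootstrap writes a mysqlrouter.conf along these lines (trimmed sketch; 6446 and 6447 are the default classic-protocol ports for R/W and R/O):

```ini
[routing:tasks_rw]
bind_port = 6446
destinations = metadata-cache://tasks/?role=PRIMARY
routing_strategy = first-available

[routing:tasks_ro]
bind_port = 6447
destinations = metadata-cache://tasks/?role=SECONDARY
routing_strategy = round-robin
```

Point your application’s writes at 6446 and reads at 6447, and the router keeps track of which node is the primary.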

Easy, huh? Go ahead and shut down one instance and look at the cluster status again. Now, two extra tips:

  • If you get the following error when trying to configure or join a cluster, you need to check the user’s grants. Here we granted almost everything, but you can trim the permissions to achieve a better (and more secure) setup.

Access denied; you need SYSTEM_VARIABLES_ADMIN and PERSIST_RO_VARIABLES_ADMIN privileges for this operation (MySQL Error 3630)
  • If you plan to upgrade your servers, do it with caution. You MUST do the R/W node at the END. Before shutting a server down, run set persist group_replication_start_on_boot=0; set it back to 1 after the upgrade, and then restart it again.

Cheers!

Managing database changes with Sqitch

Nowadays everybody talks about continuous integration, automated test tools, and stuff like that, but it’s almost always about testing applications. Let’s talk a little about Sqitch, “a database change manager,” as they like to call themselves.

Let’s say you’re starting a new project and want to ensure every database change is managed and backed by tests. Just like with an app, you can write tests to ensure everything works, and you can have a “revert” step for each change too. Suppose you’re changing a field’s type and start by creating a backup column (a fallback ;)), but something goes wrong in the meantime and, for some reason, you can’t do a simple rollback… revert it and be happy.

What if your CI says the application is fine and can be released to the production server, but “a wild bug appears” and you NEED to revert to a previous version of your service? Easy, huh? With Docker and one command line, the service is up and running again. But what about the database changes your team made? Just call the revert and be happy :).

Have I mentioned that you can have multiple connection URIs in the project and apply the changes to different environments with a single command?! Let’s see a few commands to create a simple MySQL database…

Create a folder, initialize Git in it, and initialize the Sqitch project.

mkdir awesomeapp
cd awesomeapp
git init .
sqitch init awesomeapp --uri https://github.com/theory/sqitch-mysql-intro/ --engine mysql

Now you have a ready-to-go Sqitch structure. Let’s do some basic creates.

sqitch add createcustomeruser -n 'Creation of Customer table'

Add the create table script to the file inside the “deploy” dir.

CREATE TABLE customer (
id INTEGER PRIMARY KEY,
name VARCHAR(50)
);

Add the verify script (it can be a simple SELECT) and the revert script (guess what the command is). After that, you can run the commands below and see the result.
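As a sketch, the verify and revert scripts for this change could be as simple as this (the `WHERE 0` trick makes the verify fail if the table or columns are missing, without returning any rows):

```sql
-- verify/createcustomeruser.sql: errors out if the table is missing
SELECT id, name FROM customer WHERE 0;

-- revert/createcustomeruser.sql: undo the deploy
DROP TABLE customer;
```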

sqitch deploy db:mysql://root@/awesomeapp
sqitch verify db:mysql://root@/awesomeapp
sqitch revert db:mysql://root@/awesomeapp

This was a quick explanation of Sqitch, so go ahead and read more in their docs and tutorials; they’re pretty good.

RabbitMQ on a Micro-service Architecture

Microservices aren’t a buzzword right now, at least not as much as before… but we can still say a few things about them, and that’s what I’ll talk about today.

When we talk about microservices and how they work and scale, we think about how the services will exchange data between them. We have a few options: we can send the data to a reverse proxy that handles the load balancing for us, for example. There’s also the option of sending it directly to a Docker service name and letting Swarm handle it. We can use Kubernetes and the service discovery it provides (or use Consul for that).

All these options are fine, but what if we need a huge task to be done, and the service dealing with that job crashes? Did we lose the job? Do we need to manage that crash in the communication protocol ourselves? How’s the load on the services? Are they handling the requests, or are they slowing down? Too many questions, right?

The point here is that even a simple and fast service can fail/crash and throw an error at our user or at the consuming service, and that’s bad. Here’s where RabbitMQ comes in! It’ll handle the requests between our services and ensure every single request gets an answer.

RabbitMQ handles queues and delivers messages to the right consumer. Once a job is “delivered” to a consumer, there are two main options:
1) Automatic acknowledgment
2) The consumer sends the acknowledgment.

The second one is the better way to signal that a job was completed. After the job, the consumer may send an answer back to the producer with some data.
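To make the manual-acknowledge idea concrete, here’s a minimal in-memory sketch in plain Python. This is not the RabbitMQ client API, just a toy class with made-up names that imitates the semantics: a delivered message stays “unacked” until the consumer confirms it, so a crashed consumer’s message can be put back in the queue instead of being lost.

```python
from collections import deque

class TinyQueue:
    """Toy queue imitating RabbitMQ's manual-acknowledge semantics."""

    def __init__(self):
        self._ready = deque()   # messages waiting for a consumer
        self._unacked = {}      # delivery tag -> message, awaiting ack
        self._next_tag = 0

    def publish(self, message):
        self._ready.append(message)

    def deliver(self):
        """Hand a message to a consumer; it stays unacked until ack()."""
        message = self._ready.popleft()
        self._next_tag += 1
        self._unacked[self._next_tag] = message
        return self._next_tag, message

    def ack(self, tag):
        """Consumer finished the job: drop the message for good."""
        del self._unacked[tag]

    def requeue_unacked(self):
        """Consumer crashed: put its unacked messages back for redelivery."""
        for message in self._unacked.values():
            self._ready.appendleft(message)
        self._unacked.clear()

q = TinyQueue()
q.publish("resize-image-42")
tag, msg = q.deliver()
# consumer crashes before calling ack() -> the job is NOT lost
q.requeue_unacked()
tag2, msg2 = q.deliver()
assert msg2 == "resize-image-42"  # same job, redelivered
```

With automatic acknowledgment the message would be dropped the moment it was delivered, so the crash would lose the job; manual ack is what makes the redelivery possible.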

Wanna know the load average of the services?
Every service has its own queue, and RabbitMQ can show you how every single queue is doing. With that, you can see whether you need more consumers for a service or can scale any of them down.

Do you have another approach to solve those questions? Let me know 😉

Change display settings on linux with Disper

From time to time, I change the Linux distro on my laptop or just do a fresh install on it. And once in a while, I have random problems with external displays. It can be something really “simple,” like not detecting the external monitor, or something crazy like the image below.

screenshot of the bug

As you can see in the image, Mint detected the display but mirrored it in a crazy way that works and doesn’t at the same time. If you try everything and it still doesn’t work, or you just want to skip the whole job of configuring complex text files, give Disper a try. Download the latest version, extract it to any folder, and run make install in the extracted folder.

make install

After that, you can start using it… There are a few options that will serve you well.

disper -e # extend your display
disper -c # clone your display
disper -s # only your external display


Embedded server for Java Web Apps and a new author

Hi, my name is Maurício, and this is my first post here. I’m a Senior Software Engineer, and I live in Brazil. I have six years of experience with web development and love Java and JavaScript. I’ll be posting development stuff here from time to time. Hope you enjoy it!

Our subject for today: once in a while, I do a small Java web project, and every time I wonder which application server will host the app: Tomcat, GlassFish, WebLogic, WildFly, Payara, and plenty more. Nowadays almost everybody uses a Docker container to host apps, and there’s nothing wrong with that, but why load a container if I can just embed the server in a jar file?

If you have a Maven project, there’s almost no work to be done; just add a plugin that targets the package goal and there you go. In the end, you will have a jar file named app-name-swarm.jar (I know, it’s WildFly Swarm, not a full WildFly, but it weighs just 64MB). Run it like a normal jar and you will have a running web app.

<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <version>2018.2.0</version>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>

After all that, if you want, the jar can be loaded into a Docker container and launched on a production server. There are other options, like embedded Tomcat, Payara Micro, and a few others. Go ahead and give any of them a try and share your experience; I’m sure you’ll have something to say about them.

But remember, these options aren’t a full application server; they’re engineered to be small and focused on specific features, so read their docs to see what they can and can’t do before using them. 😉