Microservices aren’t the buzzword they used to be… But there’s still plenty to say about them, and that’s what I’ll talk about today.
When we talk about microservices and how they work and scale, we have to think about how the services will exchange data. There are a few options: we can send the data to a reverse proxy that handles load balancing for us, for example. We can also send it directly to a Docker service name and let Swarm handle it, or use Kubernetes and the service discovery it provides (or use Consul for that).
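To make the load-balancing idea concrete, here’s a toy sketch of the round-robin routing a reverse proxy (or Swarm’s ingress) typically does. The replica names and ports are made up for illustration, not part of any real setup:

```python
from itertools import cycle

# Hypothetical replicas of one service; a reverse proxy spreads
# incoming requests across them in turn.
replicas = ["app-1:8000", "app-2:8000", "app-3:8000"]
next_replica = cycle(replicas)

def route(request_id: int) -> str:
    """Pick the next replica round-robin, like a basic reverse proxy."""
    target = next(next_replica)
    return f"request {request_id} -> {target}"

for i in range(4):
    print(route(i))  # wraps around to app-1 on the fourth request
```

Real proxies add health checks and weighting on top, but the core idea is just this rotation.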
All these options are fine, but what if we have a huge task to run and the service handling that job crashes? Did we lose the job? Do we have to handle that crash in the communication protocol ourselves? How’s the load on the services? Are they keeping up with requests, or are they slowing down? Too many questions, right?
The point here is that even a simple, fast service can fail or crash and throw an error at our user or at the consuming service, and that’s bad. Here’s where RabbitMQ comes in! It handles the requests between our services and ensures every single request gets an answer.
Rabbit manages queues and delivers messages to the right consumer. Once a job is “delivered” to a consumer, there are two main acknowledgement options:
1) Automatic acknowledgement
2) The consumer sends the acknowledgement.
The second one is the better way to confirm that a job was actually completed. After finishing the job, the consumer may send an answer back to the producer with some data.
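Client APIs vary by language, so here’s a toy in-memory sketch of why manual acknowledgement matters: a job whose consumer crashes before acking goes back on the queue instead of being lost. All the names here are made up for illustration, not a real RabbitMQ client API:

```python
from collections import deque

class ToyQueue:
    """In-memory stand-in for a broker queue with manual acks."""
    def __init__(self):
        self.ready = deque()   # messages waiting for a consumer
        self.unacked = {}      # delivery tag -> message, in flight
        self._tag = 0

    def publish(self, message):
        self.ready.append(message)

    def deliver(self):
        """Hand the next message to a consumer; it stays 'unacked'."""
        self._tag += 1
        self.unacked[self._tag] = self.ready.popleft()
        return self._tag, self.unacked[self._tag]

    def ack(self, tag):
        """Consumer finished the job: the broker can forget it."""
        del self.unacked[tag]

    def requeue(self, tag):
        """Consumer died before acking: put the job back on the queue."""
        self.ready.appendleft(self.unacked.pop(tag))

q = ToyQueue()
q.publish("resize-video-42")

tag, job = q.deliver()   # a consumer picks up the job...
q.requeue(tag)           # ...but crashes before acking: job survives

tag, job = q.deliver()   # a healthy consumer takes it again
q.ack(tag)               # job done, broker forgets it
print(job, "done; queue empty:", not q.ready and not q.unacked)
```

With automatic acknowledgement, the message would have been considered done at the first `deliver()`, and the crash would have lost it — which is exactly the problem from the questions above.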
Wanna know the load average of your services?
Every service has its own queue, and Rabbit can show you how every single queue is doing. With that, you can see whether you need more consumers for a service or can scale some of them down.
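That scale-up/scale-down decision can be sketched as a simple rule driven by queue depth. The thresholds below are arbitrary numbers I picked for illustration, not RabbitMQ defaults:

```python
def desired_consumers(queue_depth: int, current: int,
                      high_water: int = 100, low_water: int = 10) -> int:
    """Toy autoscaling rule: add a consumer when a queue backs up,
    remove one when it's nearly drained. Thresholds are arbitrary."""
    if queue_depth > high_water:
        return current + 1
    if queue_depth < low_water and current > 1:
        return current - 1
    return current

print(desired_consumers(250, current=2))  # backlog growing: scale up
print(desired_consumers(3, current=2))    # queue almost empty: scale down
```

Feed it the per-queue message counts Rabbit exposes and you have a rough basis for scaling decisions.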
Do you have another approach to these questions? Let me know 😉