HAProxy Load Balancing Docker Swarm


Docker Swarm enables us to easily scale our servers up and down with containers, but how do we take advantage of all of those containers? Ideally, we want to spread the load across them. With HAProxy, this becomes possible.

In this article, we are going to look at how we can use HAProxy to load balance across our containers. We are going to use docker-compose version ‘3.3’ and the docker stack command to deploy our containers. If you don’t yet know what Docker is, I recommend checking out my Docker introduction before you proceed with this article.

I created a small web application running on port 12000 for this article and uploaded it to the public Docker Hub. You are free to use any image instead, but you will have to adjust the port configuration accordingly.


The docker-compose version that we will use is ‘3.3’, so according to the Docker compatibility matrix, Docker Engine 17.06.0+ is required. I am running Docker version 17.09.1-ce and recommend running this or a later version. You can probably run an earlier version, but that might require you to edit the docker-compose file slightly for compatibility reasons.
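You can quickly confirm which engine version you are running from the terminal:

```shell
# Print the installed Docker version; it should report 17.06.0 or later
docker --version
```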

Why do we need HAProxy in Swarm mode?

Docker Swarm actually has built-in load balancing in swarm mode with replicas. However, it uses DNS round-robin, which is why HAProxy is usually suggested as a better option. With DNS round-robin, if a client attempts to connect to a failed Docker host, it has to retry against another DNS entry. HAProxy, by contrast, health-checks its backends and stops routing to a failed host right away (and can itself be made highly available with VRRP), so there is no such delay. Additionally, HAProxy lets us set up more advanced rules for routing traffic.

HAProxy docker-compose

Let’s start out by creating our docker-compose.yml file.
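A sketch of what such a file can look like is shown below. The app image name is a placeholder for whatever image you pushed to Docker Hub, and the environment variables follow the conventions of the dockercloud/haproxy image; adjust ports and replica counts to your setup.

```yaml
version: '3.3'

services:
  app:
    # Placeholder: substitute the image you pushed to Docker Hub
    image: yourname/demo-app
    environment:
      # Tell dockercloud/haproxy which port the app listens on
      - SERVICE_PORTS=12000
    deploy:
      # Run three copies of the app for HAProxy to balance across
      replicas: 3
    networks:
      - web

  proxy:
    image: dockercloud/haproxy
    environment:
      # Distribute requests evenly across the app replicas
      - BALANCE=roundrobin
    volumes:
      # Lets HAProxy discover swarm services via the Docker API
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      # Expose HAProxy on port 12001 of the host
      - "12001:80"
    networks:
      - web
    deploy:
      placement:
        # dockercloud/haproxy needs access to a manager node's Docker socket
        constraints: [node.role == manager]

networks:
  web:
    driver: overlay
```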

We are using the HAProxy image provided by Docker Cloud, since it fits our use case very well, and I wrote some comments in the docker-compose file to explain the configuration.

Starting our docker swarm

Open a terminal and navigate to your docker-compose.yml file. Start out by initializing a swarm and deploying our stack.
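The two steps above look like this:

```shell
# Initialize a single-node swarm (skip this if the node is already a swarm manager)
docker swarm init

# Deploy the stack defined in docker-compose.yml under the name "haproxy"
docker stack deploy --compose-file docker-compose.yml haproxy
```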

This deploys our stack with the name haproxy. Let it boot up for a couple of seconds, then issue the command docker ps. You should now see three app services and the HAProxy running. curl -X GET localhost:12001 should give you a response similar to:
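The exact fields depend on the demo app you are running, but the response should contain a name field along these lines:

```json
{"name": "1a2b3c4d5e6f"}
```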

The name object is the unique identifier of the app service, and if you keep sending the same request you should see a different ID show up each time. If that works, you have successfully set up load balancing to your containers using HAProxy.

HAProxy load balancing to 3 apps

Thanks to Docker Swarm, it is now incredibly easy to scale up and down, and we no longer need to worry about the load balancing, since HAProxy will react to new nodes being added or nodes disappearing.

To scale up or down, you can issue the following command: docker service scale haproxy_app=5, which, as an example, scales the app service to 5 replicas.
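Scaling and verifying the result looks like this (the service name haproxy_app assumes the stack was deployed as haproxy with an app service):

```shell
# Scale the app service to 5 replicas
docker service scale haproxy_app=5

# Verify the new replica count for each service in the stack
docker service ls
```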

Final words

We have learned how we can use HAProxy in Docker Swarm with docker-compose to easily scale our services up and down and have them load balanced. This is, of course, quite a simple example, and there is a lot more cool stuff that you can do with both Docker and HAProxy. But once you have it up and running, it is a lot easier to experiment by yourself.
