HAProxy Basic Configuration
In this article I’ll briefly go over the haproxy configuration file, then we will build a simple TCP mode configuration, followed by an HTTP configuration.
I’ll be using two virtual machines throughout this tutorial, the specs are as follows:
- Hostname: loadbalancer-01, OS: CentOS 7.3.1611, IP: 192.168.1.181, SELinux: Permissive
- Hostname: webserver-01, OS: CentOS 7.3.1611, IP: 192.168.1.180, SELinux: Permissive
The webserver is running nginx with the default config, listening on port 80. This is pretty simple to set up, so I won’t cover it here.
To start with we will install haproxy using yum; it’s in the default CentOS 7 repositories, so unless you’ve disabled them you can just run:
yum install -y haproxy
The install process adds quite a few pieces: the systemd unit file, the /etc/haproxy directory, the executable in /usr/sbin, a logrotate file and the required libraries. We only need to change the configuration file, /etc/haproxy/haproxy.cfg. To start with it’s worth glancing at the default config to see some of the options available; afterwards we will wipe the file with:
> /etc/haproxy/haproxy.cfg
The configuration file has the following sections:
- global – contains global settings
- defaults – contains the default values inherited by the proxy blocks below
The proxy sections sit below global/defaults and contain the configuration for your virtual proxies. They are grouped into:
- frontend – this section is where you define a frontend, such as which IP address/port to listen on and which backend to send traffic to
- backend – defines pools of backend servers and how traffic is split between them
- listen – a combination of a frontend and a backend in a single block, using the same syntax; they are good for keeping configuration files simple
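To make the layout concrete, here is a minimal sketch of how those sections fit together. The names (http-in, webservers, web1) and the address are placeholders for illustration, not part of this tutorial’s setup:

```
global
    # process-wide settings (daemon mode, connection limits, logging)
    daemon

defaults
    # values inherited by every frontend/backend/listen below
    timeout connect 5s

frontend http-in
    # where to listen, and which backend receives the traffic
    bind :80
    default_backend webservers

backend webservers
    # the pool of real servers and how traffic is balanced between them
    balance roundrobin
    server web1 203.0.113.10:80
```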
Although you can do many things with haproxy, we will start with a simple, functional configuration that splits load evenly between two servers. Here is the config we will use:
global
    daemon
    maxconn 256

defaults
    timeout connect 5000s
    timeout client 50000s
    timeout server 50000s

frontend in
    bind :80
    default_backend out

backend out
    mode tcp
    balance roundrobin
    # note: in this two-VM lab 192.168.1.181 is loadbalancer-01 itself;
    # point ws-01 at a second webserver's IP on your own network
    server ws-01 192.168.1.181:80 weight 1
    server ws-02 192.168.1.180:80 weight 1

listen stats
    bind :8080
    mode http
    stats enable
    stats show-node
    stats uri /haproxy?stats
This config file is fairly self-explanatory: we specify that the software should run as a daemon when started (i.e. a service), accept a maximum of 256 connections, time out clients and servers after the given intervals, and split traffic evenly between the two backend servers.
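The ‘balance roundrobin’ behaviour is easy to picture: with equal weights, haproxy simply cycles through the servers in order, handing each new connection to the next one. A rough sketch in Python (server names taken from the config above; this is an illustration of the scheduling idea, not how haproxy is implemented):

```python
from itertools import cycle

# Pool mirroring the 'backend out' block; with equal weights,
# round-robin hands each new connection to the next server in turn.
servers = ["ws-01", "ws-02"]
rotation = cycle(servers)

# Four incoming connections get alternated across the pool.
picks = [next(rotation) for _ in range(4)]
print(picks)  # ['ws-01', 'ws-02', 'ws-01', 'ws-02']
```

Unequal weights skew this rotation, so a server with weight 2 would receive two connections for every one its peer gets.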
The ‘listen stats’ block specifies the location of the HTTP stats page that comes with haproxy; this allows us to get a detailed view of what the haproxy process is actually doing. In order to view it, we need to start the service (we might as well enable it too):
systemctl start haproxy && systemctl enable haproxy && systemctl status haproxy
Hopefully this will output ‘active (running)’. Now we can see if our service is working: open the IP address you have bound to the haproxy server in a web browser and you should be presented with the page of one of your backend webservers.
If this didn’t work, it’s probably down to your firewall/SELinux configuration. You can also check from the command line by running:
curl localhost
Note that it’s returning a 403; that doesn’t matter, as long as it returns something we can continue. Run the command multiple times to ensure your connections are being split between the backends correctly.
Lastly we should check that we can reach the haproxy stats page. Navigate to http://192.168.1.181:8080/haproxy?stats (the load balancer’s address) and you should be greeted with the stats page.
If that worked, great: try to familiarise yourself with the stats screen; it’s an incredibly useful tool for troubleshooting problems at a glance. I’ll go much more in depth on haproxy’s logging and stats in another article. If you’ve had any problems, the section below should help you resolve them.
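One caveat: as configured, the stats page is visible to anyone who can reach port 8080. If that bothers you, haproxy supports HTTP basic auth on the stats page via the ‘stats auth’ directive; a sketch of the listen block with it added (the credentials here are placeholders you should change):

```
listen stats
    bind :8080
    mode http
    stats enable
    stats show-node
    stats uri /haproxy?stats
    # basic-auth protection; pick your own username and password
    stats auth admin:changeme
```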
If you’re having problems connecting to your frontend you should first ensure that the haproxy service is starting correctly. Running
systemctl status haproxy -l
will give you the output of the process when it started; it’s usually pretty clear what the problem is, and haproxy will tell you which part of the configuration file is wrong. You can also validate the configuration directly with
haproxy -c -f /etc/haproxy/haproxy.cfg
and check the systemd journal using
journalctl -xe
which can give some more verbose feedback.
Most often you’ll have problems either because firewalld is blocking the traffic (this is a dev environment, so just run
systemctl stop firewalld && systemctl disable firewalld
) or because SELinux is blocking the process from binding ports or accessing files it needs. To rule SELinux out, run
setenforce 0
which puts it into permissive mode until the next reboot; run setenforce 1 to return to enforcing mode. On CentOS 7 you can usually also let haproxy reach arbitrary backend ports without disabling SELinux by setting the boolean
setsebool -P haproxy_connect_any 1
If you’re having trouble connecting to the backend you can use something like tcpdump to see whether the traffic is actually coming through. It’s not installed by default, so install it with yum and run:
tcpdump tcp port 80
Any TCP traffic coming into the server on port 80 will be displayed on stdout; if you open the webpage now you should see some TCP traffic coming through.
You can also use the same method to verify that the proxy is getting traffic to the backend, by logging onto one of the webservers and running the same tcpdump command shown above.