Monday, April 6, 2015

Introduction to Load Balancing with HAProxy

HAProxy, or High Availability Proxy, is an open source load balancer for TCP and HTTP services that lets you route incoming traffic destined for one address to a number of different backends. It's well suited to HTTP load balancing because it supports session persistence and layer 7[1] processing. Session persistence, also known as sticky sessions, can be achieved by inserting a cookie, pinning each client to the same backend so its session data stays valid. If you're looking for a load balancing solution that's managed for you, Rackspace provides Cloud Load Balancers and AWS provides Elastic Load Balancing (ELB).
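To give a flavor of how little configuration sticky sessions require, here is a sketch of cookie-based persistence. This fragment is illustrative only and not part of this guide's setup; the backend name and server addresses are placeholders:

```
backend app
    balance roundrobin
    # insert a SERVERID cookie so each client sticks to one backend
    cookie SERVERID insert indirect nocache
    server app1 10.0.0.1:80 check cookie app1
    server app2 10.0.0.2:80 check cookie app2
```

HAProxy sets the SERVERID cookie on the first response, and subsequent requests carrying that cookie are routed to the same server.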

Prerequisites: You'll want to get familiar with Vagrant first. Also check out this article on Cloud System Architecture Examples. This guide was written for Mac users.

This guide is going to show you how to set up a simple load balancing solution: one load balancer routing requests to two web servers with nginx installed. This will be the predecessor to our two-load-balancer-with-heartbeat solution, so please check back for that article in the future. For now, the architecture is a single HAProxy instance sitting in front of two nginx web servers:

Keep in mind this is going to be a mostly manual process, which I hope will give you a fundamental understanding of this system. In addition to more complex load balancing solutions, we'll also be following up this guide with a SaltStack-powered, automated solution. Let's begin.

Setting Up the Virtual Machines

# create the directory
mkdir -p ~/vagrants/loadbalancer && cd ~/vagrants/loadbalancer

# initialize your Vagrantfile
vagrant init ubuntu/trusty64

Now replace the contents of your Vagrantfile with the following:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.provision "shell", inline: "apt-get update"

  config.vm.define "lb" do |lb|
    lb.vm.box = "ubuntu/trusty64"
    lb.vm.network "private_network", ip: "192.168.50.2"
  end
  config.vm.define "webserver1" do |webserver1|
    webserver1.vm.box = "ubuntu/trusty64"
    webserver1.vm.network "private_network", ip: "192.168.50.3"
  end
  config.vm.define "webserver2" do |webserver2|
    webserver2.vm.box = "ubuntu/trusty64"
    webserver2.vm.network "private_network", ip: "192.168.50.4"
  end
end

All we're really doing here is setting up 3 Ubuntu 14.04 VMs with defined private IPs. Every step from here on will be manual. When you're ready, run vagrant up.

Setting Up the Web Servers

Now let's install nginx on both of our webserver instances. Perform the following steps for webserver1:

vagrant ssh webserver1

sudo apt-get install nginx

# edit the default index file
sudo vi /usr/share/nginx/html/index.html

Change all occurrences of "Welcome to nginx!" to "Welcome to Webserver1" in the markup. This will allow us to distinguish between the two web servers when we request them through the load balancer. Now exit out of that VM entirely and repeat for webserver2, using "Welcome to Webserver2" instead. After both web server VMs have been configured, you'll next want to set up your load balancer.
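If you'd rather script the edit than open vi, sed can do the substitution in one line. The real command is shown in a comment (it needs sudo on the VM); below it, the identical substitution is demonstrated on a throwaway copy so you can see the effect:

```shell
# on webserver1 itself you would run:
#   sudo sed -i 's/Welcome to nginx!/Welcome to Webserver1/g' /usr/share/nginx/html/index.html

# the same substitution, shown on a scratch file:
printf '<h1>Welcome to nginx!</h1>\n' > /tmp/index.html
sed -i 's/Welcome to nginx!/Welcome to Webserver1/g' /tmp/index.html
cat /tmp/index.html   # <h1>Welcome to Webserver1</h1>
```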

Setting Up the Load Balancer

Installation

vagrant ssh lb
sudo apt-get install haproxy

Configuration

sudo vi /etc/default/haproxy

# set ENABLED to 1
ENABLED=1
sudo mv /etc/haproxy/haproxy.cfg{,.original}
sudo vi /etc/haproxy/haproxy.cfg
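Before filling in the new configuration, a quick aside on the mv command above: the {,.original} suffix is bash brace expansion, a compact way to back up a file before overwriting it. A scratch-file demonstration:

```shell
# "file{,.original}" expands to "file file.original", so the mv is a backup rename
touch /tmp/haproxy-demo.cfg
mv /tmp/haproxy-demo.cfg{,.original}   # same as: mv /tmp/haproxy-demo.cfg /tmp/haproxy-demo.cfg.original
ls /tmp/haproxy-demo.cfg.original
```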

# Add the following content
global
    log 127.0.0.1 local0 notice
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000

listen website 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth user1:password
    stats auth user2:password
    balance roundrobin
    option httpclose
    option forwardfor
    server webserver1 192.168.50.3:80 check
    server webserver2 192.168.50.4:80 check

If you'd like a breakdown of all of the above directives, please see the official documentation.

Testing Load Balancing and Failover

# start haproxy
sudo service haproxy start

# send a request
curl http://0.0.0.0

Now if you run that last curl http://0.0.0.0 multiple times, the markup that gets returned will sometimes contain "Welcome to Webserver1" and other times "Welcome to Webserver2". That means your load balancer is working. Now run vagrant ssh webserver1 and stop nginx with sudo service nginx stop and curl again from the load balancer server. The load balancer will now always hit webserver2.
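Rather than re-running curl by hand, a small loop makes the round-robin rotation easy to see. This is a sketch to run from inside the lb VM, assuming the setup above; grep -o keeps only the greeting, and || true keeps the loop going even when a backend is down:

```shell
# ten requests through the balancer; each line shows which backend answered
for i in $(seq 1 10); do
  curl -s http://0.0.0.0 | grep -o 'Welcome to Webserver[12]' || true
done
```

With both backends up you should see the two greetings alternate; with nginx stopped on webserver1, every line reads "Welcome to Webserver2".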

References:


1. ^ Wikipedia (26 June 2015). "OSI Model Application Layer"
