Configuring the HAProxy load balancer on the dediserve cloud

HAProxy offers a flexible, powerful load balancing solution for clients looking for high availability and failover between multiple web or application servers.

With this quick and easy guide, you can be up and running in minutes!

First of all, you'll need a server deployed in the same cloud as your web servers. We recommend the CentOS 6.4 x64 template with 2GB or 4GB RAM as the ideal starter load balancer, depending on your traffic. Like all dediserve servers, you can easily and quickly scale on demand if you need more capacity!

Installing HAProxy

For most distributions you can install the haproxy package using your distribution's package manager.
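
For example, on a Debian or Ubuntu server the equivalent command (assuming the haproxy package is available in your configured repositories) would be:

apt-get -y install haproxy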

CentOS 6

Run the commands:

[root@LB01 ~]# yum -y install haproxy

Install a base config

Once installed, back up the HAProxy config file, download the dediserve default config, and set HAProxy to start at boot:

[root@LB01 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
[root@LB01 ~]# wget http://dediserve.com/haproxy/haproxy.cfg -O /etc/haproxy/haproxy.cfg
[root@LB01 ~]# chkconfig haproxy on
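
To confirm which version was installed (the CentOS 6 base repositories typically ship the 1.4 branch), run:

[root@LB01 ~]# haproxy -v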

Configuring HAProxy

Editing /etc/haproxy/haproxy.cfg - There are a number of items that need to be changed in order to get HAProxy functional; these are outlined below. Keep in mind you need to edit these values to reflect your own servers' IPs.

First and foremost, change

listen webfarm 0.0.0.0:80

to

listen webfarm 127.0.0.1:80

Edit 127.0.0.1 to reflect your load balancer's public IP.
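
For example, if your load balancer's public IP were 203.0.113.10 (a documentation placeholder, not a real address), the line would read:

listen webfarm 203.0.113.10:80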

Now you can add your web servers. In the following, replace each 10.0.0.x IP address with that web server's private IP address.

   server web1 10.0.0.1:80 check # Active in rotation
   server web2 10.0.0.2:80 check # Active in rotation
   server web3 10.0.0.3:80 check # Active in rotation
   server backup1 10.0.0.4:80 check backup # Not active "sorry server" - this one comes live if all web heads are down

Above is an example of what a four-server config would look like. Once you have completed this portion you can then start HAProxy and begin serving pages (assuming your web servers are ready):

[root@LB01 ~]# service haproxy start
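
If HAProxy fails to start, the quickest way to find a mistake is to validate the configuration syntax first; the -c flag checks the file without actually starting the service:

[root@LB01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg

Once HAProxy is running, you can confirm it is bound to port 80 with netstat -tlnp | grep haproxy.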

Below is the default configuration template for haproxy.cfg:

   #global options
   global
      
        #logging is designed to work with syslog facilities due to the chrooted environment
       #log loghost    local0 info - By default this is commented out
      
       #chroot directory
       chroot /usr/share/haproxy
      
       #user/group id
       uid 99
       gid 99
      
       #running mode
       daemon
   defaults
      
       #HTTP Log format
       mode http
       #number of connection retries for the session
       retries 3
      
       #try another webhead if retry fails
       option redispatch
       #session settings - max connections, and session timeout values
       maxconn 10000
       contimeout 10000
       clitimeout 50000
       srvtimeout 50000
   #Define your farm
   #listen webfarm 0.0.0.0:80 - Pass only HTTP traffic and bind to port 80
   listen webfarm 0.0.0.0:80
      
       #HTTP Log format
       mode http
        #stats uri /haproxy - results in http://<load balancer IP>/haproxy (shows load balancer stats)
       stats uri /haproxy
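        #stats auth admin:YourPassword - optionally protect the stats page with HTTP
        #basic auth (admin:YourPassword is a placeholder; choose your own credentials)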
       #balance roundrobin - Typical Round Robin
       #balance leastconn - Least Connections
       #balance static-rr - Static Round Robin - Same as round robin, but weights have no effect
       balance roundrobin
        #cookie webpool insert - inserts a cookie named webpool, used for cookie-based persistence
       cookie webpool insert
       #option httpclose - http connection closing
       option  httpclose
        #option forwardfor - best stated as "Enable insertion of the X-Forwarded-For header to requests sent to the web heads", i.e. pass along the end user's IP
       option forwardfor
          
       #Web Heads (Examples)
        #server WEB1 10.0.0.1:80 check - passes http traffic to this server and checks if it's alive
        #server WEB1 10.0.0.1:80 check port 81 - same as above but checks port 81 to see if it's alive (helps to remove servers from rotation)
        #server WEB1 10.0.0.1:80 check port 81 weight 100 - same as the above with a weight specification (weights 1-256 / higher number = higher weight)
       #server WEB1 10.0.0.1:80 check backup - defines this server as a backup for the other web heads
       #Working Example: *USE THIS HOSTNAME FORMAT*
       server WWW1 10.0.0.1:80 cookie webpool_WWW1 check port 81 # Active in rotation   
       server WWW2 10.0.0.2:80 cookie webpool_WWW2 check port 81 # Active in rotation
       server WWW3 10.0.0.3:80 check # Active in rotation
       server WWW4 10.0.0.4:80 check backup # Not active "sorry server" - this one comes live if all web heads are down
   #SSL farm example
   #listen https 0.0.0.0:443
   #    mode tcp
   #    server WEB1 10.0.0.1:443 check
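
Once HAProxy is running, a quick sanity check from any machine is to request a page through the load balancer and pull up the stats page defined by the stats uri directive (203.0.113.10 below is a documentation placeholder; substitute your load balancer's public IP):

curl -I http://203.0.113.10/
curl http://203.0.113.10/haproxy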

Session Persistence with SSL

If you wish to also balance SSL traffic, you will need to set the balance mode to "source". This setting takes a hash of the client's IP address against the number of servers in rotation, consistently sending traffic from a given IP address to the same web server. Note that persistence will be reset if the number of servers changes:

listen https 0.0.0.0:443
   mode tcp
   balance source
   server WEB1 10.0.0.1:443 check
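
To verify persistence, make repeated requests from the same client; with balance source they should consistently reach the same backend. If each web server returns a slightly different test page, for example, the response below should not change between runs (the -k flag accepts a self-signed certificate, and 203.0.113.10 is again a placeholder for your load balancer's public IP):

curl -sk https://203.0.113.10/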

