Denial of Service (DoS) attacks can be especially effective against certain types of web application. If the application is highly dynamic or database-intensive, it can be remarkably simple to degrade or cripple the functionality of a site. This blog article describes some simple methods to mitigate single-source-IP DoS attacks using HAProxy. I've described how you would implement the techniques using the Loadbalancer.org appliance, but they are easily transferable to any HAProxy-based cluster.

The most important thing when blocking brute force attacks is not to block any legitimate traffic. If you know that your site is low traffic, such as an internal application with authentication, you can set some fairly strict and specific rules about blocking brute force access, but in most scenarios for a public-facing web site you have to be fairly lenient.

We can usually make some generic assumptions for web browser traffic such as:

  1. Web browsers won't generate more than 5-7 concurrent connections per domain.
  2. Web browser users won't click more than 10 pages in 10 seconds!
  3. Genuine web users are unlikely to generate 10 application errors in 20 seconds - unless your application is broken and then you have bigger problems!

So let's construct some rules based on source IP address rate limiting. To do this with the Loadbalancer.org appliance you would create a Layer 7 Manual Configuration, ensuring that the label for the cluster "Secure_Web_Cluster" and the labels for SecureWeb1 & SecureWeb2 match your XML so that the system overview can still be used to manage the manual cluster configuration. For more information see custom configuration on page 105 of the Loadbalancer.org administration manual.

# MANUALLY DEFINE YOUR VIPS BELOW THIS LINE:
listen Secure_Web_Cluster
bind 192.168.64.29:80 transparent
mode http
timeout http-request 5s #Slowloris protection
# Allow clean known IPs to bypass the filter (files must exist)
#tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
#tcp-request content reject if { src -f /etc/haproxy/blacklist.lst }

I've commented out the blacklist and whitelist here as the files don't exist on my system. However, the whitelist would be very important if you have some customers behind a large proxy such as a corporate firewall, i.e. they all appear to come from a single IP address.
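For reference, the whitelist and blacklist files are plain text with one IP address or CIDR network per line. A whitelist might look like the sketch below (the addresses are placeholders from the RFC 5737 documentation ranges, not real customers):

```
# /etc/haproxy/whitelist.lst - known-good sources that bypass the abuse rules
203.0.113.0/24     # example: a corporate proxy network appearing as many users behind one range
198.51.100.17      # example: a single trusted gateway address
```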
Now let's move on to some actual blocking:

# Reject if this source IP has 10 or more concurrent TCP connections OR has opened 10 or more connections in the last 3 seconds
tcp-request connection reject if { src_conn_rate(Abuse) ge 10 }
tcp-request connection reject if { src_conn_cur(Abuse) ge 10 }
tcp-request connection track-sc1 src table Abuse

These rules are pretty self-explanatory: simply reject the TCP connection if this source IP already has 10 concurrent connections open, or has opened 10 connections within the last 3 seconds (the rate period is defined in the Abuse backend's stick table).

You can test this with a simple apache bench command:

To test the connection rate limit (10+ connections in 3 seconds):

ab -n 11 -c 1 http://192.168.64.29/

or to test concurrency blocking (note that ab requires -n to be at least as large as -c):

ab -n 10 -c 10 http://192.168.64.29/

So how about doing something more at the application level, such as acting on actual HTTP requests:

# ABUSE SECTION: works in http mode, keyed on source IP
tcp-request content reject if { src_get_gpc0(Abuse) gt 0 }
acl abuse src_http_req_rate(Abuse) ge 10
acl flag_abuser src_inc_gpc0(Abuse) ge 0
acl scanner src_http_err_rate(Abuse) ge 10
# Returns a 403 to the abuser and flags for tcp-reject next time
http-request deny if abuse flag_abuser
http-request deny if scanner flag_abuser

The above section is slightly more complex: the first time a user is flagged as an abuser we return a 403 error message, and for subsequent abuse we reject the TCP connection outright. This is a bit kinder to friendly robots that may be crawling the site (you may also want to whitelist Googlebot, etc.).
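As a sketch of the bot whitelisting mentioned above, you could exempt requests that claim a Googlebot User-Agent from the deny rules. This is an illustration only: User-Agent strings are trivially spoofed, so a production setup should verify crawlers by reverse DNS or published source IP ranges instead.

```
# Hypothetical ACL: matches any request whose User-Agent contains "googlebot"
acl google_bot hdr_sub(User-Agent) -i googlebot
# Only deny abusers that are not (claiming to be) Googlebot
http-request deny if abuse flag_abuser !google_bot
http-request deny if scanner flag_abuser !google_bot
```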

Because we are blocking HTTP requests here, you can test it simply by opening a browser and hitting refresh 6 or 7 times to trigger the response (each page load may generate several requests). Obviously you will need to generate errors on the web server if you wish to test the http_err_rate ACL.

Now the following section simply configures the servers in the cluster, you can choose to use cookies or stick tables for persistence as required for your environment:

errorfile 403 /usr/share/nginx/html/index.html
balance leastconn
option redispatch
option abortonclose
maxconn 40000
timeout connect 5s
timeout server 10s
timeout client 30s
server SecureWeb1 192.168.64.13:80 weight 100 check port 80 inter 2000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
server SecureWeb2 192.168.64.15:80 weight 100 check port 80 inter 2000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions

Now, because HAProxy only allows one stick table per section, we will store all our abuse-tracking information in a completely separate backend:

backend Abuse
stick-table type ip size 1m expire 30m store conn_rate(3s),conn_cur,gpc0,http_req_rate(10s),http_err_rate(20s)

The important thing to be aware of is that this table can potentially grow to 1 million entries and consume approx. 100 MB of RAM. You may also want to be fairly lenient with the amount of time a user is blocked for. I have set the expiry at an aggressive 30 minutes, but you may well find 30 seconds is enough of a deterrent for most hacker bots to move on to easier targets :-).
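As a rough sanity check on that RAM figure (assuming on the order of 100 bytes per entry once the stored counters are included; the real per-entry cost depends on the HAProxy version and the data types stored):

```shell
# Back-of-envelope sizing for the Abuse stick-table:
# "size 1m" means 1048576 entries; assume ~100 bytes per entry (rough estimate).
entries=1048576
bytes_per_entry=100
echo "$(( entries * bytes_per_entry / 1024 / 1024 )) MB"
```

Shrinking the table size or the expire time reduces both the memory footprint and how long an abuser stays blocked.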

If you want to inspect the table entries, simply use a socat command:

[root@lbmaster ~]# echo "show table Abuse" | socat unix-connect:/var/run/haproxy.stat stdio
# table: Abuse, type: ip, size:1048576, used:1
0x968c9c: key=192.168.64.12 use=0 exp=1795159 gpc0=4 conn_rate(3000)=0 conn_cur=0 http_req_rate(10000)=13 http_err_rate(20000)=4

As a side note, you should definitely try to get your DoS rules as close to the edge of your network as possible. So if you have a firewall in front of the load balancer, you can use simple Layer 4 rate-limiting rules on that instead of, or as well as, the rules in HAProxy, e.g.:

/sbin/iptables -A INPUT -p TCP --syn --dport 80 -m recent --set
/sbin/iptables -A INPUT -p TCP --syn --dport 80 -m recent --update --seconds 10 --hitcount 10 -j DROP

The above rules drop a source that makes 10 or more HTTP connection attempts in 10 seconds, by watching for the connection setup (SYN) packet; see the iptables "recent" module documentation for more information on rate limiting.
NB. One obvious gotcha is that this will only work as expected if keepalive is turned off on your web servers :-).
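If you do rely on the iptables rule, one option (a sketch, assuming the extra connection overhead is acceptable for your site) is to disable HTTP keepalive at the load balancer, so that every request costs the client a new TCP connection that the SYN-based rule can count:

```
# Hypothetical addition to the listen section above: add "Connection: close"
# so each HTTP request is carried on a fresh TCP connection.
option httpclose
```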

At Loadbalancer.org we are very cautious about making any of these defensive strategies our default behaviour, as our clients have many different security requirements. You should carefully consider the potential unintended consequences of these mitigation techniques; as everyone knows, the quickest way to DoS a web site is to install aggressive security :-).

You can download the example of our manual HAProxy configuration here.