Denial of Service (DoS) attacks can be especially effective against certain types of web application. If an application is highly dynamic or database-intensive, it can be remarkably simple to degrade or cripple the functionality of a site. This blog describes some simple methods of mitigating single-source-IP DoS attacks using HAProxy.

I've focused on how you would implement the techniques using the Loadbalancer.org appliance, but they are easily transferable to any HAProxy-based cluster.

Rules to allow good traffic

The most important thing when blocking brute-force attacks is not to block any legitimate traffic. If you know that your site receives low traffic (such as an internal application with authentication), you can set some fairly strict and specific rules to block brute-force access. But for most public-facing websites you have to be fairly lenient.

You can usually make some generic assumptions about web browser traffic, such as:

  1. Web browsers won't generate more than 5-7 concurrent connections per domain.
  2. Web browser users won't click more than 10 pages in 10 seconds!
  3. Genuine web users are unlikely to generate 10 application errors in 20 seconds - unless your application is broken, and then you have bigger problems!

So let's construct some rules based on source IP address rate limiting. In order to do this with a Loadbalancer.org appliance, create a Layer 7 Manual Configuration and check that labels for the cluster "Secure_Web_Cluster" and for SecureWeb1 & SecureWeb2 match your XML. This ensures that the system overview can still be used to manage the manual cluster configuration. (For more information, see custom configuration on page 105 of the Loadbalancer.org administration manual.)

# MANUALLY DEFINE YOUR VIPS BELOW THIS LINE:
listen Secure_Web_Cluster
bind 192.168.64.29:80 transparent
mode http
timeout http-request 5s #Slowloris protection
# Allow clean known IPs to bypass the filter (files must exist)
#tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
#tcp-request content reject if { src -f /etc/haproxy/blacklist.lst }

I've commented out the blacklist and whitelist here because the files don't exist on my system. However, the whitelist is very important if you have customers behind a large proxy, such as a corporate firewall, which makes them appear to come from a single IP address.
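If you do create the files, HAProxy expects one address or CIDR block per line in each .lst file. A minimal sketch (the addresses below are hypothetical placeholders, not recommendations):

```haproxy
# Example contents of /etc/haproxy/whitelist.lst (hypothetical addresses),
# one IP or CIDR per line:
#   10.0.0.0/8
#   203.0.113.50
# With both files in place, the two rules above can be uncommented:
tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
tcp-request content reject if { src -f /etc/haproxy/blacklist.lst }
```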

Rules to block bad traffic

Now let's move on to some actual blocking:

# Don't allow more than 10 concurrent TCP connections OR 10 connections in 3 seconds
tcp-request connection reject if { src_conn_rate(Abuse) ge 10 }
tcp-request connection reject if { src_conn_cur(Abuse) ge 10 }
tcp-request connection track-sc1 src table Abuse

These rules are pretty self-explanatory: simply reject the TCP connection if this source IP has more than 10 concurrent connections, or has opened more than 10 connections in the last 3 seconds (as defined by the conn_rate(3s) counter in the Abuse backend's stick table).

You can test this with a simple Apache Bench (ab) command.

To test > 10 requests in 3 seconds:

ab -n 11 -c 1 http://192.168.64.29/

Or to test the concurrency blocking (note that ab requires the total request count to be at least the concurrency level, so -n must be 10 or more here):

ab -n 10 -c 10 http://192.168.64.29/

You could do something more at the application level, such as actual HTTP requests:

# ABUSE SECTION: works in http mode, keyed on the source IP
tcp-request content reject if { src_get_gpc0(Abuse) gt 0 }
acl abuse src_http_req_rate(Abuse) ge 10
acl flag_abuser src_inc_gpc0(Abuse) ge 0
acl scanner src_http_err_rate(Abuse) ge 10
# Returns a 403 to the abuser and flags for tcp-reject next time
http-request deny if abuse flag_abuser
http-request deny if scanner flag_abuser

The above section is slightly more complex: the first time a client is flagged as an abuser we return a 403 error, and for any subsequent abuse we reject the TCP connection outright. This is a bit kinder to friendly robots that may be crawling the site (you may also want to whitelist Googlebot...).
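As an aside, one way such a whitelist could look is sketched below. The User-Agent match is an assumption for illustration only; the header is trivially spoofed, so verifying crawlers via reverse DNS is more robust:

```haproxy
# Place this before the http-request deny rules so matching clients skip them
acl friendly_bot hdr_sub(User-Agent) -i googlebot
http-request allow if friendly_bot
```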

Because you are blocking HTTP requests, you can test by opening a browser and hitting refresh 6 or 7 times to trigger the response (each page load typically issues several requests). Obviously you will need to generate errors on the web server if you wish to test the http_err_rate ACL.

Configuring servers

The following section demonstrates how to configure the servers in the cluster. You can choose to use cookies or stick tables for persistence as required for your environment:

errorfile 403 /usr/share/nginx/html/index.html
balance leastconn
option redispatch
option abortonclose
maxconn 40000
timeout connect 5s
timeout server 10s
timeout client 30s
server SecureWeb1 192.168.64.13:80 weight 100 check port 80 inter 2000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
server SecureWeb2 192.168.64.15:80 weight 100 check port 80 inter 2000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions

Rules to remember

Because HAProxy only allows one stick table per frontend or backend section, you should store all the abuse counters in a completely separate dummy backend:

backend Abuse
stick-table type ip size 1m expire 30m store conn_rate(3s),conn_cur,gpc0,http_req_rate(10s),http_err_rate(20s)

It's important to be aware that this table can grow to 1 million entries and consume approximately 100 MB of RAM. You may also want to be more lenient about how long a client stays blocked: I have set the expiry at an aggressive 30 minutes, but you may well find 30 seconds is enough of a deterrent for most hacker bots to move on to easier targets.
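For example, a gentler variant with a smaller table and a 30-second expiry might look like this (the 200k size is an arbitrary assumption; the counters are unchanged):

```haproxy
backend Abuse
stick-table type ip size 200k expire 30s store conn_rate(3s),conn_cur,gpc0,http_req_rate(10s),http_err_rate(20s)
```

Note that expire governs how long an idle entry (and therefore its gpc0 abuse flag) survives, so it effectively sets the block duration.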

If you want to inspect the table entries, simply query the HAProxy stats socket with socat:

[root@lbmaster ~]# echo "show table Abuse" | socat unix-connect:/var/run/haproxy.stat stdio
# table: Abuse, type: ip, size:1048576, used:1
0x968c9c: key=192.168.64.12 use=0 exp=1795159 gpc0=4 conn_rate(3000)=0 conn_cur=0 http_req_rate(10000)=13 http_err_rate(20000)=4
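If a legitimate client does get flagged, the same socket accepts a clear table command that removes its entry and unblocks it immediately (this assumes the same socket path as above):

```
echo "clear table Abuse key 192.168.64.12" | socat unix-connect:/var/run/haproxy.stat stdio
```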

Rules on the edge

As a side note, you should definitely try to get your DoS rules as close to the edge of your network as possible. This means that if you have a firewall in front of the load balancer, you can use simple layer 4 rate-limiting rules instead of (or as well as) the rules in HAProxy:

/sbin/iptables -A INPUT -p TCP --syn --dport 80 -m recent --set
/sbin/iptables -A INPUT -p TCP --syn --dport 80 -m recent --update --seconds 10 --hitcount 10 -j DROP

The above rules block any source that opens more than 10 HTTP connections in 10 seconds, by watching for the connection setup (SYN) packet. Have a look here for extra information on iptables rate limiting. Note that this will only work as expected if keepalive is turned off on your web servers, since with keepalive a single connection can carry many requests.

At Loadbalancer.org we are very cautious about making any of these defensive strategies our default behaviour, as our clients have many different security requirements. You should carefully consider the potential unintended consequences of these mitigation techniques: as everyone knows, the quickest way to DoS a web site is to install aggressive security :-).

You can download an example of our manual HAProxy configuration here.