How to rate limit with HAProxy Stick Tables and the WAF

A while ago I was asked whether it would be possible to apply some general rate limiting in HAProxy and the WAF, in order to help prevent DoS-style attacks on a customer's servers.

A large telecoms customer was having issues with hackers trying to brute force the logins for individual phone numbers. For technical reasons they were struggling to protect the network via a VPN. We solved this for them quickly with a custom set of rules to prevent the brute force logins and keep detailed logs. This worked fine and they were very happy.

However, some months later they started complaining of system slowness, with the logs showing thousands of connections. When I investigated, I found that the brute force protection rules were working fine, but they would allow through anything deemed harmless (i.e. traffic that would be seen as noise but couldn't cause a security issue). The customer was now suffering from hackers throwing hundreds of thousands of random requests at the service: a small application-level DoS attack. Not a major security issue, but definitely annoying, as it was tying up processing time on the backend servers and filling the logs.

After a brief discussion of the options, we decided to implement the rate limiting as close to the attacker as possible, in HAProxy, but to keep the monitoring and logging in the ModSecurity WAF for easy analysis and the greatest future flexibility.

With the challenge set, I went away and consulted my best friends (Google and the HAProxy manual), where I stumbled upon the fantastic blog post HAProxy Rate Limiting: 4 Examples.

This article on its own was already a fantastic resource and allowed me to achieve what I wanted, but, because I am who I am, I wanted to take this even further.

With that knowledge in hand, I set to work. I won’t bore you with the details of all of my efforts, but I will go into what I’ve implemented to achieve what I wanted here (and hopefully explain it).

ModSecurity Configuration

Firstly, I added the following ModSecurity rules to my WAF rule set:

# Start Trial Rate Limiting Rules
# -- Set variables
SecAction \
	"id:5010000,\
	phase:1,\
	pass,\
	nolog,\
	setvar:'TX.rate_limiting_threshold_burst=2000'"

SecAction \
	"id:5010005,\
	phase:1,\
	pass,\
	nolog,\
	setvar:'TX.rate_limiting_period=1 hour'"

# Rate Limiting service protection master switch: 0 = off, 1 = on (default: 0)
SecAction \
	"id:5010010,\
	phase:1,\
	pass,\
	nolog,\
	setvar:'TX.general_rate_limiting=1'"

# Test if the header has been set (only if we are performing general rate limiting)
SecRule TX:general_rate_limiting "@eq 1" \
	"id:5010015,\
	phase:1,\
	pass,\
	nolog,\
	chain"
	SecRule REQUEST_HEADERS:X-RateLimit "@eq 1" \
		"setvar:'TX.general_rate_limiting_block=1'"

# Return a 403 to HAProxy
SecRule TX:general_rate_limiting_block "@eq 1" \
	"id:5010020,\
	phase:1,\
	deny,\
	status:403,\
	nolog,\
	setvar:'TX.general_rate_limiting_alert=1'"

# Set an entry in the log
SecRule TX:general_rate_limiting_alert "@eq 1" \
	"id:5010025,\
	phase:5,\
	pass,\
	log,\
	msg:'New block applied: IP address temporarily blocked for (%{TX.block_time_burst} seconds - exceeded the allowed threshold of %{TX.rate_limiting_threshold_burst} HTTP requests in a %{TX.rate_limiting_period} period).',\
	tag:'GENERAL_RATE_LIMITING/NEW_BLOCK'"
# End Trial Rate Limiting Rules
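
For reference, these rules live in their own file (you can see the path in the WAF log output later in this post), which is then pulled into the WAF's Apache/ModSecurity configuration alongside the rest of the rule set. Something along these lines would do it; the exact include mechanism is an assumption here and will depend on how your WAF is packaged:

# Hypothetical include line in the WAF's httpd configuration
IncludeOptional /opt/httpd-waf/modsecurity.d/usr_WAF-RATE_LIMIT_TEST_rules.conf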

I won’t tell you what all of the above means as there are far better resources out there that can explain it, but I will focus on a few key points:

SecRule TX:general_rate_limiting "@eq 1" \
	"id:5010015,\
	phase:1,\
	pass,\
	nolog,\
	chain"
	SecRule REQUEST_HEADERS:X-RateLimit "@eq 1" \
		"setvar:'TX.general_rate_limiting_block=1'"

Here, the WAF looks for a specific header in the request (X-RateLimit). If the header is set to "1", the chained rule sets "TX.general_rate_limiting_block" to "1", which is then picked up by the next rule:

SecRule TX:general_rate_limiting_block "@eq 1" \
	"id:5010020,\
	phase:1,\
	deny,\
	status:403,\
	nolog,\
	setvar:'TX.general_rate_limiting_alert=1'"

This rule simply returns a 403 response to HAProxy and sets the variable "TX.general_rate_limiting_alert" to "1", which is then picked up by the final rule:

SecRule TX:general_rate_limiting_alert "@eq 1" \
	"id:5010025,\
	phase:5,\
	pass,\
	log,\
	msg:'New block applied: IP address temporarily blocked for (%{TX.block_time_burst} seconds - exceeded the allowed threshold of %{TX.rate_limiting_threshold_burst} HTTP requests in a %{TX.rate_limiting_period} period).',\
	tag:'GENERAL_RATE_LIMITING/NEW_BLOCK'"

Here we are writing an entry into the WAF logs to inform the administrator that a block has occurred. In reality, this rule is purely there for logging and auditing purposes; the real blocking is performed by HAProxy, which I will discuss next.

The log entry produced by the WAF will look similar to the one below:

[Thu Jan 06 19:28:46.229491 2022]
        [:error]
        [pid 5085:tid 140267298473728]
        [client 10.10.0.30:12702]
        [client 10.10.0.30]
        ModSecurity: Warning. Operator EQ matched 1 at TX:general_rate_limiting_alert.
        [file "/opt/httpd-waf/modsecurity.d/usr_WAF-RATE_LIMIT_TEST_rules.conf"]
        [line "1483"]
        [id "5010025"]
        [msg "New block applied: IP address temporarily blocked for (3600 seconds - exceeded the allowed threshold of 2000 HTTP requests in a 1 hour period)."]
        [tag "GENERAL_RATE_LIMITING/NEW_BLOCK"]
        [hostname "testing.script.com"]
        [uri "/testURI/1a1fe7f993a6.jpg"]
        [unique_id "YddC7n8AAAEAABPduKEAAADI"]
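
Before moving on to the HAProxy side, you can sanity-check the WAF rules in isolation by sending the X-RateLimit header by hand. A quick sketch, run from the load balancer itself, assuming (as in the HAProxy configuration below) that the ModSecurity instance is listening on the loopback address 127.0.100.1:80 and serving the testing.script.com virtual host:

# Send a request straight at the WAF with the rate limit header already set;
# expect "403" back when TX.general_rate_limiting is switched on
curl -s -o /dev/null -w '%{http_code}\n' \
     -H 'X-RateLimit: 1' \
     -H 'Host: testing.script.com' \
     http://127.0.100.1/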

HAProxy Configuration

For the less savvy among us (it took me a while to get my head around this), the way we implement the WAF is as follows (we lovingly refer to this as the WAF sandwich):

  1. We usually create a virtual service with real servers behind it (the application cluster).
  2. We then protect it with a WAF (an implementation of ModSecurity), which itself sits behind another HAProxy virtual service.

Hopefully, the diagram below better explains this layout:

Why do we use a WAF sandwich?

  1. WAFs often need scaling out in enterprise networks; the sandwich allows you to add more identical WAFs very quickly without affecting other services.
  2. If you get a large application-level DoS attack, your WAF may get overloaded; the sandwich allows you to bypass the WAF tier on failure, as sketched in the configuration below.
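
To make the sandwich a little more concrete, here is a minimal sketch of the two HAProxy layers. The front-layer addresses are taken from the real configuration further down; the back-layer names and addresses are purely illustrative:

# Front layer: client-facing VIP that forwards everything through the WAF tier
listen app-front
    bind 192.168.98.1:80
    mode http
    server waf1 127.0.100.1:80 check            # ModSecurity instance
    server bypass 192.168.98.1:65435 backup     # fail open if the WAF tier is down

# Back layer: the VIP the WAF forwards to, balancing across the application cluster
listen app-back
    bind 127.0.100.2:80
    mode http
    balance leastconn
    server app1 10.0.0.11:80 check
    server app2 10.0.0.12:80 check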

Now, on to the manual HAProxy configuration. We will apply the following:

listen WAF-RATE_LIMIT_TEST
bind 192.168.98.1:80 transparent
mode http
balance leastconn
stick on src
stick-table type ip size 100k expire 1h store http_req_rate(180s),gpc0
tcp-request connection expect-proxy layer4 if src_stunnel
server backup 192.168.98.1:65435 backup  non-stick 
timeout client 900000
timeout server 901000
option http-keep-alive
timeout http-request 5s
http-request del-header X-Forwarded-For
http-request track-sc0 src
http-request set-header X-RateLimit 1 if { sc_http_req_rate(0) gt 2000 }
http-response sc-inc-gpc0(0) if { status eq 403 }
http-request deny deny_status 429 if { sc_get_gpc0(0) eq 1 }
option forwardfor
timeout tunnel 1h
option redispatch
option abortonclose
maxconn 40000
option httplog
server WAF-RATE_LIMIT_internal 127.0.100.1:80 id 1254866153  weight 100  check  inter 10000  rise 2  fall 2  slowstart 8000 minconn 0  maxconn 0  on-marked-down shutdown-sessions

The above configuration may seem daunting but, actually, the majority of it is rather standard for an HAProxy configuration. What I would like to do is walk you through the key items.

stick-table type ip size 100k expire 1h store http_req_rate(180s),gpc0

This configuration item creates an HAProxy stick table keyed on client IP address. For each unique IP it stores the HTTP request rate over a 180s (3 minute) period, plus a general purpose counter called gpc0 (which we will use to flag blocked clients). A stick table entry will expire after 1 hour of inactivity.

Next up we have:

http-request set-header X-RateLimit 1 if { sc_http_req_rate(0) gt 2000 }

This item sets a header called "X-RateLimit" to "1" if the request rate is greater than 2000, meaning that someone has hit the service more than 2000 times in a 3 minute period, which is quite a lot (obviously you would want to tune this to suit your service). If you remember from earlier, in the WAF configuration we look for this header and return a 403 response, which leads us to the next item:

http-response sc-inc-gpc0(0) if { status eq 403 }

This increments the gpc0 counter in the stick table whenever the 403 response comes back from the WAF, which leads us to the final item:

http-request deny deny_status 429 if { sc_get_gpc0(0) eq 1 }

On any further request from that IP address, this will result in HAProxy sending a 429 (Too Many Requests) response to the user; the request is denied by HAProxy itself and never reaches the WAF or the backend servers.

If no further requests are made for 1 hour, the entry will be removed from the stick table and the user can make requests again.
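
Of course, 2000 requests in a 3 minute window was just the number that made sense for this particular service. Tuning it is simply a matter of changing the rate window in the stick-table definition and the threshold in the set-header condition together. A quick sketch with hypothetical, tighter numbers for a quieter service:

# Hypothetical tuning: measure over a 10 minute window and flag anything over 500 requests
stick-table type ip size 100k expire 1h store http_req_rate(600s),gpc0
http-request set-header X-RateLimit 1 if { sc_http_req_rate(0) gt 500 }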

Stick Table Management

Using the Loadbalancer.org web user interface, the stick table can be managed in “Reports > Layer 7 Stick Table”.

From the command line you can use the following to display the stick table:

echo "show table WAF-RATE_LIMIT_TEST" | sudo socat stdio /var/run/haproxy.stat
# table: WAF-RATE_LIMIT_TEST, type: ip, size:102400, used:1
0x1f4db68: key=192.168.65.185 use=0 exp=3272440 server_id=1254866153 gpc0=1 http_req_rate(180000)=2173

Likewise, from the command line you can use the following command to remove a specific entry from the stick table:

echo "clear table WAF-RATE_LIMIT_TEST key 192.168.65.185" | sudo socat stdio /var/run/haproxy.stat

Conclusion

And that is it. This only scratches the surface; there are many more complex things you can do in order to rate limit potential attackers using the power of HAProxy and the WAF.

Now go forth and limit connections :-)
