
The role of load balancers in zero trust architectures


Do you agree with the principles of 'zero trust'? Most probably. After all, we all want systems that are more secure and resilient.

But getting on board is the easy part. Actually applying these principles to your architecture is less black and white. ‘Zero trust’ isn't simply a checklist of fancy new networking kit to purchase. What you have is, more often than not, less important than what you actually do with it, as I'll explain.

What is zero trust architecture?

According to the National Security Agency (NSA):

“Zero Trust is a security model, a set of system design principles, and a coordinated cybersecurity and system management strategy based on an acknowledgement that threats exist both inside and outside traditional network boundaries.”

At its core, ‘zero trust’ means being as suspicious of what's inside your network as you are of what's outside it. This means abandoning the traditional notion that “everything inside our network perimeter is trusted by default”. It means verifying every request before trusting it, regardless of its origin. It means strong authentication ("are we confident we know who the request is coming from?") combined with robust authorisation ("is this user definitely allowed to do what they're asking?"). In other words, assume that every request could be malicious until proven otherwise: never trust, always verify.
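To make that concrete, here's a minimal sketch of per-request verification at the application layer, written in Python with Flask. The `verify_token()` and `is_permitted()` helpers are hypothetical stand-ins for a real identity provider and policy engine, and the token and policy values are purely illustrative:

```python
# A minimal sketch of "never trust, always verify" at the application layer.
# verify_token() and is_permitted() are hypothetical placeholders standing in
# for your real identity provider and authorisation policy engine.
from flask import Flask, request, abort

app = Flask(__name__)

def verify_token(token: str):
    """Placeholder: validate the token with your identity provider."""
    return "alice" if token == "valid-demo-token" else None

def is_permitted(principal: str, method: str, path: str) -> bool:
    """Placeholder: consult your authorisation policy (deny by default)."""
    allowed = {("alice", "GET", "/reports")}
    return (principal, method, path) in allowed

@app.before_request
def verify_every_request():
    # Authentication: are we confident we know who the request is from?
    principal = verify_token(request.headers.get("Authorization", ""))
    if principal is None:
        abort(401)  # never trusted by default, even on the internal network
    # Authorisation: is this principal definitely allowed to do this?
    if not is_permitted(principal, request.method, request.path):
        abort(403)

@app.get("/reports")
def reports():
    return {"status": "ok"}
```

The key point is that the check runs on every request, internal or external; there is no "trusted" code path that skips it.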

Zero trust is a good fit for modern systems, where it's often difficult or impossible to draw a dotted line to separate the ‘outside world’ from the ‘inside trusted network’.

But how can you have a meaningful traditional network perimeter when...

  • you have infrastructure scattered across several clouds,
  • you rely on third-party service integrations,
  • you operate across multiple physical sites, and
  • you have IoT devices in the field that are phoning home?

Zero trust = never trust by default… but surely we can trust our own servers on our own internal network…?

A relatively recent addition to the OWASP Top Ten explains why we shouldn't blindly trust even our own servers under our control. Malicious actors are increasingly focusing their attention on attack strategies such as server-side request forgery (SSRF) in an effort to trick ‘trusted’ servers into becoming unwitting accomplices to their nefarious activity.

How an SSRF attack can penetrate your internal architecture

With a server-side request forgery (SSRF) attack, a malicious actor exploits a trusted server within your infrastructure by tricking it into sending requests to URLs of the attacker's choosing. There are, of course, valid reasons why a server might make additional requests on behalf of a user. For example, if a service makes use of a third-party external API, it will need to send requests to the outside world to access that API.

Problems arise when the URL for an outbound request from the internal server is built using part of the original request sent to the server from the attacker. If an attacker can deduce that this is happening then they can craft their requests to the server to make it access any URL they wish. If the server is being blindly trusted then the malicious requests it makes on behalf of the attacker will be blindly trusted too!
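Here's a hedged sketch of what that looks like in practice, using a hypothetical `/fetch-avatar` endpoint in Python/Flask. The vulnerable pattern passes the caller's URL straight into an outbound request; the safer variant checks the target host against an explicit allow-list first:

```python
# A sketch of how SSRF creeps in, via a hypothetical /fetch-avatar endpoint.
# The vulnerable version (shown in the comment) builds an outbound request
# straight from user input; the safer version validates the host against an
# explicit allow-list before making any request.
from urllib.parse import urlparse
import requests
from flask import Flask, request, abort

app = Flask(__name__)

ALLOWED_HOSTS = {"avatars.example-cdn.com"}  # hypothetical trusted host

@app.get("/fetch-avatar")
def fetch_avatar():
    url = request.args.get("url", "")

    # VULNERABLE: calling requests.get(url) directly would happily fetch
    # http://169.254.169.254/latest/meta-data/ (the cloud instance metadata
    # service) or any internal host the attacker names in the query string.

    # Safer: parse the URL and only proceed for known-good schemes and hosts.
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
        abort(400)

    resp = requests.get(url, timeout=5)
    return resp.content, resp.status_code
```

Note that the 169.254.169.254 address in the comment is the standard cloud instance metadata endpoint, a favourite SSRF target; the Capital One breach described below reportedly involved exactly this kind of metadata-service access.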

In this way, a server can be fooled into participating in a denial of service attack by launching requests at a target of the attacker's choosing. A server can also be tricked into accessing other servers and services on the internal network, potentially revealing sensitive information that the attacker would otherwise never be able to access. And, because it's a “trusted” internal server making these requests, nobody bats an eyelid!

A famous example of this type of attack is the 2019 SSRF attack on Capital One, which affected approximately 100 million individuals in the United States and 6 million in Canada. An unauthorised actor was able to use SSRF to obtain:

  • Credit card application data
  • Customer data
  • Transaction data
  • Social Security numbers
  • Linked bank account details

The lesson here is that if you assume that communication between (micro-)services inside the network perimeter is completely trustworthy, then you're opening the door to potential SSRF attacks. Blindly trusting a service and all of its outgoing traffic may hand an attacker carte blanche to access your internal network and everything on it.

On the other hand, with a zero trust approach, you assume that all traffic (both internal and external) is untrustworthy by default. All privileges that allow services to interact with each other are explicitly defined and restricted: you disallow all access by default and then explicitly open only the network protocols and ports that are required. Local/internal requests therefore undergo the same validation checks as external requests, i.e. we assume that any request could be malicious.
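As an illustration, a default-deny flow policy can be as simple as an explicit table of permitted flows, with everything else refused. The service names, protocols, and ports below are hypothetical:

```python
# An illustrative default-deny table for internal service-to-service traffic.
# Everything is blocked unless a flow is explicitly listed; the service
# names, protocols, and ports here are hypothetical.
ALLOWED_FLOWS = {
    # (source service, destination service, protocol, port)
    ("web-store", "card-payment-api", "tcp", 8443),
    ("booking-portal", "booking-db", "tcp", 5432),
}

def is_flow_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Deny by default; permit only flows that are explicitly listed."""
    return (src, dst, proto, port) in ALLOWED_FLOWS

# The booking portal has no rule for payroll, so this is denied by default:
assert not is_flow_allowed("booking-portal", "payroll", "tcp", 443)
assert is_flow_allowed("web-store", "card-payment-api", "tcp", 8443)
```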

This concept of restricting access to the bare minimum is known as the principle of least privilege. Users, processes, and applications should be granted only the minimal set of permissions that they strictly require in order to function correctly and complete their assigned tasks. This dramatically limits the scope of damage from compromised systems and accounts.

It makes perfect sense, when you think about it: why allow a service or user to access things that there's no legitimate need for them to have access to? For example, should the cafeteria cook have keys to the HR office? Equally, does it sound wise for an appointment booking portal to have access to the payroll system? That's exactly the kind of vulnerability that a bad actor would exploit to make off with valuable, confidential information.

What are the implications of zero trust for load balancing?

Let me start by saying, you should not use our load balancer as a perimeter device. I'll say that again: our load balancers should NOT be used in lieu of a proper network firewall at the edge of your network. Firewalls exist for a reason and you should have one as your first line of defense, not a load balancer!

Do I say this because our ADCs are somehow inferior to our competitors'? No, absolutely not. We believe that a load balancer should be used as a load balancer, not as a first line of defense, so we always recommend that our customers' load balancers sit behind a proper network firewall. Here's why:

  1. A load balancer is not a security appliance. Yes, a decent load balancer should support your security posture, and we take measures to ensure that our load balancers themselves are secure. But just because a load balancer can be made more security-focused doesn't mean it should be used as the first line of defense. Again, if that's what you're looking for, you need to invest in a security device from a security vendor.
  2. You want to squash attacks as close to the edge of your network as possible. A decent firewall is the best tool to do that. Yes, our load balancer has a web application firewall (WAF) built in, and yes, we can employ rate limiting techniques to defend against denial-of-service (DoS) attacks (there's a toy rate limiter sketch after this list), but these should be considered secondary defenses. They raise the bar for how difficult it is to attack your services, but you still need a bona fide firewall at the network edge to do the heavy lifting. No matter how secure your castle is, if you can repel the attackers out at the city walls that's always going to be preferable!
  3. Network firewalls are specifically engineered to act as a first line of defense. They're built to handle, filter, and keep track of tens (or hundreds!) of millions of concurrent connections and they can do this more effectively than a load balancer. Using stateful inspection, a network firewall can monitor all active connections and determine which packets should be allowed past the firewall, in both directions. A firewall will typically work its magic at the network and transport layers — rejecting traffic here is far more efficient than evaluating traffic at the application layer (and this work can often be offloaded to hardware to process astonishing numbers of packets per second).
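As promised in point 2 above, here's a toy token-bucket rate limiter in Python, of the kind a load balancer might apply as a secondary defense. The rate and burst parameters are purely illustrative:

```python
# A toy token-bucket rate limiter: it raises the bar against request floods,
# but it's a secondary defense, not a substitute for an edge firewall.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or delay the request

bucket = TokenBucket(rate=10, capacity=20)  # ~10 req/s, bursts of up to 20
```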

Beyond “don't put it at the edge”, where you put a load balancer in your network is an important decision if you want to take a zero trust approach.

Ok, so where should you put an ADC in a zero trust architecture?

In a traditional network setup, the load balancer might sit in the DMZ, sandwiched between an internal firewall and an external firewall. Equally, a load balancer might be positioned inside the trusted network, sitting alongside the back end servers it's load balancing.

In contrast, with a zero trust setup you're more likely to be dealing with a network that's been sliced and diced into smaller logical segments. This is highly effective at preventing ‘lateral movement’, whereby an attacker uses a compromised server as a beachhead to compromise other servers and services. If the walls separating each service are as secure as the perimeter between the outside world and your network, it becomes much harder for an attacker to hop around your network after any one system has been compromised.

In the segmented network scenario, you might have multiple, smaller load balancers provisioned in order to keep traffic as separated as possible, with each load balancer sitting in the same network segment as the servers it needs to load balance. (Did you know: we offer an attractive site licence which allows you to spin up as many virtual load balancers as you need, ideal for working in a segmented network.)

Alternatively, you might want to work with a single pair of load balancers which you allow to communicate with each of the different network segments of interest. Then, on the load balancers, you could use access control lists (ACLs) to control which load balanced services can be accessed from where — for example, ACL policies like ‘the load balanced card payment API service can only be accessed from the web store network segment’, and so on.
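As a sketch of that idea: the segment names and CIDR ranges below are hypothetical, but the pattern is the same. Map each source segment to an address range, list which segments may reach each load balanced service, and deny everything else:

```python
# A sketch of segment-based ACLs like those described above. The segment
# names and CIDR ranges are hypothetical; Python's ipaddress module handles
# the subnet matching.
import ipaddress

SEGMENT_RANGES = {
    "web-store": ipaddress.ip_network("10.10.0.0/24"),
    "office": ipaddress.ip_network("10.20.0.0/24"),
}

# Which source segments may reach each load balanced service.
SERVICE_ACLS = {
    "card-payment-api": {"web-store"},
}

def acl_allows(service: str, client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    for segment in SERVICE_ACLS.get(service, set()):
        if addr in SEGMENT_RANGES[segment]:
            return True
    return False  # default deny

assert acl_allows("card-payment-api", "10.10.0.42")     # web store segment
assert not acl_allows("card-payment-api", "10.20.0.7")  # office segment
```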

Whatever your network architecture looks like, our load balancer is flexible and adaptable for any scenario. As long as you can successfully route traffic from client to load balancer and from load balancer to back end servers then all is well. Our ADCs are well suited to both traditional and modern network architectures.

Zero trust is a risk management strategy, not an architecture checklist

A zero trust model should apply to all applications, networks, data, devices, and identities; it is not a piecemeal approach or an architectural add-on. It is therefore not a strategy that can be satisfied by purchasing an ADC from one of our competitors who claims to meet your "zero trust" requirements, no matter how many security buzzwords their marketing spiel may contain...

As I have explained, a load balancer is not inherently a security appliance. Yes, a decent load balancer can support your security posture, and Loadbalancer.org takes measures to ensure that our load balancers are secure. But just because a load balancer can be made more security-focused doesn't mean it should be used as the first line of defense.

The bottom line? The only thing you can trust is for us to help you load balance your applications securely.

If you made it this far, please consider any combination of high fives, thoughtful poses, commenting, or following us on LinkedIn.
