What is network interface bonding and what might you use it for?

This came up recently whilst talking with a customer, so I thought I would share it here in case it was of interest to a wider audience...

What is a bonded network interface?

Sometimes referred to as Link Aggregation, LAGs, LACP, or IEEE 802.3ad / 802.1AX, Network Interface Card (NIC) bonding joins two or more physical network interfaces together into a single logical interface. If done properly (including at the switch level), bonding allows traffic to survive any number of failures up to and including a failed Ethernet switch (assuming the switches are paired and configured correctly).

Why and when might you want to do interface bonding?

There are several ways you can bond a network interface and two primary reasons why you might want to do so:

  1. Redundancy (failover) — If there is a problem with your Ethernet switch, switch port, or the physical connection, then traffic fails over automatically to the remaining switch ports. This is normally the main reason why you would choose to bond an interface.
  2. Speed — This is a fortunate side effect of bonding where, by joining two or more (preferably more) interfaces together, you benefit from load balancing across multiple interfaces and thus increased throughput.

When bonding interfaces you will see an increase in maximum throughput; however, there will also be an increase in overhead. For example, bonding two interfaces together might give a 50%-60% increase in available headroom on the virtual interface (the bond) — i.e. 2 x 1GigE might give you around 1.5GigE of usable bandwidth after protocol overheads. The same applies to 10GigE, 100GigE, etc.

In the Internet Service Provider (ISP) world, we would "bond" (LAG) switch uplinks together when we had more traffic than a single port could carry, and also to ensure resilience in case of a hardware outage — thus delivering both redundancy and increased bandwidth. Generally, we would build our network fabric from a virtual chassis, which meant we could plug a server's physical GigE ports into multiple switches — for example, Ethernet port 1 into switch 1, and Ethernet port 2 into switch 2. You could then, theoretically, send more than 1GigE of traffic to/from the server. More importantly, if a switch were to fail for whatever reason (overheating, a dead PSU, a memory leak, a cable failure, etc.) then there should be no downtime, as the server would be able to continue sending and receiving packets over the second (backup/standby) interface.

And the good news is that the Loadbalancer.org appliance's default installation includes all the required packages for you to set up your interface bonding configuration easily through the WebUI. All modern Linux distributions ship with the bonding kernel module; at most you may need to install the ethtool package (used by some modes to read interface speeds) via the system's package manager (apt/yum/dnf/etc.) before configuring bonding on your own server.
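As a rough sketch of what that configuration looks like under the hood, here is how an active-backup bond can be created by hand with iproute2. The interface names (eth0/eth1) and address are assumptions for illustration; these commands need root and are not persistent across reboots:

```shell
# Load the bonding driver and create a bond in active-backup mode (mode 1),
# checking link state every 100 ms
modprobe bonding
ip link add bond0 type bond mode active-backup miimon 100

# Enslave the two physical NICs (names are examples; check `ip link` first)
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring everything up and assign an address to the bond, not the members
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# Inspect the bond's state, active member, and failure counters
cat /proc/net/bonding/bond0
```

For a permanent setup you would put the equivalent configuration into your distribution's network configuration rather than running these by hand.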

What mode of bonding should I use?

As you can see below, there are several different bonding modes that can be implemented across the network:

Mode 0 — balance-rr

Packets are transmitted in sequential order from the first available interface to the last. This can increase the total throughput even for a single connection; however, it can also cause out-of-order delivery for TCP connections and thus reduce efficiency in a worst-case scenario. Requires a static EtherChannel to be enabled (not LACP negotiated).

The end result?  Load balancing and failover using round-robin.

Mode 1 — active-backup

For fault tolerance only. Data is sent and received through the currently active interface in the bond. Another interface is only utilised if the active interface fails. Requires autonomous ports (no special switch configuration).

The end result? Failover only.

Mode 2 — balance-xor

The configured hash policy is used to select the transmit interface. In this mode traffic destined for a specific peer will always be sent over the same interface, so if all traffic passes through a single router then this mode will be sub-optimal. Alternate transmit policies may be selected via the xmit_hash_policy option, described below.

The end result? Load balancing and failover.

Mode 3 — broadcast

Everything is transmitted on all member Ethernet interfaces. Requires a static EtherChannel to be enabled (not LACP negotiated).

The end result? Failover only.

Mode 4 — 802.3ad (LACP)

Link aggregation according to IEEE 802.1AX (formerly IEEE 802.3ad) is a standard for bundling multiple network connections in parallel, and makes use of all member interfaces in the active aggregator. Ports must run at the same speed and duplex, and use parallel point-to-point connections to the same physical switch (or virtual chassis, e.g. Cisco/Juniper). Your switch must support the 802.3ad link aggregation protocol (aka LAG/LACP). Outbound hash selection may be changed from the default XOR via the xmit_hash_policy option, described below.

The end result? Redundancy against a switch/port/cable failure and the side effect/benefit of load balancing across switch ports.
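For a persistent mode 4 setup on a modern Ubuntu server, a netplan sketch might look like the following. The interface names, address, and file name are assumptions for illustration, and the switch side must have a matching LACP LAG configured on the two ports:

```yaml
# /etc/netplan/01-bond.yaml (hypothetical example; apply with `netplan apply`)
network:
  version: 2
  ethernets:
    eth0: {}
    eth1: {}
  bonds:
    bond0:
      interfaces: [eth0, eth1]
      parameters:
        mode: 802.3ad          # LACP-negotiated aggregation
        lacp-rate: fast        # send LACPDUs every second
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4
      addresses: [192.168.1.10/24]
```

Other distributions express the same bond parameters through their own tools (NetworkManager, systemd-networkd, ifcfg files), but the mode and hash policy options carry over.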

Mode 5 — balance-tlb (transmit load balancing)

Adaptive transmit load balancing.  Channel bonding that does not require any special switch support.

If tlb_dynamic_lb=1 (the default), outgoing traffic is distributed according to the current load on each member interface, computed relative to its speed. If tlb_dynamic_lb=0, the configured hash distribution (xmit_hash_policy) is used to spread the load instead.

Requires support in the base drivers for retrieving the speed of each physical interface. No special switch configuration is required.

The xmit_hash_policy option can be used to select the appropriate hashing for the setup.

The end result? Transmit load balancing (TLB).
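If you want to check or change the hash policy on a live bond, the bonding driver exposes it through sysfs. bond0 is an assumed bond name here, and writing the value requires root:

```shell
# Read the current transmit hash policy of the bond
cat /sys/class/net/bond0/bonding/xmit_hash_policy

# Switch to layer3+4 hashing
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
```

The same directory exposes the other bonding options (mode, miimon, tlb_dynamic_lb, etc.), which is handy for inspecting what a WebUI or config tool has actually applied.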

Mode 6 — balance-alb (adaptive load balancing)

This works in a similar way to balance-tlb, plus receive load balancing for IPv4 traffic (not currently IPv6), and does not require any special switch support. Receive load balancing is handled by Address Resolution Protocol (ARP) negotiation and table mapping to the relevant member interface. Receive traffic is also redistributed when a new interface is added to the bond, with the load being distributed sequentially (round-robin).

The difference here is that ARP negotiation is used to balance both send and receive traffic. The local system's ARP replies are intercepted by the bonding driver as they leave the machine, and the source hardware address is replaced with the address of one of the member interfaces in the bond. Again, this requires autonomous ports, plus ethtool support so the driver can retrieve information about the speed of each member interface. This mode only works for local addresses known to the kernel bonding module, and thus cannot be used behind a bridge with virtual machines.

Requires ethtool support for retrieving the speed of each physical interface, and base driver support for setting the hardware address of the device while it is open.

The end result? Adaptive load balancing (ALB).

xmit_hash_policy types

layer2 — The default setting; generates the hash from an XOR of the hardware (MAC) addresses, so all traffic to a particular peer lands on the same interface. 802.3ad compliant.

layer2+3 — Generates the hash from an XOR of the MAC and IP addresses, taken from the layer 2 and layer 3 protocol headers. 802.3ad compliant.

layer3+4 — Uses upper-layer protocol information (TCP/UDP ports, when available) to generate the hash. This allows traffic to a single peer to span multiple interfaces, although a single connection will not span more than one. Source and destination port information is omitted for fragmented TCP/UDP packets and for all other IP protocol traffic. Not fully LACP/802.3ad compliant and may result in out-of-order packets.
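To make the behaviour of these policies concrete, here is a simplified Python illustration of how an XOR-style hash maps flows onto member interfaces. This mimics the spirit of the layer2 policy using only the last byte of each MAC; it is not the kernel's exact algorithm, which also mixes in the packet type (and, for the other policies, IP addresses and ports):

```python
def layer2_hash(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    """Pick a member interface index by XOR-ing the last byte of each MAC.

    Simplified sketch only: the real kernel layer2 policy also XORs in
    the packet type ID before taking the modulo.
    """
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_slaves

# All traffic between the same pair of MACs always hashes to the same
# interface — which is why a single upstream router (one destination MAC
# for everything) defeats balance-xor with the layer2 policy.
a = layer2_hash("52:54:00:aa:bb:01", "52:54:00:aa:bb:10", 2)
b = layer2_hash("52:54:00:aa:bb:01", "52:54:00:aa:bb:10", 2)
print(a == b)  # True: same flow -> same interface
print(layer2_hash("52:54:00:aa:bb:02", "52:54:00:aa:bb:10", 2))  # 0: a different peer may use the other interface
```

The layer3+4 policy applies the same idea but feeds IP addresses and TCP/UDP ports into the hash, which is why it spreads different connections to the same peer across interfaces.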

Summary

And there you have it. Perhaps a useful reminder, if nothing else, that load balancing and failover are two distinct features.

For maximum throughput you should probably look at balance-rr or 802.3ad with the layer3+4 hash policy; however, thorough testing should be undertaken to avoid possible issues with out-of-order packets.

Please do reach out to our technical team with any queries on this or your server environment more generally.

Want to discuss this with a human?

Speak to our technical team