Load balancing Cloudian HyperFile NAS Storage
About Cloudian HyperFile
Cloudian HyperFile is a scale-out NAS platform that provides file system protocols for clients and transparent data tiering to object storage (Cloudian HyperStore). Client applications write data to HyperFile, which then manages the underlying storage tiers, leveraging its native information lifecycle management capabilities.
Key benefits of load balancing
Load balancing Cloudian HyperFile provides:
- Guaranteed application uptime
- A truly scalable environment to meet growing data demands
- Enterprise-grade application delivery, security, and visibility
- Data that is protected and accessible at all times
Cloudian and Loadbalancer.org both specialize in reducing infrastructure complexity through intelligently designed and infinitely scalable network architecture. Users across many industries – including medical and media – are facing rapid increases in data, with storage demands intensifying as technologies progress.
The Cloudian and Loadbalancer.org partnership offers an answer to this challenge. Customers can enjoy the benefits of NAS and object storage in their own data centers, thanks to highly resilient, enterprise-grade, scale-out solutions.
How to load balance Cloudian HyperFile
To allow a Cloudian HyperFile deployment to be load balanced, the HyperFile nodes must be deployed in a multi-controller configuration sharing an NFS volume. Source IP address persistence is required to successfully load balance Cloudian HyperFile; this applies to both the layer 4 DR mode and layer 7 load balancing scenarios described in this document. To provide load balancing and high availability for Cloudian HyperFile, a single VIP is used that covers all of the required ports.
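The load balancer implements source IP persistence with connection-tracking tables rather than a hash, but the effect can be sketched as follows: every connection from a given client address is always directed to the same backend node. This is a minimal illustrative sketch; the node addresses and the hashing scheme are assumptions, not the appliance's actual implementation.

```python
import hashlib

# Hypothetical HyperFile node pool -- these addresses are assumptions.
NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def pick_node(client_ip: str) -> str:
    """Map a client IP to a backend node by hashing the source address.

    Because the result depends only on the client IP, every connection
    from the same client lands on the same HyperFile node -- the
    behaviour that source IP persistence provides.
    """
    digest = hashlib.sha256(client_ip.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

# The same client always maps to the same node:
assert pick_node("203.0.113.7") == pick_node("203.0.113.7")
```

This matters for HyperFile because a client's NFS session state lives on the node that accepted it; without persistence, follow-up requests could land on a different controller.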
Note: The load balancer can be deployed as a single unit, although Loadbalancer.org recommends a clustered pair for resilience & high availability. Please refer to section 1 in the Deployment Guide appendix for more details on configuring a clustered pair.
The load balancer can be deployed in 4 fundamental ways: Layer 4 DR mode, Layer 4 NAT mode, Layer 4 SNAT mode, and Layer 7 SNAT mode.
For Cloudian HyperFile, using either layer 4 DR mode or layer 7 SNAT mode is recommended. If using NFS version 3 or below, layer 4 DR mode should be used: these older versions of the NFS protocol rely on additional dynamically assigned services (such as the portmapper, mountd, statd, and lockd), resulting in a wide range of ports that must be load balanced. NFS version 4, by contrast, uses only the single well-known port TCP 2049.
Where possible, we recommend using Layer 4 Direct Routing (DR) mode. This mode offers the best possible performance, since replies go directly from the Real Servers to the client rather than via the load balancer. It is also relatively simple to implement. Ultimately, the choice depends on your specific requirements and infrastructure. If DR mode cannot be used, for example because the Real Servers are located in remote routed networks, then SNAT mode is recommended.
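To make the single-VIP-with-persistence arrangement concrete, a layer 4 DR mode virtual service can be expressed as an LVS/ldirectord-style definition along these lines. This is a sketch only: the IP addresses, the port (TCP 2049, i.e. NFSv4), and the persistence timeout are assumptions, and on the appliance the virtual service would be configured through the WebUI rather than by editing files directly.

```
# Illustrative ldirectord-style definition of a single DR mode VIP
# for NFSv4 traffic -- all addresses and the timeout are assumptions.
virtual=192.168.1.100:2049
        real=192.168.1.11:2049 gate     # "gate" selects direct routing (DR)
        real=192.168.1.12:2049 gate
        protocol=tcp
        scheduler=wlc
        persistent=300                  # source IP persistence, in seconds
```

In a real deployment the VIP would be configured to cover all of the ports the protocol requires, not just the single port shown here.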
Note: If the load balancer is deployed in AWS or Azure, layer 7 SNAT mode must be used as layer 4 direct routing is not currently possible on these platforms.
Layer 4 DR Mode
One-arm direct routing (DR) mode is a very high performance solution that requires little change to your existing infrastructure.
- DR mode works by changing the destination MAC address of the incoming packet to match the selected Real Server on the fly, which is very fast
- When the packet reaches the Real Server, it expects the Real Server to own the Virtual Service's IP address (VIP). This means you need to ensure that the Real Server (and the load balanced application) respond to both the Real Server's own IP address and the VIP
- The Real Server should not respond to ARP requests for the VIP. Only the load balancer should do this. Configuring the Real Servers in this way is referred to as Solving the ARP Problem. Please refer to the instructions in section Configuring for Layer 4 DR Mode in the Deployment Guide for more information
- On average, DR mode is 8 times quicker than NAT for HTTP, 50 times quicker for Terminal Services and much, much faster for streaming media or FTP
- The load balancer must have an Interface in the same subnet as the Real Servers to ensure layer 2 connectivity required for DR mode to work
- The VIP can be brought up on the same subnet as the Real Servers, or on a different subnet provided that the load balancer has an interface in that subnet
- Port translation is not possible in DR mode, i.e. having a different RIP port than the VIP port
- DR mode is transparent, i.e. the Real Server will see the source IP address of the client
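As the list above notes, each Real Server must own the VIP without answering ARP requests for it. On a Linux-based Real Server, solving the ARP problem is commonly done by adding the VIP to the loopback interface and adjusting two kernel ARP settings; the sketch below assumes a VIP of 192.168.1.100, and the file name is illustrative. Refer to the Deployment Guide section mentioned above for the procedure appropriate to your Real Server platform.

```
# /etc/sysctl.d/90-arp.conf -- stop the Real Server answering ARP for the VIP
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2

# In addition, add the VIP to the loopback interface so the Real Server
# accepts traffic addressed to it (run once, or persist it via the
# distribution's network configuration):
#   ip addr add 192.168.1.100/32 dev lo
```

With `arp_ignore=1` the server only answers ARP for addresses configured on the receiving interface, and `arp_announce=2` stops it advertising the VIP as a source address, so only the load balancer responds to ARP for the VIP.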