Load balancing Scality RING
Scality builds the most powerful storage tools to make data easy to protect, search and manage anytime, on any cloud. Scality gives you the autonomy and agility to be competitive in a data-driven economy, helping you prepare for the challenges of the fourth industrial revolution.
Scality RING software deploys on industry-standard x86 servers to store objects and files whilst providing compatibility with the Amazon S3 API. The RING architecture supports High Availability (HA) clustering by placing a load balancer in front of the cluster. The load balancer performs health checks on each node to ensure traffic is routed only to healthy nodes; without a load balancer, an offline or failed node would continue to receive traffic, causing request failures.
Scality RING supports a variety of load balancing methods, depending on the customer's infrastructure, including layer 4, layer 7, and GSLB (geographic load balancing / location affinity). The RING service that should be load balanced is the S3 component.
The function of the load balancer is to distribute inbound connections across a cluster of Scality RING nodes, providing a highly available and scalable service. A single virtual service is used to load balance the S3 service of RING. Client persistence is not required and should not be enabled.
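The behaviour described above can be sketched as follows. This is a minimal illustration, not Scality or load balancer vendor code: a virtual service cycles inbound connections round-robin across the nodes it believes are healthy, with no client persistence. The node addresses are hypothetical placeholders.

```python
from itertools import cycle


class VirtualService:
    """Illustrative round-robin virtual service with no session stickiness."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = set(self.nodes)  # kept up to date by health checks
        self._rr = cycle(self.nodes)

    def mark_down(self, node):
        # A failed node stops receiving new connections immediately.
        self.healthy.discard(node)

    def mark_up(self, node):
        self.healthy.add(node)

    def pick_node(self):
        # Walk the round-robin cycle, skipping any unhealthy nodes.
        for _ in range(len(self.nodes)):
            node = next(self._rr)
            if node in self.healthy:
                return node
        raise RuntimeError("no healthy RING nodes available")


# Hypothetical S3 connector endpoints on three RING servers.
vs = VirtualService(["s3-node-1:8000", "s3-node-2:8000", "s3-node-3:8000"])
```

Because no persistence is applied, successive connections from the same client are free to land on different nodes, which is exactly what a stateless S3 service allows.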
SSL termination on the load balancer is recommended for load balancing Scality RING. The S3 service uses the “Negotiate HTTP (GET)” health check. For multi-site RING deployments, the load balancer’s GSLB functionality can be used to provide high availability and location affinity across multiple sites. This optional, DNS-based feature automatically directs local clients to a functioning RING cluster at another site if a site’s RING service and/or load balancers go offline.
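An HTTP GET health check of the kind mentioned above can be approximated in a few lines. This is a hedged sketch, not the load balancer's actual implementation: the port, path, and acceptance criterion are assumptions for illustration.

```python
import http.client


def s3_node_healthy(host: str, port: int = 8000, timeout: float = 2.0) -> bool:
    """Send an HTTP GET and report whether the node answered sensibly.

    Port 8000 and the "/" path are illustrative assumptions, not
    Scality defaults; a real check would target the S3 connector's
    configured listener.
    """
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/")
        resp = conn.getresponse()
        conn.close()
        # Any well-formed response below 5xx (even a 403 from an
        # unauthenticated S3 GET) shows the service is up and parsing
        # requests; 5xx or no response marks the node as failed.
        return resp.status < 500
    except OSError:
        return False
```

A load balancer runs a check like this against every node on an interval, feeding the results into the pool of nodes eligible to receive traffic.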
For read-intensive deployments, where the ratio of reply traffic to request traffic is large, an alternative load balancing method known as direct routing can be used. With direct routing, reply traffic flows directly from the back-end servers to the clients, removing the load balancer as a potential bottleneck for replies.
The load balancer can be deployed as a single unit, although we recommend a clustered pair for resilience and high availability. Details on configuring a clustered pair can be found on page 25 of our deployment guide, below.