MinIO is a high-performance, open-source, S3-compatible object storage system designed for hyper-scale private data infrastructure, and it can be installed on a wide range of industry-standard hardware. Although it can run as a standalone server, its full power is unleashed when deployed as a cluster with multiple nodes, from 4 to 32 nodes and beyond using MinIO federation.
Using a load balancer ensures that connections are only sent to ready, available nodes and that those connections are distributed evenly, minimizing downtime, allowing seamless maintenance and enabling effortless scalability.
Loadbalancer.org complements intelligently designed storage systems, making sure that data isn’t just protected, but accessible at all times. Thanks to our object storage expertise, we can help businesses to meet growing data demands through scalability and interoperability.
To create a MinIO cluster that can be load balanced, MinIO must be deployed in Distributed Erasure Code mode. This enables multiple disks across multiple nodes to be pooled into a single object storage server. Object data and parity are striped across all disks in all nodes, and every object can then be accessed from any node in the cluster.
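As an illustration, a four-node distributed deployment is started by running the same command on every node. The hostnames, drive paths and credentials below are placeholders; substitute values appropriate to your environment:

```shell
# Credentials must be identical on every node in the cluster
export MINIO_ROOT_USER=minioadmin          # example value only
export MINIO_ROOT_PASSWORD=example-secret  # example value only

# The {1...4} expansion covers node1-node4, each contributing four drives.
# MinIO pools all listed drives into one erasure-coded object store.
minio server http://node{1...4}.example.com/mnt/disk{1...4}
```

Each node must be able to resolve and reach the others, and all nodes must be started with the same drive layout for the pool to form.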
The load balancer is deployed at Layer 7. This mode offers high performance and requires no configuration changes to the load-balanced MinIO servers.
For MinIO Server, the load balancer’s client and server timeouts are set to 10 minutes.
The following table shows the port(s) that are load balanced:
Note: Port 9000 is the default port for MinIO, but this can be changed if required by modifying the node startup command – see the Deployment Guide for more details.
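For example, the `--address` flag changes the listening port at startup. The port and hostnames below are placeholders:

```shell
# Listen on port 9500 instead of the default 9000; the same flag must
# be applied to the startup command on every node in the cluster
minio server --address :9500 http://node{1...4}.example.com/mnt/disk{1...4}
```

If the port is changed, the load balancer's virtual service and health checks must be updated to match.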
To enable secure communication, SSL/TLS is terminated on the load balancer.
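With TLS terminated on the load balancer, clients connect to the virtual service over HTTPS while the MinIO nodes themselves can continue to serve plain HTTP. As a sketch using the MinIO client, where the alias name, virtual service address and credentials are placeholders:

```shell
# Register the load balancer's HTTPS virtual service as an S3 endpoint
mc alias set my-cluster https://minio-vip.example.com:443 minioadmin example-secret

# Traffic is encrypted between the client and the load balancer; the
# load balancer decrypts it and forwards the request to a healthy node
mc ls my-cluster
```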
As mentioned in the MinIO Monitoring Guide, MinIO includes two unauthenticated probe endpoints that can be used to determine the state of each MinIO node. In this guide, the health checks are configured to query the readiness probe, /minio/health/ready.
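The readiness probe can also be checked manually with curl; an HTTP 200 response indicates the node is ready to serve requests, while 503 indicates it is not. The hostname below is a placeholder:

```shell
# Print only the HTTP status code of the readiness probe:
# 200 = node ready to accept traffic, 503 = node not ready
curl -s -o /dev/null -w "%{http_code}\n" http://node1.example.com:9000/minio/health/ready
```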
Note: The load balancer can be deployed as a single unit, although Loadbalancer.org recommends a clustered pair for resilience and high availability. Please refer to section 1 in the Deployment Guide's appendix for more details on configuring a clustered pair.