How to enable single root I/O virtualization (SR-IOV) for high-performance load balancing

The demand for high-performance networking within cloud and virtualized environments has never been greater. As technologies continue to evolve, integrating physical and virtual environments becomes imperative yet increasingly challenging.

Here I outline one solution to this problem: Single root I/O virtualization (SR-IOV). SR-IOV provides a method of extending physical network adapters to virtual appliances — unleashing higher network speeds for virtual workloads.

What is single root I/O virtualization (SR-IOV)?

Single root I/O virtualization (SR-IOV) is an extension of the PCI Express (PCIe) specification that allows a single physical network interface card (NIC) to appear as multiple, separate virtual functions.

Unlike other virtualization methods, SR-IOV allows direct hardware access, significantly reducing the overhead associated with virtual switch processing. It does this by allowing a device such as a network adapter to separate access to its resources among different PCIe hardware functions: a physical function (PF) and one or more virtual functions (VFs).

What are the key components of SR-IOV?

There are two main PCIe functions:

  1. PCIe Physical Function (PF) - The device's primary function, which advertises its SR-IOV capabilities. In a virtualized environment, the PF belongs to the 'parent' partition and is shared amongst all virtual functions.
  2. PCIe Virtual Functions (VFs) - Each VF (and there can be more than one) is associated with the device's PF. These are virtual representations of the physical network interface controller, each assigned to a specific virtual machine. A VF shares the device's memory, network port, and other physical resources with the PF and the other VFs. In a virtualized environment, the VF is the 'child' partition.
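
On a Linux host, you can see this PF/VF relationship directly in sysfs. Here's a minimal sketch (the interface name eth0 is just a placeholder for your PF): each VF the kernel has created shows up as a virtfnN symlink under the PF's PCI device.

```python
# Minimal sketch, Linux-only: inspect a PF's VFs via the standard
# sysfs SR-IOV attributes. "eth0" is a hypothetical PF interface name.
from pathlib import Path

pf = Path("/sys/class/net/eth0/device")

total = int((pf / "sriov_totalvfs").read_text())   # VFs the card supports
active = int((pf / "sriov_numvfs").read_text())    # VFs currently enabled
print(f"PF supports {total} VFs, {active} enabled")

# Each active VF is exposed as a virtfnN symlink to its own PCI device.
for link in sorted(pf.glob("virtfn*")):
    print(link.name, "->", link.resolve().name)    # e.g. 0000:03:10.0
```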

How do the hardware and virtual machines share resources?

An I/O memory management unit (IOMMU) differentiates between traffic streams using the unique PCI Express Requester ID (RID) assigned to each PF and VF. This allows traffic streams to be delivered to the appropriate hypervisor parent or child partition, ensuring non-privileged traffic isn't accidentally sent to the wrong VF.
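
On a Linux host you can inspect these isolation boundaries yourself: the kernel publishes its IOMMU groupings under /sys/kernel/iommu_groups. A rough sketch:

```python
# Rough sketch: walk /sys/kernel/iommu_groups to see how the IOMMU has
# grouped PCI functions. A PF/VF in its own group is what lets the
# hypervisor hand that function to exactly one VM.
from pathlib import Path

for group in sorted(Path("/sys/kernel/iommu_groups").iterdir(),
                    key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```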

What are the benefits of SR-IOV?

All of this results in the following benefits:

  • High-speed networking: SR-IOV minimizes latency and enhances network throughput by providing direct hardware access, a key requirement of high-performance applications.
  • Better resource utilization: SR-IOV reduces the load on the hypervisor and virtual switch, meaning resources can be shared more efficiently.
  • Segregation: Each virtual function is independent and therefore isolated, resulting in the secure separation of virtual workloads.

So what does SR-IOV mean in real numbers?

To get a rough perspective on what SR-IOV can do, I knocked up a quick test lab with three fairly powerful servers: one to push the traffic, one to receive it, and an ESXi server in the middle running our load balancer virtual appliance.

Without any tuning, using the standard VMXNET 3 drivers, I got 15Gbps of throughput (which is more than enough for most applications).

However, as soon as I enabled SR-IOV, that jumped to 76Gbps!

Your mileage may vary, as that was just a quick test on hardware I had lying around. But I think you get the point: you really need to be using SR-IOV if you're pushing more than 10Gbps through a virtual load balancer.
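
If you want to run a similar test yourself, iperf3 is the easiest tool for the job. Below is a hedged sketch of the sort of measurement involved, not my exact rig: the target address is a placeholder for whatever virtual service your load balancer exposes, and the stream count and duration are just reasonable starting points.

```python
# Hedged throughput-test sketch: drive traffic through the appliance with
# iperf3 (start the far end separately with `iperf3 -s`) and read back the
# achieved rate. TARGET is a hypothetical virtual service address.
import json
import subprocess

TARGET = "192.0.2.10"

result = subprocess.run(
    ["iperf3", "-c", TARGET, "-P", "8", "-t", "30", "-J"],  # 8 streams, 30 s, JSON output
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Throughput: {gbps:.1f} Gbps")
```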

SR-IOV implementation process for performance enhancement

Step 1: Hardware and software requirements

Before diving into the implementation, ensure that your hardware supports SR-IOV and that the necessary software components are in place. This includes a compatible network card, a hypervisor that supports SR-IOV (such as VMware vSphere or Microsoft Hyper-V), and updated device drivers.
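
On a Linux host, a quick way to check the NIC side of this is to look for the sriov_totalvfs attribute in sysfs; any adapter that exposes it advertises the SR-IOV capability. A minimal sketch:

```python
# Minimal pre-flight check (Linux host): scan every network interface and
# report any whose PCI device advertises SR-IOV support.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    cap = iface / "device" / "sriov_totalvfs"
    if cap.exists():
        print(f"{iface.name}: supports up to {cap.read_text().strip()} VFs")
```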

Step 2: Enable SR-IOV in BIOS/UEFI

Access the BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface) settings of the host machine and enable SR-IOV support for the NIC. This step may vary depending on the hardware vendor, so consult the documentation for specific instructions.

Step 3: Configure the hypervisor

Enable SR-IOV support on the hypervisor and allocate the required number of virtual functions to the NIC. This step typically involves using management tools or command-line interfaces provided by the hypervisor.
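
What this looks like varies by hypervisor. As a hedged illustration for a Linux/KVM host, VFs are created by writing to the PF's sriov_numvfs attribute (the interface name below is a placeholder; ESXi instead uses driver module parameters, shown later in this post):

```python
# Hedged Linux/KVM example: ask the PF driver to spawn 4 VFs via the
# standard sriov_numvfs sysfs attribute. "eth0" is a hypothetical PF name.
from pathlib import Path

numvfs = Path("/sys/class/net/eth0/device/sriov_numvfs")
numvfs.write_text("0")   # the driver requires a reset to 0 before changing the count
numvfs.write_text("4")   # then create the VFs (must be <= sriov_totalvfs)
```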

Step 4: Assign virtual functions to virtual appliances

Within the virtualization platform, allocate specific virtual functions to individual virtual appliances. This can usually be done through the hypervisor's management interface or command-line tools.
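
Again, the mechanics depend on the platform. On vSphere this is done by giving the VM an 'SR-IOV passthrough' network adapter; on KVM you can attach a VF by PCI address with virsh. A sketch of the latter, with a hypothetical domain name and PCI address:

```python
# Hedged libvirt/KVM example: attach a VF to a guest by PCI address.
# The domain name ("my-vm") and the VF's PCI address are assumptions;
# managed='yes' lets libvirt detach the VF from the host driver for you.
import subprocess
from pathlib import Path

VF_XML = """<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
"""

Path("vf.xml").write_text(VF_XML)

# --live applies the change immediately; --config makes it persistent.
subprocess.run(["virsh", "attach-device", "my-vm", "vf.xml",
                "--live", "--config"], check=True)
```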

Step 5: Update virtual appliance settings

Within each virtual appliance, update the network settings to use the assigned virtual function. This step ensures that the virtual appliance communicates directly with the dedicated virtual function, bypassing the virtual switch.
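
A quick hedged way to confirm a guest interface really is a VF (rather than an emulated vNIC) is to check which kernel driver is bound to it:

```python
# Inside a Linux guest: report the kernel driver behind each interface.
# VF drivers typically show up as e.g. iavf/ixgbevf (Intel) or mlx5_core
# (Mellanox), whereas an emulated vNIC shows vmxnet3, virtio_net, etc.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    driver = iface / "device" / "driver"
    if driver.exists():
        print(iface.name, "->", driver.resolve().name)
```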

Step 6: Test and optimize

After configuring SR-IOV, thoroughly test the virtual appliances to ensure proper functionality and performance improvements. Monitor network speeds, latency, and resource utilization to fine-tune the configuration if necessary.
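
For a rough live view of network speed inside the guest, you can sample the standard interface counters; the snippet below (interface name is a placeholder) estimates received throughput over a one-second window:

```python
# Rough monitoring sketch: sample rx_bytes over one second to estimate
# live receive throughput. "eth1" is a hypothetical VF-backed interface.
import time
from pathlib import Path

rx = Path("/sys/class/net/eth1/statistics/rx_bytes")

before = int(rx.read_text())
time.sleep(1)
after = int(rx.read_text())
print(f"~{(after - before) * 8 / 1e9:.2f} Gbps received")
```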

And there you have it! By bridging the gap between physical and virtual environments, SR-IOV delivers the high-speed networking that modern applications demand.

How to enable SR-IOV on ESXi

  1. Install an SR-IOV-compatible NIC (network interface card).
  2. Enable the SR-IOV setting in the BIOS (Basic Input/Output System).
  3. Check the necessary ESXi prerequisites are met: virtual functions (VFs) are present on the host; the virtual machine (VM) is compatible with ESXi 5.5+; the guest operating system used to create the VM is Windows or Red Hat Enterprise Linux 6+; and the PCI Device list in Settings contains active pass-through networking devices.
  4. Once the VM has been deployed, add three PCI Device NICs and connect them to your networks.
  5. SR-IOV then needs to be enabled for your NIC type: either Intel NIC SR-IOV on ESXi, or Mellanox NIC SR-IOV on ESXi (a hedged Intel example follows this list).
  6. Once the deployment is complete, enable SR-IOV on the guest.
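
As an example of step 5 for an Intel NIC, the VF count is set through the driver's module parameters with esxcli. This is a hedged sketch: the module name (ixgben here) and per-port VF counts are assumptions, so check your own driver first with `esxcli network nic list`, and note that the host needs a reboot afterwards.

```python
# Hedged ESXi example, run in the ESXi shell (which ships with Python).
# Module name and VF counts per port are assumptions for an Intel NIC.
import subprocess

# Ask the ixgben driver to expose 8 VFs on each of two ports.
subprocess.run(
    ["esxcli", "system", "module", "parameters", "set",
     "-m", "ixgben", "-p", "max_vfs=8,8"],
    check=True,
)

# Verify the setting, then reboot the host for it to take effect.
subprocess.run(
    ["esxcli", "system", "module", "parameters", "list", "-m", "ixgben"],
    check=True,
)
```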

How to enable SR-IOV on KVM for Mellanox ConnectX

  1. Check the necessary hypervisor prerequisites are met: Intel VT and SR-IOV are enabled in the host BIOS; and power settings are optimized (C-State power controls are disabled, speed-stepping is turned off, and power management is set to 'Performance').
  2. Modify the Linux GRUB file to add IOMMU (input-output memory management unit) support.
  3. Update the firmware to enable SR-IOV.
  4. Initialize the driver's VFs (see the sketch after this list).
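
Here's a hedged sketch of steps 2-4 for a ConnectX card, run from the KVM host. The MST device path and interface name are assumptions; `mst status` and `ip link` will show the real ones on your system.

```python
# Hedged ConnectX sketch for a Linux/KVM host. Device paths and the
# interface name are assumptions, not values from a real system.
import subprocess
from pathlib import Path

# Step 2: IOMMU support comes from the kernel command line. In
# /etc/default/grub, append to GRUB_CMDLINE_LINUX:
#   intel_iommu=on iommu=pt
# then rebuild the GRUB config and reboot.

# Step 3: enable SR-IOV in the adapter firmware with Mellanox's mlxconfig.
subprocess.run(["mst", "start"], check=True)
subprocess.run(
    ["mlxconfig", "-y", "-d", "/dev/mst/mt4115_pciconf0",
     "set", "SRIOV_EN=1", "NUM_OF_VFS=8"],
    check=True,
)

# Step 4 (after a reboot, so the firmware change takes effect): ask the
# mlx5 driver to initialize the VFs.
Path("/sys/class/net/ens1f0/device/sriov_numvfs").write_text("8")
```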

HPE Ezmeral is a good example of a workload that definitely needs SR-IOV

For more detail on how we've helped customers like HPE Ezmeral Data Fabric with single root I/O virtualization, or for a list of the required network adapters, contact our technical team.

Alternatively, to find out more about how our product can be used to solve this problem for high-performance workloads, check out HPE's blog: Boost HPE Ezmeral Data Fabric Software performance with load balancing.

Try it out for yourself

By following the steps outlined in this blog, you can now unlock the full potential of your virtual appliances, exposing them to higher network speeds and ensuring optimal performance for demanding workloads.

If you're ready to give it a go, why not download a free 30-day trial of our load balancer appliance?

The Loadbalancer Enterprise VA Max offers unrestricted throughput, with no limits on the underlying virtual CPU or memory. And because it supports SR-IOV, the underlying network adapters can be exposed directly to the virtual appliance.

Need to integrate hardware and virtual environments?

Supercharge your high-performance apps with our ADC