
    VM Guest Bonding

    jamesdrake Lurker

      Can I team guest interfaces to a vSwitch (either vDS or standard)?

       

      I would like to prototype a virtual router that could replace at least some of our expensive load-balancer (LB) functions.

      There is no way to perform L3 routing before the VM, so in effect we have a single VIP (or router/next-hop, etc.) that the VM guest will host on an L2 network.

      I have a LAG (LACP) between the vSwitch and the external physical switch (either 4x10Gbps or QSFP+ interfaces, so we have a 40Gbps+ connection to the host), and the connections will be statistically spread across all interfaces (I estimate >1M unique connections, so the LACP hashing balances well).
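      For reference, here is a rough back-of-the-envelope sketch (in Python) of why I expect the hashing to balance: with enough unique flows, any deterministic hash of the flow tuple spreads them almost evenly across the member links. The hash function, tuple fields, and link count below are just my own illustrative stand-ins, not what the switch actually implements.

          # Illustrative only: simulates a hash-based LACP distribution policy.
          # The flow fields, hash, and link count are stand-ins, not the real algorithm.
          import random
          from collections import Counter

          NUM_LINKS = 4
          NUM_FLOWS = 1_000_000

          def pick_link(src_ip, src_port, dst_ip, dst_port):
              # Any deterministic hash of the flow tuple pins a flow to one member link.
              return hash((src_ip, src_port, dst_ip, dst_port)) % NUM_LINKS

          counts = Counter()
          for _ in range(NUM_FLOWS):
              flow = (random.getrandbits(32), random.randint(1024, 65535),
                      random.getrandbits(32), 443)
              counts[pick_link(*flow)] += 1

          for link, n in sorted(counts.items()):
              print(f"link {link}: {n} flows ({100 * n / NUM_FLOWS:.1f}%)")

      With around 1M flows each link ends up very close to 25%, which is why I'm not worried about balancing on the physical LAG side.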

      That means we have a potential 40Gbps L2 path between the physical switch and the VM guest, but I need the same LACP capability between the guest and the vSwitch. The guests definitely support it (as LACP bonding in Linux), and I've confirmed it works: I ran the same OS bare-metal on the host and it handled the network load as expected. However, that isn't a very efficient use of the host hardware 95% of the time (the average load is ~20Gbps, and the host CPU/RAM never exceed 25% utilization even at full load).

       

      Is this possible? Are there any other options for a single guest to handle more than the 10Gbps that a single unbonded vmxnet3 vNIC (i.e. a single MAC address) would allow?

        • 1. Re: VM Guest Bonding
          MKguy Virtuoso

          A VM can't form an EtherChannel/LAG with a vSwitch; in fact, the vSwitch is completely invisible to the VM. But this shouldn't be necessary in the first place:

          It seems like your general assumption is that because the VM has a single vNIC emulating a 10GBASE-T link, it cannot exceed that bandwidth. This is a common misconception.

          VMs can very well exceed the bandwidth of their virtual links.

           

          In the physical world, the maximum data rate would be 10 Gbit/s, since vmxnet3 presents a 10GBASE-T link. That bitrate is governed by the physical signalling limitations of the standard on the wire, but those limitations don't apply in a virtual setup.

          For example, guests on the same host and vSwitch/port group can go well beyond 10Gbit/s with a single vNIC. One would think that the e1000, which presents a 1Gbps link to the guest, is limited to 1Gbit/s, or that vmxnet3 is limited to 10Gbit/s, but that is not the case. They can easily exceed their "virtual link speed". Test it with a network throughput tool like iperf and see for yourself.

           

          Guest OSes don't artificially throttle traffic to match the negotiated line speed; that limit is only enforced by the physics of a real wire.

          To give you an example, I'm able to achieve 25+ Gbit/s with iperf between 2 Linux VMs, each with a single vmxnet3 vNIC, on the same host and port group. The same should apply to traffic that exits to the physical network through a LAG, provided you have multiple high-bandwidth connections with enough entropy to be spread across all links.
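           

          If you want a quick sanity check without installing anything, here is a crude single-stream throughput test along the same lines. It's only a sketch: the port, buffer size and duration are arbitrary choices of mine, it needs Python 3.8+ for socket.create_server, and Python's own overhead means it won't push as hard as iperf with parallel streams.

              # Crude single-stream TCP throughput check; use iperf for real measurements.
              # Port, chunk size and duration are arbitrary placeholder values.
              import socket, sys, time

              PORT = 5001
              CHUNK = b"\x00" * (1 << 20)   # 1 MiB send buffer
              DURATION = 10                 # seconds the client keeps sending

              def server():
                  with socket.create_server(("", PORT)) as srv:
                      conn, _ = srv.accept()
                      with conn:
                          total = 0
                          start = time.time()
                          while True:
                              data = conn.recv(1 << 20)
                              if not data:
                                  break
                              total += len(data)
                          elapsed = time.time() - start
                          print(f"received {total / 1e9:.2f} GB in {elapsed:.1f} s "
                                f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

              def client(host):
                  with socket.create_connection((host, PORT)) as sock:
                      end = time.time() + DURATION
                      while time.time() < end:
                          sock.sendall(CHUNK)

              if __name__ == "__main__":
                  # run "python3 tput.py server" on one VM and
                  # "python3 tput.py client <server-ip>" on the other
                  if sys.argv[1] == "server":
                      server()
                  else:
                      client(sys.argv[2])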

          • 2. Re: VM Guest Bonding
            jamesdrake Lurker

            Thanks! That was exactly my assumption; almost every forum discussion of speed mentions that the e1000 driver would never exceed 1Gbps, and that changing to vmxnet3 would allow up to 10Gbps.

            I'll set up the system and do some tests.

            Does that also imply that the choice of vNIC is irrelevant (i.e. maybe I should use the e1000 driver for better guest compatibility)?

            • 3. Re: VM Guest Bonding
              MKguy Virtuoso

              Does that also imply that the choice of vNIC is irrelevant (i.e. maybe I should use the e1000 driver for better guest compatibility)?

              If you're going to drive lots of bandwidth through the VM, then you should definitely use the vmxnet3 vNIC, since it significantly reduces the CPU load of processing many packets.

              Compatibility is more or less a moot point, as every modern Linux distribution from the last 2-3 years already includes a built-in vmxnet3 kernel module. On Windows you need to install VMware Tools, but you would probably do that anyway. Also, the e1000 vNIC has been the source of a number of bugs recently.

               

              Here are some whitepapers you should check out:

              http://www.vmware.com/files/pdf/VMware-vSphere-PNICs-perf.pdf

              http://www.vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf

              https://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf