VMware Cloud Community

Network teaming or split interfaces

We've got an 8GB dual-CPU quad-core PowerEdge system which has two network cards. This system is running ESX 3.5 hosting two identical Windows 2003 Standard VMs used for Terminal Server/Citrix XenApp. The configuration will be "out of the box", which I suspect means just one network card connected - the server is down for maintenance at the moment so I can't check.

I understand how teaming works when running on the metal but in this virtual environment, there are more options. Initially I thought there were two options:

  1. ESX server bonds the two NICs together and presents one virtual network interface to Windows 2003

  2. Two network interfaces are presented to Windows 2003 and Windows 2003 itself does the teaming

Reading around, it appears that #1 is the preferred (and only?) option. I'm assuming that, without getting into VLANs, when the two NICs are bonded together in ESX Server, it works in much the same way as software NIC teaming under Windows itself, i.e. there is a primary channel used for both transmit and receive, but the secondary channel is only used for transmission?

However, as we have two identical VMs running on there, another option is to dedicate one NIC to Citrix server #1 and the second NIC to Citrix server #2.

Would there be any performance benefit to this? One advantage must be that we're utilizing both send and receive on both network interfaces, which I'm sure must help. However, the advantage of bonding the NICs and then sharing them would be that if one server was particularly busy, it could potentially send at 2 x 1Gbit if the other server happened to be less busy.

Cheers, Rob.

2 Replies

I would say that the biggest advantage of teaming on the vSwitch is that you've got redundancy set up. And with the way VMware does teaming and load balancing at this point in time, a VM or data stream never gets more than 1 Gbit of bandwidth anyway (with a 1 Gbit network, that is).

I would recommend doing the teaming at the vSwitch level and leaving the load balancing set to "virtual port ID". This way the first VM uses the first NIC, the second VM the second NIC, the third VM the first again, and so on. Although it's fairly static, it's proven technology which is used all over the world with success.
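To make the "virtual port ID" behaviour concrete, here is a rough sketch (not VMware's actual code - the function name and NIC names are illustrative) of the static round-robin mapping from a VM's virtual port to a physical uplink:

```python
# Hypothetical sketch of vSwitch "virtual port ID" load balancing:
# each virtual port is mapped to a physical uplink round-robin, and the
# mapping is static -- a VM stays on its uplink regardless of traffic load.
def uplink_for_port(virtual_port_id: int, uplinks: list) -> str:
    return uplinks[virtual_port_id % len(uplinks)]

nics = ["vmnic0", "vmnic1"]
# first VM -> vmnic0, second VM -> vmnic1, third VM -> vmnic0 again
assignments = [uplink_for_port(p, nics) for p in range(3)]
```

Note that because the mapping never changes with load, a single busy VM is still capped at one NIC's bandwidth, which is the trade-off discussed above.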



If you find this information useful, please award points for "correct" or "helpful".


OK, so at a simple level, this is how networking through an ESX server to a VM looks:

ESX NIC1+NIC2 --> vSwitch0 --> VM NIC1

You can have many physical NICs in an ESX server.

A vSwitch can have many physical NICs as active or standby adaptors (this is where the teaming takes place).

A physical NIC can only belong to one vSwitch.

A virtual NIC can only belong to one vSwitch.

A VM can have up to 4 virtual NICs.
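The active/standby adapter behaviour mentioned above can be sketched roughly like this (function and NIC names are illustrative, not VMware's API): active uplinks carry traffic, and a standby is only promoted when an active NIC fails.

```python
# Hypothetical sketch of active/standby uplink failover on a vSwitch:
# traffic uses the active adapters; if one fails, a standby is promoted
# to fill the gap.
def usable_uplinks(active, standby, failed):
    up = [n for n in active if n not in failed]
    promote = [n for n in standby if n not in failed]
    # promote standbys only to replace failed active NICs
    return up + promote[: len(active) - len(up)]

# vmnic0 active, vmnic1 standby; vmnic0 fails -> vmnic1 takes over
result = usable_uplinks(["vmnic0"], ["vmnic1"], {"vmnic0"})
```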

The vSwitch controls load balancing. The best type is based on IP hash (your network kit needs to support 802.3ad and be configured on the right ports). Depending on the load balancing method you choose, it may or may not distribute the load well. If the traffic is going to/from the same server/client pair, then originating port ID won't help. Balancing on MAC address would be better, but you would only see the load improve if there were several different clients.
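A minimal sketch of why IP hash spreads load across clients while a single src/dst pair stays pinned to one NIC (the hash here is a simplified XOR-modulo stand-in, and the names are illustrative):

```python
import ipaddress

# Simplified sketch of IP-hash uplink selection: hash the source and
# destination addresses (XOR, modulo the uplink count) so each src/dst
# pair deterministically maps to one physical NIC.
def uplink_for_flow(src: str, dst: str, uplinks: list) -> str:
    h = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return uplinks[h % len(uplinks)]

nics = ["vmnic0", "vmnic1"]
# One server talking to several clients can spread across both NICs...
a = uplink_for_flow("10.0.0.5", "10.0.0.20", nics)
b = uplink_for_flow("10.0.0.5", "10.0.0.21", nics)
# ...but the same src/dst pair always lands on the same NIC.
c = uplink_for_flow("10.0.0.5", "10.0.0.20", nics)
```

This is why a single client/server conversation never exceeds one NIC's bandwidth even with IP hash, while many conversations aggregate across the team.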

Basically, you would not really have more than one virtual network adaptor in a VM unless you need connections to different networks. You could add more adaptors in a Windows VM, for example, and team them, but in the end the traffic is only going to go out of one physical adaptor, depending on your ESX network configuration.

There are many options, and some can get quite complex. I myself go for four pNICs per ESX server, then use 802.1Q trunking and 802.3ad to aggregate the traffic to two core switches, balanced on IP hash.

Andy, VMware Certified Professional (VCP),

If you found this information useful please award points using the buttons at the top of the page accordingly.

Andy Barnes
Help, Guides and How Tos... www.VMadmin.co.uk
