I've got a few VMs which are hitting my GigE network very hard, and at times they saturate the link for one or two hosts in a cluster in our datacenter. The portgroup in question has two vmnics (consistent across all hosts in the cluster). Assuming that I don't want to make any hardware additions at this time (to go 10GigE, for example), what are my options for removing host network bandwidth as a bottleneck? I could probably throw a third physical NIC into the mix, since I do have one available in each host, but my question (and where my knowledge is a bit light) is: would the VM be able to take advantage of that? My understanding is that the VM would just bond to one pNIC and that's all it gets.
Open to ideas and corrections on my assumptions here!
BTW, standard vSwitches only for this cluster, in case that matters.
thanks
Are your two NICs set up on the switch for Etherchannel, and is your NIC teaming policy set to Route based on IP Hash? I believe this is required for a virtual NIC to be able to send data down both of your pNICs. This would accomplish the same as adding a second virtual NIC to your guest. I believe you can use three or four NICs in your team for added throughput.
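For what it's worth, the switch-side piece would look something like this on Cisco gear. This is just a sketch: the interface range, channel-group number, and VLAN are made up for illustration, and note the static `mode on` (no LACP), which is what the vSwitch IP-hash policy expects.

```
! Hypothetical Cisco IOS config: static Etherchannel (mode "on", no LACP)
! bundling the two ports that carry the host's vmnics.
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode access
 switchport access vlan 100
```

Pair that with Route based on IP Hash on the vSwitch itself so both sides agree on how frames are distributed.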
I didn't even know anything about Etherchannel and its relationship to the port load balancing settings, so I think this sounds like exactly what I need to learn more about. Thanks a ton.
My NW engineer already has everything in place. Sounds like I'm the one in the way.
Let us know what you find. Looks like you have checked all the usual suspects.
Ok, done some research based on msemon1's advice. Here is a snippet from the Virtual Networking Concepts PDF (which unfortunately doesn't look to have been updated for v4):
• Route based on IP hash — Choose an uplink based on a hash of the source and destination IP addresses of each packet. (For non-IP packets, whatever is at those offsets is used to compute the hash.) Evenness of traffic distribution depends on the number of TCP/IP sessions to unique destinations. There is no benefit for bulk transfer between a single pair of hosts.

You can use link aggregation — grouping multiple physical adapters to create a fast network pipe for a single virtual adapter in a virtual machine.

When you configure the system to use link aggregation, packet reflections are prevented because aggregated ports do not retransmit broadcast or multicast traffic. The physical switch sees the client MAC address on multiple ports. There is no way to predict which physical Ethernet adapter will receive inbound traffic.

All adapters in the NIC team must be attached to the same physical switch or an appropriate set of stacked physical switches. (Contact your switch vendor to find out whether 802.3ad teaming is supported across multiple stacked chassis.) That switch or set of stacked switches must be 802.3ad-compliant and configured to use that link-aggregation standard in static mode (that is, with no LACP). All adapters must be active. You should make the setting on the virtual switch and ensure that it is inherited by all port groups within that virtual switch.

Looks like that is the best you can do with a standard vSwitch. If you check out the What's New in Virtual Networking v4.1 PDF, it talks about Load-Based Teaming, which is even better, but it requires a dvSwitch (and 4.1, of course). It's not clear to me whether LBT would better handle the case mentioned above about "bulk transfer between a single pair of hosts".
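To make that "no benefit for bulk transfer between a single pair of hosts" point concrete, here's a conceptual sketch (not VMware's actual code) of how an IP-hash policy picks an uplink. The uplink names and IPs are made up; the key idea is that a hash of the source/destination pair, modulo the team size, is deterministic per pair.

```python
# Conceptual illustration of "Route based on IP hash" uplink selection.
# Uplink names and the exact hash (XOR of the addresses) are assumptions
# for illustration, not VMware's real implementation.
import ipaddress

UPLINKS = ["vmnic0", "vmnic1"]


def pick_uplink(src_ip: str, dst_ip: str) -> str:
    # Hash the source/destination pair, then fold modulo the team size.
    # Every packet between the same two hosts lands on the same pNIC.
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return UPLINKS[h % len(UPLINKS)]


# One VM talking to many destinations can spread across both uplinks:
for dst in ["10.0.0.21", "10.0.0.22", "10.0.0.23", "10.0.0.24"]:
    print(dst, "->", pick_uplink("10.0.0.10", dst))

# ...but a single src/dst pair always hashes to the same uplink, so one
# bulk transfer between two hosts never exceeds one pNIC's bandwidth.
```

That's why the doc says distribution depends on the number of sessions to unique destinations: with only one destination, there's only one hash value, hence only one uplink in use.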