VMware Cloud Community
AKreider
Contributor

ESX 3.5, NIC teaming, and Network Slowness

We have several ESX 3.5 servers in-house, each with six physical NICs. One NIC is dedicated to the service console, one to VMotion, and the rest to VMs. We present the four VM NICs in a single vSwitch with multiple VLANs. The four NICs are split between two HP switches for complete redundancy (same broadcast domain, all trunked with 802.1q and presenting the same VLANs). We use the default load balancing in VMware (route based on the originating virtual port ID).

For most applications/guests we do not see issues and the network performs fine. However, one application prints directly to multiple Epson thermal printers without any queuing on the server, and we have consistently seen "lost" print jobs from this application that never make it to the printers. Application support has confirmed that this comes from network connectivity issues to the printers. There are several network hops from the application server to the printers, plus one firewall it traverses, so we do have other suspects. We are troubleshooting this from all sides, but wanted some feedback on our ESX configuration to help rule out ESX networking as the culprit.

Can anyone lend some insight or experience on how we have our NICs and vSwitch set up, or has anyone seen issues connected to HP switches? I do understand that from a utilization and maximum-throughput standpoint the default load balancing may not be optimal, but it keeps our CPU overhead down. We do see network traffic going out on two of the four NICs in the vSwitch, and in truth there is not much traffic at all being generated on the host, so more aggressive load balancing is not really needed. One of our concerns is that housing four NICs in one vSwitch connected to two physical switches may be causing some issue. Any comments are appreciated.
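For anyone who wants to check a similar layout, the NIC and vSwitch assignments are visible from the service console. A minimal sketch of what we'd run (the vmnic names it returns will vary per host):

    # Physical NIC inventory: link state, speed, and duplex per vmnic
    esxcfg-nics -l

    # vSwitch layout: port groups, VLAN IDs, and which vmnics are uplinks
    esxcfg-vswitch -l

    # Live per-NIC traffic: run esxtop, then press 'n' for the network view
    esxtop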

2 Replies
NWhiley
Enthusiast

That config sounds reasonable and shouldn't present any issues.

For test purposes, I would be inclined to create a new vSwitch, set up the right port groups, "borrow" one of the pNICs for that switch and move the affected server to that vSwitch.

Then see how you go with the VM basically having its own pNIC.
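If it helps, that test vSwitch can be built from the service console. A rough sketch, assuming the production vSwitch is vSwitch1, the borrowed uplink is vmnic3, and the VM sits on VLAN 20 (all placeholder values):

    # Create the test vSwitch and a port group on the VM's VLAN
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -A "VM Test" vSwitch2
    esxcfg-vswitch -v 20 -p "VM Test" vSwitch2

    # Move the borrowed NIC from the production vSwitch to the test vSwitch
    esxcfg-vswitch -U vmnic3 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch2

Then point the VM's network adapter at the "VM Test" port group in the VI Client and re-test.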

Neil VCP
AndreTheGiant
Immortal

Your solution will work, BUT you lose network failover.

A better solution is to have at least two physical NICs for each vSwitch.

Best practice also says to separate (if possible) storage traffic (when iSCSI is used), VMotion traffic, and VM traffic.

A typical tradeoff is to use the same vSwitch for management and VMotion (but on different local networks).
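For example, on a freshly built host it would look something like this (a sketch only; VLAN IDs, IP addresses, and NIC names are placeholders):

    # One vSwitch with two uplinks for redundancy
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0

    # Service console port group on its own VLAN
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.10.5 -n 255.255.255.0

    # VMkernel port group for VMotion on a different VLAN
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vswitch -v 20 -p "VMotion" vSwitch0
    esxcfg-vmknic -a -i 192.168.20.5 -n 255.255.255.0 "VMotion"

(VMotion still has to be enabled on that VMkernel port from the VI Client.)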

Andre

**If you found this or any other answer useful, please consider allocating points for helpful or correct answers.

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro