I've set up a number of VMware hosts and am finding the load balancing aspects of ESX very disturbing.
I've scoured the internet and VMware's site itself to find a good example of true load balancing on either network interfaces or iSCSI interfaces. It seems that using IP hash along with trunked and channelled Ethernet (dot1q, nonegotiate, mode on) always produces a "fill and spill" method of load balancing, instead of spreading the traffic across the provided interfaces from the get-go. The interfaces show "in bundle" on the Cisco switches, so the channel looks to be configured correctly. Does ESX just not do true load balancing (spreading traffic evenly across the bundled interfaces)?
Load testing always shows that the first interface takes all the traffic and only goes to the next interface if necessary. Has anyone been able to get ESX to do true load balancing?
thx in advance.
ESX does not do "traditional" load balancing.
With all load balancing approaches - other than IP hash - each vNIC is affiliated with a given pNIC and ALL traffic originating from that vNIC will traverse the same pNIC until there is a failure or some other event causes a change. The load balancing algorithms vary, but - in general - the vNICs get somewhat evenly distributed across the pNICs.
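To illustrate the principle (this is a sketch of the distribution behavior described above, not ESX's actual internal algorithm, and the vNIC/pNIC names are made up), the per-vNIC pinning works roughly like dealing vNICs out across pNICs round-robin and then leaving each affiliation in place until a failure:

```python
import itertools

def assign_vnics(vnics, pnics):
    """Pin each vNIC to one pNIC; the mapping is sticky until failover.

    Round-robin assignment is used here only to show the 'somewhat even
    distribution' idea -- the real ESX teaming policy may choose differently.
    """
    cycle = itertools.cycle(pnics)
    return {vnic: next(cycle) for vnic in vnics}

# Three VMs spread across two physical uplinks: each VM's traffic
# stays on its assigned uplink regardless of load.
mapping = assign_vnics(["vm-a", "vm-b", "vm-c"], ["vmnic0", "vmnic1"])
```

The point is that the balancing unit is the vNIC, not the packet or the flow, so a single busy VM can never use more than one pNIC's worth of bandwidth under these policies.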
With IP hash load balancing (where you have to configure your pSwitch to use static 802.3ad link aggregation), the load balancing is done on a "per conversation" basis. This means that, for each source/destination IP address pair, an affiliation is made to a pNIC and ALL outbound traffic for that address pair will stay pinned to that pNIC until there is a failure or some other event causes a change.
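VMware's documentation describes the IP-hash uplink choice as a function of the low-order bytes of the source and destination IPs, modulo the uplink count; a minimal sketch of that idea (assuming IPv4 dotted-quad strings, and simplified from whatever ESX actually does internally):

```python
def select_pnic(src_ip: str, dst_ip: str, num_pnics: int) -> int:
    """Pick an uplink index for a source/destination IP pair.

    XOR the last octets of the two addresses and take the result modulo
    the number of uplinks in the team. The same IP pair always hashes to
    the same uplink, which is why a single conversation never spans links.
    """
    src_lsb = int(src_ip.split(".")[-1])
    dst_lsb = int(dst_ip.split(".")[-1])
    return (src_lsb ^ dst_lsb) % num_pnics

# Two conversations from one VM can land on different uplinks...
# ...but each individual conversation stays pinned to its uplink.
uplink = select_pnic("10.0.0.5", "192.168.1.20", 2)
```

This is also why IP hash helps most when a host talks to many peers: with only one or two destination IPs (a single iSCSI target, say), every conversation can still hash to the same uplink and you see the "fill and spill" pattern from the original question.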
There is hope that change is coming (no, this is not a statement that a future version of ESX will have true load balancing) - at this point, we can only hope.
Technical Director, Virtualization
VMware Communities User Moderator