VMware Cloud Community
bigmetraton
Contributor

oh no another question about load balancing

Hi to all!

I think I've read nearly every topic and doc about load balancing and trunking in VMware... and here we are, with our new blade enclosure + 4 Cisco 3020s and our external core switch, a 4750 with a brand-new module of 48 GbE ports.

Our enclosure holds 10 blades, each with a quad-port Gigabit NIC. Every NIC is hard-wired through the backplane to the same-numbered internal port of each Cisco switch; that is, blade 3 NIC 1 goes to Cisco 1 internal port 3, blade 3 NIC 2 to Cisco 2 port 3, blade 3 NIC 3 to Cisco 3 port 3... and so on. Given that fixed wiring, I have to implement an effective load balancing policy.

So, because of the hard-wired layout, I can't build an EtherChannel from any blade's physical NICs to the internal ports: every NIC lands on a different switch (all Cisco internal ports are trunk ports carrying 3 VLANs). I do use EtherChannel on ALL external ports (also trunks) of the 3020s: four 6 Gb channels (one per 3020) up to my core 4750, where I defined four matching EtherChannels.

With this config I set up the following test scenario:

SERVER A: one ESX host with one vSwitch, 4 virtual machines, and three physical NICs bonded to the vSwitch. Load balancing methods tested (I tried both): IP hash and virtual port ID.

SERVER B: another ESX host with one vSwitch, 1 virtual machine, and three physical NICs bonded to the vSwitch. Same two load balancing methods tested: IP hash and virtual port ID.

Results: the 3 VMs in A opening connections to the VM in B use only one NIC outbound (?). From what I've read, three different connections should use three different NICs.

The VM in B opening connections to the 3 VMs in A also uses only one NIC.

If I kill vmnic1 (the NIC in use), it fails over and uses NIC2 (the failover rules work!).

With this scenario, is there any chance I can balance (not just fail over) inbound traffic? It seems that inbound traffic (from outside to the blade machines) always uses one EtherChannel member.

Why is my outbound traffic not balanced across NICs?
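From what I've pieced together, the two teaming policies pick uplinks very differently, which could explain what we're seeing. Here's a minimal Python sketch of the idea (my own simplification for illustration, NOT VMware's exact hash; the octet-XOR hash and port numbering are assumptions):

```python
# Simplified sketch of vSwitch uplink selection for the two policies
# tested above. NUM_UPLINKS = 3 matches the three bonded pNICs.
NUM_UPLINKS = 3

def uplink_by_port_id(virtual_port_id: int) -> int:
    """Route based on originating virtual port ID: each VM's port is
    pinned to one uplink, no matter how many peers it talks to."""
    return virtual_port_id % NUM_UPLINKS

def uplink_by_ip_hash(src_ip: str, dst_ip: str) -> int:
    """Route based on IP hash (simplified): the same src/dst pair always
    maps to the same uplink; only different pairs can spread out."""
    def last_octet(ip: str) -> int:
        return int(ip.split(".")[-1])
    return (last_octet(src_ip) ^ last_octet(dst_ip)) % NUM_UPLINKS

# One VM (virtual port 5) opening three connections: port-ID policy
# keeps ALL of them on the same single uplink.
print(uplink_by_port_id(5), uplink_by_port_id(5), uplink_by_port_id(5))

# The same VM under IP hash, talking to three different peers: the
# pairs can hash to different uplinks, but one fixed pair never spreads.
peers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
print([uplink_by_ip_hash("10.0.0.1", p) for p in peers])
```

If this is roughly right, it would mean the single VM in B always leaves on one NIC under virtual port ID, and even under IP hash a single VM pair only ever uses one NIC per conversation.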

Does anybody run a similar VMware config with 4 internal Cisco 3020 switches? To me, pass-through modules straight to two external Cisco 3750s plus ONE EtherChannel to my 4750 looks like the better load-balancing solution (with that layout I can EtherChannel both inbound and outbound).

A config with four internal Cisco 3020s seems very common in other datacenters running VMware, so surely I'm missing something!

3 Replies
RParker
Immortal

> With this scenario is there any chance i can balance (not failover) traffic inbound ? seems that the inbound traffic (from external to blade machines) allways uses one etherchannel.

The vSwitch teaming policy applies to OUTBOUND traffic only. You can't load balance inbound from the ESX side.
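To expand on that: the inbound path is decided by the physical switch, which forwards a unicast frame to the single port where it last learned the destination MAC. Since the vSwitch sends a given VM's traffic out one uplink, the switch only ever sees that VM's MAC on one port. A toy sketch of that MAC-learning behavior (hypothetical MAC and port values, just for illustration):

```python
# Toy model of a physical switch's MAC learning table, showing why
# inbound traffic to a VM follows a single NIC: the switch keeps
# exactly ONE port per learned MAC and forwards there.
mac_table = {}

def learn(mac: str, port: int) -> None:
    """A MAC is associated with one port at a time; relearning moves it."""
    mac_table[mac] = port

def forward(dst_mac: str):
    """Forward to the learned port, or flood if the MAC is unknown."""
    return mac_table.get(dst_mac, "flood")

vm_mac = "00:50:56:aa:bb:01"   # hypothetical VM MAC
learn(vm_mac, 3)               # VM always transmits via the uplink on port 3
print(forward(vm_mac))         # so ALL inbound frames to the VM exit port 3
```

Only a port-channel (EtherChannel/LACP) makes the switch treat several ports as one logical port, which is what would let inbound traffic spread.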

kukacz
Enthusiast

In my opinion there is no option other than EtherChannel to balance incoming traffic.

You can send a feature request to VMware to implement open standard 802.3ad LACP instead of proprietary Cisco EtherChannel. With LACP you would be able to team across multiple switches.

--

Lukas Kubin

metraton
Contributor

We continued our tests and discovered that, between machines in the same VLAN on different blades, VMware DOES NOT balance outbound internally. If the peer machine is outside the blades (but in the same VLAN), VMware does balance outbound (sometimes...). It seems you MUST use EtherChannel to balance outbound traffic between virtual machines internally, and with the four-3020 Cisco solution no inbound balancing is possible at all.

So, at least in our experience, be careful before buying the four expensive internal Ciscos for VMware configs. I'd prefer pass-through modules plus some external (Cisco) switches; then I can build an EtherChannel across every ESX host's 4 NICs and balance both outbound and inbound.

Any ideas?
