We are in the planning stages of deploying vSphere 4 on a Dell M1000e blade enclosure with M610 and M710 blades, connected to a Cisco 3130 switch on the back end. Each blade has two onboard NICs plus four additional NICs on the adapter cards. For redundancy, we are planning to EtherChannel vmnic0 and vmnic1, then add VLANs for the service console (VLAN 172), vMotion (VLAN 173), and Fault Tolerance (VLAN 174) on two of the six NICs. When I build an ESX host from scratch and add the SC to the service console VLAN on the EtherChannel vSwitch, it cannot communicate with anything. But if I add the SC to vmnic0 outside of an EtherChannel setup, communication works.
We can get the SC working if the EtherChannel is configured after the SC has been added to the vSwitch, but we are not able to add the SC to an already-existing EtherChannel. (I hope I have explained that right!)
We have set the vSwitch properties to "Route based on IP hash". Are we missing something here? Should I be able to add the host's SC to a VLAN that is already part of an EtherChannel configuration while I am building it?
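For reference, here is roughly how the setup described above might be scripted; the Cisco interface names, port-channel number, and port group names are assumptions based on the description, and the "Route based on IP hash" policy itself is set in the vSphere Client (vSwitch properties > NIC Teaming), not from the command line. One thing worth checking: the standard vSwitch only supports static EtherChannel, so the 3130 ports must be configured with "channel-group ... mode on", not LACP or PAgP.

```shell
# Cisco 3130 side (interface names assumed) -- static EtherChannel only:
#   interface range GigabitEthernet1/0/1 - 2
#    channel-group 1 mode on
#   interface Port-channel1
#    switchport mode trunk
#    switchport trunk allowed vlan 172-174

# ESX service console side: link both uplinks to vSwitch0 (created at
# install) and add the VLAN-tagged port groups from the question:
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 172 -p "Service Console" vSwitch0
esxcfg-vswitch -A VMotion vSwitch0
esxcfg-vswitch -v 173 -p VMotion vSwitch0
esxcfg-vswitch -A FT vSwitch0
esxcfg-vswitch -v 174 -p FT vSwitch0
```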
The first question I have to ask is "Why EtherChannel?" While the fact that the NICs are teamed does provide redundancy, the EtherChannel itself is really just for load balancing. EtherChannel might (emphasis on might) make sense on a VM Network, or when attaching to NFS or iSCSI arrays over different IP addresses.
However, EtherChannel and "Route based on IP hash" don't gain you much on "service console", "vmotion", and "fault tolerance logging" ports. For an awesome explanation of why EtherChannel may not behave as you expect it to, and why the other load balancing policies may suit you better, see this excellent post from Ken Cline:
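To illustrate the point: as I understand VMware's documentation of the policy, IP hash picks an uplink by XORing the least significant byte of the source and destination IP addresses and taking the result modulo the number of uplinks. A fixed pair of endpoints, such as the service console talking to its default gateway, therefore always lands on the same NIC, so the second uplink in the channel carries none of that traffic. A quick sketch (the last octets here are made up):

```shell
# IP-hash uplink selection: (src_last_octet XOR dst_last_octet) mod uplinks.
# With a fixed source/destination pair the result never changes, so the
# service console only ever uses one of the two teamed NICs.
uplinks=2
src=10     # hypothetical SC address ending in .10
dst=1      # hypothetical gateway address ending in .1
echo $(( (src ^ dst) % uplinks ))   # always the same uplink index
```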
I hope this helps.
Don't forget to mark this answer "correct" or "helpful" if you found it useful (you'll get points too).