VMware Cloud Community
bpete
Contributor

Sharing an Etherchannel between one vSwitch and one DVS -- is it possible?

My networking skills around link aggregation and Etherchannel are not particularly robust, so I have been researching the posts here but have not found an answer that directly relates to my questions. Network gurus, kindly offer your input on my dilemma below.

Background: There are several production vSphere 5.1 clusters, each of which consists of two ESXi hosts, and the hosts presently have correctly configured Etherchannel connectivity to the upstream Cisco switches. We are not experiencing any issues, everything performs as expected. vCenter Server 5.1 is managing all clusters, no problems there either.

We are getting ready to migrate ESXi hosts off a Windows vCenter 5.1 implementation to a new vCenter 5.5 appliance. Because we are moving to a new vCenter instance rather than upgrading an existing installation, I'm not sure how to proceed with migrating the networking. My initial plan is to keep the Etherchannel in place until after everything is on the new vCenter:

1. Configure a standard vSwitch on each host for Etherchannel connectivity.
2. Move some uplinks off the DVS and over to the standard vSwitch.
3. Migrate VM networking from the DVS to the standard vSwitch.
4. Move the remaining uplinks from the DVS to the standard vSwitches and remove the DVS altogether.
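To make the plan concrete, the host-side commands I have in mind look roughly like this. This is a sketch only; vSwitch1, vmnic4, the port group name, VLAN ID, dvSwitch0, and the dvPort ID are placeholders for our actual values:

```shell
# Create the standard vSwitch and mirror the DVS port group / VLAN on it
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="VM Network" --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=100

# Match the Etherchannel teaming policy (Route based on IP hash)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

# Move an uplink: detach it from the DVS, then attach it to the standard vSwitch
esxcfg-vswitch --del-dvp-uplink vmnic4 --dvp <dvPort-id> dvSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
```

That is just how I picture the mechanics; whether doing this while the Etherchannel is up is safe is exactly what I'm asking about below.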

My first question is: does it make sense to dismantle the DVS and move VM networking to identically configured standard vSwitches as part of moving ESXi hosts to a new vCenter? What would you do?

Next question: on a host with eight uplinks configured as a single Etherchannel, can I break out four of those uplinks and move them to an identically configured standard vSwitch without losing connectivity for any VMs that might be using those uplinks? Again, what would you consider the best approach?

Finally, instead of moving four uplinks into a standard vSwitch configured for Etherchannel, what if I proposed removing the Etherchannel configuration from the DVS altogether, in preparation for moving the networking to standard vSwitches? Should I expect a loss of network connectivity as I change port groups away from "Route based on IP hash" to another teaming policy?
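If we went that route, my understanding is that the teaming policy on the vSphere side and the channel-group on the Cisco side have to be unwound together, or there is a window where one end hashes across a bundle the other end no longer has. Which end to change first is part of what I'm unsure about. Something like the following on each end (interface and vSwitch names are placeholders, and the esxcli line shows the standard-vSwitch form; on the DVS the equivalent change would be made in the vSphere client):

```shell
# Cisco side, per member interface: take the port out of the channel
#   configure terminal
#   interface GigabitEthernet1/1
#     no channel-group
#   end

# vSphere side: once the channel is gone, move off "Route based on IP hash"
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 --load-balancing=portid
```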

As you can tell, I'm especially interested in making "seamless" changes that will not impact network connectivity on production servers.

Any critique and advice welcomed.

Thanks,

Brian

1 Reply
bpete
Contributor

Follow-up to my original post:

It is NOT advised to try to share an Etherchannel between multiple vSwitches; problems WILL manifest.

We ran the following test:

On an ESXi host, eight vmnic uplinks are members of a DVS that is properly configured with Etherchannel to the upstream Cisco 6509-E switch. In our test, we removed four of the vmnic uplinks and attached them to a standard vSwitch on the ESXi host that we created specifically for this test and provisioned with the same port groups and VLAN configurations as the DVS. After swinging the ports to the standard vSwitch, we moved one virtual machine's networking to the standard vSwitch.

With this configuration in place, we almost immediately noticed problems with several virtual machines. The VM connected to the standard vSwitch responded to ping initially, but lost connectivity after several seconds. Other virtual machines still on the DVS also lost connectivity, presumably because the uplinks they had been using were moved to the standard vSwitch. Also, as near as we can tell, there was no Etherchannel failover of virtual machine networking to the uplinks still on the DVS; perhaps my understanding of Etherchannel is incorrect, but that seems like something that should have happened.
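A simplified model of what I think happened (this is an illustration of hash-based link selection in general, not VMware's or Cisco's exact algorithm, and the addresses are hypothetical): both ends hash each flow over the set of links they believe make up the channel, and after the split the two ends no longer agree on what that set is.

```shell
#!/bin/bash
# pick_link: hash a src/dst address pair over N links and return the
# chosen link (a toy model, not the real IP-hash implementation).
pick_link() {            # $1 = src IP, $2 = dst IP, $3 = number of links
    echo "vmnic$(( ($1 ^ $2) % $3 ))"
}

src=0x0A000101; dst=0x0A000104   # hypothetical VM and peer addresses

# The Cisco switch still hashes across the full 8-port channel,
# but after the split the DVS only has 4 of those uplinks.
switch_choice=$(pick_link "$src" "$dst" 8)
host_choice=$(pick_link "$src" "$dst" 4)

echo "switch sends on $switch_choice, host expects traffic on $host_choice"
# → switch sends on vmnic4, host expects traffic on vmnic1
```

For this flow the switch delivers return traffic on a port that no longer belongs to the vSwitch the VM is on, which matches the symptom we saw: VMs losing connectivity on both sides of the split with no failover.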

Because this is a production environment, we were unable to do further testing or do VM restarts to see if networking could be restored to the affected virtual machines. Moving uplinks back to the DVS and restoring the virtual machine networking to its original state resolved the lost connectivity alarms.

The testing we did, while not as comprehensive as I would have liked, indicates that we are looking at removing the Etherchannel configuration from the upstream Cisco switches before we can migrate ESXi hosts into a new vCenter. The test also indicates that this is unlikely to be a seamless process, and that virtual machine network connectivity could be temporarily impacted no matter what steps we take to mitigate.

If anyone knows about another approach to seamless migration of ESXi hosts between vCenter servers without losing VM network connectivity, especially in an Etherchannel environment, please contribute your knowledge to this topic.

Thanks,

Brian
