jak1975
Contributor

Moving ESXi hosts between vCenters with vDS and EtherChannel

Hi there,

I have a requirement to move approximately 20 Dell M630 blade servers from one vCenter to another. The blades use Cisco Nexus B22 fabric extenders in the M1000e chassis, which in turn connect to Cisco Nexus 5K switches. The networking is presented to ESXi as vmnic0 and vmnic1 at 10Gb. On the switch side the links are configured as an EtherChannel. The blades are running ESXi 6.0 and are connected to vCenter 6.0.

The hosts are currently connected to a vDS in the source vCenter, with teaming configured as "Route based on IP hash".

My plan to migrate the ESXi hosts and their VMs to the new vCenter was as follows (a rough PowerCLI sketch of these steps follows the list):

  • Remove vmnic1 from the vDS and add it to a VSS
  • Migrate VMs from vDS to VSS
  • Migrate vmk1 (vMotion) and vmk0 (management) from vDS to VSS
  • Remove vmnic0 from the vDS and add to VSS
  • Remove host from vDS
  • Detach host from vCenter
  • Add host to secondary vCenter
  • Run the vDS migration wizard to migrate all networking from the VSS to the existing vDS in the secondary vCenter.
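
For anyone wanting to script this, here is a rough per-host PowerCLI sketch of the steps above. It's untested, and all names (host, switch, portgroup, VLAN IDs) are placeholders for whatever exists in your environment:

    # Run while connected to the source vCenter (Connect-VIServer).
    $vmhost = Get-VMHost -Name 'esx01.example.com'   # placeholder host name
    $vds    = Get-VDSwitch -Name 'vds-old'           # placeholder vDS name

    # 1. Build a standard switch with portgroups matching the vDS ones (VLANs are placeholders).
    $vss    = New-VirtualSwitch -VMHost $vmhost -Name 'vSwitch1'
    $pgVM   = New-VirtualPortGroup -VirtualSwitch $vss -Name 'vm-network' -VLanId 100
    $pgMgmt = New-VirtualPortGroup -VirtualSwitch $vss -Name 'mgmt' -VLanId 10
    $pgVmo  = New-VirtualPortGroup -VirtualSwitch $vss -Name 'vmotion' -VLanId 20

    # 2. Move vmnic1 from the vDS to the VSS, migrating vmk0/vmk1 in the same operation.
    $vmnic1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name 'vmnic1'
    $vmk0   = Get-VMHostNetworkAdapter -VMHost $vmhost -Name 'vmk0'
    $vmk1   = Get-VMHostNetworkAdapter -VMHost $vmhost -Name 'vmk1'
    Remove-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmnic1 -Confirm:$false
    Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $vmnic1 `
        -VMHostVirtualNic $vmk0, $vmk1 -VirtualNicPortgroup $pgMgmt, $pgVmo -Confirm:$false

    # 3. Repoint the running VMs' network adapters from the vDS portgroup to the VSS one.
    Get-VMHost $vmhost | Get-VM | Get-NetworkAdapter |
        Where-Object { $_.NetworkName -eq 'vds-vm-network' } |   # placeholder vDS portgroup
        Set-NetworkAdapter -Portgroup $pgVM -Confirm:$false

    # 4. Move vmnic0 across as well, then remove the host from the vDS.
    $vmnic0 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name 'vmnic0'
    Remove-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmnic0 -Confirm:$false
    Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $vmnic0 -Confirm:$false
    Remove-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost -Confirm:$false

    # 5. Disconnect the host here, add it to the new vCenter (Add-VMHost), and reverse
    #    the process with the migration wizard or the equivalent cmdlets.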

I know EtherChannel is a problem. For example, when I attempted this in our lab with EtherChannel enabled, I couldn't migrate vmk0 off the vDS. So my idea was to ask our network team to disable EtherChannel on the impacted blades at the beginning of the change.

However, I'm concerned this could also cause problems for the VMs if load balancing on the vDS port groups is left as "Route based on IP hash".

So I was thinking an approach could be to drop vmnic1 from the vDS, change the teaming algorithm on the vDS port groups, and then ask the network team to remove the EtherChannel configuration.

Could anyone advise whether this would work, and if so, what teaming algorithm I should set on the vDS port groups?

Unfortunately I have a requirement to leave the VMs running on the ESXi hosts, and we can't easily afford downtime for a large number of VMs.

Many Thanks in advance.

daphnissov
Immortal

So I was thinking an approach could be to drop vmnic1 from the vDS, change the teaming algorithm on the vDS port groups, and then ask the network team to remove the EtherChannel configuration.

That's probably what I'd recommend doing. Since vmnic0 will be the only uplink at that point, you can use basically any other teaming policy, for example "Route based on originating virtual port ID" or even "Route based on physical NIC load". Expect some dropped packets or momentarily higher latency when you break the EtherChannel configuration and the chassis switches over; there's nothing you can really do about that.

It's in situations like these that EtherChannel tends to complicate things. You may wish to avoid it in the future and just configure your vDS with "Route based on physical NIC load", as it actually provides better NIC utilization and doesn't require any special configuration upstream.
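
A minimal PowerCLI sketch of that teaming change, assuming the vDS is named 'dvSwitch-Prod' (a placeholder; substitute your own) and you're already connected to the source vCenter with Connect-VIServer:

    # Placeholder vDS name; run against the source vCenter before EtherChannel is removed.
    # LoadBalanceSrcId = "Route based on originating virtual port ID";
    # use LoadBalanceLoadBased for "Route based on physical NIC load".
    Get-VDSwitch -Name 'dvSwitch-Prod' |
        Get-VDPortgroup |
        Where-Object { -not $_.IsUplink } |   # skip the uplink portgroup
        Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId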

jak1975
Contributor
Contributor

Many thanks for the response. I was brought in at a late stage, so I'm not really sure why EtherChannel is being used; for the type of VMs being hosted I don't see any real benefit. What I have repeatedly seen are various network complications and issues that have always traced back to EtherChannel. Going forward, I think we will look at removing it and using "Route based on physical NIC load".
