VMware Cloud Community
Steve_ZN
Contributor

Moving to LACP on vSAN without downtime

Hi All

I recently took over a vSAN environment with 2x Dell S4128 10Gb switches and 6 hosts running ESXi 6.7 U2 with 8 NICs each.

Originally the vSAN hosts were connected to a single Dell S4128 (sw1) with one uplink (vmnic0) for vSAN traffic, and to another switch with two uplinks (vmnic4 and vmnic5) for VM traffic.

I have now put in 2x S4128s (sw2 and sw3) in a VLT configuration and created LACP channels from the hosts using vmnic1 and vmnic2, one link to each switch (sw2 and sw3). The LACP channel is established and working. I have uplinked this VLT 'stack' to the original sw1 and confirmed traffic is passing across all switches.

My thinking was to add the LACP channel as a standby uplink, then put the hosts into Maintenance Mode one at a time and disconnect vmnic0, making the LAG active, until all hosts were on the new switches.

When I tried to do this I got an error saying that this configuration is only supported as an intermediate step, and my research confirmed the caveat: either a single LAG or only standalone uplinks can be active, not both. This sounds to me like if one host starts using the LAG, all hosts will start using the LAG, which may cause downtime or make the vSAN hosts unreachable from each other.

Please can someone advise me on the best way to achieve what I am trying to do without downtime?

My next option will be to arrange downtime and shut down all VMs and make the change, but I'm hoping to avoid this.

Thanks

Steve

2 Replies
dyadin
Enthusiast

The answer is simple: you'll need two virtual switches for vSAN. One is your current vSwitch that uses vmnic0 (let's call it vSwitch1), and the other is a new vDS that uses vmnic1 and vmnic2.

The vSAN VMkernel adapter is currently on vSwitch1.

You need to configure a LAG on the new vDS and add a new port group, say vSAN-PG: set its VLAN ID, and set the failover order so that the only active uplink of vSAN-PG is lag0, with vmnic1 and vmnic2 unused. To test that everything is configured correctly, you can attach a test VM to vSAN-PG, give it an IP address on the vSAN network, and check whether it can reach all the vSAN VMkernel addresses in the cluster.

And last, do a VMkernel migration: select the vDS, right-click > Manage Hosts > Migrate VMkernel.
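If you'd rather script the port-group step than click through the UI, here is a rough pyVmomi (vSphere Python SDK) sketch of it. The vCenter address, credentials, the vDS name "dvs-vsan", the LAG name "lag0" and VLAN 40 are placeholders I'm assuming, not values from your environment.

```python
# Sketch only: create a vSAN-PG port group whose teaming policy has lag0 as
# the ONLY active uplink and nothing in standby, so the standalone vmnics
# stay out of the failover order. All names/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed switch by name (assumed name "dvs-vsan").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvs-vsan")
view.Destroy()

# Failover order: lag0 active, nothing in standby; vmnic uplinks are not
# listed at all, so they take no part in this port group's teaming.
order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
    inherited=False, activeUplinkPort=["lag0"], standbyUplinkPort=[])
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False, uplinkPortOrder=order)

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming,
    vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        inherited=False, vlanId=40))  # assumed vSAN VLAN

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="vSAN-PG", type="earlyBinding", numPorts=16,
    defaultPortConfig=port_config)

WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
Disconnect(si)
```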

Please consider marking this answer "correct" or "helpful" if you think your query has been answered correctly. Cheers, Matt Zhang VCIX-NV | VCP-NV-CMA-DTM | CCDA | CCIE R&S
Steve_ZN
Contributor

Thanks for the suggestion, Dyadin.

After discussing it with VMware support, the solution turned out to be a little simpler: I created a new port group on the same DVS that used the LAG as the only active uplink (with no standby adapters), then migrated the vSAN VMK of each host onto the new distributed port group.
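For anyone landing here later who wants to script the same VMK move, below is a rough pyVmomi sketch of repointing one host's vSAN VMkernel adapter to the new LAG-backed port group. The host name, the vmk device ("vmk1") and the port group name are placeholders I'm assuming, not the actual values from my environment.

```python
# Sketch only: move an existing vSAN vmkernel adapter (assumed vmk1) onto the
# new LAG-backed distributed port group, one host at a time. All names and
# credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with that name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

host = find_by_name(vim.HostSystem, "esx01.example.local")  # one host at a time
pg = find_by_name(vim.dvs.DistributedVirtualPortgroup, "vSAN-PG")

# Reuse the existing IP/MTU settings of the vSAN vmknic and change only
# which distributed port group it connects to.
old = next(v for v in host.config.network.vnic if v.device == "vmk1")
spec = vim.host.VirtualNic.Specification(
    ip=old.spec.ip,
    mtu=old.spec.mtu,
    distributedVirtualPort=vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=pg.config.distributedVirtualSwitch.uuid))

host.configManager.networkSystem.UpdateVirtualNic("vmk1", spec)
Disconnect(si)
```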

