VMware Cloud Community
tekhie123
Contributor

Considerations for NFS vmk ports when moving a host from one Distributed Switch to another

Hi everyone, I wonder if you can assist with the following ...

I currently have a number of ESXi hosts attached to dvswitch0 for management and NFS connectivity. VM networking has already been migrated to the new dvswitch. Are the following assumptions correct?

I need to remove a vmnic from the current vDS so that I can create a standard vSwitch and migrate vmk0 (management network) to it before removing the host from dvswitch0?

When I have done this and remove the host from dvswitch0, will I lose connectivity on my NFS vmks? Or will they continue to work, since the IP info is configured locally on the ESXi host? I have recreated the port groups for NFS on the new dvswitch with the same names and config as on the dvswitch I want to move away from.

If connectivity will be lost, I assume I need to migrate my NFS vmks to a standard vSwitch before removing the host from the vDS, the same as my management network, and then migrate them all over using the wizard when I add the host to the new dvswitch.
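For reference, the intermediate standard-vSwitch step described above could be sketched from the ESXi shell roughly as follows. This is illustrative only: the vSwitch, port group, vmnic, VLAN, and IP values are placeholders, and since removing/re-adding vmk0 briefly drops management connectivity, this is best run from the host console (DCUI/SSH session you can afford to lose), not through the interface being moved.

```shell
# Build a temporary standard vSwitch using an uplink already freed from the vDS.
# All names, VLAN IDs, and addresses below are placeholder values.
esxcli network vswitch standard add --vswitch-name=vSwitch-temp
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch-temp

# Recreate the management port group on the standard vSwitch (match the original VLAN)
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt-temp --vswitch-name=vSwitch-temp
esxcli network vswitch standard portgroup set --portgroup-name=Mgmt-temp --vlan-id=10

# Move vmk0 off the vDS: remove it, re-add it on the standard port group,
# then re-apply its (locally stored) IP configuration as static
esxcli network ip interface remove --interface-name=vmk0
esxcli network ip interface add --interface-name=vmk0 --portgroup-name=Mgmt-temp
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=192.168.10.21 --netmask=255.255.255.0 --type=static
```

The same remove/add pattern applies to the NFS vmks if they also need to pass through a standard vSwitch; in practice the vSphere client's "migrate vmkernel adapter" wizard does this in one step.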

If connectivity will not be lost, I assume that when I add the host to the new dvswitch and migrate my vmk0, the NFS vmks will appear in the port groups on the new vDS, as my port groups have been configured with the same names?

Are the above assumptions correct?

Thanks in advance


1 Solution

Accepted Solutions
chriswahl
Virtuoso

If connectivity will not be lost, I assume that when I add the host to the new dvswitch and migrate my vmk0, the NFS vmks will appear in the port groups on the new vDS, as my port groups have been configured with the same names?

As long as there is a valid vmnic attached to your destination vSwitch, and a port group on the correct VLAN, the migration of your NFS vmk interface will be successful.

I'd suggest doing this sort of work while the host is in maintenance mode; there will be a small window of time during the migration where the host loses connectivity to the NFS datastore.
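As a sanity check after the move, the placement and reachability of the NFS vmkernel interface can be confirmed from the ESXi shell; the vmk number and array IP below are placeholders:

```shell
# List vmkernel interfaces and their port groups to confirm the vmk
# landed on the expected port group after migration
esxcli network ip interface list

# Confirm the IP configuration survived the move
esxcli network ip interface ipv4 get

# Ping the NFS array sourced from the specific NFS vmkernel interface
# (vmk2 and 192.168.20.50 are placeholder values)
vmkping -I vmk2 192.168.20.50

# Verify the NFS datastore is still mounted and accessible
esxcli storage nfs list
```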

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators


4 Replies
tekhie123
Contributor

Thanks very much for the response ... very useful. I have one further question if I may ... I hope it makes sense 😉

I currently have 6 hosts in a cluster attached to dvswitch0, which provides the management and vMotion port groups. I have some new ESXi hosts that will be added to the cluster on a new dvswitch1. The vMotion network for the existing hosts is x.x.131.x, and a new vMotion network has been provisioned on x.x.144.x, which is what the new ESXi hosts going into the cluster are currently configured for. The end result is to reconfigure the existing hosts in the cluster so that they are no longer on dvswitch0 but on dvswitch1, which will provide port groups for management, IP storage, VM guest traffic, etc. All hosts in the cluster ultimately end up on dvswitch1.

The proposed solution, to prevent VM downtime, is to create the new 144 vMotion port groups on dvswitch0, move the existing hosts' vMotion vmks on dvswitch0 to the new 144 port groups, and change the IPs of the vMotion interfaces from x.x.131.x to x.x.144.x. (This way the vMotion IP addresses of the current hosts are on the same subnet as the new hosts being added to the cluster.)
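The re-IP portion of that step could be done per interface from the ESXi shell; as a hedged sketch (vmk names, addresses, and netmask are placeholders, and the port-group reassignment itself would be done in the vSphere client):

```shell
# Re-address one vMotion vmkernel interface from the old x.x.131.x subnet
# to the new x.x.144.x subnet. vmk3 and 10.0.144.21 are placeholder values;
# repeat for each of the six multi-NIC vMotion interfaces on the host.
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.0.144.21 --netmask=255.255.255.0 --type=static

# Confirm the new address took effect
esxcli network ip interface ipv4 get --interface-name=vmk3
```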

Evacuate an existing host's VMs onto one of the new ESXi hosts added to the cluster, then reconfigure the existing host from dvswitch0 to dvswitch1. Repeat for the remaining 5 existing hosts in the cluster. Once all the hosts are on dvswitch1, rebalance the VMs and remove dvswitch0.

So my question ultimately is as follows ... if I have an ESXi host attached to dvswitch0 with vMotion port groups configured for the x.x.144.x subnet, and another ESXi host on dvswitch1 with vMotion port groups configured for the x.x.144.x subnet, can I vMotion between the 2 hosts? Or will the fact that the vMotion port groups are on 2 different dvswitches prevent this from happening in some way? The vMotion port groups, btw, will have the same names on both dvswitches, and there will be 6 of them to facilitate multi-NIC vMotion.

I hope that's clear 😉 Any info would be much appreciated.

chriswahl
Virtuoso

if I have an ESXi host attached to dvswitch0 with vMotion port groups configured for the x.x.144.x subnet, and another ESXi host on dvswitch1 with vMotion port groups configured for the x.x.144.x subnet, can I vMotion between the 2 hosts? Or will the fact that the vMotion port groups are on 2 different dvswitches prevent this from happening in some way?

It doesn't matter which port group or vSwitch the vMotion vmkernel interface is on; vMotion traffic will flow so long as the source and target host can reach one another via the assigned IP addresses.
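Reachability between the two hosts over the vMotion network can be verified before attempting a migration; in this sketch the vmk numbers and the peer's vMotion IP are placeholder values:

```shell
# From a host on dvswitch0, ping the peer host's vMotion address on dvswitch1,
# sourced from a specific vMotion vmkernel interface (placeholder values)
vmkping -I vmk3 10.0.144.31

# With multi-NIC vMotion, test each of the six vMotion vmks in turn
for vmk in vmk3 vmk4 vmk5 vmk6 vmk7 vmk8; do
  vmkping -I $vmk 10.0.144.31
done
```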

It's rather common to see this in migration scenarios. Here's an example from a workload migration post I wrote a while back.

[Image: vmotion-no-shared-vds-fun.png]

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
tekhie123
Contributor

That's awesome ... thanks so much for the info. Just what I was looking for 😉
