denisb0
Contributor

Help migrating an NFS storage VMK from a Standard switch to a Distributed switch.

   Hi!

I am having trouble migrating from Standard Switch to Distributed Switch.

Currently there's the following setup in place:

1 blade server with ESXi 6.5.

The blade has 4 NICs connected to the chassis aggregators.

Two aggregators uplink to a VLT stack of access switches, and two aggregators to a VLT stack of storage switches; auto-VLAN on all host ports, auto-LAG on all uplink ports.

The VCSA and PSC run on this blade, and their VM disks are located on NFS storage.

To simplify, let's say:

vmnic0 and vmnic1 of the blade server are part of the uplink group on the distributed "access switch". The vmk0 port on this switch works fine; I can manage the blade through it.

vmnic2 is part of a standard switch, "tmp-storage-switch". There's a vmk1 on this switch on VLAN 1160; it is used to connect to the NFS storage, and that's how the VCSA and PSC were deployed in the first place.
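For context, this is roughly how the layout looks from the host's SSH shell (just the inspection commands; names per the description above):

    esxcli network ip interface list          # lists vmk0/vmk1 and which switch each sits on
    esxcli network vswitch standard list      # shows tmp-storage-switch with vmnic2 as its uplink
    esxcli network vswitch dvs vmware list    # shows the distributed switch(es) and their uplinks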

I tried the following 2 options:

1. I make vmnic3 part of a distributed switch "distributed-storage-switch", then I create a port group "distribute-vlan-1160" and attempt to migrate vmk1 to this port group. The action fails (see the esxcli sketch after this list).

2. I make vmnic3 part of a distributed switch "distributed-storage-switch", then I create a port group "distribute-vlan-1160", add a new vmk2 and connect it to this port group. I assign an IP from the same subnet as vmk1 and attempt to disconnect vmk1 - the host loses storage and I can't manipulate the DVS settings.
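In esxcli terms, what option 1 boils down to is something like the following, run from the host's SSH shell (a sketch; the dvPort ID and IP are placeholders for my real values, and vmnic3 must already be an uplink of the vDS with a free static-binding port):

    esxcli network ip interface remove --interface-name=vmk1        # storage drops here - exactly the problem
    esxcli network ip interface add --interface-name=vmk1 --dvs-name=distributed-storage-switch --dvport-id=10
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.160.11 --netmask=255.255.255.0 --type=static

These run host-locally, so they don't depend on vCenter being reachable - but the NFS datastore still drops between the remove and the add.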

So: what would be the correct procedure to migrate the storage connection from the Standard switch to the Distributed switch, considering the vCenter VM's storage is on the NFS server accessed via this very same storage connection?

Thanks in advance for the help.

4 Replies
berndweyand
Expert

I hope you have more than one host?

Put one host into maintenance mode, migrate the VMkernel port, and bring the host back online.

That way you run no risk of an all-paths-down condition.
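For example, from the host shell (a sketch; evacuate the VMs first):

    esxcli system maintenanceMode set --enable true      # enter maintenance mode
    # ... migrate the vmkernel port, e.g. as in the esxcli sketch in the first post ...
    esxcli system maintenanceMode set --enable false     # exit maintenance mode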

denisb0
Contributor

Well yes, I have a bunch of chassis full of them. It's a new vCenter deployment; the currently joined chassis is swing gear and I am moving VMs to it (and as blades free up in the old vCenter, moving them too).

The swing gear cluster is to host a few hundred VMs; then I'll empty one chassis in the old deployment, join it as a new cluster to the new vCenter, and so on, eventually moving 4 chassis / 64 blades.

In the original deployment, vCenter was installed on storage local to one blade and then migrated to shared storage, so the above problem didn't exist.

But now the blades have no local storage and the VCSA/PSC are already on an NFS share. So as soon as I attempt to move the vmk port, the change can't be committed to the vCenter DB as it's intermittently unavailable, I guess?

I wanted to set the infrastructure up on one host and then join the rest, but I guess that won't work, or there's a path I should take that I don't know about / didn't manage to google yet.

So I am going to include more hosts, set them up with the DVS, vMotion the vCenter VM there, and then join the first host to the DVS, I guess. Hope that works.

berndweyand
Expert

I had to read this several times to understand.

But as written before: if you put the host into maintenance mode, configure it correctly, and bring it back online, there is no effect on running machines.

The fact that some hosts connect to NFS through a vSS and some through a vDS doesn't matter.

denisb0
Contributor

Hey!

Well, that was my particular problem: any manipulation of the host / storage / VM makes the vCenter VM unable to commit changes to its storage, and thus unable to change the DVS config :)

In the end I just added more blades with the same CPU (EVC was off), connected them to the storage DVS, set up vMotion vmks, and vMotioned the VCSA and PSC VMs to those blades.
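The vMotion vmks were set up along these lines (a sketch; the dvPort ID and address are placeholders for my real values):

    esxcli network ip interface add --interface-name=vmk2 --dvs-name=distributed-storage-switch --dvport-id=20
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.170.21 --netmask=255.255.255.0 --type=static
    esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion    # enable vMotion on this vmk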

I thought maybe there was some "reference process" for making the change from a standard switch to a DVS on a single host, but I guess it doesn't matter anymore in my case.

Thanks for the help nonetheless!
