logiboy123
Expert

Migrate from vDS to vSS with least downtime

I need to migrate 8 hosts in a cluster off a single vDS and onto two vSS.

Current configuration is 4 x 1Gb NICs for Management, vMotion and VM Networking. Storage is via FC. vSphere 4.1 and the hosts are ESXi.

The final configuration I need for each host is as follows:

vSwitch0

Management - vmnic0 active / vmnic3 standby

vMotion - vmnic3 active / vmnic0 standby

vSwitch1

VM Networking - vmnic1/vmnic2 active/active

I'm looking for the least disruptive way to go about making this change. Scenarios I have come up with so far are as follows. I would greatly appreciate it if anyone could find issues/flaws/outages required in any of these steps:

Option 1

Add two new hosts to the cluster.

Make one host a final host and the second a staging host.

Configure the staging host with two vSS and also attach it to the vDS - One uplink per vSS and two uplinks in the vDS.

vMotion all guests from a single legacy host to the staging server.

Change the port assigned for each VM to use a vSS.

vMotion all VMs on the staging server to the final server.

Disconnect the single legacy host from the vDS and reconfigure solely with vSS using all four uplinks.

Add host back into cluster, it is now another Final host.

Rinse and repeat process for each host until all hosts are using vSS instead of vDS.

I need to keep the original management IP and move it to vSwitch0. I'm thinking it might be better to vMotion all VMs off the host first, modify host networking from the console of the ESXi box and then vMotion several VMs back to it before commencing option 1.

Option 2

On each host in the cluster remove 2 uplinks from the vDS.

Create two new vSS on each host and attach a single NIC per switch.

Use PowerShell to change the port used by all VMs on a specified vDS portgroup to the corresponding vSS portgroup.

Repeat the above step for each portgroup.

Remove all NICs from the vDS and attach them to the corresponding vSS.
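The PowerShell step in Option 2 could be sketched in PowerCLI along these lines. This is a rough sketch, not a tested script: it assumes an existing Connect-VIServer session against vCenter, and the portgroup names are placeholders for your own.

```powershell
# Sketch only -- assumes a live PowerCLI session (Connect-VIServer).
# Portgroup names below are placeholders; substitute your own.
$oldPg = "vDS-VM-Network"   # portgroup on the vDS
$newPg = "vSS-VM-Network"   # matching portgroup on the new vSS

# Re-point every VM NIC that currently uses the vDS portgroup.
Get-VM |
    Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq $oldPg } |
    Set-NetworkAdapter -NetworkName $newPg -Confirm:$false
```

Run per portgroup; the change is non-disruptive to the guest apart from a momentary blip as the vNIC re-registers on the new portgroup.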

Option 3?

Feedback/concerns/thoughts all welcome. The best ideas and considerations will get points by the end of the week. I need to be able to do this work spaced out over a week, with as little disruption to the environment as possible. My biggest concern atm is what happens when I take vmnic0 (primary management nic) and move it to the vSS. I'm pretty sure that is going to disconnect me from the host. Will I need two management IP addresses? If so will this cause problems etc?

Cheers,

Paul

8 Replies
rickardnobel
Champion

Hello Paul,

To me, Option 2 seems fully functional and should work.

My biggest concern atm is what happens when I take vmnic0 (primary management nic) and move it to the vSS. I'm pretty sure that is going to disconnect me from the host. Will I need two management IP addresses? If so will this cause problems etc?

Are you thinking that vCenter would lose contact with the host while the vmnic is moved from the vDS to the vSS? As you mention, you can always create a second VMkernel NIC with the Management flag and another, temporary IP on the standard switch, and verify that you can attach to that IP beforehand.
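A rough sketch of that temporary second management interface, run from the ESXi Tech Support Mode console (switch/portgroup names and the 192.168.1.99 address are placeholders, and on 4.1 the Management traffic tick-box on the new vmknic still needs to be set from the vSphere Client):

```shell
# Sketch only -- run on the ESXi host console; names and IP are placeholders.
esxcfg-vswitch -a vSwitch0                      # create the standard switch
esxcfg-vswitch -L vmnic3 vSwitch0               # link a freed uplink
esxcfg-vswitch -A "Mgmt-Temp" vSwitch0          # add a management portgroup
esxcfg-vmknic -a -i 192.168.1.99 -n 255.255.255.0 "Mgmt-Temp"   # temp vmknic
```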

I might be able to test the movement of the management vmnic later this week, however only on ESXi 5.

My VMware blog: www.rickardnobel.se
logiboy123
Expert

To cater for the host disconnect I could use the following process:

1) Log into blade chassis console and confirm availability of access at the console level.
2) Update cluster to temporarily disable alerting, host isolation response and set DRS to manual mode.
3) Put host into maintenance mode and evacuate all VMs off host.
4) In host configuration, remove vmnic3 from vDS, then remove vmnic0 as well. At this point the host will become disconnected from vCenter.
5) Remove host from vCenter inventory.
6) Enter the admin console on blade chassis remote session.
7) Create management vSS with vmnic0 and vmnic3. Assign all IP addressing information back onto management vSS.
8) Join host back into vCenter inventory.
9) Bring host out of maintenance mode.
10) vMotion test VM to host and test connectivity on VM Networking vDS.
11) Repeat steps 3 through 10 for each host in the cluster.
12) Update cluster to re-enable alerting, host isolation response and set DRS to Automatic.
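Step 7 above could be sketched from the blade console session roughly as follows. Again a sketch, not a tested procedure: the switch/portgroup names, addressing and gateway are placeholders, and the intent is to reuse the host's original management IP on the new vSS.

```shell
# Sketch only -- run on the ESXi host console; names/IPs are placeholders.
esxcfg-vswitch -a vSwitch0                          # new standard switch
esxcfg-vswitch -L vmnic0 vSwitch0                   # attach both freed uplinks
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -A "Management Network" vSwitch0     # management portgroup
esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 "Management Network"   # original mgmt IP
esxcfg-route 10.0.0.1                               # restore default gateway
```

Verify with `esxcfg-vswitch -l` and a ping of the management IP before rejoining the host to vCenter.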

Then at a later date I could migrate VM Networking using the following process:

1) Drop vmnic1 uplink from vDS.

2) Create vSS2 and add vmnic1.

3) Create VM Networking port for each VLAN required.

4) Use PowerShell to migrate the port being used for each VM.

5) Drop last vmnic2 out of vDS.

6) Attach vmnic2 onto vSS2.

If you could confirm the management network disconnect for me that would be awesome. I have a feeling that it would still drop the host unless I leave the original IP address on the vDS, so at some point I would have to transfer this to the vSS, and I don't think there is any way to do this without losing host-to-vCenter connectivity.

Regards,

Paul

keiooz
Enthusiast

I would like to know the big difference between vDS and vSS.

logiboy123
Expert

vSS = virtual standard switch

vDS = virtual distributed switch

vDS is only available in Enterprise Plus licensing.

AnthonyChow
Hot Shot

Kelly Ooz wrote:

I would like to know the big difference between vDS and vSS.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=101055...

rickardnobel
Champion

Kelly Ooz wrote:

I would like to know the big difference between vDS and vSS.

The largest difference, in my opinion, is that the vDS offers easier management of many hosts with many network portgroups.

My VMware blog: www.rickardnobel.se
Texiwill
Leadership

Hello,

At a minimum, the vDS has the following functionality over the standard vSwitch:

* IP Pools

* NetIOC

* Load Based Teaming

* PVLANs

* Port Mirroring (vSphere 5 req)

* vCenter Required as it is the management plane

* vCenter Data Center wide vSwitch (ease of management)

* Netflow

* Network vMotion

Moving from a vDS to a standard vSwitch is easy: create the vSwitch on each host with the same name (case sensitive), create the necessary portgroups, and then go to each VM and change its network to the name of the corresponding portgroup.
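Creating identically named switches and portgroups across all hosts can be scripted in PowerCLI along these lines. A sketch under assumptions: the cluster name, uplinks, portgroup names and VLAN IDs below are placeholders, and a live Connect-VIServer session is assumed.

```powershell
# Sketch only -- assumes a live PowerCLI session; all names/VLANs are placeholders.
foreach ($esx in Get-Cluster "Cluster01" | Get-VMHost) {
    # Same switch and portgroup names on every host -- names are case sensitive.
    $vss = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic1,vmnic2
    New-VirtualPortGroup -VirtualSwitch $vss -Name "VM Network 10" -VLanId 10
    New-VirtualPortGroup -VirtualSwitch $vss -Name "VM Network 20" -VLanId 20
}
```

Scripting this avoids the typo risk that would otherwise break VM portgroup matching between hosts during vMotion.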

Best regards,
Edward L. Haletky
VMware Communities User Moderator, VMware vExpert 2009, 2010, 2011

Author of the books 'VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers', Copyright 2011 Pearson Education. 'VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment', Copyright 2009 Pearson Education.
vSphere Upgrade Saga -- Virtualization Security Round Table Podcast

--
Edward L. Haletky
vExpert XIV: 2009-2022,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
logiboy123
Expert

I agree that changing the VMs over to the vSS is going to be easy. However, changing the management uplinks over to the vSS is the challenge I'm working on atm. Thank you for confirming the VM migration process though.

Cheers,

Paul
