Moving VMkernel ports

Currently we have two vSwitches that we would like to modify:

vSwitch0, connected to 2 physical NICs (vmnic0 and vmnic3):

- Management VMkernel port

vSwitch1, connected to 2 physical NICs (vmnic4 and vmnic5):

- FT VMkernel port
- vMotion VMkernel port

We would like to consolidate all of those ports onto one vSwitch, with vMotion and management using one vmnic as primary and another as standby, and FT using the reverse.

The question is which way should I move them? Can you even remove the Management port from vSwitch0, and if you do, won't you lose your vSphere Client connection?

I'm leaning towards moving FT and vMotion onto vSwitch0, especially since we haven't actually deployed any FT machines yet; we've just set it up for a future project. I'm guessing all I'd need to do is remove those ports from vSwitch1, add them to vSwitch0, and then set the explicit failover settings.
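For what it's worth, that move can be scripted per host from the ESXi shell. This is only a sketch: it assumes ESXi 5.x esxcli, a vMotion portgroup named "vMotion" backed by vmk1, and example IP addressing — adjust the names and addresses to your environment.

```shell
# Remove the vMotion VMkernel interface and its portgroup from vSwitch1
esxcli network ip interface remove --interface-name=vmk1
esxcli network vswitch standard portgroup remove --portgroup-name=vMotion --vswitch-name=vSwitch1

# Recreate the portgroup on vSwitch0 and re-add the VMkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# Re-enable vMotion on the interface (ESXi 5.1+; on older hosts use: vim-cmd hostsvc/vmotion/vnic_set vmk1)
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

The same pattern would apply to the FT portgroup and its vmknic.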

Can I do this live? We have 11 hosts in the cluster, and I know that vMotion will potentially fail until all the changes are made. I'm thinking we should put DRS in manual mode while we do this, or just turn off HA/DRS altogether temporarily.

Does anyone see anything wrong with this? My main question is whether we can move the management port or not. And assuming we don't need vMotion while the changes are being made, will moving the vMotion ports have any effect on the VMs or hosts?


Before you decide how to configure your traffic, I recommend reading the following article, which gives a good guide to working with vSS.

You would definitely want to move your vMotion and FT, not your Management. If you change your management layer, it will break the connection between vCenter and ESXi. You can make this change at any time without interrupting service, but I would set DRS to manual mode during the changes.

I would use 3 NICs in your situation, as the 4th NIC is wasted unless you have massive volume transfers going on. FT recommends 1 Gb, and if you do performance testing you will find that a 1 Gb NIC can support roughly 3-4 VM instances at a time, so scale your solution accordingly.


Management - VLAN1 - active/standby/standby - vmnic0/vmnic3/vmnic5

vMotion - VLAN2 - standby/active/standby - vmnic0/vmnic3/vmnic5

FT - VLAN3 - standby/standby/active - vmnic0/vmnic3/vmnic5
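That explicit failover order can be set per portgroup from the ESXi shell. A sketch, assuming ESXi 5.x esxcli and portgroups named "Management Network", "vMotion", and "FT" — substitute your own portgroup names:

```shell
# Management: vmnic0 active, vmnic3/vmnic5 standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic3,vmnic5

# vMotion: vmnic3 active, vmnic0/vmnic5 standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion --active-uplinks=vmnic3 --standby-uplinks=vmnic0,vmnic5

# FT: vmnic5 active, vmnic0/vmnic3 standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=FT --active-uplinks=vmnic5 --standby-uplinks=vmnic0,vmnic3
```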




Thanks, that's helpful. Yeah, I figured moving the management port would cause issues I didn't want to deal with, which is why I wanted to move vMotion and FT instead.

I'm not using 4 NICs for the management/vMotion/FT vSwitch, only 2.

vMotion/Management are configured with vmnic0 as active and vmnic3 as standby.

FT has vmnic3 as active and vmnic0 as standby.

The other 4 NICs will be used for network/backups and a third-party app that requires a dedicated NIC.
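For reference, that reversed two-NIC failover order could be applied per host with something like the following — a sketch assuming esxcli on ESXi 5.x and portgroups named "Management Network", "vMotion", and "FT"; adjust the names as needed:

```shell
# Management and vMotion: vmnic0 active, vmnic3 standby
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic0 -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p vMotion -a vmnic0 -s vmnic3

# FT: the reverse, vmnic3 active, vmnic0 standby
esxcli network vswitch standard portgroup policy failover set -p FT -a vmnic3 -s vmnic0
```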
