VMware Cloud Community
bradtech519
Contributor

Migration of vmotion vmk from Cisco Nexus 1000v to vSphere 5.1/5.5 Distributed Switch

Hello guys,

We are currently in the process of getting to vSphere 5.5. We are also a vCloud Director shop and have already upgraded to the latest 5.5 vCD. During this process, and after talking with VMware support, we decided to migrate to a vSphere Distributed Switch in 5.1 and then upgrade the switch after moving to 5.5 to get it current. I've gone through and created the port groups and tagged the VLAN IDs accordingly. We still have to get all the HP Virtual Connect profiles correct on the additional NICs we are going to attach to the new vDS while maintaining connectivity on the 1000v side. These NICs will mirror the DMZ/System uplinks with tagged VLAN traffic.

I am looking for opinions on the stage of the process I am at now. We have vmk1 providing vMotion to our vCD and vSphere hosts on the 1000v; that interface is also our secondary management interface. VMware recommended moving vMotion to the standard switch that currently carries only management traffic, since there is no supported migration path from one Distributed Switch directly to another.

My first thought was to put an ESXi host into maintenance mode and move vMotion over to this standard switch, which is backed by a different VLAN than the one the hosts still on the 1000v use. Then it hit me that if I do that, there is no way to get those virtual machines back using vMotion, since the new vMotion VLAN on the standard switch is a different VLAN altogether. We've also run into a hardware limitation with HP Virtual Connect: we can't assign the same VLAN to NICs that are on the same LOM. After talking with an HP VC engineer, it seems to come down to the PCI-Express NIC emulation; it's really only two physical NICs emulating eight.

We are currently using DRS in partially automated mode on both clusters. Would it be against best practice, but maybe still feasible, to disable vMotion on the 1000v and then just check the vMotion option on the vSwitch0 properties under the Management network, all without being in maintenance mode? Since I hit a point of no return if I vacate VMs and switch to a different VLAN (I can't vMotion back to the ESXi hosts now using vMotion on the new VLAN), I could even disable DRS altogether on the cluster I'm working on and get through it quickly after testing that vMotion works on the first two hosts I switch over. Thanks for your opinions.
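For what it's worth, the flip-over I'm describing can also be scripted per host from the ESXi shell instead of clicking through vSwitch0 properties. This is only a sketch under my assumptions (vmk1 is the 1000v-backed vMotion interface, vmk0 is the management vmk on vSwitch0; adjust the vmk numbers to your environment):

```shell
# Sketch: move the vMotion tag from the 1000v vmk to the vSwitch0 management vmk.
# Assumes vmk1 = current vMotion interface on the 1000v, vmk0 = management vmk
# on vSwitch0. Run on one host at a time from the ESXi shell.

# 1. Disable vMotion on the 1000v-backed interface
vim-cmd hostsvc/vmotion/vnic_unset vmk1

# 2. Enable vMotion on the management vmkernel interface on vSwitch0
vim-cmd hostsvc/vmotion/vnic_set vmk0

# 3. List the vmkernel interfaces to confirm the change took effect
esxcli network ip interface list
```

Doing it this way per host, after confirming the first couple of hosts can still vMotion to each other, is roughly the same workflow as the GUI checkbox but easier to repeat consistently across the cluster.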
