VMware Cloud Community
neil_murphy
Enthusiast

redundancy for VMotion

Hi,

I have a customer who is looking to increase the resilience of his VMotion connections. Which option would be better/possible:

A: Add a second NIC to the VMotion vSwitch.

B: Create a second VMotion vSwitch on each server with its own NIC.

1 Solution

Accepted Solutions
christianZ
Champion

I would prefer adding the second NIC to the vSwitch.

I've never heard of anybody creating 2 vSwitches for VMotion.
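For anyone wondering what this looks like on the command line, here is a minimal sketch. The vSwitch name (vSwitch1) and adapter name (vmnic3) are placeholders, not from the thread; on classic ESX this was done with esxcfg-vswitch, while current ESXi uses esxcli:

```shell
# Classic ESX service console: link a second uplink to the VMotion vSwitch
esxcfg-vswitch -L vmnic3 vSwitch1

# Current ESXi equivalent (vSwitch1/vmnic3 are placeholder names)
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# Verify that both uplinks are now attached
esxcli network vswitch standard list --vswitch-name=vSwitch1
```

The VMkernel port keeps its single IP address; the teaming happens below it, at the vSwitch uplink level.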

11 Replies
MR-T
Immortal

For resilience, adding a second NIC to the virtual switch is the best way to go.

virtech
Expert

Definitely a second NIC assigned, and also connect the two NICs to two different physical switches.

P_Blackmore
Contributor

If you connect two pNICs to two physical switches and have one VMotion port group using both NICs, then when one NIC fails on one of your hosts, VMotion traffic has to traverse both switches. If instead you had two VMotion port groups, each with one NIC, connected to the same physical switches, then when one NIC failed on one host, traffic would (most likely) go across the alternative NIC on both hosts, so the traffic stays on one switch: 2 hops instead of 3. Not a big deal, but if, say, your switches were edge switches in a blade enclosure, traffic might have to traverse 4 switches after a NIC failure (2 blade switches and 2 core switches), i.e. 5 hops instead of 2.

Does this solution make sense?

IRQ2006
Enthusiast

Hi ChristianZ and others who suggested adding a 2nd NIC to the same vSwitch. I also think this is the correct option, but I'm not clear on how to set it up, since each VMotion NIC in my setup has its own IP address.

Would you please clarify how to add a second NIC to the same VMotion vSwitch? Does that require the VMotion ports to be trunk ports?

Or can I configure the 2nd NIC on a new 2nd vSwitch first to set up its IP address, and then move this 2nd NIC to the first VMotion vSwitch I already have?

Thx

Erik_Zandboer
Expert

Hi. In ESX/ESXi, NICs themselves do not have an IP address (the VMkernel port does), so you can add one without issue. Preferably, connect it to another physical switch for optimum redundancy.

Also, you might want to consider making the second uplink "standby". This way you force VMkernel traffic down the current NIC (as long as the link is up, of course), so you can make sure all VMotion traffic goes to the same physical switch and you don't load your inter-switch links.

While you're at it, you might consider running the Service Console down the other link (set as active for the SC), and making the link that is active for VMotion the standby for the SC. Now you use both links, you separate SC and VMotion traffic, and you have full redundancy for both SC and VMotion.
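The criss-cross active/standby ordering described above is set per port group. A sketch using today's esxcli, assuming port groups named "VMkernel" and "Service Console" on uplinks vmnic1/vmnic2 (all placeholder names; at the time of this thread the same thing was done in the VI client's NIC Teaming tab):

```shell
# VMotion VMkernel port group: vmnic1 active, vmnic2 standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VMkernel" --active-uplinks=vmnic1 --standby-uplinks=vmnic2

# Service Console port group: the inverse order, so each traffic type
# normally rides its own NIC and only shares after a link failure
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Service Console" --active-uplinks=vmnic2 --standby-uplinks=vmnic1
```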

Visit my blog at http://erikzandboer.wordpress.com

Visit my blog at http://www.vmdamentals.com
IRQ2006
Enthusiast

Thanks for your reply Erik, very helpful.

With regard to separation, we run different physical switches and different VLANs for VMotion and management (SC), but I will look into the active/standby options once I get the system set up.

domboy
Contributor

I like Erik Zandboer's suggestion to have the VMkernel (for VMotion) and the Service Console use each other's interface/NIC as standby, but I have to wonder why this is better than both links being active at the same time. Is VMotion not able to take advantage of the extra bandwidth provided by two teamed NICs? I forget how much bandwidth the Service Console consumes on average (I think 100 Mbps is recommended), but I believe it is less than the bandwidth provided by a single gigabit NIC, so I would think there would be extra capacity left over that could potentially be used by the VMkernel for VMotion, if it can work this way.

All that said, Erik Zandboer's scenario seems to go against the recommended practice of putting the VMotion traffic on a separate network/VLAN, since it would now be on the same network/VLAN as the Service Console (unless they're trunked ports, I suppose). Is it really necessary for the NICs used for VMotion to be on a separate VLAN?

Erik_Zandboer
Expert

domboy,

Actually it is not against VMware's best practices (hey, I'm a wannabe VCDX ;-) ). What I mostly do is have all physical uplinks to the vSwitch tagged (dot1q) with several, or rather all, relevant VLANs. In this case you could use one VLAN for VMotion and one VLAN for the Service Console. Inside the vSwitch you would then configure these port groups (along with their respective VLAN tags), and specify that one runs over the first NIC with the second NIC standing by, and the other the opposite of that.

You then end up with VMotion running on its own NIC and the Service Console running on the other NIC. If a link fails, both of them start to share the remaining uplink. Since the ports carry dot1q-tagged traffic, it is possible to have both of them sharing a NIC while still being isolated, because they are on separate VLANs.
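The VLAN side of that setup can be sketched like this (the VLAN IDs 10 and 20 and the port group names are assumptions for illustration, not from the thread), with both physical uplinks plugged into dot1q trunk ports:

```shell
# Tag each port group so both can share the same trunked uplinks
# while remaining isolated on separate VLANs
esxcli network vswitch standard portgroup set --portgroup-name="Service Console" --vlan-id=10
esxcli network vswitch standard portgroup set --portgroup-name="VMkernel" --vlan-id=20
```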

I often create just one big vSwitch, uplink all physical NICs to that single vSwitch, and run dot1q-tagged frames over all of them. Only then do I start figuring out which traffic should run on which NICs, which standby NICs to use for certain port groups (as a failover), and I may even forbid some port groups to fail over to certain NICs (I often configure VMotion to fail rather than interfere with production (read: virtual machine) port groups). You get immense failure resilience this way.

Only when working with IP storage do I tend to configure a second vSwitch just for the IP storage. This is because you usually have separate switches for IP storage anyway, mostly not running tagged frames (rather jumbo frames), and I want to keep that isolated from the rest.

Visit my blog at http://www.vmdamentals.com
domboy
Contributor

Hey, thanks for the response Erik. After I posted I sat down and mulled over the setup for a while, and ended up figuring out something very similar to what you posted. I guess I'm just not used to the idea of every last NIC on the ESX server being plugged into a trunk port (especially in regards to the Service Console)... it's a bit outside the traditional comfort zone...

Your post helped confirm in my mind what to do. So I ended up with:

vSwitch0 - 2 NICs, tagged. Service Console and VMotion VMkernel on two different VLANs (each assigned a different active NIC, with the other as standby)

vSwitch1 & 2 - SAN

vSwitch3 - 2 NICs, tagged. All the virtual machines use this vSwitch, and we just assign VLANs as needed.

It appears to be working!

Erik_Zandboer
Expert

Yep, that setup will work fine. As for your comfort zone: in my previous post I assumed that VLAN separation is enough security-wise for you. Some customers simply demand a physically separate network for management, and then you might have to adjust :-) In such cases, where do you put VMotion? Is it allowed on the management network (in a separate VLAN)? In a lot of cases not, in which case you might end up needing 6 physical NICs to have redundancy for all port groups without them interfering with each other.

Visit my blog at http://www.vmdamentals.com