VMware Cloud Community
HendersonD
Hot Shot

Single Nic versus Multi Nic vMotion

Below is our current design for our soon-to-be vSphere 5.1 deployment.

There has been a good bit of internal discussion about whether to use a single 10GB NIC for vMotion or both 10GB NICs for vMotion.

Most of the discussion has been about "isolating" vMotion traffic and keeping it as localized as possible.

We do have all vMotion traffic on a separate VLAN, vlan127, which you can see in our design.

The big question becomes: exactly where does vMotion traffic go? What switches/connections does it actually traverse?

Is this correct?

  1. If we go with one vMotion NIC, then once vMotion starts, traffic will be generated by the host losing the VM and by the host gaining the VM. In this scenario the traffic will traverse one BNT switch. This leads to two conclusions:

    1. The traffic never gets as far as the Juniper core
    2. vlan127 (vMotion) does not need to be part of the trunk going from the Juniper core to the BNT
  2. If we go with two vMotion NICs, then both 10GB NICs could be involved in vMotion. This means that vMotion traffic between two ESXi hosts could hit one BNT switch, traverse the stack connections (two 10GB connections between the BNTs), and go to the other host via a 10GB NIC. This also leads to two conclusions:
    1. The traffic never gets as far as the Juniper core. It either stays isolated on one BNT switch or moves between BNT switches across the two 10GB stack connections
    2. vlan127 (vMotion) does not need to be part of the trunk going from the Juniper core to the BNT

Design.png

4 Replies
ramkrishna1
Enthusiast

Hi HendersonD,

The big question becomes exactly where does vMotion traffic go? What switches/connections does it actually traverse?

Generally, traffic travels along the tunnel or path configured on the router and firewall.

If a server VLAN needs to communicate with the internet, then it will definitely pass through the physical router.

"concentrate the mind on the present moment."
HendersonD
Hot Shot

Yes, the server VLAN needs to communicate with the internet, so that traffic will hit the Juniper switches. The question is whether vMotion traffic will be isolated to the BNT switches.

MKguy
Virtuoso

vMotion traffic is just unicast IP traffic (well, except for some bug) between ESXi vmkernel ports configured for vMotion, hopefully isolated in a non-routed layer 2 broadcast domain (VLAN). Simple as that. Considering that, traffic will physically traverse whichever physical NICs are configured for the respective vmkernel ports. The path in between obviously depends on the layer 2 switching/STP infrastructure, which in your case would be just the blade chassis switches.
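For completeness, here is a minimal pyVmomi sketch (not from the original post; the vCenter hostname and credentials are placeholders) that lists which vmkernel ports each host currently has enabled for vMotion, since that is what ultimately determines the physical path:

# Minimal pyVmomi sketch: list the vmkernel ports enabled for vMotion on each host.
# Hostname and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
        # selectedVnic holds the keys of the vmks currently enabled for vMotion
        by_key = {v.key: v.device for v in cfg.candidateVnic}
        vmks = [by_key[k] for k in cfg.selectedVnic if k in by_key]
        print(host.name, "vMotion vmkernel ports:", vmks or "none")
    view.Destroy()
finally:
    Disconnect(si)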

Multi-NIC vMotion essentially establishes multiple independent streams between different IP and MAC addresses that actually belong to the same host. Consider the following:

Hosts A and B each have vmk1, using physical vmnic1, connected to pSwitch1, and vmk2, using physical vmnic2, connected to pSwitch2. Both pSwitches trunk the vMotion VLAN directly between them.

If both hosts have only vmk1 enabled for vMotion, traffic will only ever pass through pSwitch1. If host B has only vmk2 enabled for vMotion, or you swap the uplinks, it will pass through both pSwitches.

Now if you enable both vmkernel interfaces for vMotion, it's hard to tell how the hosts decide which vmk connects to which. You may end up going through both pSwitches for both streams, or you may get lucky and end up with source and destination interfaces residing on the same pSwitch. I don't know how ESXi decides the pairings; this article seems to suggest it's done in a deterministic manner, so that with an identical configuration the same vmks would connect with each other:

http://www.yellow-bricks.com/2011/12/14/multi-nic-vmotion-how-does-it-work/
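To make the path logic concrete, here is a small standalone Python sketch (purely illustrative, not vSphere code) of which physical switches a vMotion stream crosses for a given vmk-to-pSwitch mapping:

# Toy illustration of the path reasoning above (not vSphere code).
# Each host maps its vMotion vmk(s) to the physical switch its uplink connects to.
HOST_A = {"vmk1": "pSwitch1", "vmk2": "pSwitch2"}
HOST_B = {"vmk1": "pSwitch1", "vmk2": "pSwitch2"}

def switches_crossed(src_vmk, dst_vmk):
    """Return the set of physical switches a stream between the two vmks touches."""
    src_sw, dst_sw = HOST_A[src_vmk], HOST_B[dst_vmk]
    return {src_sw} if src_sw == dst_sw else {src_sw, dst_sw}

print(switches_crossed("vmk1", "vmk1"))  # both ends on pSwitch1 -> stays on one switch
print(switches_crossed("vmk1", "vmk2"))  # mismatched pairing -> crosses the inter-switch link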

Whatever the case, unless you need hosts on different switches, connected only through your cores, to be able to vMotion between each other, there is no need at all to tag the vMotion VLAN on the links between the blade chassis and the core switches.

You see, your Multi-NIC vMotion question is completely unrelated to this.

If we go with one vMotion NIC, then once vMotion starts, traffic will be generated by the host losing the VM and by the host gaining the VM. In this scenario the traffic will traverse one BNT switch. This leads to two conclusions:

  1. The traffic never gets as far as the Juniper core
  2. vlan127 (vMotion) does not need to be part of the trunk going from the Juniper core to the BNT

1. Yes.

2. Yes.

Traffic *could* traverse both BNT switches though, depending on what I explained above.

If we go with two vMotion NICs, then both 10GB NICs could be involved in vMotion. This means that vMotion traffic between two ESXi hosts could hit one BNT switch, traverse the stack connections (two 10GB connections between the BNTs), and go to the other host via a 10GB NIC. This also leads to two conclusions:

  1. The traffic never gets as far as the Juniper core. It either stays isolated on one BNT switch or moves between BNT switches across the two 10GB stack connections
  2. vlan127 (vMotion) does not need to be part of the trunk going from the Juniper core to the BNT

1. Yes.

2. Yes.

Personally, I would go with Multi-NIC vMotion and use NIOC with soft shares in your config.
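As a rough illustration of what soft shares mean here (example share values only, not a recommendation): NIOC shares only divide an uplink proportionally when there is actual contention, and vMotion can still use the full link when nothing else is pushing traffic.

# Rough sketch of NIOC share math on a single 10Gb uplink (hypothetical share values).
LINK_GBPS = 10.0
shares = {"vm_traffic": 100, "vmotion": 50, "management": 20}

def bandwidth_under_contention(active):
    """Split the uplink proportionally among the traffic types that are actually active."""
    total = sum(shares[t] for t in active)
    return {t: round(LINK_GBPS * shares[t] / total, 2) for t in active}

print(bandwidth_under_contention(["vmotion"]))                # vMotion alone gets the full 10Gb
print(bandwidth_under_contention(["vm_traffic", "vmotion"]))  # contention: ~6.67 vs ~3.33 Gb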

-- http://alpacapowered.wordpress.com
HendersonD
Hot Shot

Thanks for the detailed explanation. We will be going with multi-NIC vMotion, and to keep vMotion from completely saturating the 10GB links we will be using NIOC.
