VMware Cloud Community
msnoakesy
Contributor

vSphere Distributed Switch question


Hello,

I'm hoping someone could clarify some points for me regarding vSphere Distributed Switches. I'm relatively new to vSphere and this is purely for my own understanding, so any replies will be appreciated. The training materials I've looked through show the configuration and give an overview of the concept, but I'm still struggling to get the logic of how it all works clear in my mind.

I understand that the vDS is purely a management object - that it manages the physical uplinks of any number of ESXi hosts for centralised configuration. I understand that when adding a host to the switch, you associate the physical NICs of that host with an uplink on the DS. I think I'm struggling to understand the logic behind the uplink/physical NIC association. I apologise if the questions below seem a little stupid - they're in no particular order, just things I'm trying to get straight in my thoughts.

In the training videos I watched, port groups are configured on the vDS for the management interfaces, vMotion, virtual machine networks etc. The management interfaces of the three hosts added to the switch were all placed in the "management" port group, with their NICs associated with uplinks 1 and 2. Firstly, is there a limit as to how many NICs or hosts an individual uplink on the vDS can manage? Could another host be added and have uplinks 1 and 2 on the vDS associated with its vMotion interfaces instead of the management? Would this still work? Or do the NICs associated with the individual uplinks on the DS all need to be consistent - i.e., on the same subnet? How does the uplink/NIC association work?

With regards to configuring NIC teaming for the port groups on the vDS, I've seen it configured differently in the two training videos I've watched. In the first, each port group on the vDS used all available uplinks. In the second, the individual port groups only used the uplinks associated with their physical NICs - this makes sense to me. In the first example, how could this possibly work? I'm obviously missing something somewhere with the whole concept of how this works!

If someone could clarify this whole concept for me (or point me in the right direction) it would really be appreciated.

Thanks in advance,

Matt

1 Reply
MKguy
Virtuoso

Firstly, is there a limit as to how many NICs or hosts an individual uplink on the vDS can manage?

No, there are no per-uplink limits of the kind you're thinking of. However, there are limits that apply to the vDS as a whole or to a host in general, such as a maximum of 1000 hosts per vDS or 16 vDS per host, respectively. Have a look at the networking maximums section of the vSphere configuration maximums guides here:

VMware KB: ESXi/ESX Configuration Maximums
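
If you ever want to check how close you are to those numbers in your own environment, here's a minimal pyVmomi (Python) sketch that lists every vDS with its uplink names and member host count. The hostname and credentials are placeholders, and the attribute paths follow the vSphere API as I remember it, so treat this as a starting point rather than gospel:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own vCenter.
ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for all distributed switches.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    uplinks = dvs.config.uplinkPortPolicy.uplinkPortName  # e.g. ['Uplink 1', 'Uplink 2']
    print(dvs.name, 'uplinks:', uplinks, 'member hosts:', len(dvs.config.host))
view.Destroy()
Disconnect(si)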

Could another host be added and have uplinks 1 and 2 on the vDS associated with its vMotion interfaces instead of the management?

Would this still work?

Yes, no problem. The uplinks are just dumb physical NICs that forward traffic generated by VM vNICs or a host's VMkernel ports. As long as you have the VLANs configured correctly on the vDS port group and physical switch side, it will work without problems.
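
To make the VLAN part concrete, here's a hedged follow-up to the sketch above (reusing a dvs object obtained the same way) that prints each port group's VLAN ID so you can compare it against what's tagged on the physical switch ports:

from pyVmomi import vim

def print_portgroup_vlans(dvs):
    # 'dvs' is a vim.DistributedVirtualSwitch, e.g. from the earlier snippet.
    for pg in dvs.portgroup:
        vlan = pg.config.defaultPortConfig.vlan
        if isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec):
            print(pg.name, 'VLAN', vlan.vlanId)
        else:
            # Trunk or private VLAN specs show up here (e.g. on the uplink port group).
            print(pg.name, type(vlan).__name__)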

Or do the NICs associated with the individual uplinks on the DS all need to be consistent - i.e., on the same subnet?

No, as mentioned the uplinks are just dumb and only provide layer 2 forwarding. They don't care about layer 3 IPs. The same goes for port groups: you can mix any subnets you want inside one port group/VLAN, just as you could on a physical switch within one broadcast domain (whether that is a wise thing to do is a different question).

How does the uplink/NIC association work?

With regards to configuring NIC teaming for the port groups on the vDS, I've seen it configured differently in the two training videos I've watched. In the first, each port group on the vDS used all available uplinks. In the second, the individual port groups only used the uplinks associated with their physical NICs - this makes sense to me. In the first example, how could this possibly work? I'm obviously missing something somewhere with the whole concept of how this works!

The association of a virtual NIC with a physical vmnic uplink depends on the load balancing mechanism you choose. If you use something other than EtherChannel/LACP ("Route based on IP hash"), then at any given point in time every virtual NIC is mapped to a single uplink that is set to "active" in the teaming settings. All inbound and outbound traffic of that vNIC will be forwarded through this physical link. You can actually view that mapping in the esxtop network view.
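
If you prefer to inspect this from the API side instead of esxtop, a similar hedged sketch can dump each port group's teaming policy and its active/standby uplink order, which together determine that vNIC-to-uplink mapping:

def print_teaming(dvs):
    # 'dvs' is a vim.DistributedVirtualSwitch, as in the earlier snippets.
    for pg in dvs.portgroup:
        team = pg.config.defaultPortConfig.uplinkTeamingPolicy
        order = team.uplinkPortOrder
        print(pg.name,
              'policy:', team.policy.value,  # e.g. 'loadbalance_srcid'
              'active:', order.activeUplinkPort,
              'standby:', order.standbyUplinkPort)

In esxtop itself, press n for the network view; the TEAM-PNIC column shows which vmnic each port is currently pinned to.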

If you use EtherChannel/LACP on the other hand, then you basically have one logical uplink between every host and the physical switch(es). Traffic for a single vNIC can be distributed across all physical uplinks according to the hashing outcome.
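
As a simplified illustration of the hashing idea only (not VMware's exact implementation), you can think of it as XORing the source and destination IPs and taking the result modulo the number of uplinks:

import ipaddress

def pick_uplink(src, dst, n_uplinks):
    # XOR the two IPv4 addresses, then map the result into the uplink range.
    h = int(ipaddress.IPv4Address(src)) ^ int(ipaddress.IPv4Address(dst))
    return h % n_uplinks

# The same vNIC talking to two different peers can land on different uplinks:
print(pick_uplink('10.0.0.10', '10.0.0.50', 2))  # -> 0
print(pick_uplink('10.0.0.10', '10.0.0.51', 2))  # -> 1

That is why a single vNIC's traffic can end up spread across all physical uplinks with IP hash, while the other policies pin each vNIC to exactly one uplink at a time.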

I don't know what those training videos said/showed, but why should it not work in the first example with all uplinks active? As long as the physical switches have all VLANs tagged on every host-attached port, it doesn't matter which uplink a frame travels through. Also, if the link on one of the "active" uplinks failed on host A, it would use one of the "standby" uplinks instead, and you would end up with the same disparity you seem to be worrying about. Obviously you want this failover to work without issues, and it will, provided the physical network ports are configured correctly (VLANs).

These basic concepts apply to both the standard vSwitch and the distributed vSwitch.

I hope I could clear some things up. I'd recommend reading the Networking for VMware Administrators book by Chris Wahl or checking out some of his blog posts:

http://wahlnetwork.com/tag/network/page/5/

-- http://alpacapowered.wordpress.com