VMware Cloud Community
Cannoli
Contributor

vNetwork Distributed Switch - School me please!

Can someone give me a description of what's happening in a vNetwork Distributed Switch configuration?  My areas of concern are:

What happens when I add an ESXi host's active management network interface to a distributed switch?

How are ESXi hosts with multiple NICs supposed to be configured on the physical network?

Should I have multiple ESXi host NICs plugged into separate physical VLANs?

When and how do I use dvPort Groups?

What is the best practice for deploying a vNetwork Distributed Switch in an existing production environment?

What IP addresses should the VMs that connect to the vNetwork switch have?  Actual routable IPs, or a private range that lives on the ESXi hosts only?

I have the ESXi Configuration Guide and understand "how" to do all of this, but I don't understand what is happening when dealing with multiple physical VLANs and ESXi hosts that aren't connected to all of those VLANs.  I attempted to configure this a while ago, had lots of issues, and lost network connectivity to several ESXi hosts and all the VMs running on those hosts.  I don't want to repeat that mistake!

6 Replies
logiboy123
Expert

A vDS has uplink profiles (dvUplinks) that are associated with NIC ports from your hosts. So when you attach a host to a vDS, you're effectively assigning NICs from that host to a common profile that applies to all hosts associated with that vDS. This allows you, for example, to use vmnic0 on host1 for VM networking but vmnic9 on host2 for the same purpose, by assigning those NICs to the same uplink profile. Generally speaking it is best for every host to have the same number of uplink ports, assigned the same way to each profile, but you don't have to.
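To make that mapping concrete, it could be scripted with VMware PowerCLI along these lines. This is only a sketch: the switch name "DSwitch01", the host names and the vmnic numbers are placeholders, and the cmdlets assume you are already connected to a vCenter Server.

```powershell
# Hypothetical PowerCLI sketch: put different physical NICs from two
# hosts behind the same distributed switch (and its dvUplink profiles).
$vds = Get-VDSwitch -Name "DSwitch01"

# Attach each host to the vDS, then hand it the NIC you want to use.
Add-VDSwitchVMHost -VDSwitch $vds -VMHost "host1.example.local"
$nic1 = Get-VMHostNetworkAdapter -VMHost "host1.example.local" -Physical -Name "vmnic0"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic1

# A different vmnic on the second host maps to the same uplink profile.
Add-VDSwitchVMHost -VDSwitch $vds -VMHost "host2.example.local"
$nic2 = Get-VMHostNetworkAdapter -VMHost "host2.example.local" -Physical -Name "vmnic9"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic2
```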

You should always aim for redundancy; check out some of my network designs for a picture of what I'm talking about:

http://vrif.blogspot.com/2011/10/vmware-vsphere-5-host-network-designs.html

A dvPortGroup is a set of dvPorts on your switch that share the same configuration (VLAN, teaming and failover policy); your VMs and VMkernel interfaces connect to port groups. The dvUplinks are the profiles assigned to physical NICs that I mentioned above. Check out the following document for more info:

http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereNetworking_P8_R1.pdf

Regards,

Paul

Cannoli
Contributor

Thanks for the reply!

So when placing the NICs of an ESXi host into a vDS, should I only put the non-management NICs in the vDS, or all the NICs that are physically connected to the network?

How do real VLANs come into play?  If all the physical NICs can't be part of the same physical VLAN, how can they share a common configuration?  How can VMs migrate from one ESXi host to another if the VLANs aren't the same in some cases?  The IP information in the VM will be incorrect if it's migrated to an ESXi host that doesn't share the same physical VLAN connection as the originating host, no?

I'm still confused as to how this will all work with multiple physical VLANs across multiple ESXi hosts.

logiboy123
Expert

Correct; if you can avoid it, do not use your management NIC on the vDS.

Multiple VLANs are shared with a virtual switch by using trunking on the physical switch. Trunking lets you say that VLANs 55, 45 and 43 (for example) can all be sent to a port that is connected to your ESXi server. If you provision this the same way across all your hosts and create a switch with exactly the same configuration on each host, then your networking will be consistent no matter which host your VM is sitting on. So the physical ports that are used as uplinks for your hosts carry all the VLANs associated with those ports, and you can configure your virtual switch to specifically allow access to those VLANs.
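As an illustration, the physical side of that trunk might look like the following on a Cisco IOS switch. This is only a sketch: the interface name is a placeholder, and VLAN IDs 43, 45 and 55 are just the example values from above.

```
! Hypothetical switch-port config for one ESXi uplink (Cisco IOS syntax)
interface GigabitEthernet1/0/10
 description ESXi host1 vmnic0 uplink
 switchport mode trunk
 switchport trunk allowed vlan 43,45,55
 spanning-tree portfast trunk
```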

Regards,

Paul

Cannoli
Contributor

So the NIC that's used for the vDS needs to be configured as a trunk port on the physical switch?  If so, it now makes sense.  If not, I'm lost :smileysilly:

logiboy123
Expert

You don't have to use trunk ports from your physical switch, but it is usually better if you do. When you use trunk ports and assign multiple VLANs to a port, the virtual switch (standard or distributed) can talk to the physical switch on any of the VLANs assigned. This means a single physical NIC inside an ESXi host can carry multiple VLANs; if you didn't use trunking, you could only assign one VLAN per NIC, which is not very efficient.

Most vSwitches will be talking to several VLANs but will only have one or two uplinks; in that scenario, port trunking is almost always required.
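On the virtual side, each dvPortGroup then selects one of the VLANs carried on the trunk. In PowerCLI that could be sketched roughly as follows; the switch name, port group names and VLAN IDs are placeholders, and an existing vCenter connection is assumed.

```powershell
# Hypothetical sketch: one port group per VLAN carried on the trunk.
# VMs attached to each port group are tagged onto that VLAN.
$vds = Get-VDSwitch -Name "DSwitch01"
New-VDPortgroup -VDSwitch $vds -Name "VM-VLAN55" -VlanId 55
New-VDPortgroup -VDSwitch $vds -Name "VM-VLAN45" -VlanId 45
```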

Does this answer your question?

Cannoli
Contributor

OK, I have vSwitch0 with one NIC configured for the management IP, and an additional physical NIC in the dvSwitch that is configured as a trunk on the physical switch.  I did this for the three servers in my server pool and all is working wonderfully!

Now I get a warning that I don't have management NIC redundancy.  I read that using 4 physical NICs is best practice.  I understand how two of those physical NICs fit in: one as a management NIC in vSwitch0 with a static IP assigned, and the other as a trunk in the dvSwitch.  How should the other two physical NICs be configured?

I remember reading that one NIC is for management, one for vMotion, one for NFS traffic, and the other I'm not sure about.  What's the configuration best practice when using 4 physical NICs?  How should they be configured on the ESXi host?  On the physical switch?  On vSwitch0?  On the dvSwitch?

Thank you again!  I've learned a lot from this thread so far!
