VMware Cloud Community
DITGUY2012
Enthusiast

Separate dvSwitches for traffic types, or use Network I/O Control with port groups?

If you look at traditional networking documents, they recommend separating traffic types onto separate physical switches. If that's not possible, a larger core switch with multiple VLANs can accomplish the same thing with some minor differences.

I had planned on building separate dvSwitches for each traffic type in our new VMware cluster (management, vMotion, server LAN traffic, DMZ traffic, etc.), and then came across articles about the beauty of Network I/O Control on dvSwitches. The new plan I saw out there was one large dvSwitch with several port groups assigned to various VLANs, using shares and limits to keep traffic hogs at bay. This seemed much simpler while still accommodating traffic that needs more bandwidth than one or two 1-Gbps ports could provide, like vMotion and management traffic, plus the servers obviously.
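To make sure I understand the NIOC side, here's a rough pyVmomi sketch of what I'm picturing: enabling Network I/O Control on one big dvSwitch and looking at the system traffic pools it manages (each with shares and an optional limit). The vCenter address and switch name are just placeholders I made up for the example.

    # Rough pyVmomi sketch: turn on Network I/O Control on an existing
    # dvSwitch and list the system traffic pools (vMotion, management,
    # virtual machine, etc.) with their shares and limits.
    # "vcenter.example.local" and "dvSwitch0" are placeholder names.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certs in production
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the distributed switch by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch0")
    view.Destroy()

    # Enable Network I/O Control (harmless if it is already on).
    dvs.EnableNetworkResourceManagement(enable=True)

    # Each system pool carries a share value and a limit (Mbps, -1 = unlimited).
    for pool in dvs.networkResourcePool:
        alloc = pool.allocationInfo
        print(pool.key, pool.name, alloc.shares.level,
              alloc.shares.shares, alloc.limit)

    Disconnect(si)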

However, up to now I can only find people in one camp or the other, without any up-to-date documentation. We're using eight 1-Gbps ports, not 10-Gbps.

Can anyone say for sure whether VMware prefers one way or the other? It seems as if things are moving towards Network I/O Control with one large dvSwitch, but this is non-traditional. In the real world it often makes sense to buy switches dedicated to iSCSI, for example, rather than tie up ports on the core, especially if they're not physically near the core. But in VMware there's no cabling or distance limitation; the hosts are right next to each other and everything is virtual.

Am I missing something?

Thanks!

2 Replies
HeathReynolds
Enthusiast

I use NIOC (or class-based WFQ on the 1000v) with 10G links and run converged networking. I run either 2 or 4 10G connections and pass FCoE, NFS, vMotion, management, and guest traffic over all of the links.

With a gig design and the VDS, I would use the failover order (active/standby/unused) to separate traffic based on port groups while still having only a single DVS to manage; there's a rough sketch of that below. This link covers it a little:

vSphere vDS Setup - vNetwork Distributed Switch
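In pyVmomi terms, the failover-order idea looks roughly like this. It's only a sketch: the switch, port group, uplink names, and VLAN ID are placeholders, and the same settings live in the vSphere Client under the port group's teaming and failover policy.

    # Rough pyVmomi sketch of the failover-order approach: one port group per
    # traffic type on a single dvSwitch, each with its own active/standby
    # uplinks (any uplink not listed is effectively unused). All names and the
    # VLAN ID below are placeholders, not a recommendation.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the existing distributed switch.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch0")
    view.Destroy()

    # Port-level defaults for the new port group: a VLAN plus an explicit
    # uplink failover order (e.g. vMotion prefers dvUplink3, falls back to
    # dvUplink4, and stays off the guest-traffic uplinks).
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=20, inherited=False)
    port_config.uplinkTeamingPolicy = \
        vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
            inherited=False,
            uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
                inherited=False,
                activeUplinkPort=["dvUplink3"],
                standbyUplinkPort=["dvUplink4"]))

    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name="dvPG-vMotion", type="earlyBinding", numPorts=8,
        defaultPortConfig=port_config)
    dvs.AddDVPortgroup_Task([pg_spec])

    Disconnect(si)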

My sometimes relevant blog on data center networking and virtualization: http://www.heathreynolds.com
DITGUY2012
Enthusiast

Thanks HeathReynolds. We did realize one limitation: we had planned on using 15 ports (5 per host), but an EtherChannel only supports 2 to 8 links. Since we're not on 10 Gig yet, we had to limit it to 3, 3, and 2 on the hosts. To utilize the other links we created a separate dvSwitch on them and put vMotion and management there.

That being said, we've had tons of trouble getting the management traffic onto any dvSwitch. I've followed every article under the sun, but the change keeps rolling back after the host loses connectivity to the existing management network. Migration just doesn't seem to work for management, so for now we're using a regular vSwitch on each host for management. Oh well.
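In case it helps anyone else searching, this is roughly the operation we're attempting, written out as a pyVmomi sketch: re-pointing the management vmkernel interface (vmk0 here) at a dvSwitch port group. The host, switch, and port group names are placeholders; if the host loses management connectivity on the new path, the network rollback feature undoes the change, which is exactly the behavior we keep hitting.

    # Rough pyVmomi sketch of the migration that keeps rolling back for us:
    # move the management vmkernel interface (vmk0) onto a dvSwitch port
    # group. "esx01.example.local", "dvSwitch0", and "dvPG-Management" are
    # placeholder names.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        """Return the first managed object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    host = find_by_name(content, vim.HostSystem, "esx01.example.local")
    dvs = find_by_name(content, vim.DistributedVirtualSwitch, "dvSwitch0")
    pg = find_by_name(content, vim.dvs.DistributedVirtualPortgroup, "dvPG-Management")

    # Keep the vmkernel NIC's IP settings and just attach it to the
    # distributed port group. If the host drops off the management network
    # on the new path, vSphere's network rollback reverts this change.
    nic_spec = vim.host.VirtualNic.Specification(
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid, portgroupKey=pg.key))
    host.configManager.networkSystem.UpdateVirtualNic("vmk0", nic_spec)

    Disconnect(si)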

Thanks!    
