DMZ and distributed switches

Does anyone have a recommendation on how to handle DMZ servers w/ distributed switches?

I have two clusters of 5 hosts, Dev and Prod, which sit on our core network, and I could easily add those to one distributed switch.

I have two other hosts which sit on a physically separate network that we use as a DMZ, with firewall ports open to allow vCenter to manage them... their service console and vmkernel ports are not tied into our core network...

Do I create two distributed switches (one for Dev and Prod, another for the DMZ hosts), or do I just add extra dvUplinks to one distributed switch that contains the NICs to the DMZ servers?

2 Replies

I have similar questions regarding vDS. Due to security requirements and concerns, it would make sense to have multiple vDSs for Production and VMotion. I asked this question and was told VMware has yet to adopt a "best practices" model for vDS. If someone can answer this, please do.

Here is a blog entry for vDS from VMware:

"My preference when using vDS is to run in a hybrid mode, keeping the service consoles and vmKernel as a standard switch and moving all the Virtual Machine Port groups to a vDS. This means I handle the service console and vmKernel at installation the same as usual then add my host to the vDS, when I then find the need to add a new portgroup to my hosts I have only got to configure it in one place. In large environments this saves considerable amounts of time and the potential for error"

From this comment I *assume* this is for security concerns but, it seems as though it's too early to tell as vDS is still somewhat young. It might be a good idea to run SC, VMotion and DMZ port groups as individual standard vSwitches and run Production as a vDS. We are trying to hash this out as well from a security perspective...
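The time-saving point in the quoted blog post (define a portgroup once on the vDS instead of on every host) can be sketched with a toy model. All host and portgroup names here are made up for illustration; this is not a VMware API:

```python
# Illustrative model of the "hybrid" layout from the quoted blog post:
# service console / vmkernel stay on a per-host standard vSwitch,
# while all VM port groups live on one shared vDS.
# Names are hypothetical; this is not a VMware API.

hosts = ["esx01", "esx02", "esx03"]

# Per-host standard switches: configured once at install time, per host.
standard_switches = {
    host: {"vSwitch0": ["Service Console", "VMkernel-vMotion"]}
    for host in hosts
}

# One shared vDS: a new VM portgroup is defined in one place and
# becomes available on every attached host.
vds_portgroups = ["Prod-VM", "Dev-VM"]
vds_portgroups.append("New-App-VLAN30")   # single change...

for host in hosts:
    available = standard_switches[host]["vSwitch0"] + vds_portgroups
    assert "New-App-VLAN30" in available  # ...visible on all hosts
```

With a standard vSwitch, the same change would have to be repeated on each host individually, which is where the error potential mentioned in the quote comes from.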



VMware Employee

Hello smokey71 and russ79,

We use vDS for greater flexibility and greater security. The security features provided by vDS (for example, port mirroring) are not available at the vSS level, and vDS also gives you QoS options for the network. So when the comment above said to put the service console and vmkernel (used for the Management network) on a vSS, that was to deal with other issues or situations.

Normally when we configure a vDS (or a vSS, for that matter) we put the physical NICs connecting the host to the physical switch in TRUNK mode. This allows us to do VLAN tagging at the portgroup level.
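Portgroup-level tagging over a trunked uplink can be illustrated with a small model. The VLAN IDs and portgroup names are invented for the example; this is not PowerCLI or the vSphere API:

```python
# Toy model of VLAN tagging at the portgroup level over a trunked pNIC.
# The physical switch port is a trunk carrying several VLANs; each
# portgroup is assigned one VLAN ID and only sees frames tagged with it.
# All names and IDs here are illustrative.

trunk_vlans = {10, 20, 30}       # VLANs allowed on the trunked physical NIC

portgroups = {
    "Management": 10,
    "Internal-VMs": 20,
    "DMZ-VMs": 30,
}

def deliver(frame_vlan, portgroup):
    """A frame reaches a portgroup only if the trunk carries its VLAN
    and the portgroup's VLAN ID matches the frame's tag."""
    return frame_vlan in trunk_vlans and portgroups[portgroup] == frame_vlan

assert deliver(30, "DMZ-VMs")           # DMZ traffic reaches the DMZ portgroup
assert not deliver(30, "Internal-VMs")  # ...but never the internal portgroup
```

The separation is logical only: all three VLANs still ride the same physical NIC, which is exactly the trade-off discussed in situation 1 below.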

So there are two situations:

1. You have the underlying physical NICs in TRUNK mode and use VLAN tagging at the portgroup level to segregate the network traffic. The flaw here is that DMZ and internal traffic both flow through the same physical NIC (though they will be separately VLAN tagged).

2. You create two separate vDS or vSS instances connected to separate physical NICs altogether, and create the DMZ and internal portgroups on these two switches. The underlying traffic will then always be separated. From a security perspective this is more secure, as the DMZ and internal traffic will never be mixed, but you will need more NICs.
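The difference between the two situations can be made concrete with a small check. Switch, uplink, and portgroup names are hypothetical, assuming four pNICs per host:

```python
# Sketch contrasting the two designs. In design 1, DMZ and internal
# portgroups share the same trunked pNICs (separated only by VLAN tag);
# in design 2, each switch has its own dedicated pNICs.
# All names are made up for illustration.

design1 = {  # one switch, shared uplinks
    "dvSwitch0": {"uplinks": ["vmnic0", "vmnic1"],
                  "portgroups": ["Internal-VLAN20", "DMZ-VLAN30"]},
}

design2 = {  # two switches, separate uplinks
    "dvSwitch-Internal": {"uplinks": ["vmnic0", "vmnic1"],
                          "portgroups": ["Internal-VLAN20"]},
    "dvSwitch-DMZ":      {"uplinks": ["vmnic2", "vmnic3"],
                          "portgroups": ["DMZ-VLAN30"]},
}

def shares_physical_path(design):
    """True if DMZ and internal portgroups sit on the same switch,
    and therefore ride the same physical NICs."""
    for sw in design.values():
        names = " ".join(sw["portgroups"])
        if "DMZ" in names and "Internal" in names:
            return True
    return False

assert shares_physical_path(design1)      # VLAN-only separation
assert not shares_physical_path(design2)  # physical separation, more NICs
```

The NIC cost is also visible in the model: design 2 consumes four uplinks where design 1 needs only two.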

So the main question here, whether to use vDS or vSS, should not be decided on these security considerations. Either can be used securely, because the security is implemented in the base design; whether to use vDS or not should be decided on other grounds.

vDS has lots of advantages, and if you have vSphere Enterprise Plus you should use it. That said, my preference is also to put the Management network on a basic vSS along with vMotion, so that vMotion traffic is segregated from the rest of the traffic, and to use vDS for everything else. You can use two vDS instances to host the DMZ and internal traffic, but make sure to connect them to different NICs altogether so the traffic does not mix at the physical level either (they will be separated by VLAN, and the actual flow of traffic will also be separate, completely segregating both types of traffic).

Hope this helps. For details check: