VMware Cloud Community
architects
Contributor

vSphere - Multiple vDSs

Hi,

We have recently added a new 3-host vSphere cluster to our existing 9-host ESX 3.5 environment, both managed by the same vCenter 4.0 instance.

The new hosts have two onboard 1 Gb NICs and a dual-port Broadcom 10 Gb PCI adapter. I am planning the following configuration using vDS.

- Leave the existing ESX 3.5 hosts on the classic vSwitches, as they will soon retire and their VMs will move to the vSphere environment, so there is no need to migrate the existing vSwitches.

- Create two vDSs: one with the 1 Gb uplinks from all three hosts in a failover team, for SC and VMotion type functions.

- A second vDS with the 10 Gb uplinks from the three hosts (six uplinks in total) in an aggregated configuration.

The questions are:

1. Is it recommended to have multiple vDSs in the same datacenter?

2. Is there a way to dedicate uplinks to a specific type of traffic (e.g. SC only on the 1 Gb uplinks)? If that's possible, I could consolidate everything onto a single vDS.

Any advice on this would be helpful.

Regards,

A

6 Replies
marcelo_soares
Champion

1. There is no problem having multiple vDSs in your environment. I don't think there is any special recommendation, but it is good networking practice to divide your network traffic into well-organized segments.

2. With vSphere 4.1, you can limit such traffic with Network I/O Control; 4.0 does not support it.
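For reference, Network I/O Control shares behave like vSphere CPU/memory shares: under contention, each traffic type gets bandwidth in proportion to its share value. A minimal sketch of that arithmetic (the share values below are hypothetical examples, not vSphere defaults):

```python
# Sketch of how Network I/O Control shares divide a 10 Gb/s uplink
# under contention. The share values are hypothetical examples,
# not vSphere defaults.

def nioc_allocation(shares, link_gbps=10.0):
    """Return each traffic type's bandwidth (Gb/s) when all the
    listed types are actively contending for the link."""
    total = sum(shares.values())
    return {traffic: link_gbps * s / total for traffic, s in shares.items()}

shares = {"vm": 100, "vmotion": 50, "ft": 50}  # hypothetical values
alloc = nioc_allocation(shares)
# VM traffic holds 100 of 200 total shares, so under full contention
# it receives half of the 10 Gb/s link.
```

When a traffic type is idle, its unused share of the bandwidth is redistributed among the active types, so shares only matter when the link is saturated.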

Marcelo Soares

VMware Certified Professional 310/410

Virtualization Tech Master

Globant Argentina

Consider awarding points for "helpful" and/or "correct" answers.

peterdabr
Hot Shot

1. You can have up to 16 distributed switches per vCenter 4.0 (and up to 32 in 4.1). Having multiple dvSwitches in the same datacenter is totally fine. Keep in mind that each pNIC (or aggregated pNIC link) on an ESX host can belong to only one switch, so if you plan to aggregate the 1 Gb NICs together and the 10 Gb ports together, you'll end up with two dvSwitches, assuming all the new hosts become part of the same cluster.
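That one-switch-per-pNIC constraint can be expressed as a quick sanity check over a planned layout. A small sketch, with hypothetical switch and vmnic names:

```python
# Sanity check: a physical NIC can serve as an uplink on at most one
# (distributed) switch. Switch and vmnic names below are hypothetical.

def find_conflicting_nics(switch_uplinks):
    """Given {switch_name: [pnic, ...]}, return the set of pNICs
    claimed by more than one switch."""
    owner = {}
    conflicts = set()
    for switch, pnics in switch_uplinks.items():
        for pnic in pnics:
            if pnic in owner and owner[pnic] != switch:
                conflicts.add(pnic)
            owner[pnic] = switch
    return conflicts

layout = {
    "dvSwitch-1g":  ["vmnic0", "vmnic1"],  # onboard 1 Gb NICs
    "dvSwitch-10g": ["vmnic2", "vmnic3"],  # dual-port 10 Gb adapter
}
# An empty result means the layout is valid: no pNIC is shared.
```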

2. Upgrading vCenter to 4.1 will allow you to manage shares for the different types of traffic (SC, VMotion, FT, etc.) using Network I/O Control. On 4.0 you can still dedicate one traffic type per dvSwitch uplink: create a port group for that traffic type and set its 'Teaming and Failover' policy to map to only one dvUplink, moving the remaining uplinks to 'Unused dvUplinks', then repeat for the remaining traffic types.
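That "one traffic type per uplink" approach boils down to a failover-order table per port group: exactly one active dvUplink, everything else unused. A minimal model of the idea (port group and dvUplink names are hypothetical, and this is an illustration, not the vSphere API):

```python
# Model of dedicating one dvUplink per traffic type on a single dvSwitch:
# each port group's Teaming & Failover policy lists exactly one active
# uplink, and all other uplinks are marked unused. Names are hypothetical.

UPLINKS = ["dvUplink1", "dvUplink2"]

def failover_policy(active):
    """Build an active/unused uplink split for one port group."""
    if active not in UPLINKS:
        raise ValueError(f"unknown uplink: {active}")
    return {
        "active": [active],
        "unused": [u for u in UPLINKS if u != active],
    }

policies = {
    "SC":      failover_policy("dvUplink1"),
    "VMotion": failover_policy("dvUplink2"),
}
# With no standby uplinks, SC traffic can only ever leave via dvUplink1,
# which is the "dedicated uplink" effect described above.
```

The trade-off versus Network I/O Control is that a dedicated uplink gives hard isolation but no failover for that traffic type if its uplink dies.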

Having 2 x 1 Gb/s ports and 2 x 10 Gb/s ports per host, I would simply create two dvSwitches: the 1 Gb switch (with NIC teaming) as the placeholder for the 'public network'/'Service Console Public' (with just one 'public' VLAN present on the switch), and the 10 Gb switch for the 'private network'/'SC Private'/'VMotion' (with one private VLAN, or even two, the second being a VMotion/iSCSI-dedicated VLAN). The reason I specified two Service Consoles, each in its own network, is that this is the best practice for SC redundancy and HA isolation-response redundancy.

Best,

Peter D.

architects
Contributor

Thanks Peter and Marcelo for confirming that I was going in the right direction.

Peter: specific to your point about SC redundancy, we have multiple types of traffic in our DC (production, backup, VMotion, SC, and some reserved for FT).

So I was thinking of creating a failover team with the 1 Gb NICs for the SC, and an aggregate team of the two 10 Gb ports for the rest of the traffic; we handle more than 50 VLANs in that space, so I thought I would leave all that bandwidth for them.

I might take your advice and create a backup SC port group on the 10 Gb vDS as well.

Thanks!

A

peterdabr
Hot Shot

Please keep in mind that Service Console traffic is relatively light compared to other traffic types like backups/VMotion/FT, etc. Most of the time it just sends heartbeats between the hosts in the cluster and to the vCenter server. I wouldn't hesitate to add a redundant Service Console, using another network and another NIC team. To me, the benefit of a redundant SC outweighs the extra traffic it puts on the network. Besides, if I'm not mistaken, the secondary Service Console is never used unless the primary SC becomes unavailable, in which case the secondary sends heartbeats to prevent hosts from going into isolation mode.

Also, reading your last note again, I wouldn't put SC as the only traffic type on an aggregated 1 Gb/s link; that's definitely underutilizing it :-). You would greatly benefit from adding an additional traffic type there, like the public-facing network (with SC and public separated by different VLANs, of course), or even VMotion.

Best,

Peter

architects
Contributor

Yes, you are right, Peter. A little correction here: I would use the two 1 Gb connections as a failover team and put SC and VMotion on it. The rest stays on the 10 Gb links.

Also, I was planning to add an extra SC port group on the 10 Gb vDS for SC redundancy.

Regards,

A

Texiwill
Leadership

Hello,

There is more to think about than just creating a secondary link on another vDS...

1) Is this even allowed by your security policy? Remember, access to the management LAN represented by the SC means access to EVERYTHING, usually in much less than two minutes for an attacker.

2) What trust zones are already on that 10 Gb link? If there are no other trust zones, then see #1.

Since SC traffic is relatively light and most chassis still have 1 Gb pNICs, I often assign these onboard pNICs to the SC and other lower-bandwidth virtualization networks rather than putting them on the 10 Gb links... That way there is some separation, etc.

Just some more food for thought. Moving to 10 Gb implies using VLANs to segregate traffic; this does NOT imply security... Check your security policy at the very least.


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009, 2010

Now available: 'VMware vSphere(TM) and Virtual Infrastructure Security'

Also available: 'VMware ESX Server in the Enterprise'

Blogging: The Virtualization Practice | Blue Gears | TechTarget | Network World

Podcast: Virtualization Security Round Table Podcast | Twitter: Texiwill

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill