VMware Cloud Community
justinsmith
Enthusiast

Couple quick questions surrounding vDS and my environment

I have some hosts on 4.1 that I would like to use with vDS. I'm curious if it would work for what I'm trying to accomplish. I have a DC with 6 hosts in it; all the hosts have 2 dual-port 10GB cards: 2 10GB ports for VM Network/vMotion/Mgmt Network, and 2 10GB ports for NFS traffic.

Right now, I create a new vSwitch and add the 2 10GB VM Network ports, but then I have to manually add each VLAN we have in our network (there's about 8). The bad thing is, I have to do this for each host, which could be a pain. I'd love to create a vDS that has everything configured, then apply that vDS to each host... is that the way vDS works? Would a vDS accomplish what I'm trying to do? I've found KBs on how to configure this, but I couldn't find any other documentation.

Thanks a lot everyone!

2 Replies
logiboy123
Expert

Is your vCenter physical or virtual?

This will not affect whether or not you can do what you want to; it will affect whether it is a good idea or not. Best practice at the moment (vSphere 4.1) is not to run a vCenter server on the same infrastructure that it manages when using a vDS. If you're using a vSS, then having vCenter as a VM is fine.

A vDS will give you a centralised management interface for each host that is attached to the vDS. A vDS has dvUplink profiles that can be assigned individually to NICs on a host. Think of a dvUplink as something similar to a DNS alias for networking.

For example, I have vDS0 configured with 4 x dvUplink profiles; on host1 I assign:

vmnic0 - dvUplink1

vmnic1 - dvUplink2

vmnic2 - dvUplink3

vmnic3 - dvUplink4

But then I could have the following configuration on host2:

vmnic0 - dvUplink4

vmnic1 - dvUplink2

vmnic2 - dvUplink3

vmnic3 - dvUplink1

Therefore NIC assignments on hosts that are attached to a vDS can be independently managed. The vDS will manage your profiles across each connected host. This is especially useful in environments where you have different numbers of NICs on each host in your cluster.
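To make the DNS-alias analogy concrete, here is a small illustrative model in plain Python (not the vSphere API; the host and uplink names follow the example above). A portgroup's teaming policy names dvUplinks, and each host resolves those aliases to its own physical vmnics:

```python
# Illustrative model only -- not the vSphere API. A dvPortGroup's teaming
# policy names dvUplink aliases; each host maps those aliases to its own
# physical vmnics, so one central policy works on differently wired hosts.

# Per-host mapping of physical NICs to dvUplink aliases (from the example).
host_uplinks = {
    "host1": {"vmnic0": "dvUplink1", "vmnic1": "dvUplink2",
              "vmnic2": "dvUplink3", "vmnic3": "dvUplink4"},
    "host2": {"vmnic0": "dvUplink4", "vmnic1": "dvUplink2",
              "vmnic2": "dvUplink3", "vmnic3": "dvUplink1"},
}

# A hypothetical portgroup policy, defined once on the vDS in terms of aliases.
active_uplinks = ["dvUplink1", "dvUplink2"]

def active_vmnics(host):
    """Resolve the portgroup's active dvUplinks to this host's vmnics."""
    alias_to_nic = {alias: nic for nic, alias in host_uplinks[host].items()}
    return [alias_to_nic[a] for a in active_uplinks]

print(active_vmnics("host1"))  # ['vmnic0', 'vmnic1']
print(active_vmnics("host2"))  # ['vmnic3', 'vmnic1']
```

The same policy lands on different physical NICs per host, which is why the alias layer is what makes mixed hardware manageable.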

Networking under a vDS is a lot more complex than under a vSS. I suggest a lot more reading before attempting to do this in production. Take some hosts aside and play with/break them in a lab environment. Typically I would build an ESXi host, assign it an IP and DNS, then apply a host profile to get all my configuration settings, then migrate the host onto a vDS.

Cheers,

Paul

r3zon8
Contributor

Absolutely.

If your hosts have a common config of 2 dual-port 10GbE HBAs each, with ports 0/1 always hard-wired to your VM/mgmt/vMotion physical switches and ports 2/3 always wired to NFS physical switches, then from a management point of view it would be ideal to employ a vDS. Any change made on a dvSwitch or dvPortGroup is pushed down to all the hosts attached to that dvSwitch right away. The time required for something as small as adding a new VLAN, shuffling around port assignments, or changing teaming policy across your DC will be greatly reduced. There's a small learning curve and some new caveats to watch out for when working with a vDS, but nothing too drastic. An added bonus is that your vMotions will be checked for connectivity before they are kicked off.
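The "change once, pushed everywhere" point is the answer to the original VLAN pain. A minimal sketch in plain Python (illustrative only, not the vSphere API; host and portgroup names are made up) of a portgroup defined once at the switch level being visible from every attached host:

```python
# Illustrative model only -- not the vSphere API. It contrasts per-host vSS
# configuration (one change per host) with a vDS, where a dvPortGroup added
# once centrally is visible on every attached host.

class DvSwitch:
    def __init__(self):
        self.hosts = []        # hosts attached to this dvSwitch
        self.portgroups = {}   # name -> VLAN ID, defined once centrally

    def attach(self, host):
        self.hosts.append(host)

    def add_portgroup(self, name, vlan):
        # One central change...
        self.portgroups[name] = vlan

    def visible_on(self, host):
        # ...is visible from every attached host, no per-host work needed.
        return dict(self.portgroups) if host in self.hosts else {}

vds = DvSwitch()
for h in ["esx1", "esx2", "esx3", "esx4", "esx5", "esx6"]:
    vds.attach(h)

vds.add_portgroup("VLAN110-Prod", 110)   # done once, not six times
print(vds.visible_on("esx4"))            # {'VLAN110-Prod': 110}
```

With a vSS, the equivalent would be six separate portgroup additions, one per host, which is exactly the manual VLAN work the original poster wants to avoid.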
