We are implementing our 10G network; currently our VMware servers are on our 1G network. We are standing up new servers for the 10G network, and a requirement for us going forward is that each application group needs to be on its own VLAN. I've read that the maximum number of port groups per switch is 256 for a standard switch and 5,000 for a distributed switch.
We are looking at possibly adding a distributed switch in vCenter 5.1, but can we still choose to leave certain ESXi hosts (the ones on our 1G network) off the distributed switch and continue to use their standard switches? Only the new servers we are standing up will be on the 10G network, and we would want to put those on the distributed switch to handle the numerous port groups. Is it possible to do this, and if so, can it be done without affecting our current VMs on the 1G network (i.e. no network connectivity interruptions)? Eventually we plan to migrate our current VMs from our 1G network ESXi hosts to the hosts on the 10G network.
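To make the limits concrete, here is a minimal sketch that checks a planned per-application VLAN count against the port-group maximums quoted above. The function name and the limits table are illustrative assumptions; confirm the exact numbers against the vSphere Configuration Maximums guide for your version.

```python
# Hypothetical helper: check whether a planned number of per-application
# VLAN port groups fits the switch limits quoted above.
# Limits shown are the ones from the post (standard = 256, distributed = 5000);
# verify them against VMware's Configuration Maximums for your release.

SWITCH_LIMITS = {
    "standard": 256,      # port groups per vSphere Standard Switch
    "distributed": 5000,  # port groups per vSphere Distributed Switch
}

def fits_on_switch(switch_type: str, planned_port_groups: int) -> bool:
    """Return True if the planned port-group count fits the switch limit."""
    return planned_port_groups <= SWITCH_LIMITS[switch_type]

# Example: 300 application VLANs overflow a standard switch but fit a VDS.
print(fits_on_switch("standard", 300))     # False
print(fits_on_switch("distributed", 300))  # True
```

The point is simply that once you pass 256 application VLANs, the distributed switch is the only option of the two.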
Thanks
You can mix the VSS and VDS in a single cluster. Where this usually becomes an issue is vMotion of workloads, since in most cases your VDS workloads are limited to hosts on the VDS, and the same goes for the VSS. Migration of VMs from a VSS to a VDS can indeed be done without much of an interruption. There is a very brief network interruption as the switchover is made, but typically not enough to drop a ping. It depends on how sensitive your workloads are to latency and session interruption.
I've also gone through the exercise of mixing 1Gb and 10Gb uplinks on the same VDS when the use case demanded it. The design involved older hosts with 8x 1Gb uplinks and newer hosts with 2x 10Gb uplinks. It was a tad tricky to do and I don't advise it outside of corner cases.
Yeah, I would split this into two clusters in the same datacenter: one 10G cluster with a vDS, and one with your 1G hosts on standard switches.
When you are ready to start migrating workloads from the 1G cluster to the 10G cluster, you can add one of your 1G hosts to the vDS, but only use half of its uplinks, leaving the rest on the standard switch. If you use four interfaces (two mgmt and two guest traffic), you would have one mgmt and one guest traffic uplink on both the standard and distributed switch.
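A small sketch of that split-uplink layout, just to show the invariant it maintains. The NIC names (vmnic0 through vmnic3) and role labels are assumptions for illustration, not anything vSphere-specific:

```python
# Illustrative model of the four-NIC split described above: one mgmt
# and one guest uplink stay on the standard switch, and the other
# pair moves to the distributed switch. NIC names are hypothetical.

uplinks = {
    "vmnic0": ("mgmt",  "standard"),
    "vmnic1": ("guest", "standard"),
    "vmnic2": ("mgmt",  "distributed"),
    "vmnic3": ("guest", "distributed"),
}

def roles_on(switch: str) -> set:
    """Roles carried by the uplinks assigned to a given switch."""
    return {role for role, sw in uplinks.values() if sw == switch}

# Each switch keeps one mgmt and one guest uplink, so both switches
# stay usable during the migration (at the cost of NIC redundancy).
assert roles_on("standard") == {"mgmt", "guest"}
assert roles_on("distributed") == {"mgmt", "guest"}
```

The key property is that each switch still carries every traffic type, which is what lets the split host bridge the two clusters during the move.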
It won't have network redundancy, but you can use this single host to migrate from the standard switch cluster to the distributed switch cluster without downtime: migrate the VM to the split host, change its network adapter to the appropriate vDS port group, and then migrate it into the 10G cluster. As long as everything is within the EVC parameters, you should be able to do a seamless upgrade when you are ready.
We used a split/migration host to move the guests from 96 two-socket blades with standard switches into a UCS 10G cluster with the 1000v without downtime. We moved some 1,200 workloads this way.
If you won't have enough hosts for two clusters, you would need to put them all on the vDS to keep vMotion/DRS/HA working.
You also may be a candidate for using VXLANs combined with virtual firewalls for each of your applications. We are playing with them in the lab but haven't done anything in prod.
Yes, we plan to keep the 10G servers separate in their own cluster using the vDS and gradually migrate our VMs to the 10G servers. Thanks everyone
