I would like to see the ability in DRS Rules to keep newly created VMs off of a specific ESXi host. First, we would create a group of VMs that are allowed to run on that host. Then, I would like to be able to create a rule stating, "VMs NOT in this VM group must not run on hosts in this host group." The change would be the ability to apply the rule to all VMs outside a VM group, rather than having to add every VM to a group explicitly.
We are a small IT department, and we provision View clients on the same cluster as our servers. Our specific situation is that we have virtualized Cisco VOIP servers on one of our ESXi hosts, and according to Cisco support we are not allowed to run non-Cisco-VOIP servers on that same ESXi host. We have a single host for VOIP, and it is in the same cluster with four other ESXi hosts. The four other hosts run all of our other VMs. We put the "VOIP" host in the cluster to give it the ability to take advantage of HA. The "VOIP" host uses less CPU/memory than our other hosts, so when we provision a View client, DRS automatically places it on the "VOIP" host. The same thing happens when we deploy a VM from a template. It is more problematic with View provisioning, because the "VOIP" host does not have the network configured that the View clients need. We end up vMotioning the newly created VMs to other hosts as soon as they show up.