I have a vSphere cluster with 4 ESX servers and DPM enabled.
ESX hosts 1 to 3 are identical: same CPU, RAM, and so on.
ESX host 4 has a newer CPU and the same number of cores as ESX 1-3, but less RAM.
With DPM and DRS set to fully automatic, DPM powers off ESX host 3. This leads to about 40% memory utilization on ESX 1 and 2 and 80% on ESX 4.
I use Quest Foglight for VMware to monitor my environment. It continuously complains that the memory control driver is unable to deflate on the VMs located on ESX host 4, which is under higher memory pressure than ESX 1 and 2:
"[Critical] The Memory Control Driver for virtual machine , will not deflate. The VM is unable to reclaim memory that is allocated to it. Add more physical resources to the server or use VMotion to move the VM to better balance utilization across servers in the cluster."
If DPM had powered off ESX number 4 instead of 3, I think the cluster would be better balanced, and the total memory pressure lower.
Why does ESX 3 go into standby and not ESX 4? Can I change this behaviour?
The difference in memory utilization is because the newer CPU supports hardware-assisted memory management (a hardware MMU, such as Intel EPT or AMD RVI), so ESX backs its guests with large pages on that host, while the older servers, which lack it, use small pages that Transparent Page Sharing (TPS) can reclaim. For more on this see: http://blogs.vmware.com/uptime/2011/01/new-hardware-can-affect-tps.html.
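A toy model can show the effect. TPS collapses identical pages, but only at the granularity the host uses to back guest memory, so a host using 2 MB large pages has far fewer identical pages to share than one using 4 KB small pages. The dedup scheme and the 8 MB "VM" below are purely illustrative, not ESX's actual sharing algorithm:

```python
# Toy model (not the real TPS implementation): count how much memory
# n identical VMs consume after dedup at a given page granularity.

SMALL = 4 * 1024          # 4 KB small page
LARGE = 2 * 1024 * 1024   # 2 MB large page

def shared_footprint(vm_memory: bytes, page_size: int) -> int:
    """Memory used by identical VMs after dedup at page_size granularity."""
    # split the VM image into pages and keep only the unique ones;
    # identical VMs then pay for each unique page exactly once
    pages = {vm_memory[i:i + page_size]
             for i in range(0, len(vm_memory), page_size)}
    return len(pages) * page_size

# an 8 MB "VM": 6 MB of zeros plus 2 MB of a repeating pattern
vm = bytes(6 * 1024 * 1024) + bytes(range(256)) * 8192

small_cost = shared_footprint(vm, SMALL)
large_cost = shared_footprint(vm, LARGE)
print(f"4 KB pages: {small_cost // 1024} KB shared footprint")
print(f"2 MB pages: {large_cost // 1024} KB shared footprint")
```

At 4 KB granularity almost everything collapses (one zero page plus one pattern page), while at 2 MB granularity whole large pages must match before anything is shared, which is why the large-page host reports much higher memory consumption for the same VMs.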
There's no way to tell DPM to prefer putting ESX 4 into standby over the other hosts.
In theory, it shuts down the ESX host with the least load on it: it migrates the VMs off that host and then powers it down.
You have two options: you could set DRS rules on the VMs to try to keep some of them on each of hosts 1-3, effectively steering DPM into selecting host 4 as the shutdown candidate, or, as Andre says, you can specify which hosts are actually eligible for DPM.
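The eligibility option can be sketched as follows. This is a simplified model of the selection logic described above, not VMware's actual DPM algorithm, and the host names and load figures are taken from the question for illustration:

```python
# Simplified sketch: among hosts whose per-host power-management
# override leaves them eligible for DPM, pick the least-loaded one
# as the standby candidate.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    memory_load: float        # fraction of RAM in use
    dpm_eligible: bool = True # per-host power-management override

def standby_candidate(hosts):
    """Return the least-loaded DPM-eligible host, or None."""
    eligible = [h for h in hosts if h.dpm_eligible]
    return min(eligible, key=lambda h: h.memory_load, default=None)

hosts = [
    Host("esx1", 0.40),
    Host("esx2", 0.40),
    Host("esx3", 0.38),
    Host("esx4", 0.80),
]

# with every host eligible, the least-loaded host (esx3) is chosen
print(standby_candidate(hosts).name)

# disabling the DPM override on hosts 1-3 leaves esx4 as the only candidate
for h in hosts[:3]:
    h.dpm_eligible = False
print(standby_candidate(hosts).name)
```

In the real cluster the equivalent of the `dpm_eligible` flag is the per-host Power Management setting in the cluster's DPM configuration, which lets you exclude hosts 1-3 so that host 4 becomes the one DPM powers off.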