No, it doesn't matter how you mix VMs with different vCPUs counts. Each VM/vCPU is scheduled individually from each other.
As long as you don't overcommit too many (busy) vCPUs on a host you're fine, which obviously applies no matter whether all VMs have the same amount of vCPUs or not.
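To make the "don't overcommit too many vCPUs" rule concrete, here is a minimal sketch in plain Python (not a VMware tool). The function name and the 4:1 warning threshold are my own illustrative choices, a commonly cited rule of thumb rather than an official VMware limit:

```python
# Illustrative sketch: compute a host's vCPU-to-pCPU overcommit ratio
# given each VM's vCPU count. Mixing VM sizes is fine; what matters is
# the total of busy vCPUs competing for physical cores.

def overcommit_ratio(vm_vcpu_counts, physical_cores):
    """Total vCPUs across all VMs divided by the host's physical cores."""
    return sum(vm_vcpu_counts) / physical_cores

# Example: a 16-core host running a mix of 1-, 2- and 4-vCPU VMs.
vms = [1, 1, 2, 2, 4, 4, 4]   # 18 vCPUs total, mixed sizes
ratio = overcommit_ratio(vms, physical_cores=16)
print(f"overcommit ratio: {ratio:.2f}:1")
if ratio > 4:  # hypothetical rule-of-thumb threshold, not an official limit
    print("warning: heavy overcommit; busy vCPUs may contend for CPU time")
```

The point of the sketch: the ratio depends only on the vCPU total, not on how those vCPUs are distributed across VMs.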
Exactly. Virtualization is practically the sole answer to the question above.
Hello,
Thanks for your responses. I did some digging around in the meantime, because there is a kind of myth circulating in our company about the performance impact of mixing single-vCPU and multi-vCPU VMs. It seems to be related to version 2.x of ESX, according to these blog articles:
http://blog.scottlowe.org/2008/06/30/vmware-esx-cpu-scheduling-information/
http://www.yellow-bricks.com/2008/07/07/multiple-virtual-cpu-vms/
http://blogs.vmware.com/performance/2009/06/measuring-the-cost-of-smp-with-mixed-workloads.html
That is correct: that was an issue with the CPU scheduler used in ESX 2.x hosts. With ESX 3 the issue was resolved, and the scheduler has only gotten better with each subsequent release of vSphere.