Cores vs clock speed

Looking for some information on how cores compare to clock speed in a virtual environment. If anyone has some links/blogs/technical information to share I would appreciate it.

Building a couple of new ESXi hosts, and just curious how important clock speed is vs. core count. The modern Intel Xeon family really doesn't offer high clock speeds anymore once you start looking at 10+ core processors.

For example, I'm curious how a 2.0GHz 18-core proc would compare to a 2.6GHz 14-core proc. Maybe it's more workload dependent: how many VMs are running per host vs. the requirements of those VMs.

Any ideas are appreciated. I browsed around for a good white paper or article, and most of what I found was for gaming machines.



1 Reply

I'd say that more cores are better, and that hyperthreaded CPUs are better than non-hyperthreaded CPUs. Here's why:

1).  A core (regular or hyperthread) can basically be equated to a vCPU.

2).  Add up the number of vCPUs used by all the VMs you're likely to have active at once, and compare that total to the number of cores plus hyperthread cores on the processor(s); that tells you how much ESXi will have to manage this resource for you. If the total number of vCPUs in use by active VMs is less than or equal to the number of cores plus hyperthread cores, no management is necessary, because a core will always be ready to assign to a vCPU when a VM needs it. On the other hand, if there are more active vCPUs than total cores, ESXi has to schedule them, since there aren't enough cores for every VM to use what it wants at the same time. The higher the ratio of active vCPUs to available cores, the harder that task becomes, the more likely one or more VMs will have to wait their turn, and the slower the response time.
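That comparison is simple enough to sketch as a back-of-the-envelope calculation. The host size and per-VM vCPU counts below are made-up numbers just for illustration:

```python
# Rough vCPU-to-logical-core overcommit check (illustrative numbers only).
physical_cores = 2 * 14             # e.g. dual-socket host, 14 cores per socket
logical_cores = physical_cores * 2  # with hyperthreading enabled

# Total vCPUs across the VMs expected to be active at once (made-up mix).
active_vcpus = sum([4, 4, 2, 2, 8, 1, 1, 2])  # = 24

ratio = active_vcpus / logical_cores
if active_vcpus <= logical_cores:
    print(f"{ratio:.2f}:1 - a logical core is always free for each vCPU")
else:
    print(f"{ratio:.2f}:1 - ESXi must time-slice vCPUs onto cores")
```

With these numbers the host is under 1:1, so no scheduling contention; bump `active_vcpus` past 56 and you're into the territory where VMs wait their turn.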

3).  Hyperthread cores are good because they cost very little for the benefit they give. Remember, a hyperthread core siphons CPU cycles off of the main core it belongs to only if both are actually executing instructions at the same time. ESXi does its best not to schedule a vCPU to a hyperthread core if a regular core is available instead, meaning it won't use them until all the regular cores are assigned.

However, here's the good part. Suppose that a vCPU for VM "A" has been assigned to main core 0 on the processor and all the other main cores are also in use. Also assume that VM "A" has multiple vCPUs assigned to it, which means that in order for it to run, multiple cores must be available. When VM "A" gets its timeslice and its vCPUs are mapped to physical cores, that doesn't necessarily mean VM "A" will actually be executing instructions on all of its assigned cores for the entire duration of the timeslice. It may have only one process that needs CPU, leaving the other core idle, but even so it must be assigned as many cores as it asked for, because its operating system expects them to be there whether they're being used or not.

Now suppose VM "B" was assigned hyperthread core 0, the sibling of the main core VM "A" is using, because all the other main cores were in use and ESXi had to start mapping vCPUs to hyperthread cores. VM "B" might still get more than half of the computing power of that core, to the extent that VM "A" isn't actually executing instructions on the associated main core at the same time. So at worst, if instructions are executing on both the main core and the hyperthread core by different VMs at the same time, each runs at about half speed; but if one of them isn't actively using the core, even though one of its vCPUs is mapped to it during its timeslice, the other VM seamlessly receives the CPU cycles the first VM isn't using.
In essence this allows ESXi to overcommit CPU resources by having all these extra hyperthread cores available, and the architecture of the chip will seamlessly tune their use by virtue of how hyperthreading works.
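The sharing behavior described above can be captured in a toy model. This is deliberately simplified (real hyperthread siblings usually retain somewhat more than 50% each when both are busy, and the exact figure is workload dependent), but it shows why a hyperthread core is rarely "half a core" in practice:

```python
# Toy model of hyperthread sharing as described above: while both sibling
# threads execute simultaneously, each gets ~half the core; whenever the
# main core's VM is idle, the hyperthread receives the whole core.
def effective_core_share(sibling_busy_fraction: float) -> float:
    """Fraction of a full core's cycles VM "B" effectively receives when
    its vCPU sits on a hyperthread whose sibling main core is busy for
    sibling_busy_fraction (0.0-1.0) of the timeslice."""
    return sibling_busy_fraction * 0.5 + (1.0 - sibling_busy_fraction) * 1.0

print(effective_core_share(1.0))   # sibling always busy -> 0.5 (worst case)
print(effective_core_share(0.25))  # sibling busy 25% of the time -> 0.875
```

So unless the neighboring vCPU is pegged at 100% for the whole timeslice, VM "B" gets noticeably more than half a core for free.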

4).  Now, the one situation where fewer but faster cores, or faster cores without hyperthreading (e.g. some AMD processors), might be better is if the VMs you intend to run are mostly single-vCPU workloads, the workload within each VM is mostly single threaded, and you aren't going to run significantly more VMs of this type at the same time than the number of physical CPU cores you have available. This can be the case with workloads where most of the processing is mathematical calculation (e.g. scientific or engineering applications) that must run serially and can't be broken up to take advantage of multithreading, as opposed to multitasked or IO-based workloads like databases, email servers, and web servers. In that case, completing the single-threaded calculation faster courtesy of the higher clock speed might yield a better result.
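As a rough illustration using the two processors from the original question (aggregate clock is a crude proxy that ignores IPC, turbo boost, cache, and memory bandwidth; the numbers are for intuition only):

```python
# Crude throughput comparison of the two CPUs from the original question.
# Aggregate GHz approximates parallel capacity for many-VM hosts;
# per-core GHz approximates single-threaded speed.
cpus = {
    "18-core @ 2.0 GHz": (18, 2.0),
    "14-core @ 2.6 GHz": (14, 2.6),
}
for name, (cores, ghz) in cpus.items():
    print(f"{name}: aggregate {cores * ghz:.1f} GHz, single-thread {ghz} GHz")
```

By this crude measure the two parts land within about 1% of each other in aggregate (36.0 vs. 36.4 GHz), so for lots of small VMs they're roughly a wash, while the 14-core part is about 30% faster on any one serial calculation.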

Hope that helps.
