VMware Cloud Community
mutthu
Enthusiast

ESXi: 1 physical socket vs. 2 physical sockets

I hope someone can clear this up for me.

I have been using ESXi hosts with 2x 18-core CPUs all this time, and it's time to refresh the hardware.

I am considering buying a single-socket ESXi server instead of a dual-socket one, since modern CPU architectures perform better than the old ones.

Typically, when sizing a VM, we assign 1 or 2 cores and don't give it more than it needs; over-provisioning can degrade performance significantly.

We will be running SQL Server and Citrix servers in an on-premises VMware cluster.

Will there be a performance issue if we buy a 1-socket server vs. a 2-socket one?

4 Replies
a_p_
Leadership

It really depends on the requirements, and without knowing those, it's hard (nearly impossible) to answer your question.
However, I'd suggest that you take a look at the current resource usage to see how much CPU your VMs consume.
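To make that concrete, here's a tiny hypothetical helper (my own illustration, not a VMware tool; the function name and headroom default are made up) that turns an observed peak CPU demand into a vCPU count, instead of starting from a vendor's blanket core request:

```python
import math

def recommend_vcpus(peak_mhz_used: float, core_speed_mhz: float,
                    headroom: float = 0.25) -> int:
    """Size a VM from its observed peak CPU demand plus some headroom."""
    needed = peak_mhz_used * (1 + headroom) / core_speed_mhz
    return max(1, math.ceil(needed))

# A VM peaking at 9,000 MHz on a host with 2.4 GHz cores:
print(recommend_vcpus(9000, 2400))  # → 5
```

The real numbers would come from your monitoring (e.g. historical CPU usage charts), but the idea is the same: size from measured demand, then grow if needed.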

André

IRIX201110141
Champion

A single CPU with a high core count most likely runs at a lower clock speed (MHz) than a CPU with fewer cores. If the clock speed meets your performance requirements, go for it. A single CPU usually means a single NUMA node, which is great and automatically ends all discussion about (v)NUMA and the performance impact of a VM that doesn't fit into one NUMA node.
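The "fits into a NUMA node" idea can be sketched in a few lines of Python (an illustration with made-up types, not the ESXi scheduler): a VM stays NUMA-local only if both its vCPU count and its memory fit within one node.

```python
from dataclasses import dataclass

@dataclass
class NumaNode:
    cores: int      # physical cores in the node
    memory_gb: int  # memory attached to the node

def fits_in_one_node(vcpus: int, mem_gb: int, node: NumaNode) -> bool:
    """True if the VM can run entirely within one NUMA node,
    keeping all of its memory accesses local."""
    return vcpus <= node.cores and mem_gb <= node.memory_gb

# Single-socket 24-core host with 256 GB: one NUMA node.
node = NumaNode(cores=24, memory_gb=256)
print(fits_in_one_node(16, 128, node))   # 16-vCPU VM stays NUMA-local
print(fits_in_one_node(32, 128, node))   # would have to span nodes on a dual-socket box
```

On a dual-socket host, the same 256 GB and 36 cores would be split across two nodes, so a wide VM can end up with remote memory accesses unless vNUMA is presented and the guest is NUMA-aware.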

 

Most of our vSAN clusters have only one CPU socket per host, because of the licensing with 4 or more hosts. Intel has special "U" Xeon CPUs which come at a great price, and we chose the 24-core version.

Regards,
Joerg

mutthu
Enthusiast

Sometimes vendors come up with a crazy CPU core count, and we often tell them to start with the minimum. You made a good point about NUMA and about vSAN host licensing. A 24-core physical CPU would let us assign a VM 12-16 vCPUs if a vendor requests it. Will such a monster VM penalize the other VMs, even though the vSphere scheduler adjusts CPU usage in a vSAN environment?

IRIX201110141
Champion

A VM with an oversized vCPU count will almost always harm your environment. There is a reason we don't configure every VM with the maximum vCPU count and just let the hypervisor do some magic.

If the guest OS spreads its processes across all "available" CPUs, the hypervisor needs to find a free pCore for each of them at the same time. In the meantime, no other VM gets resources, which increases your CPU ready times, and performance goes down.

Yes, with Windows Server 2012 (and somewhere around vSphere 6, if I remember correctly), the hypervisor tries to tell the guest OS that it's running as a VM and, when possible, to consolidate the work onto fewer vCPUs, because if a vCPU has zero workload, the hypervisor doesn't need to allocate a pCore for it. But it's always the guest OS that makes the decision.
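A toy model (my own illustration, not the actual ESXi scheduler, which uses relaxed co-scheduling) shows why wide VMs wait: the more vCPUs a VM has, the rarer the moment when enough physical cores are free at once.

```python
import random

def ready_fraction(vm_vcpus: int, host_cores: int, other_load: float,
                   samples: int = 20_000, seed: int = 42) -> float:
    """Fraction of scheduling opportunities where the VM cannot run
    because fewer than vm_vcpus physical cores are free at once.
    Each pCore is modeled as independently busy with probability other_load."""
    rng = random.Random(seed)
    stalled = 0
    for _ in range(samples):
        free = sum(1 for _ in range(host_cores) if rng.random() >= other_load)
        if free < vm_vcpus:
            stalled += 1
    return stalled / samples

# 24-core host, each core busy 50% of the time with other VMs' work:
print(ready_fraction(4, 24, 0.5))   # a 4-vCPU VM almost never has to wait
print(ready_fraction(16, 24, 0.5))  # a 16-vCPU VM waits most of the time
```

The model is crude (real co-scheduling only bounds vCPU skew rather than demanding all cores simultaneously), but the trend matches what you see in the %RDY counter: ready time grows steeply with vCPU width.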

So, as always: right-size your VMs!

Regards,
Joerg
