Hello,
I am looking at the ESXi Host Configuration Maximums and realizing I don't know what the distinction is between Logical CPUs per ESXi Host (160) and vCPUs per ESXi Host (2048). Any explanation is greatly appreciated. Thanks!
hmm...
okay, let me try to explain.
1 LCPU can be shared by multiple VMs.
If I have 1 quad-core CPU, I have 4 LCPUs.
I can create multiple VMs, for example:
1) 1 VM with 1 vCPU
2) 1 VM with 2 vCPUs
3) 1 VM with 4 vCPUs
The 1st VM will be scheduled on the 1st LCPU.
The 2nd VM will be scheduled on the 2nd and 3rd LCPUs.
The 3rd VM will be scheduled on LCPUs 1, 2, 3 and 4.
If CPU requests come from the 1st VM and the 3rd VM at the same time, the hypervisor queues them and schedules them accordingly!
Hope you got it now!
1 quad-core CPU = 4 logical CPUs without hyperthreading
1 quad-core CPU = 8 logical CPUs with hyperthreading
So the number of LCPUs the host supports is 160.
You can assign multiple vCPUs: if I have 1 quad-core CPU, I can assign 4 vCPUs to a VM.
If I have 8 sockets on the server, 8 x 4 = 32 cores, so you can give a VM 32 vCPUs.
Again, 1 LCPU can be mapped to multiple VMs: on a single quad-core we can create 4 VMs with 4 vCPUs each.
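If it helps, the logical-CPU arithmetic above can be sketched in a few lines of Python (the host sizes are just the made-up examples from this thread, not any real inventory):

```python
def logical_cpus(sockets: int, cores_per_socket: int, hyperthreading: bool) -> int:
    """Logical CPUs = sockets x cores, doubled if hyperthreading is on."""
    threads_per_core = 2 if hyperthreading else 1
    return sockets * cores_per_socket * threads_per_core

# 1 quad-core CPU without hyperthreading -> 4 logical CPUs
print(logical_cpus(1, 4, hyperthreading=False))  # 4
# The same quad-core with hyperthreading -> 8 logical CPUs
print(logical_cpus(1, 4, hyperthreading=True))   # 8
# 8 sockets x 4 cores, no hyperthreading -> 32 logical CPUs
print(logical_cpus(8, 4, hyperthreading=False))  # 32
```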
Thanks. I get the LCPU now, but I'm somehow still unclear from your explanation regarding vCPUs...
The virtual CPU, or vCPU, is a CPU assigned to a VM.
A VM can have up to 32 vCPUs.
The maximum number of vCPUs a host can manage is 2048, totaled across all the VMs.
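A quick sketch of those two maximums as a sanity check (the VM sizes in the example are made up; the limits are the ones quoted above):

```python
MAX_VCPUS_PER_VM = 32      # per-VM limit quoted above
MAX_VCPUS_PER_HOST = 2048  # host-wide limit quoted above

def validate_vcpu_plan(vm_vcpus: list) -> int:
    """Return the host-wide vCPU total, raising if either limit is exceeded."""
    for i, v in enumerate(vm_vcpus, start=1):
        if v > MAX_VCPUS_PER_VM:
            raise ValueError(f"VM {i} wants {v} vCPUs, per-VM limit is {MAX_VCPUS_PER_VM}")
    total = sum(vm_vcpus)
    if total > MAX_VCPUS_PER_HOST:
        raise ValueError(f"host total {total} exceeds {MAX_VCPUS_PER_HOST}")
    return total

# The three example VMs from earlier in the thread: 1 + 2 + 4 vCPUs
print(validate_vcpu_plan([1, 2, 4]))  # 7
```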
Simply put: logical CPUs apply to the host hardware and relate to physical CPUs/cores and hyperthreaded cores.
For example, a dual-CPU, 8-core-per-CPU machine without hyperthreading enabled will show as having 16 logical CPUs, while the same machine with hyperthreading enabled will show 32 logical CPUs.
vCPUs relate to the virtual guests, hence the larger number, as we can overcommit the physical CPUs.
For example, a VM with 4 CPUs will use 4 vCPUs out of the 2048 limit, and a VM with 2 dual-core CPUs will also use 4 vCPUs out of the 2048 limit.
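The overcommit arithmetic in that example can be sketched like this (the host and guest shapes are the illustrative ones from the post, nothing measured):

```python
# Example host from above: dual CPU, 8 cores each, hyperthreading on.
host_lcpus = 2 * 8 * 2  # sockets x cores x threads = 32 logical CPUs

# Two example guests: "4 single-core sockets" and "2 dual-core sockets".
# Either way, each consumes sockets x cores = 4 vCPUs of the 2048 limit.
vms = [
    {"sockets": 4, "cores_per_socket": 1},
    {"sockets": 2, "cores_per_socket": 2},
]
vcpus_used = sum(vm["sockets"] * vm["cores_per_socket"] for vm in vms)

print(vcpus_used)               # 8 vCPUs consumed out of 2048
print(vcpus_used / host_lcpus)  # 0.25 -> well under 1:1, no overcommit yet
```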
That makes sense. Now how does this relate to VM performance? Say I have a host with 24 logical processors (2 sockets w/6 cores each) and a single VM with 8 vCPUs (2 sockets/4 cores). Will the VM run using the full hardware capability, or will it only use part of the host's CPU capability (2/3)?
The VM will attempt to run all its processing requirements on a single CPU. That said, there are very few applications in the x86 Windows world that actually utilise all addressable CPU cores or sockets.
In your case, assuming a fully SMP-aware application set, the VM will utilise 8 logical cores on your physical host, and no more.
That's what I was starting to realize. So would it make sense to assign the maximum vCPUs to each VM, and then, when the host is loaded, control the VMs that require greater bandwidth with the shares feature (poorly named, IMHO)? This is assuming all VMs are fully SMP-aware.
Time for the consultant's answer: it depends. It depends on your driving reasons for virtualising in the first instance. If it is consolidation, then no, keep your guests lean and mean; if it is portability and these are heavily utilised machines, then yes.
That said, I would argue that there are very few physical machines that actually utilise the resources available to them, especially CPU.