VMware Cloud Community
terminalx
Contributor

ESXi CPU Newb Question

Hi There

I have been searching through KBs and the community, and I think I know my answer, but it never hurts to ask, right?

Here's my environment:

Dell 2900 with 2 x quad-core 2.5 GHz processors: 2 sockets, 4 cores each, 8 logical processors total

Please help me out and validate/correct my statements

1.) So, I understand that the recommendation is for a guest to use 1 vCPU, which would use at most 1 logical processor. In theory, if I ran only one guest, with one vCPU at 100%, I would use ~1/8th of the host's potential (minus some overhead for ESX, of course).

2.) Since most Windows applications now support at least 2-way multithreading, it stands to reason that a server running a multithread-capable app at 100% at any given time on one vCPU could benefit from 2 vCPUs (no other guests notwithstanding).

3.) Since I have 8 logical processors and only 4 guests running Windows Server 2008, it stands to reason I should give each of them 2 vCPUs, thus fully utilizing the host. (See #5)

4.) If I ever added a fifth guest (with one or two vCPUs), then because of vSMP I would experience slowness, as the multi-vCPU machines would be waiting for 2 cores to be free at once.

5.) Since ESX needs some processor time for overhead, does it use one core in particular? I.e., would I be better off with 3 guests with 2 vCPUs each (6 cores used), 1 guest (the one that does almost nothing) with 1 vCPU (7 cores used), and ESX using the remaining core for itself (8 cores used)?
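To sanity-check the core math in #3 and #5, here's a quick back-of-the-envelope sketch. The "one core reserved for ESX" figure is just the simplistic assumption from #5, not a documented reservation, and the guest names are made up; real scheduling is dynamic rather than a fixed 1:1 mapping:

```python
# Simplistic core accounting for the host in this thread (Dell 2900,
# 2 quad-core sockets = 8 logical processors). Guest names are hypothetical.

TOTAL_CORES = 2 * 4      # 2 sockets x 4 cores each
ESX_OVERHEAD = 1         # assumption: roughly one core for the hypervisor itself

guests = {"busy-1": 2, "busy-2": 2, "busy-3": 2, "idle-box": 1}  # name -> vCPUs

allocated = sum(guests.values())
free = TOTAL_CORES - ESX_OVERHEAD - allocated
print(f"vCPUs allocated: {allocated}, cores left over: {free}")
# -> vCPUs allocated: 7, cores left over: 0
```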

Thanks for helping me out..

I feel silly asking, but it's a bit confusing..

-Henry


Accepted Solutions
Craig_Baltzer
Expert

  1. Yes

  2. Maybe; it depends on the load in the guest and how "busy" it can keep the 2 vCPUs. There is additional virtualization overhead, as well as multi-processor HAL overhead, to think about when going from uniprocessor to multi-processor, so you need to be "working" the 2nd CPU.

  3. ESX will also use a core, so if you take the simplistic view of allocating cores to VMs, you only have 7 to "allocate"

  4. Maybe, depending on how "active" the VMs are, but if all CPUs are busy then yes

  5. I believe core 0 is used for ESX. So in theory you could "plan out" core allocation as you've done.

Unless these are computationally intensive VMs, don't get too hung up on figuring out a 1-to-1 mapping of VMs to physical cores. The main issue is not to tie ESX up with any more scheduling constraints than you have to (i.e., the more vCPUs in a guest, the more "difficult" it is to acquire all of the resources needed to let it run). Let it look after the scheduling, and then monitor for usage and contention (using the ESX tools, not the tools in the guest OS).
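The "more vCPUs means more scheduling constraints" point can be illustrated with a toy probability model. This is a simplification for intuition only, not how the ESX scheduler actually works: assume each physical core is busy at a given instant with independent probability p, so a guest that must co-schedule k vCPUs finds all k cores free with probability (1 - p)^k:

```python
# Toy co-scheduling model (illustration only, not the real ESX algorithm):
# every extra vCPU multiplies in another (1 - p) factor, so the chance of
# finding enough simultaneously free cores shrinks geometrically.

def dispatch_chance(vcpus: int, core_busy_prob: float) -> float:
    """Chance that `vcpus` cores are all free at the same instant."""
    return (1.0 - core_busy_prob) ** vcpus

for k in (1, 2, 4):
    print(f"{k}-vCPU guest: {dispatch_chance(k, 0.5):.4f}")
# -> 1-vCPU guest: 0.5000
#    2-vCPU guest: 0.2500
#    4-vCPU guest: 0.0625
```

With half the cores busy on average, a 2-vCPU guest gets a dispatch window half as often as a 1-vCPU guest, which is why starting with a single vCPU and monitoring first is the usual advice.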

3 Replies
weinstein5
Immortal

1) That is correct, except most guests do not run at 100% constantly, so you will be able to place more than 8 VMs on the host.

2) That is true if the dual-vCPU VM is running at 100% the vast majority of the time, but most do not. Best practice is to start with a single vCPU and monitor to see if a second vCPU is needed. The vCPUs of a virtual SMP VM get scheduled simultaneously, so if the VMkernel cannot schedule both vCPUs, neither gets scheduled; it is easier to schedule 1 vCPU than 2 or 4.

3) No, start with a single vCPU; see #2.

4) Yes

5) You are correct, the VMkernel does require resources. As I mentioned before, best practice is to start with a single vCPU; more times than not you will find the VM's performance is fine.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

Texiwill
Leadership

Hello,

Moved to ESXi forum.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and other Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll
