VMware Cloud Community
firstamb
Contributor

vCPU to pCPU (Physical)

Man, my head is about to explode with all the information I read on vCPU to physical CPUs!

Let me just run this down: I work in an environment that has 20 hosts spread out over 2 datacenters, 12 hosts at A and 8 at B. We were under the impression that adding more hosts and spreading out our VMs would help alleviate some of the sluggishness issues. We had 6 at A and 4 at B. Keep in mind I said help, not be a cure-all! Well, after what I have been reading about VM resources and best practices I am just more confused. I need a more direct answer or something that can point me in the right direction.

Memory is not an issue on my hosts; I am pushing anywhere from 32GB to 48GB-ish on the hosts themselves, not great but good enough. Now, the vCPUs on my VMs are the issue I feel is causing some of the sluggishness. For example, my new hosts are:

Dell M600

42GB memory

8 CPUs x 2.826 GHz

2 Processor Sockets

4 Cores per Socket

8-16 logical processors, depending on whether we have Hyperthreading running on the host or not (this host has only 8)

With that host as an example, I have only three virtual machines on it. All three are Server 2003: two of them are terminal servers and one is a file/program server (one program is mapped out to users on the terminal servers).

Each of the three servers has 4 vCPUs and 4GB of memory.

My understanding is that on memory we are solid. 4x3=12, right? So 12GB of memory against roughly 42GB on this server; we are good, right?

vCPU is also 4x3=12, right? Is that 12 physical processor sockets, 12 cores, or 12 logical processors I would have to have? As you can see, this host doesn't have 12 of any of those, if I am understanding this correctly.

I read on here that you should not extend vCPUs out to more than your host has itself? So my ultimate question is, looking at the specs above, how many vCPUs can my host theoretically dish out? (I know it depends, but for the sake of it, let's put a number on it). Thank you in advance!

Needing help,

-Oscar

14 Replies
mcowger
Immortal

vCPUs map to cores, so 12 total vCPUs map to 12 physical cores. However, ESXi does time slicing of the cores, so you can overcommit your hosts (which you've done). Overcommitment is fine as long as it's not causing a performance issue.

Theoretically you can dish out 25 vCPUs per core, so 200 vCPUs on this host. But you will run into performance problems LONG before that.
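
To put rough numbers on that for the example host (a quick sketch, not part of the original reply; plain Python with the figures from the post above):

    physical_cores = 2 * 4       # 2 sockets x 4 cores per socket, HyperThreading off
    vcpus_allocated = 3 * 4      # 3 VMs x 4 vCPUs each

    ratio = vcpus_allocated / physical_cores
    print(f"{vcpus_allocated} vCPUs on {physical_cores} cores = {ratio:.1f}:1 overcommitment")

    # The theoretical ceiling mentioned above: 25 vCPUs per physical core
    print(f"theoretical maximum: {25 * physical_cores} vCPUs")

On its own a 1.5:1 ratio is modest; as discussed further down the thread, the bigger issue is each 4-vCPU guest needing scheduling time across four cores at once.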

--Matt VCDX #52 blog.cowger.us
firstamb
Contributor

So, with what you just said, here is another question I was trying to find the answer to: does my M600 have 8 physical cores, or just 2 (because they are quad-core)? How does ESXi view that?

I just re-read what you typed, and I am going to go with ESXi viewing my host as having 8 physical cores?

TomHowarth
Leadership

From a CPU and memory point of view you appear fine. Now, as an aside, which of the guests are having the sluggish issue? I am taking a wild guess here and saying that it is your Citrix boxes.

What version of XenApp are you running, and how many users are on it? Remember, a Citrix box is designed to have high CPU utilisation. Consider checking whether there is heavy context switching in the guest's perfmon stats, as this is evidence that the guest is waiting for processor time and swapping from one CPU to another to get CPU access.
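
If it helps, here is a rough sketch of sampling those counters from inside the Server 2003 guest using Windows' built-in typeperf (the counter list and intervals are only examples; you could equally run the typeperf line directly at a command prompt):

    import subprocess

    # Sample context switches and overall CPU use every 5 seconds for one minute.
    subprocess.run([
        "typeperf",
        r"\System\Context Switches/sec",
        r"\Processor(_Total)\% Processor Time",
        "-si", "5",    # sample interval in seconds
        "-sc", "12",   # number of samples to take
    ], check=True)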

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
akshunj
Enthusiast

Correct. With HyperThreading enabled you would have 16 logical processors; either way, you have 8 physical cores residing in two sockets.

TomHowarth
Leadership

To see what ESXi sees, look at the Summary tab in vCenter after you have selected a host.
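
If you prefer a scripted view of the same information, something along these lines should work with pyVmomi (a sketch only; the vCenter address and credentials are placeholders):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="***", sslContext=ctx)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cpu = host.hardware.cpuInfo
        print(host.name, "sockets:", cpu.numCpuPackages,
              "cores:", cpu.numCpuCores, "logical:", cpu.numCpuThreads)

    Disconnect(si)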

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
firstamb
Contributor

So logical processors is 8. Is that what I would base my vCPU deployment on, the logical processor count?

Well, the sluggishness is throughout, but the Citrix boxes show it more. I am running Presentation Server 4.5.

sparrowangelste
Virtuoso

could the storage be slowing you down?

--------------------- Sparrowangelstechnology : Vmware lover http://sparrowangelstechnology.blogspot.com
TomHowarth
Leadership

I tend never to overprovision Citrix servers; they are high CPU users. I would say that you are suffering from CPU contention issues because you have 2 x 4-vCPU Citrix guests on there. Consider dropping the vCPU count to 2 on each of them and actually adding another Citrix server or two to your farm.

I would lay money that your sluggishness would be reduced, as the guests would not be waiting for 4 pCPU cores to become available on the host to deal with guest processor requests.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
TomHowarth
Leadership

I doubt it is storage, as this is not a global issue; only certain guests are being affected on certain hosts. From the guest type mix and number of vCPUs utilised, I would think it is a CPU contention issue caused by heavy loading of his Citrix guests.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
a_p_
Leadership

This is how your example host looks:

  • Dell M600 blades with Intel E54xx CPUs!?
  • Two quad-core processors, HyperThreading disabled!?
  • What type of storage (FC/iSCSI/local)?
  • In case of local storage: SAS or SATA disks? RAID controller with BBU (write-cache enabled)?
  • Windows Server 2003 Standard edition.

A few thoughts on this:

  • In my experience XenApp runs out of memory rather than CPU (unless you are running really CPU-intensive applications).
  • Enabling HyperThreading may help a little bit for the CPUs you use. For the 55xx or newer CPU models, enabling HyperThreading will make a noticeable difference.
  • If you are using local storage, a BBU (battery-backed write cache) makes a HUGE difference in disk performance (a factor of 10x or better).
  • A default installation of Windows 2003 does not align the partitions (31.5kB offset) to the VMFS and the physical storage. This may not be a real issue with a SAN storage system, but for local storage properly aligned partitions can make a 10-15% difference in disk performance. If there's a chance, align the partitions to 1MB, which is the default for Windows Vista/2008 and newer (a quick alignment check is sketched below this list).
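
A quick way to check the current alignment from inside a Windows guest (a sketch, not part of the original post; it reads the partition starting offsets via WMI, and the default 2003 offset of 32256 bytes will show up as unaligned):

    import re
    import subprocess

    # Win32_DiskPartition exposes the byte offset each partition starts at.
    out = subprocess.check_output(
        ["wmic", "partition", "get", "Name,StartingOffset", "/format:list"],
        text=True)

    names = [n.strip() for n in re.findall(r"Name=(.+)", out)]
    offsets = [int(o) for o in re.findall(r"StartingOffset=(\d+)", out)]

    for name, offset in zip(names, offsets):
        aligned = offset % (1024 * 1024) == 0
        print(f"{name}: starts at {offset} bytes -> {'1MB aligned' if aligned else 'not 1MB aligned'}")

Newly created data partitions can be aligned at creation time (for example with diskpart's align parameter, given in KB); an existing system partition generally has to be recreated or migrated rather than realigned in place.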

My conclusion for the mentioned example host:

  • The most important point - and this is where I absolutely agree with Tom - is to reduce the vCPU count to two for the XenApp servers as well as for the file server (I assume this will make no difference to the file server). This will leave two physical cores free for the host and should reduce the "CPU Ready" values, which may be causing the sluggishness (a quick conversion for reading those values is sketched after this list).
  • In case of local storage, consider adding a BBU unless you already have one.
  • If the above does not help, consider aligning the partitions to 1MB and - in case it's the guest memory that is the limit - consider switching to Windows 2003 Enterprise Edition to allow more guest memory (up to 12-14GB usually works well with XenApp on Windows 2003).
    Hint: one license of Windows Server 2003 Enterprise (it has to be at least an R2 license) allows you to install up to 4 instances of Windows Server on a single host.
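
As a rough guide for reading those "CPU Ready" values (a sketch of the usual conversion, not part of the original post; vCenter's real-time charts report ready time in milliseconds summed over 20-second intervals):

    # Convert a CPU ready value from the vCenter real-time chart
    # (milliseconds per 20,000 ms sample) into a percentage per vCPU.
    def ready_percent(ready_ms, interval_s=20, vcpus=1):
        return ready_ms / (interval_s * 1000 * vcpus) * 100

    # Hypothetical example: 1,600 ms of ready time on a 4-vCPU guest
    print(f"{ready_percent(1600, vcpus=4):.1f}% ready per vCPU")   # -> 2.0%

    # A common rule of thumb is that sustained values above roughly 5%
    # per vCPU are where sluggishness starts to become noticeable.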

André

firstamb
Contributor

OK, so on my host server, since I have 2 x 4 cores, I only have 8 pCPUs to dish out?

Put the file server down to 1 and keep the Citrix at 2 each? I will try that tonight. That would mean, hypothetically speaking, I would have 3 pCPUs in 'reserve', so to speak: 2x2+1 = 5, and 8-5 = 3 standby pCPUs?

-Oscar

TomHowarth
Leadership

Also remember that ESXi itself requires access to the CPU; it will never need more than a single core.

Therefore, with your 8 cores, 2 x 2-vCPU Citrix boxes and a single-vCPU file and print server, you will be using 6 cores and should have 2 cores spare. The beauty of vSphere is that you can actually have 3 vCPUs by manually editing the VMX file, so at a push you could bump your CTX servers to three vCPUs. That said, I personally would grow your server farm with another 2-vCPU server instead.
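
For reference, the .vmx edit mentioned above is a single line (a sketch of the parameter as used by ESX/ESXi of that era; change it only while the VM is powered off, and whether an odd vCPU count behaves well also depends on the guest HAL discussed below):

    numvcpus = "3"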

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
a_p_
Leadership

Put the file server down to 1 ...

Caution: if you do this you will have to switch back from the "Multiprocessor HAL" to the "Uniprocessor HAL" for the processor in the Windows 2003 Device Manager. I'd suggest you stay with 2 vCPUs for the moment and only go down to one if it's really necessary.

André

TomHowarth
Leadership

You make a good point regarding the HAL, André. Simply taking a machine from 4 vCPUs to 1 vCPU will actually cause more issues, as you will need to change the HAL that the guest uses. Just drop them down to 2 vCPUs.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410