Man, my head is about to explode with all the information I have read on vCPU-to-physical-CPU mapping!
Let me run this down. I work in an environment that has 20 hosts spread out over 2 datacenters: 12 hosts at A and 8 at B. We were under the impression that adding more hosts and spreading out our VMs would help alleviate some of the sluggishness (we used to have 6 at A and 4 at B). Keep in mind I said help, not be a cure-all! Well, after everything I have been reading about VM resources and best practices I am just more confused. I need a more direct answer, or something that can point me in the right direction.
Memory is not an issue on my hosts; I am pushing anywhere from 32 GB to 48 GB on the hosts themselves. Not great, but good enough. The vCPUs on my VMs are what I feel is causing some of the sluggishness. For example, my new hosts are:
Dell m600
42GB memory
8 CPUs x 2.826 GHz
2 Processor Sockets
4 Cores per Socket
8-16 logical processors, depending on whether Hyperthreading is running on the host (this host has only 8)
Using that host as an example, I have only three virtual machines on it. All three are Server 2003: two of them are terminal servers and one is a file/program server (one program is mapped out to users on the terminal servers).
Each of the three servers has 4 vCPUs and 4 GB of memory.
My understanding is that on memory we are solid: 4 GB x 3 = 12 GB, and we have ~42 GB on this server, so we are good, right?
For vCPUs it's 4 x 3 = 12, right? Is that 12 physical processor sockets, 12 cores, or 12 logical processors I would have to have? As you can see, this host doesn't have 12 of any of those, if I am understanding this correctly.
I read on here that you should not extend vCPUs out beyond what your host itself has. So my ultimate question is: looking at the specs above, how many vCPUs can my host theoretically dish out? (I know it depends, but for the sake of it, let's put a number on it.) Thank you in advance!
Needing help,
-Oscar
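For anyone following along, the allocation arithmetic in the question can be sanity-checked with a short Python sketch. All figures come from the host and VM specs quoted above; the VM names are made up for illustration:

```python
# Host capacity, from the Dell M600 specs quoted above
host_memory_gb = 42
host_physical_cores = 8          # 2 sockets x 4 cores per socket

# The three Server 2003 guests described in the post
# (names are illustrative, not from the thread)
vms = [
    {"name": "terminal-server-1", "vcpus": 4, "memory_gb": 4},
    {"name": "terminal-server-2", "vcpus": 4, "memory_gb": 4},
    {"name": "file-server",       "vcpus": 4, "memory_gb": 4},
]

total_memory = sum(vm["memory_gb"] for vm in vms)   # 4 x 3 = 12 GB
total_vcpus = sum(vm["vcpus"] for vm in vms)        # 4 x 3 = 12 vCPUs

print(f"Memory: {total_memory} GB allocated of {host_memory_gb} GB")
print(f"vCPUs: {total_vcpus} allocated on {host_physical_cores} cores "
      f"({total_vcpus / host_physical_cores:.2f}:1 overcommit)")
```

So memory is comfortably undercommitted (12 of 42 GB), while the CPU side is overcommitted at 1.5 vCPUs per physical core, which is where the rest of the thread focuses.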
This is what your example host looks like:
A few thoughts on this:
My conclusion for the mentioned example host:
André
vCPUs map to cores, so 12 total vCPUs map to 12 physical cores. However, ESXi does time slicing of the cores, so you can overcommit your hosts (which you've done). Overcommitment is fine as long as it's not causing a performance issue.
Theoretically you can dish out 25 vCPUs per core, so 200 vCPUs. But you will run into performance problems LONG before that.
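To put that "theoretical" number on it, as Oscar asked, here is the arithmetic behind the reply above (the 25:1 figure is the one quoted in this thread; actual scheduler maximums vary by ESXi version):

```python
# Theoretical vCPU ceiling for the example host.
physical_cores = 8               # 2 sockets x 4 cores
vcpus_per_core_limit = 25        # figure quoted in the reply above;
                                 # real limits depend on the ESXi version

theoretical_max_vcpus = physical_cores * vcpus_per_core_limit
print(f"Theoretical ceiling: {theoretical_max_vcpus} vCPUs")  # 200
```

As the reply stresses, this is a scheduler limit, not a sizing target; contention makes itself felt far below it.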
So, with what you just said, this is another question I was trying to find the answer to: does my M600 have 8 physical cores, or just 2 (because they are quad-core)? How does ESXi view that?
I just re-read what you typed, and I am going to go with ESXi viewing my host as having 8 physical cores?
From a CPU and memory point of view you appear fine. Now, as an aside, which of the guests are having the sluggishness issue? I am taking a wild guess here and saying that it is your Citrix boxes.
What version of XenApp are you running, and how many users are on it? Remember that a Citrix box is designed to have high CPU utilisation. Consider checking whether there is any context switching in the guest perfmon stats, as this is evidence that the guest is waiting for processor time and swapping from one CPU to another to get CPU access.
Correct. You have 16 logical cores, 8 physical cores, residing in two sockets.
To see what ESXi sees, look at the summary tab in vCenter after you have selected a host.
So Logical Processors shows 8. Is that what I would base my vCPU deployment off of? The logical processor count?
Well, the sluggishness is throughout, but the Citrix boxes show it more. I am running Presentation Server 4.5.
Could the storage be slowing you down?
I tend never to overprovision Citrix servers; they are heavy CPU users. I would say that you are suffering from CPU contention due to the fact that you have 2 x 4-vCPU Citrix guests on there. Consider dropping the CPU count to 2 on each of them and actually adding another Citrix server or two into your farm.
I would lay money that your sluggishness would be reduced, as the guests would no longer be waiting for 4 pCPU cores to become available on the host to deal with guest processor requests.
I doubt it is storage, as this is not a global issue; only certain guests are being affected on certain hosts. From the guest type mix and the number of vCPUs utilised, I would think it is a CPU contention issue caused by the heavy loading of his Citrix guests.
OK, so on my host server, since I have 2 x 4 cores, I only have 8 pCPUs to dish out?
Put the file server down to 1, and keep the Citrix at 2 each? Will try that tonight. That would mean, hypothetically speaking, I would have 3 pCPUs as 'reserve' so to speak: 2x2 + 1 = 5, and 8 - 5 = 3 standby pCPUs?
-Oscar
Also remember that ESXi itself requires access to CPU. This will never need more than a single core.
Therefore, with your 8 cores, 2 x 2-vCPU Citrix boxes, and a single-vCPU file and print server, you will be using 6 cores and should have 2 cores spare. The beauty of vSphere is that you can actually have 3 vCPUs by manually editing the VMX file, so at a push you could bump your CTX servers to three CPUs. That said, I personally would increase your server farm with another 2-vCPU server instead.
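The core budget in that reply can be sketched in Python (the one-core allowance for ESXi itself is the rule of thumb from the post above, not a hard reservation):

```python
# Core budget for the proposed layout: two 2-vCPU Citrix guests,
# a 1-vCPU file/print server, and one core left for ESXi itself.
host_cores = 8
citrix_vcpus = 2 * 2       # two Citrix guests at 2 vCPUs each
file_server_vcpus = 1      # file/print server dropped to 1 vCPU
hypervisor_allowance = 1   # ESXi's own CPU use, per the advice above

used = citrix_vcpus + file_server_vcpus + hypervisor_allowance
spare = host_cores - used
print(f"{used} cores accounted for, {spare} spare")  # 6 used, 2 spare
```

This matches Oscar's own arithmetic one post earlier, minus the extra core he had not yet set aside for the hypervisor.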
Put the file server down to 1 ...
Caution: if you do this you will have to switch back from the "Multiprocessor HAL" to the "Uniprocessor HAL" for the processor in the Windows 2003 Device Manager. I'd suggest you stay with 2 vCPUs at the moment and only go to one if it's really necessary.
André
You make a good point regarding the HAL, André. Simply taking a machine from 4 vCPUs to 1 vCPU will actually cause more issues, as you will need to change the HAL that the guest uses. Just drop them down to 2 vCPUs.