VMware Cloud Community
pmorrison
Enthusiast

High CPU Ready times

I have just started to notice that several of my Citrix VMs are reporting very high CPU ready times, and users are complaining about speed.

The servers are in a resource pool that should be allowing enough resources, and the hosts in the cluster are only averaging 30% CPU utilization.

It's a 4-node VI3 cluster running on 4-processor dual-core boxes with 32 GB of memory.

I have tried increasing the resource pool shares, but that doesn't seem to be helping...

17 Replies
ilatimer
Hot Shot

This is usually indicative of a CPU scheduling issue. Are the Citrix servers single-vCPU or multi-vCPU VMs? If the Citrix VMs are multi-vCPU and there are a lot of other VMs running on the ESX host, the CPU scheduler might have trouble scheduling both vCPUs for a Citrix VM at the same time, causing high CPU ready for those VMs.

pmorrison
Enthusiast

They are 2-vCPU boxes...

I could try going to 1 vCPU...

pmorrison
Enthusiast

Interesting... going to 1 vCPU made a huge difference in performance within the VM. Is it me, or is this strange?

Paul_Lalonde
Commander

Actually, no, this is expected behaviour.

Using multi-vCPU VMs really only makes sense when you 1) know that those VMs will have near-exclusive access to the pCPUs, or 2) specify CPU affinity in the VM such that other VMs will not be sharing the same CPUs.

The VMkernel schedules multi-vCPU VMs only when the specified number of real physical CPUs is available. For a 2-vCPU VM, if one pCPU is available but the second pCPU is busy, the VM will *wait* (accruing CPU ready time) until both pCPUs are available.
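Paul's point can be illustrated with a toy simulation (a sketch only; the tick-based loop, the `ready_time` helper, and the load pattern are made up for illustration, not VMkernel internals): under strict co-scheduling, a 2-vCPU VM is dispatched only when two pCPUs are free at once, so every interval spent waiting for that second core counts as CPU ready.

```python
# Toy model of strict co-scheduling (illustrative only, not VMkernel code):
# an n-vCPU VM runs in a tick only if at least n physical CPUs are free,
# so time spent waiting for the extra core accrues as "CPU ready".

def ready_time(free_pcpus_per_tick, vcpus):
    """Count ticks the VM spends runnable but not running."""
    ready = 0
    for free in free_pcpus_per_tick:
        if free < vcpus:  # not enough pCPUs free to co-schedule all vCPUs
            ready += 1
    return ready

# One pCPU is free on every tick, but a second is free only half the time:
load = [1, 2, 1, 2, 1, 2, 1, 2]
print(ready_time(load, vcpus=1))  # 0 -> a 1-vCPU VM never waits
print(ready_time(load, vcpus=2))  # 4 -> the 2-vCPU VM waits half the time
```

Under the same host load, the 1-vCPU VM never waits while the 2-vCPU VM is ready-but-not-running half the time, which matches what pmorrison saw after dropping to one vCPU.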

Paul

soleblazer
Hot Shot

All,

I have a similar situation myself. I have some VMs in a resource pool, and the pool is set to use High shares (default).

I looked at the CPU ready for a handful of the VMs, and the average is around 900 ms. This host is not loaded; the Summary tab shows I am using about 15% of the available CPU. What could be causing this? I would expect the VMs to have ready times somewhere below 100; there shouldn't be a wait for CPU time when the CPUs are not doing anything.
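For context, the VI Client's real-time performance charts report CPU ready in milliseconds per 20-second sample, so the 900 ms figure can be converted to a percentage (a quick back-of-the-envelope sketch; the 20 s interval is the real-time chart sampling period):

```python
# CPU ready in the real-time performance charts is reported in milliseconds
# per sampling interval (20 s). Converting the observed value to a percentage:

sample_interval_ms = 20_000   # real-time chart interval: 20 seconds
ready_ms = 900                # observed ready time per sample

ready_pct = ready_ms / sample_interval_ms * 100
print(f"{ready_pct:.1f}% ready")  # prints "4.5% ready"
```

A commonly cited rule of thumb is that ready time under roughly 5% per vCPU is acceptable, while sustained values above roughly 10% tend to be felt by users, so 900 ms per sample is noticeable but not necessarily alarming on its own.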

Am I missing something?

pmorrison
Enthusiast

This does not make sense to me. If I want or need the extra CPU processing power, I basically cannot get it without elevated CPU ready times... These are 4-processor dual-core hyperthreaded hosts, so I should be able to have a 2-vCPU box, heck, even a few 2-vCPU boxes, go to town on this host without any issues...

Rumple
Virtuoso

If you really need more processing power, the guest is probably not a good candidate to run as a virtual machine.

Remember that at its most basic level, virtualization is used to take a bunch of systems that seriously underutilize their hardware and combine them to improve the overall ROI.

It's not just about reducing hardware costs by throwing everything, including the kitchen sink, at it.

If your system needs 2x 3+ GHz processors and 4 GB of RAM and is going to run at more than 50% utilization, my rule is that a 10k pizza-box server is probably going to be highly utilized, and the system doesn't go on ESX.

If the lowest-end pizza box that fits my environment (for redundancy, etc.) has 2x 3+ GHz processors with 2 GB of RAM, still costs me 7k, and is going to idle at 1% utilization, then it goes on ESX.

Virtualization is not a consolidate-the-world product, and there is lots of documentation stating that fact.

soleblazer
Hot Shot

One thing I have noticed in my environment is that Linux VMs almost always have pretty high CPU ready times even though there are plenty of cycles available, while most of the Windows VMs seem to have ready times in the teens... very weird! Could it be something with Linux VMs? Anyone else see this? I see it on two different clusters.

Paul_Lalonde
Commander

Tell you what: dedicate 2 pCPUs (cores) to each Citrix VM by using the CPU affinity setting under the VM's Advanced tab. Then go into every other VM on the box and make sure it is not allowed to access those pCPUs (you'll have to edit their affinity settings as well and assign the remaining pCPUs).

If you can guarantee those VMs access to their own specific CPUs, you'll more or less have a "dedicated server" for that VM.

vSMP is really a mysterious thing. I have found it underperforms most of the time, so I make my heavy-duty VMs 1-CPU only. With so many CPUs in a modern server, the VMkernel will make sure my heavy-duty VM always has a CPU to run on.

Paul

violet68
Contributor

I have the same problem with Citrix servers; Paul's recommendation above was helpful for me.

Thanks, Paul.

ilatimer
Hot Shot

If I remember correctly, setting CPU affinity means you will not be able to VMotion your VMs to another ESX host.

daniel_uk
Hot Shot

I think Citrix always gets bad press for CPU ready time.

Dual vCPUs are only beneficial for an application that supports symmetric multiprocessing, and even then I would keep the number of these you run to a minimum.

CPU affinity is also not good, as it limits what you can do with the product (VMotion, CPU sharing, etc.) for the sake of trying to jam in a box that's not an ideal VM candidate in the first place.

Out of interest, what apps/userbase do you have running on the Citrix farm?

RParker
Immortal

Shares only come into play when there is contention. So if two VMs have to fight for a CPU, the one with the higher shares gets the CPU cycle, and the other gets whatever is left over.

Setting shares doesn't give more time to a VM per se, only when the host is forced to decide who gets it. When resources start to become scarce, THEN shares play a role, but not before.
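The proportional split RParker describes can be sketched like this (a simplified illustration, not the real scheduler; the VM names, share values, and the 3000 MHz core are made up):

```python
# Sketch of proportional-share allocation under contention (simplified):
# when demand exceeds capacity, each VM's slice of the contended CPU is
# proportional to its shares; with no contention, shares change nothing.

def entitlements(shares, capacity_mhz):
    """Split a contended CPU's capacity in proportion to each VM's shares."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}

# Two VMs fighting over one 3000 MHz core:
print(entitlements({"citrix1": 2000, "filesrv": 1000}, 3000))
# {'citrix1': 2000.0, 'filesrv': 1000.0}
```

When there is no contention, each VM simply gets what it demands and the share values never enter the picture, which is why raising shares on an idle host changes nothing.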

Try using a reservation... that's a certain amount of CPU reserved for those VMs. See if that helps.

RParker
Immortal

Nice post :)

GlenMarquis2
Enthusiast

<Standing Ovation>

NathanEly
Contributor

Well, I have not had the same luck with setting our Citrix servers to a single vCPU, even when providing 4 GB to each VM. User experience is severely impacted, as is the overall %CPU usage within the VM. I do not want to set CPU affinity, so that we can still utilize VMotion.

One suggestion that was made was to place all multi-vCPU VMs on the same host(s). That way, the CPU scheduler doesn't have to 'work' as hard providing CPU 'slices' to a mix of single- and multi-vCPU systems. It did help a little.

Even Citrix's best-practices guide says to throw more memory at VMs, not virtual CPUs; it just didn't help in my case. Unfortunately, we use many custom apps that may not work well under single-CPU conditions. I'd guess there are over 30 apps installed per machine.

FYI, we are running 2 dual-core 3.06 GHz Xeons with 16 GB each, and three Citrix VMs per host at this point.

Any other suggestions would be welcome

Thanks

NathanEly
Contributor

One clarification update: we have several hosts running Citrix servers in order to distribute load, preserve the ability to perform maintenance on individual virtual Citrix servers, and provide redundancy.
