VMware Cloud Community
RobertSims
Contributor

Citrix on ESX + CPU Allocation

Hi - hope someone can help

We are running Citrix PS4 on ESX 3.0, on a Dell PE 6850 server with 4x dual-core 3.2GHz CPUs.

If we create a Citrix server with 4 vCPUs we get a large virtualization overhead: the idling server uses 30% of its allocated resources!

So we have created 2-vCPU guests instead, and the idle usage is nearly zero, but:

With 2 vCPUs, each guest only seems to be able to access two cores' worth of speed, so when the user count rises and CPU usage climbs to around 6GHz, the guest starts to struggle (100% in Perfmon).

The annoying part is that the host ESX server still has a large amount of CPU resource left.

Is this normal, and if not, how can I get a 2-vCPU guest to access more than two cores' worth of CPU time?
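For reference, here's the back-of-envelope arithmetic behind what I'm seeing, assuming a vCPU never runs on more than one physical core at a time:

```python
# Rough capacity arithmetic for the setup described above.
# Assumption: ESX schedules each vCPU onto at most one physical core
# at a time, so a guest's ceiling is (vCPUs x per-core clock).
core_ghz = 3.2          # per-core clock on the PE 6850
cores = 4 * 2           # 4 sockets x 2 cores = 8 cores
vcpus = 2               # vCPUs assigned to the guest

host_capacity = core_ghz * cores   # ~25.6 GHz total on the host
guest_ceiling = core_ghz * vcpus   # ~6.4 GHz max for a 2-vCPU guest

print(host_capacity, guest_ceiling)
```

Which would explain why the guest pegs out around 6GHz while the host still has ~19GHz sitting idle.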

Thanks

6 Replies
adehart
Contributor

I'm very interested in hearing the answer and how this all plays out for you.

We're contemplating virtualizing Citrix with ESX, and most of the information I've seen here says you have to set the systems to use one CPU only and to turn processor affinity off (I hope that's right and I don't have it backwards). The idea is that the context switching in a virtual environment with Citrix and TS has so much overhead that a lot of CPU time is lost managing it. Locking the guest down to one CPU, so the guest OS can't "float" between CPUs, keeps things running smoothly. I'm not entirely certain whether you can run 2 virtual CPUs but still lock the system down via the affinity option, so you could at least benefit from more CPUs. This optimization also has the downside of disabling VMotion and DRS for the Citrix guest OSes.

Whoever answers, I'm curious to know if this changes in later versions. The original post mentions ESX 3.0. Is this still true, or better, with later versions? I also understand that 3.5 will offer better support for Citrix/TS, but I believe this is only in conjunction with the new tech built into the new processors to optimize context switching. Intel is releasing the new processors in November if I recall correctly (7300 series, I believe), and the AMD processors (not sure which generation off the top of my head) should be out soon too.

ISD_Plc
Enthusiast

Well, this isn't a solution, but I can tell you that I have 4 Citrix guests, each with 4 vCPUs, and they don't have a high overhead at present. The servers are also quad dual-core servers.

Doesn't help much, I know. Sorry.

Texiwill
Leadership

Hello,

Use of multiple vCPUs with an application that is not multithreaded will cause exactly what you are seeing. I suggest you create more Citrix servers in your farm and use only single-processor VMs. Another possibility is that your 4-vCPU VM is really just idle most of the time; in that case I would drop it down to fewer processors.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

0 Kudos
virtualdud3
Expert

I agree with what Texiwill is stating.

I would try building several Citrix VMs with only a SINGLE vCPU, and test the performance without any 4-vCPU VMs running. I'll bet the performance increases drastically.

You could decrease the number of vCPUs in an existing VM from 2 to 1 and change the HAL from multiprocessor to single processor, but it might be easier to simply start with a clean VM.

The problem with multiple vCPUs is that when the VM needs CPU cycles, it has to wait until there are 2 or 4 (depending on the number of vCPUs) "free" cores in the host machine. I have implemented several "virtual" Citrix farms and not once have I seen a performance benefit from running multiple vCPUs. I'm not saying there is never a reason to run multiple vCPUs; I just haven't seen one.
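To illustrate the co-scheduling point, here's a toy sketch. It models strict co-scheduling (a VM is dispatched only when as many free cores as it has vCPUs are available at once); the real ESX scheduler is more relaxed than this, so treat it as illustrative only:

```python
# Toy model of strict co-scheduling: a VM gets dispatched in a given
# scheduler tick only if free cores >= its vCPU count at that instant.
# (Illustrative only -- the actual ESX scheduler is far more nuanced.)
def schedulable_slots(free_cores_per_tick, vcpus):
    """Count scheduler ticks in which the VM could run at all."""
    return sum(1 for free in free_cores_per_tick if free >= vcpus)

# Hypothetical free-core counts over 10 scheduler ticks on a busy host:
free_cores = [1, 3, 2, 4, 1, 2, 5, 2, 3, 1]

print(schedulable_slots(free_cores, 1))  # 1-vCPU VM: runs in all 10 ticks
print(schedulable_slots(free_cores, 4))  # 4-vCPU VM: runs in only 2
```

Same host, same load, but the 4-vCPU VM spends most of its time waiting for enough cores to line up, which is exactly the idle-overhead symptom in the original post.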

In addition to the CPU overhead, multiple vCPUs also increase the per-virtual-machine memory overhead (see the link below to vi_performance_tuning.pdf, page 5). With 4 vCPUs, a VM with 1 GB of RAM has an additional 141 MB of overhead if it is 32-bit, and 523 MB if it is 64-bit.

http://www.vmware.com/pdf/vi_performance_tuning.pdf
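To put those overhead figures in context, the real host-memory footprint of that 1 GB, 4-vCPU VM works out as follows (simple arithmetic using the numbers quoted above):

```python
# Host-memory footprint of a 1 GB VM with 4 vCPUs, using the overhead
# figures quoted above from vi_performance_tuning.pdf (page 5).
configured_mb = 1024
overhead_mb = {"32-bit": 141, "64-bit": 523}

totals = {arch: configured_mb + oh for arch, oh in overhead_mb.items()}
for arch, total in totals.items():
    extra = total - configured_mb
    print(f"{arch}: {total} MB on the host ({extra} MB / "
          f"{extra / configured_mb:.0%} overhead)")
```

So a 64-bit 4-vCPU VM costs you roughly half again its configured RAM before it does any work.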

############### Under no circumstances are you to award me any points. Thanks!!!
Jae_Ellers
Virtuoso

I'd also agree that more VMs with 1 vCPU each is the way to go. Citrix will load balance, so let it.

Also try 64-bit VMs, and compare PS 4.5. Both are supposed to improve memory usage.

Of course, if all you want to run is Citrix, you might want to run it on the bare hardware and see how many users you can comfortably get. I'd assume you have different app silos, and that's what you want ESX for.

Please provide some load information, such as how many users you have when you're seeing the high CPU numbers.

For us, Citrix on ESX with 1 vCPU was OK until we hit 12-15 users/VM.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
adehart
Contributor

Jae -

What processor/RAM config are you hitting the 12-15 users/VM limit on with Citrix? Is the overhead really that high?

We regularly have 60-70 users on a dual 2.4GHz Xeon with 4GB RAM. The server is about 5-6 years old. Logically we have to split that up because of the RAM limitations we face with 32-bit (64-bit may be down the road). So is it unreasonable to assume we'd get 30 users per VM on a 2.6GHz quad-core Xeon (X5355) if we assigned 1 vCPU per VM?
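My rough reasoning, for what it's worth. This naively assumes users-per-GHz stays constant across CPU generations; newer cores do more work per clock, so if anything it should be conservative:

```python
# Back-of-envelope scaling from our current box to one 1-vCPU VM.
# Hypothetical assumption: users per GHz is constant across hardware
# generations (newer cores do more per clock, so likely conservative).
old_users = 65            # midpoint of the 60-70 users we see today
old_ghz = 2 * 2.4         # dual 2.4GHz Xeon
users_per_ghz = old_users / old_ghz

vm_ghz = 2.6              # one vCPU on one X5355 core
print(round(users_per_ghz * vm_ghz))  # ~35 users per 1-vCPU VM
```

Which is where my 30-users-per-VM guess comes from, with a bit of headroom knocked off for virtualization overhead.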

I appreciate everyone's comments here.

- Tony
