VMware Horizon Community
kasperottesen
Contributor

Deploy of 50 desktops, high %RDY

Hi,

Yesterday we deployed around 50 desktops to a single host to test how fast they would become available. As expected the deployment was fast: the guests were created and powered on in minutes and became available minutes later. While the desktops were being created and powered on, we watched the performance counters closely.

Of course, when powering on 50 guests almost simultaneously we expected to see CPU contention, but once the desktops became available we expected them to idle and physical CPU utilization to drop. Instead, only a couple of guests were idle with low %RDY; the remaining guests showed high CPU utilization and very high %RDY (averaging around 80%) for a long period before they finally went idle. We also noticed high %CSTP on the 2vCPU guests, very high %LAT_C on both the 2vCPU and 1vCPU guests, and %DMD averaging 99.

It was as if the hypervisor could not de-schedule anything and every core stayed saturated for no apparent reason, like a never-ending loop. After a while (up to 1.5 hours) things returned to normal, but why did it take so long?

We didn't see any %SWPWT, and TPS was sharing a lot, so we didn't have any memory contention.
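In case it helps compare notes: we took %RDY from esxtop, but if you are looking at the vCenter charts instead, the cpu.ready counter is reported as a millisecond summation and has to be converted to a percentage. A minimal sketch of that conversion, assuming the real-time 20-second sampling interval:

```python
# Minimal sketch: convert the vCenter "cpu.ready" summation counter (reported
# in milliseconds per sampling interval) into a %RDY-style percentage.
# Assumes the real-time chart interval of 20 seconds; historical roll-ups use
# longer intervals. Like esxtop's group %RDY, the value is summed across all
# vCPUs, so a 2vCPU VM can legitimately show more than 100%.
def ready_ms_to_percent(ready_ms, sample_interval_s=20):
    return ready_ms / (sample_interval_s * 1000.0) * 100.0

# Example: 16,000 ms of ready time in one 20 s sample ~= 80 %RDY
print(ready_ms_to_percent(16000))
```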

Guest configuration:

1vCPU/2vCPU

3GB memory

OS: Win XP

All linked clones.

Host spec:

2 x Six-Core CPUs

144 GB memory

MKguy
Virtuoso

(Perhaps your post would be better off in the general ESX section.)

Sounds to me like it was just too much for the host to handle at one time.

A couple of points to check:

- any application/service running in the guests that performs resource-hungry operations on startup? (AV pattern updates and the like)

- how many total vCPUs were running at the time? (or how many 2vCPU VMs?)

- did you enable HT on the six-core Xeon CPUs?

- which version of ESX are you running? The 4.1 CPU scheduler should work noticeably better under contention compared to 4.0.

- if you're on 4.1, modify the HaltingIdleMsecPenalty parameter on the host: http://kb.vmware.com/kb/1020233 . This is also known as the "red bull setting" and offers significant improvements when dealing with lots of vCPUs, as in VDI deployments. The setting is no longer needed in ESXi 5.0. (A scripted way to check the current value is sketched right below this list.)
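If you'd rather check from a script what the host currently has configured, a rough pyVmomi sketch along these lines should do it. The vCenter address, credentials and host name are placeholders, and pyVmomi is a newer SDK than the 4.1 era, so treat it purely as an illustration; the KB also describes checking the value through the vSphere Client advanced settings.

```python
# Rough pyVmomi sketch: read the current Cpu.HaltingIdleMsecPenalty value on a
# host. The vCenter address, credentials and host name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    host = si.content.searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                                vmSearch=False)
    adv = host.configManager.advancedOption  # the host's OptionManager
    for opt in adv.QueryOptions("Cpu.HaltingIdleMsecPenalty"):
        print(opt.key, "=", opt.value)
finally:
    Disconnect(si)
```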

I also imagine powering on 50 guests at a time puts quite a bit of I/O load on your storage, even with linked clones. And if you're using software iSCSI, it will consume a fair amount of CPU time too.

-- http://alpacapowered.wordpress.com
kasperottesen
Contributor

To answer your questions:

- Maybe these tasks, which usually take minutes to complete, simply take ages because the physical cores are saturated. We will investigate further in the coming tests to figure out whether that is what keeps the physical cores saturated for so long. Normally, when there's no CPU contention and we deploy around 5 desktops at a time, there is no problem at all.

- In one test we deployed 50 2vCPU guests, so 100 vCPUs in that case. We have also done tests with a mix of both.

- Yes, HT is enabled.

- We are running ESXi 4.1.0 build 433742.

- We are definitely going to try that setting.

We are using FC storage. We watched the performance counters on the storage arrays throughout the whole process.

MKguy
Virtuoso

To be honest, I'm not too surprised this happened during the 50 x 2vCPU test, and I wouldn't expect to be able to fix any scenario that boots VMs simultaneously with more total vCPUs than twice the number of logical CPUs on the host.
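Just to put numbers on your test against the host spec you posted:

```python
# Back-of-the-envelope overcommit for the 50 x 2vCPU test on the posted host
# spec (2 x six-core CPUs with HT enabled).
vcpus = 50 * 2        # 100 vCPUs powered on at once
pcores = 2 * 6        # 12 physical cores
lcpus = pcores * 2    # 24 logical CPUs with Hyper-Threading
print(round(vcpus / pcores, 1))  # ~8.3 : 1 against physical cores
print(round(vcpus / lcpus, 1))   # ~4.2 : 1 against logical CPUs
```

So even with HT, the scheduler has more than four runnable vCPUs per logical CPU while everything is booting at once, which lines up with the high %RDY and %CSTP you saw.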

You can find more info and benchmark results for the HaltingIdleMsecPenalty setting in this whitepaper (requires registration, but it's a good read):

http://www.projectvrc.com/white-papers/doc_details/12-project-vrc-phase-iii.html

The general recommendation there seems to be:

HaltingIdleMsecPenalty=2000

HaltingIdleMsecPenaltyMax=80000
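If you want to script those two values onto the host rather than set them through the vSphere Client, here is a rough pyVmomi sketch, reusing the same placeholder connection and host lookup as the snippet in my earlier reply:

```python
# Rough pyVmomi sketch: push the Project VRC recommended values to the host.
# Assumes "host" was looked up the same way as in the earlier snippet.
from pyVmomi import vim

adv = host.configManager.advancedOption
adv.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Cpu.HaltingIdleMsecPenalty", value=2000),
    vim.option.OptionValue(key="Cpu.HaltingIdleMsecPenaltyMax", value=80000),
])
# Depending on the pyVmomi/ESXi version the option may expect a long rather
# than a plain int; if the call is rejected, coerce the value type accordingly.
```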

-- http://alpacapowered.wordpress.com