VMware Horizon Community
theowood
Contributor

Poor performance compared to desktop with VMware View 5.2

We are pilot testing VMware View 5.2, and the initial feedback from our users is that desktop performance is significantly worse than on their regular desktops. Specific complaints:

  • Opening and saving my drafts in MS Word takes much longer (regardless of file size; Word is often not responding for more than 30 seconds);
  • Opening and closing PDFs also takes longer (especially larger files);
  • Searching and browsing the internet is as fast as on my desktop, but copying from the internet and pasting into MS Word takes much longer as well.


We are using the fastest Intel Xeon 4-processor HP servers, plenty of memory, and Windows 7 32-bit with all the recommended optimizations. Each desktop is currently assigned 2 vCPUs and 3 GB of memory. We are using the same Microsoft Office 2007 and Adobe Acrobat PDF reader on the physical desktops as on the virtual desktops. We are pilot testing with fewer than ten users on these servers right now, so there is no contention. Storage is coming from a fast, dedicated NetApp filer.


My guess is that we may be seeing a difference in performance because of the processor/hypervisor, or possibly the network interface, since users retrieve their documents across the local network.


Any suggestions on things to look into before we get started?






14 Replies
MHAV
Hot Shot

Hi theowood,

Have you updated the BIOS/firmware on the hardware you are using?

Have you set the server's power option to "static high performance"?

What storage are you running the vDesktops on, thinking in terms of the IOPS the storage system can deliver?

What is the average and maximum latency of the datastores the VMs are running on?

Cheers

Michael

Regards Michael Haverbeck Check out my blog www.the-virtualizer.com
MHAV
Hot Shot

Theowood,

Check the performance within vCenter as well (a quick way to sanity-check readings against these thresholds is sketched after the list):

CPU

- Cluster/Host Utilization

- VM Utilization

- VM % Ready Time

MEMORY

- Host Utilization

- VM Utilization

- Swapping/Ballooning

STORAGE

- VMs per VMFS LUN

- Disk Latency < 20 ms
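
If it helps, here is a rough way to sanity-check a set of readings against those thresholds. This is just a Python sketch; the metric names and sample values are placeholders I made up (not pulled from any vCenter API), and the 10% ready figure is the commonly quoted rule of thumb:

    # Flag readings that fall outside the thresholds listed above.
    THRESHOLDS = {
        "cpu_ready_pct": 10.0,     # ready time (%), lower is better
        "ballooned_mb": 0.0,       # any ballooning on a VDI host is a warning sign
        "swapped_mb": 0.0,         # swapping should be zero
        "disk_latency_ms": 20.0,   # average datastore latency
    }

    def check(sample):
        """Return a warning for every metric that exceeds its threshold."""
        return [f"{name} = {value} exceeds {THRESHOLDS[name]}"
                for name, value in sample.items()
                if name in THRESHOLDS and value > THRESHOLDS[name]]

    # Hypothetical reading:
    print(check({"cpu_ready_pct": 15.2, "ballooned_mb": 0, "disk_latency_ms": 1.8}))
    # -> ['cpu_ready_pct = 15.2 exceeds 10.0']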

Cheers

Michael

Regards Michael Haverbeck Check out my blog www.the-virtualizer.com
theowood
Contributor

We are using memory-based storage with a product called Atlantis. It is very fast; read latency is less than 2 ms on average.

We are set to static high performance and are using the latest firmware updates.

This is a pilot test, so we are only running ten VMs right now (fewer than 5 per host), and we are nowhere near exceeding CPU or memory capacity.

On % Ready, it is not straightforward to determine. The CPU Ready summation for the hosts is 40,000 to 50,000 milliseconds on the daily chart. I believe this is sampled every five minutes, so that works out to roughly 16%. We deployed the maximum expected number of VMs, although only a couple are being used. We are expecting a maximum of 50 VMs per 4-processor host.
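
For reference, this is the conversion I used (a quick sketch, assuming the daily chart really does sample every five minutes):

    # Convert vCenter's "CPU Ready" summation (milliseconds per sample interval)
    # to a percentage of the interval; the daily chart samples every 5 minutes.
    def ready_percent(ready_ms, interval_s=300):
        return ready_ms / (interval_s * 1000) * 100

    print(ready_percent(40_000))   # ~13.3 %
    print(ready_percent(50_000))   # ~16.7 %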

I am wondering if changing them from 2 vCPUs to 1 vCPU would actually improve performance?

sketchy00
Hot Shot

Atlantis is a good product.

Here is a nice CPU Ready cheat sheet:

CPU Ready Revisted - Quick Reference Charts - VMtoday

theowood
Contributor

Thanks, that confirms my calculation.

Right now, I have 65 idle 2-vCPU virtual desktops on one of the hosts. The virtual desktops are all powered on and running Windows 7. Each virtual machine is using only 20 to 40 MHz just powered on with no one logged in, which is not much. However, if you look at CPU Ready % for the host, it is at 15%. This means that before any user even logs on, I am already seeing a higher CPU Ready % than the article recommends (<10%)!

Of course, if I power down all the VMs and run only a couple, the CPU Ready will likely go way down. However, if it is true that users will see slowness past 10% CPU Ready, then we will be lucky to fit 30 users on the four-processor server before they start complaining.

We are going to try a few more things tonight, like switching to 1-vCPU VMs instead of 2 vCPUs. If anyone has other tips or advice, feel free to add them.

sketchy00
Hot Shot

How many physical sockets are in that server? Your comments lead me to believe it is just one, correct? What generation of Xeon? There are HUGE differences between, say, Harpertown-based and Sandy Bridge-based chips.

theowood
Contributor

Brand new HP ProLiant DL560 Gen8, 4-processor (4-socket). Each processor is an Intel Xeon E5-4650 @ 2.7 GHz ("Sandy Bridge").

sketchy00
Hot Shot

So then, with that chip having 8 cores per socket, you should see 32 physical cores and 64 logical cores (with Hyper-Threading). Is that what you are seeing on the "Summary" tab of your vSphere host?


theowood
Contributor

Yep, 32 cores, and 64 logical cores with Hyper-Threading. Each core is 2.7 GHz.

[Attachment: Capture.JPG]

sketchy00
Hot Shot

Yeah, something doesn't seem quite right based on what you are seeing. I can't elaborate at the moment; perhaps some others will chime in before me.

Linjo
Leadership

I agree, something is not right here.

Are these delays constant throughout the day, or is it OK at times?

I also like Atlantis, but have you looked at storage latency?

You need to get some kind of monitoring in place; I would recommend Liquidware Labs Stratusphere UX in this case, since it does in-guest monitoring.

Desktop Monitoring

// Linjo

Best regards, Linjo Please follow me on twitter: @viewgeek If you find this information useful, please award points for "correct" or "helpful".
sketchy00
Hot Shot

So, you say you are using Atlantis ILIO for storage acceleration. I'm assuming, then, that you have rolled out some variation of automated pools with linked clones from replicas. This can work really well when done properly, and really badly when not. To better isolate the root cause, I'd try standard provisioned VMs (full clones) and see if you experience the same behavior. One thing to remember is that no resource is an island: storage performance issues caused by the physical design or by the View deployment architecture can show up as CPU wait time and skew how CPU utilization is perceived. Not an absolute answer for you, just a few thoughts to consider.

theowood
Contributor

We are using non-persistent desktops with their storage on the ILIO RAM disk. Read latency is less than 2 ms and write latency is less than 6 ms, which is very good. We tested disk performance on ILIO using HD Tune and IOmeter, and it was very good. I do not suspect the disk right now.
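
For anyone who wants a quick spot check of write latency inside a guest without installing anything, a rough Python sketch along these lines works too (the test path is a placeholder, and it is not a substitute for HD Tune or IOmeter):

    # Time small synchronous writes to a scratch file on the ILIO-backed disk.
    import os, time, statistics

    SCRATCH = r"C:\temp\latency_probe.bin"   # placeholder test path
    BLOCK = os.urandom(4096)                 # 4 KiB blocks, similar to small OS I/O

    samples = []
    with open(SCRATCH, "wb", buffering=0) as f:
        for _ in range(1000):
            start = time.perf_counter()
            f.write(BLOCK)
            os.fsync(f.fileno())             # push the write through the cache
            samples.append((time.perf_counter() - start) * 1000.0)

    os.remove(SCRATCH)
    print(f"avg {statistics.mean(samples):.2f} ms, "
          f"p95 {sorted(samples)[int(len(samples) * 0.95)]:.2f} ms")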

We have Liquidware Stratusphere on the desktops, and I will be checking some of the reports today.

VirtualMattCT
Enthusiast

How does it perform if you have just one VM with 2 vCPUs running on the host? Maybe start with one and ramp up to the 65 and see how it goes. 65 x 2 = 130 vCPUs allocated; that should be fine, but it sounds suspect if you are already seeing ready time.
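
Just to put numbers on that allocation (a trivial sketch, using the figures from this thread):

    # vCPU overcommit using the numbers mentioned in this thread.
    vms, vcpus_per_vm = 65, 2
    physical_cores, logical_cores = 32, 64

    allocated = vms * vcpus_per_vm                                         # 130 vCPUs
    print(f"vCPU : physical core = {allocated / physical_cores:.1f} : 1")  # ~4.1 : 1
    print(f"vCPU : logical core  = {allocated / logical_cores:.1f} : 1")   # ~2.0 : 1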

Definitely make sure the Power Profile is set to Maximum Performance (which should change the Power Regulator to static high performance). In the advanced BIOS settings, you can also disable "Collaborative Power Control" (specific to HP servers). Then I would make sure your memory is properly balanced across the sockets, so you aren't inducing CPU ready time through poor NUMA locality. You could also try disabling node interleaving in the HP BIOS and see if that has any effect on your problem.

Now, all that said, maybe it's something in your image that's different, specific to MS Word; perhaps an add-in or something. If you suspect poor network performance, definitely try some testing with iperf and see what kind of throughput you're getting. Are you using the VMXNET3 adapter?
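
If you want to separate share latency from everything else before reaching for iperf, a rough comparison like this can help (the paths are placeholders; repeat it a few times, since OS caching can skew a single run):

    # Read the same document from the network share and from a local copy.
    import shutil, time

    NETWORK_DOC = r"\\fileserver\share\sample.docx"   # placeholder UNC path
    LOCAL_DOC = r"C:\temp\sample.docx"                # placeholder local path

    def timed_read(path):
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read()
        return time.perf_counter() - start

    print(f"network read: {timed_read(NETWORK_DOC):.3f} s")
    shutil.copy(NETWORK_DOC, LOCAL_DOC)               # stage a local copy
    print(f"local read:   {timed_read(LOCAL_DOC):.3f} s")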
