VMware Cloud Community
BtrieveBill
Contributor

VMware Latency Sensitivity Setting Seems to Have Opposite Effect?

Has anyone done any exhaustive testing yet with vSphere 5.5 and the new Latency-Sensitivity feature, as described in the paper "Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5"?  I work a lot with performance-sensitive database environments, and I was hoping this would provide some performance gains for our customers.  However, it seems to have the opposite effect.

I first created a new VM with v10 hardware, 2 vCPUs, and 4GB of RAM, installed Windows Server 2008 R2 with all patches, then added the PSQL database engine.  I then ran 50000 database reads against the server -- from the local machine, from another VM on the same host, and from my workstation -- grabbing timings through Wireshark to see what the typical response time would be.
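For anyone wanting to reproduce a similar client-side measurement without Wireshark, a minimal timing harness might look like the sketch below. This is hypothetical, not the test client I used: `do_read` stands in for whatever call issues one PSQL read, and `percentile` is a plain nearest-rank implementation.

```python
import math
import time

def time_requests(do_read, n=50000):
    """Time n serial requests; return (total seconds, per-request latencies in ms)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        do_read()  # placeholder: one database read via the real client library
        latencies.append((time.perf_counter() - t0) * 1000.0)
    total = time.perf_counter() - start
    return total, latencies

def percentile(samples, pct):
    """Nearest-rank percentile (pct in 0-100) of a list of samples."""
    ordered = sorted(samples)
    k = max(0, math.ceil(pct / 100.0 * len(ordered)) - 1)
    return ordered[k]
```

With the real client call plugged in, comparing p50 versus p99 between NORMAL and HIGH runs would show whether the slowdown is uniform or driven by the 1-6ms tail.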

The most interesting thing I see is that when I enable the HIGH setting for latency sensitivity, my environment gets SLOWER.  In NORMAL mode, with full memory and CPU reservations for the server VM, I can do 50000 database reads in about 11-12 seconds from my workstation.  Nominal network round-trip time for the requests is well under 1ms, with the vast majority of replies arriving in 0.1-0.2ms.  When I enable the HIGH sensitivity mode, which should make the system faster, the same work instead takes 19 seconds, with many network replies coming back between 1 and 6ms -- substantially slower!
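For what it's worth, the totals line up with the per-request round trip dominating a serial workload: 50000 serial reads take roughly n times the average RTT. A quick back-of-the-envelope check (the average RTTs below are assumed values, chosen only to illustrate the arithmetic):

```python
n = 50_000  # serial database reads

# NORMAL mode: most replies in 0.1-0.2ms; assume ~0.22ms average effective RTT
normal_avg_rtt_ms = 0.22
print(n * normal_avg_rtt_ms / 1000.0)  # ~11 s, in line with the 11-12s observed

# HIGH mode: replies of 1-6ms drag the average up; even ~0.38ms average gives ~19s
high_avg_rtt_ms = 0.38
print(n * high_avg_rtt_ms / 1000.0)    # ~19 s, matching the slower run
```

So a shift of well under half a millisecond per request is enough to account for the slowdown, which is why the 1-6ms tail matters so much for a serial request stream.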

As an aside, I also noticed that when I enable HIGH mode, the CPU utilization shown for this idle test server on the vSphere Summary screen drops from 5600MHz (the full reservation for two 2.8GHz vCPUs) to 28MHz -- a very low value.  Strangely, the paper talks about keeping the full CPU reservation -- but as soon as I enable HIGH mode, the reported CPU figure drops anyway.  Am I missing something here?

I see similar results when I run the tests from the other guest OS on the same host -- with sensitivity set to NORMAL, I get 50000 reads in around 7.75s; with it on HIGH, 13s.  I have also tested with both the E1000 and VMXNET3 NICs, with comparable numbers either way.

Has anyone else tried testing this new feature yet and have any results to share?  Am I missing something critical here?

3 Replies
lledarby
Contributor

Way late, but we just started testing yesterday.  CPU ready time went WAAAY down and usage went to 100%, as expected for our CPU-hog app.

Linjo
Leadership

"Latency-Sensitive" and "performance-sensitive" is not the same thing...

The latency-sensitivity function turns off a number of features, which decreases latency but also decreases throughput.

// Linjo

BtrieveBill
Contributor

Understood.  High-speed databases like PSQL are latency-sensitive.  They do NOT use a ton of CPU time, but they are sensitive to the overall delay induced by virtualization, especially when a single application/user runs a process that issues hundreds of thousands of individual requests.  I was hoping this new feature would allow some improvements on the latency side, which would show up as performance in the client/server world.  Perhaps this is not the right option, though....
