GalNeb
Enthusiast

High %LAT_C CPU Latency for 1 VM

I have one VM that is showing significant %LAT_C (CPU latency), and it makes no sense.

I have found that CPU latency is made up of Ready, Swap Wait, and Power control issues.  But all of these are under control:

pastedImage_0.png

No swap wait at all, very low ready, and power management is fully off. (C-states and C1E are both off in the BIOS and not reporting in esxtop; power is set to OS Control, ESXi is set to High Performance, and esxtop reports all CPUs at 100% in the P0 state.) So it cannot be power related. And it is just this one VM; no other VMs are showing this level of latency. Host totals are:

pastedImage_1.png
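For anyone who wants to verify the power state the same way I did, this is roughly the sequence from the ESXi shell (the advanced option name is from memory, so treat it as a sketch and double-check it on your build):

    # esxtop: press 'p' for the power screen; with power management off,
    # every PCPU should show ~100% in the %P0 column
    esxtop

    # host power policy; I believe 'static' corresponds to High Performance
    esxcli system settings advanced list -o /Power/CpuPolicy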

There are only 10 VMs on this host: 16 vCPUs, 704 worlds. The host is a dual-socket hex-core with hyperthreading (24 threads possible) and 192GB of RAM. It is not the newest thing in the world; it has 2.5GHz Westmere processors. It is running ESXi 5.5 (without U1 so far). We have run it on a 5.5 U1 host with the same results.

The VM has 80GB of RAM and two vCPUs, averaging 65% to 68% CPU with a few spikes to 100%. So it really does need two procs, and we are monitoring it to see when it will need a third. It is an SQL server; it is using all 80GB of RAM and could actually use more. It is hammering the SAN pretty hard reading indexes (99% of reads served from SSD cache on the SAN). Active memory is averaging over 70GB, and the entire host's active memory is barely over 80GB. The other VMs are all pretty small.

What else could be causing 10% CPU latency? Does this really mean that this VM is waiting for physical CPUs 10% of the time when it should be doing work? The performance of the MS Dynamics CRM application is not what we would like. We have been contemplating adding an SSD to the host and using vFlash Cache to bring the entire 350GB database onto the host to offload the reads from the SAN. But while that would offload the SAN, we don't really figure it will help the performance of this VM much, since we are already seeing nearly 100% of the reads coming from the SSD cache on the Nimble SAN.

Can anyone give me any idea what is causing this latency and how to resolve it?

Old enough to know better, young enough to try anyway
14 Replies
GalNeb
Enthusiast

I dug deeper into esxtop and found that the CPU latency I have been seeing is only on the vmx world/process of the VM:

The vmx process is defined in KB1019471 as "a process that runs in the VMkernel that is responsible for handling I/O to devices that are not critical to performance. The VMX is also responsible for communicating with user interfaces, snapshot managers, and remote console." So it would appear that the CPU latency is not causing "critical" performance problems. But why would this VM, of the 10 VMs running on this host, have CPU latency on this one process? It is very misleading when the client defines CPU Latency as "Percent of time the virtual machine is unable to run because it is contending for access to the physical CPU(s)". Seeing something like that averaging 10-12% is very ominous. Under load a week ago, this VM was reporting 20-23% average CPU latency, yet still almost no ready or swap wait.
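For anyone who wants to reproduce the drill-down, this is roughly the esxtop sequence I used to get to the per-world numbers:

    # esxtop starts on the CPU screen
    esxtop
    #   press 'V' -> show only virtual machine groups
    #   press 'e' -> expand a group; type the VM's GID and press Enter
    # The group expands into its worlds (vmx, vmx-vcpu-0..N, mks, etc.);
    # in my case only the vmx world carries the high %LAT_C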

What is this all about? How do I interpret it? I don't feel comfortable ignoring it. What if I see this counter increase on another VM? It is not reasonable to have to dig to this depth to find out where such an ominous-sounding counter is coming from, only to determine that it is not a problem.

Old enough to know better, young enough to try anyway
GalNeb
Enthusiast

I see that this thread has now been viewed 99 times, but not a single response with any idea of what this %LAT_C counter is all about. I now have another example that is even worse. This VM is a Windows iSCSI target server (VSAN) with %LAT_C numbers from 60 to 88% on the vmx process. CPU is about 40%. The total of the two is consistently 110% when the VM is under load; the screenshot below totals 113.9%. See the esxtop entry below:

2014-07-31_17-04-07.png

Power saving by the motherboard has been suggested in other threads as a component of this counter. However, power has been optimized:

2014-07-31_17-21-22.png

Both of the VMs in question do heavy disk I/O, and while there have been spikes in the disk latency numbers, they are averaging 7 ms or less under heavy load. That is for the VMs accessing the targets on the VSAN shown in the esxtop entry above. The VSAN itself is getting access times to the physical SAN on which it resides averaging 2-7 ms, with hardly any spikes at all. Still, I am seeing more latency introduced by this VSAN than I would like. Performance is not what it should be.

Note that the %USED of the vmx process is equal to its %LAT_C. This is creating a high %USED across the board that appears to be bogus, even causing alarms on the VM.

2014-07-31_17-32-59.png

You can see that it is showing 100% utilization, generating an alarm, even though the CPU utilization is only 938 MHz. It is interesting to note that the chart does not allow higher than 100%, even though the actual total utilization we see in esxtop is 110% between these two processes. The 938 MHz correlates to the 39.92% vcpu-0 numbers in esxtop. The screenshots were not taken at the exact same moment, so the numbers are not exact.

These numbers are no longer unique to one VM on this host. High %LAT_C is being seen on all the VMs on this host, sometimes as high as 150%. This screenshot shows a couple in the 90% range:

2014-07-31_17-39-35.png

Can someone respond?

Old enough to know better, young enough to try anyway
King_Robert
Hot Shot

Are VMware Tools properly installed on this VM?

GalNeb
Enthusiast

Oh yes, current or near-current Tools are on all my VMs: at least 5.5 build 147..., though not all have been updated to 5.5 U1b build 189.... The ones I am specifically talking about here do have 5.5 U1b on them.
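(If anyone wants to spot-check Tools from the host rather than logging in to each guest, something like this works from the ESXi shell; <vmid> is the ID from the first command:)

    # list VM IDs, then query the guest info for one of them
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/get.guest <vmid> | grep -i tools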

Old enough to know better, young enough to try anyway
vNEX
Expert

Hi Gal,

Some say...

CPU latency, i.e. %LAT_C in esxtop, includes ready, CSTP, HT busy time, and the effects of dynamic voltage frequency scaling, and doesn't include MLMTD:

https://communities.vmware.com/message/2375995

https://communities.vmware.com/thread/428764

I know you wrote that PM is disabled, but can you please post the Power Management settings from your host's Configuration tab, and also:

- the output from vim-cmd hostsvc/hosthardware

- the full output from ESXTOP regarding CPU resources, so we can see the metrics of all running VMs for the complete picture and the contrast between them (see the batch-mode sketch below)

- what type of apps are running on the remaining VMs
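The easiest way to capture that is esxtop batch mode; something like this (adjust the interval and sample count to taste) and attach the resulting CSV:

    # all counters, 10-second samples, 30 iterations
    esxtop -b -a -d 10 -n 30 > esxtop-capture.csv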

I am quite convinced that your host does not have enough CPU resources to keep up with the deployed VMs, especially your SQL machine...

Just in addition:

The relevant counter is average.percent.latency: "Percent of time the VM is unable to run because it is contending for access to the physical CPU(s)".

Other thoughts:

%LAT_C: time the VM was not scheduled due to a CPU resource issue

%LAT_C: shows the CPU time unavailable due to core sharing

Check the %LAT_C counter to see whether it is near 100% due to two VMs sharing the same core.

vCOPS
VM CPU Contention (%): Measures the difference between how much CPU the VM wants to use and how much CPU the VM gets

Thanks & Regards

P.

_________________________________________________________________________________________ If you found this or any other answer helpful, please consider to award points. (use Correct or Helpful buttons) Regards, P.
GalNeb
Enthusiast

I have seen those types of threads before that purport to know what %LAT_C is, thank you.

As you can see from the various screenshots I have already posted, total CPU usage on the host is about 25%. It is a quad-socket, 10-core-per-socket machine with hyperthreading, so 80 logical CPUs. Ready time is basically zero. CSTP is zero. As you can see from the power screen of esxtop, C-state and C1E are both disabled. Power management on the MB is set to OS Control and the ESXi host is set to "High Performance", so all 80 logical CPUs are 100% in the P0 state. The host has 512GB of RAM with maybe 80% of it granted. So there is no memory contention, and no CPU contention reported on any counter other than %LAT_C.

The SQL server is a three-vCPU VM running about 75% CPU under load. It is running a CRM database, so the only time it is under load is during a mass e-mail cycle. The VSAN from the second series of screenshots is a single-vCPU VM running about 40% CPU, as you can see. When those second screenshots were taken, the SQL server and the VSAN were on different hosts.

At the time of the second screenshots, the host was running 65 VMs with a total of 104 vCPUs. With 80 logical CPUs, the ratio of vCPUs to logical CPUs is only 1.3 to 1. Even the vCPU-to-core ratio is less than 2.6 to 1.

I am just about convinced that a large share of %LAT_C has nothing to do with CPU at all and is really disk wait time (time that the VM is totally idle, waiting for disk). It seems to me that it is related to disk utilization, not CPU utilization.

The vim-cmd output is in the attached file.

Old enough to know better, young enough to try anyway
vNEX
Expert

I understand your frustration... I am also not satisfied with some parts of the vSphere documentation in terms of technical depth; sometimes it leaves many questions. (For me this is the one place where MS beats VMware... :)

It quite often tells you A but not B... it seems to me that is a business decision.

If I am putting your info together correctly, you have:

AES-DynSQL - CRM database, 80GB RAM, 350GB disk capacity, 2 vCPUs averaging 65% to 68% CPU usage with a few spikes to 100%; CPU latency = 10%, max. 27%; SAN latency 2-7 ms.

This VM is running on a dual-socket hex-core host with HT enabled and 192GB of RAM (2.5GHz Westmere processors).


vSAN-50-2 - Windows iSCSI target server, 1 vCPU, averaging 40% CPU usage; CPU latency from 60 to 88%; SAN latency 2-7 ms.

This VM is running on a quad-socket, 10-core (Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz) host with HT enabled and 512GB of RAM.


Back to the CPU latency discussion: from the ESXTOP metrics posted, this is my estimation for the VSAN VM.

(It is just my understanding, based on my experience and the public vSphere documentation.)


vSAN-50-2 ESXTOP metrics:

%SYS (yours: 75%) means how busy the vmkernel services are, on behalf of a world, satisfying the needs of the VM/worlds; it is typical for high-I/O VMs.

%WAIT (58.1%) vs. %IDLE (56.4%): the difference between these two metrics (%VMWAIT; yours is 1.7%) is the time the VM spent in a blocked state waiting for some event to complete, usually waiting for I/O.

High %VMWAIT can be caused by poor storage performance or high latency of some pass-through device configured for the VM.

%OVRLP (yours is zero): time spent by the VM "in the system queue" while the vmkernel scheduled other VMs/worlds (i.e., time the VM was not scheduled, on behalf of other worlds). This is usually an indication that the host is experiencing high I/O.

You also have zero for CSTP, MLMTD, SWPWT, and RDY.

Based on that, I am really convinced that your VMs are not constrained on I/O, but to prove it we should also look at the GAVG or DAVG/KAVG metrics.
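Those live on the disk screens of esxtop; roughly like this (the thresholds are just common rules of thumb, not official numbers):

    # in esxtop:
    #   'd' = disk adapter view, 'u' = disk device view, 'v' = per-VM disk view
    # DAVG = device latency, KAVG = kernel latency, GAVG ~ DAVG + KAVG
    # (what the guest sees); GAVG consistently above ~20-25 ms, or KAVG
    # above ~2 ms, usually means storage trouble
    esxtop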

I have found quite a good explanation of what CPU latency means (someone posted an answer from VMware support); see:

***

CPU latency rises when a VM cannot run on the best core (where it was running), while ready time rises when none of the cores on the entire motherboard is available.

[CPU latency includes: ready, CSTP, HT busy time, and the effects of dynamic voltage frequency scaling, and doesn't include MLMTD]

***

My understanding, if the above is true, is that CPU latency (%LAT_C) PRECEDES ready time (%RDY) and can be seen when the scheduler is not able to schedule the VM optimally.

So from that point of view, existing CPU latency is not necessarily a "critical" performance problem until you are HIT by excessive ready time (%RDY).

For that reason, ready time is always referred to as a key metric for performance troubleshooting, and it is worth keeping an eye on it.

CPU latency examples which come to mind:

- Extensive switching/migrations between logical CPUs (core sharing), HT busy?! ***

- Dynamic voltage frequency scaling (BIOS CPU power management and CPU sleep states); Intel Turbo Mode

- VM scheduled outside of its preferred NUMA node boundary?! That could be verified through the N%L memory metric, which is the current percentage of the memory being accessed by the VM that is local (see the sketch below).
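For the N%L check, the metric lives on the memory screen of esxtop; roughly:

    # in esxtop: press 'm' for the memory screen, then 'f' and enable the
    # NUMA statistics fields; N%L is the percentage of the VM's memory
    # accesses that are local to its home node (100% = fully local)
    esxtop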

>But this is only my personal interpretation, which could be totally wrong. I would also appreciate it if some VMware staff/guru could post a comment regarding the %LAT_C counter...<

--------------------------------------------------------------------------------

*** vSphere documentation:

"The ESXi CPU scheduler can interpret processor topology, including the relationship between sockets, cores, and logical processors. The scheduler uses topology information to optimize the placement of virtual CPUs onto different sockets to maximize overall cache utilization, and to improve cache affinity by minimizing virtual CPU migrations. Virtual machines are preferentially scheduled on two different cores rather than on two logical processors on the same core."

--------------------------------------------------------------------------------


Anyway, I would try to handle Power Management differently: instead of using the OS Control option, I always choose Max Performance (on Dell RXXX hosts) so the hypervisor cannot interfere with the BIOS PM. You will then see "Not Supported" under Active Policy in the host's PM settings.

The assumption is that the hardware will always know best how to manage its power policies, and offloading another task from the vmkernel is always a win.


Also, using Intel HT technology for SQL databases (unless it is fully supported) should always be tested, so you could try how your SQL workloads behave without HT. E.g., for MS Dynamics AX 2012, disabling hyperthreading is recommended for the SQL server:

http://technet.microsoft.com/en-us/library/dd309734.aspx

In some special cases/workloads, turning off Intel Turbo Boost can also bring some gains...


For more info, see:

http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf
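That paper also describes the per-VM Latency Sensitivity setting added in 5.5. If you want to experiment with it, it ends up as a .vmx option like the sketch below; note that (as far as I know) "high" requires a full memory reservation and ideally a full CPU reservation, so test carefully:

    # .vmx excerpt (a sketch; normally set via the vSphere Web Client UI)
    sched.cpu.latencySensitivity = "high"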


"At the time of the second screenshots, the host was running 65 VMs with a total of 104 vCPUs.  With 80, logical CPUs, the ration of vCPU to logical CPU is only

1.3 to 1.  Even vCPU to core ration is less than 2.6 to 1."

In that case I would count only physical cores, because hyperthreading delivers only a 30% performance gain at best (mostly lower, between 10% and 15%). So although a vCPU-to-core ratio of 2.6 could be absolutely sufficient for some workloads, it can be the edge for others... You can try to lower the vCPU count on some VMs on that host, or spread the VMs across other hosts.

Hope this helps you make some progress.

Regards,

P.

_________________________________________________________________________________________ If you found this or any other answer helpful, please consider to award points. (use Correct or Helpful buttons) Regards, P.
GalNeb
Enthusiast

The %LAT_C just doesn't make sense. It is 4:00 am and I have only 3 VMs doing anything; the other 49 on the host are all idle. Total CPU is 14GHz out of the 100GHz available on the host, with 80 vCPUs spread over the 52 VMs on 80 logical procs. These 3 VMs are all showing high %LAT_C on only the vmx process. This screenshot actually caught it at 89.1%. There is high disk I/O, but disk latency (all components) is under 2 ms. %LAT_C also shows up as %SYS and %USED on the vmx process and is rolled up into %Usage on the VM. So is it usage or latency? The dictionary definition of latency would indicate time spent waiting, not time spent doing something; %Usage is doing something. It is all bogus.

2014-08-09_5-22-44.png

This is the same VM a few seconds before it spiked quite so high:
2014-08-09_4-55-58.png

As you can see, if you look at the 0/1 CPUs, the usage in MHz of the VM is way higher than the total of its vCPUs. That is the %LAT_C being rolled up into the total usage numbers, and it is warping all the reports and charts. We really need someone from VMware to tell us if this is real. Is the CPU actually doing something during this %LAT_C time on the vmx process? What? Is this an indication of a slowdown of the actual VM or not? If I look at the vCPU numbers, I would have to say that it is not affecting the performance of the running processes inside the VM. But I can't be sure unless I know what external services might be responding slowly due to this %LAT_C on the vmx process. I can't find any other counters that indicate slowdowns. Just what is vmx doing? Why does it have %LAT_C when no other process does?

Old enough to know better, young enough to try anyway
vNEX
Expert

- When you are in ESXTOP, always look at your VMs as a whole first (not expanded to worlds); only when you suspect something is wrong should you look at the VM's worlds.

Can you post this screen, please?

- When you are in the expanded view, the most important worlds for you are the vmx-vcpu worlds, not vmx.

- Please post the N%L memory metric for your VMs.

- Do you use any limits, reservations, affinity, or advanced configuration parameters for any of your VMs?

Thanks

P.

_________________________________________________________________________________________ If you found this or any other answer helpful, please consider to award points. (use Correct or Helpful buttons) Regards, P.
GalNeb
Enthusiast

This problem is still driving us crazy. At the time of these screenshots, I am down to 17 VMs on a machine with 40 cores (80 with hyperthreading) and 512GB of RAM. Total vCPUs is less than 34. Total vRAM is ~120GB.

The highlighted line on this graph is CPU Latency, actually pegged at 100% for almost 10 minutes. The entire time was during high disk I/O, as I was cloning a VM from one datastore on this virtual SAN to another.

As you can see from this graph and the table below it, vCPU 0's maximum usage is 431 MHz, yet CPU Latency was pegged at 100% for almost 10 minutes. And since Usage includes this latency number, I am getting CPU alarms right and left on my VMs. As you can see from earlier posts, the CPU usage of the vmx process includes the CPU Latency on that process, and the Total Usage of the VM is the total of the usage of all its worlds.

2014-10-06_11-39-15.png

2014-10-06_11-51-48.png

As stated earlier, the %LAT_C is coming from the vmx process. The actual maximum vCPU utilization is about 400 MHz. Yet the performance of this virtual SAN is terrible.

2014-10-06_11-54-47.png

N%L is 100%, so vCPUs are not moving around and all data is local to the NUMA node. There is no reason for CPU latency in this environment. Every vCPU has its own core, let alone thread.

Old enough to know better, young enough to try anyway
GalNeb
Enthusiast

P.S. The physical SAN is a 10Gb iSCSI device that is heavy on SSD (read and write cache); reads and writes are averaging sub-1 ms as viewed from the host, from the VMs, and from the VSAN in question. Yet throughput is only averaging 50-60 Mbit/s on the vSAN. In other words, performance stinks. It really should be pushing over 1 Gbit/s for this kind of copy operation.

The cloning operation that I was performing took about 2 hours.  The same operation natively on the physical SAN would have taken 5 minutes or less. 

Also, all of the VMs on this host are idle except for this one cloning operation. Almost all the reported CPU usage on the entire host is due to CPU latency of the vmx processes of these VMs. The host is using 8-10GHz out of a total capacity of about 100GHz. I wish I could more easily separate out the latency, but my estimate is that at least 6GHz of that 8GHz is CPU latency of the vmx processes being reported as usage.

As I was watching, it was interesting to note that at times the CPU latency was hitting 70% at the same moment the VSAN was reporting 30% usage internally (i.e., on vCPU 0 of this single-proc VM). This correlation would indicate that CPU latency is indeed keeping my vCPU from running, but that is anecdotal and cannot be proven without more information about what CPU latency really is, since it is obviously not just ready + swap + power/waiting for its preferred CPU, as reported by many sources.

The description provided above, which I have also seen elsewhere, would indicate that ready time is a counter of host-wide overcommit (no processor available at all), and CPU latency is a per-VM counter of basically the same thing, since from a single VM's point of view, whether it is the best processor or any processor at all that cannot be found, it is still waiting when it should be doing work. But the bothersome problem is that the CPU latency observed is not on the vCPU worlds; it is on the vmx process. It is not the vCPU that is waiting. But if the services provided by the vmx process are required before the vCPU actually gets its interrupt and starts to wait (i.e., ready or %LAT_C), then it is just as bad. So we are back to defining not only what %LAT_C is, but what %LAT_C means as it pertains to the vmx process, and also why %LAT_C is counted as usage for the vmx process.

vSphere 5.5 build 1892794, so I think that is 5.5 U1b or U1c

Old enough to know better, young enough to try anyway
Iwan_Rahabok
VMware Employee

Good analysis. There is some discussion internally in VMware; let me digest it. I will also ask Ravi Soundararajan about this. Hopefully he knows whom to ask.

Bumping up this thread in the meantime, as I think it deserves the attention.

BTW, I'm just an account pre-sales guy in Singapore; I'm not part of vSphere engineering. I am, however, very keen to know the answer too.

Cheers from Singapore!

e1
Iwan_Rahabok
VMware Employee

Hi vNEX

The main issue here is that the vmx world has high %LAT_C while the vmx-vcpu worlds do not. This means the guest OS is not the one using the CPU; perhaps the VMM layer is. Based on KB1019471, it should not be using a lot of CPU.

Apologies, I do not have an answer either. I am curious myself.

e1
Iwan_Rahabok
VMware Employee

GalNeb David, thanks for reaching out via email. Hope you are okay with the answer.

As a summary for everyone here, I posted the question internally and got the advice that "The CPU usage of a vmx world can be high because we charge all the time consumed by the I/O threads associated with a VM to its vmx world. So it's normal to see the CPU usage of the vmx world be high."


What does not seem right here is the CPU Latency value: it includes %USED, which it should not. The engineer replied that he would follow up on this.


Cheers!

e1