VM Disk Latency in Seconds - Why the large delta.

Hello, I have been scratching my head over the results of a report from vRealize Operations Manager. It is listed under Performance > Summary | Max VM Disk Latency (ms).

There is a VM Disk Latency (95th percentile) column. Out of our 30-40 datastores, the two used for VDI are the only ones reporting in seconds. I have been unable to find a reason for this. I have looked at the flash storage array and the hosts, and used esxtop and Iometer. I do see a reference in the vRealize Operations Manager manual stating that Datastore | Disk Command Latency shows the adjusted read and write latency at the datastore level. I also saw that there were quite a few more read IOs than write IOs.

At this point I'm wondering whether it comes down to firmware, drivers, or incorrect readings? ESXi is at 6.5.0, on Cisco UCS.

I appreciate any thoughts/ideas.


Max VM Disk Latency | VM Disk Latency (95th percentile) | VMs
68,832.89 | 10.27 ms | 6
65,418.18 | 17.57 ms | 20
58,868.16 | 129.02 ms | 25
24,742.77 | 5.73 ms | 26
24,719 | 8.47 Second(s) | 42
21,058.99 | 1.59 Second(s) | 24
3,297.15 | 329.07 ms | 14
1,419.53 | 826.2 μs | 24
916.58 | 133.33 μs | 20
674.07 | 4.57 ms | 45
197.63 | 21.57 μs | 26
86.33 | 130.02 μs | 28
82.91 | 16.02 μs | 7
69.39 | 70.28 μs | 19
68.69 | 913.98 μs | 12
55.07 | 14.48 ms | 19
53.67 | 20.56 ms | 18
51.07 | 1 ms | 18
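For what it's worth, the mixed units (μs, ms, seconds) make the rows hard to compare at a glance. A quick sketch that normalizes the 95th-percentile values to milliseconds, using the two VDI rows from the table above (the conversion factors are standard; the parsing is just illustrative, not how vROps exports data):

```python
# Normalize vROps latency readings (reported in mixed units) to milliseconds
# so rows can be compared directly. Unit labels copied from the report.
UNIT_TO_MS = {"μs": 0.001, "ms": 1.0, "Second(s)": 1000.0}

def to_ms(value: float, unit: str) -> float:
    """Convert a latency reading to milliseconds."""
    return value * UNIT_TO_MS[unit]

# (value, unit) pairs for the 95th-percentile column, taken from the table
rows = [
    (8.47, "Second(s)"),   # first VDI datastore
    (1.59, "Second(s)"),   # second VDI datastore
    (129.02, "ms"),        # next-worst non-VDI datastore
]

for value, unit in rows:
    print(f"{value} {unit} = {to_ms(value, unit)} ms")
```

Normalized this way, the two VDI datastores sit at 8,470 ms and 1,590 ms at the 95th percentile, an order of magnitude or more above everything else in the report.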
1 Reply

The datastores listed span four datastore clusters across multiple arrays. The two datastores with VM Disk Latency in seconds are the only ones on 6.5.0 and the only ones on a Pure Storage all-flash array. The others are on 6.0.0, running on flash arrays. Connectivity is via Fibre Channel. I tried to find correlations between the high latency and other metrics (read, write, IO, etc.) and do not see one (backups, for example).
