I was looking deeper into the storage path latencies for a host, exporting the values to CSV.
Folks internally have been concerned about storage latencies. What I found looking at
each iSCSI target was that the averages (real time) over two hours were actually
pretty good. The maximum average was 9.5ms. That particular target had one
measurement of 1800ms, perhaps 20 between 10 and 30ms and the other 160
measurements were all under 6ms. So the question I have is: how much impact
on a particular target's storage performance would one single 1800ms latency have
if the average over the series is 9.5ms? In this case it was a read operation. Are
occasional spikes like that just normal?
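To get a feel for how much one spike moves the numbers, here is a quick sketch. The sample counts are assumptions loosely based on the description above (one 1800ms read, roughly 20 samples in the 10-30ms range, roughly 160 samples under 6ms); the exact per-sample values are made up for illustration and won't reproduce the 9.5ms average exactly.

```python
# Illustrative latency series; counts approximate the post's description,
# the individual values (20 ms, 4 ms) are assumptions.
latencies_ms = [1800] + [20] * 20 + [4] * 160

mean_with_spike = sum(latencies_ms) / len(latencies_ms)
mean_without = sum(latencies_ms[1:]) / len(latencies_ms[1:])
median = sorted(latencies_ms)[len(latencies_ms) // 2]

print(f"mean with spike:    {mean_with_spike:.1f} ms")  # pulled up by the one outlier
print(f"mean without spike: {mean_without:.1f} ms")
print(f"median:             {median} ms")               # unaffected by the outlier
```

The point is that a single 1800ms sample drags the mean up noticeably while the median barely notices it, which is why the averages can look fine even when the graphs show scary spikes.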
The default graphs for disk performance show the spikes along with the averages.
So I think people get worked up over the spikes and haven't really delved into
the actual figures. Or should I be more concerned about occasional higher
values like the 1800ms read?
Also, the vast majority of the iSCSI targets had averages far below 9ms;
4 or 5ms was not uncommon.