jr195
Contributor

Monitoring Storage I/O Control

My big question is "How is Storage I/O Control helping us?"

Specifically, we have FC datastores backed by rather slow 7k SATA disks. 98% of the time, latency to those datastores is great, at 2-3ms. But a single VM can do a burst of I/O that pushes those disks over the edge, and latency shoots up to 50-200ms or higher. SIOC sounded like exactly the QoS we needed, and we paid for the much more expensive license to enable it.

We enabled SIOC on all the datastores and set the congestion threshold to 50ms. I can see it kicking in on the Max Queue Depth per Host graph under each datastore, but I still see latency spikes well above 50ms, both from the perspective of the storage array and from the ESXi hosts. So while I can assume the severity and duration of the spikes have improved, I don't see any way of proving it.
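In case it helps frame what I'm after: below is a rough, untested pyVmomi sketch of how I'd expect to pull the relevant counters programmatically, so spikes before and after enabling SIOC could be compared. The vCenter/host names and credentials are placeholders, and I'm assuming the counter names from the vSphere 5.x counter catalog (sizeNormalizedDatastoreLatency is supposedly the latency figure SIOC compares against the threshold); verify them against perf.perfCounter on your build.

```python
# Rough, untested sketch: pull SIOC-related datastore counters for one host
# via pyVmomi. Hostnames and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    perf = content.perfManager

    # Counters in the "datastore" group that relate to SIOC. Assumed names:
    #   sizeNormalizedDatastoreLatency - the latency SIOC compares to the threshold
    #   maxQueueDepth                  - the per-host device queue throttling
    wanted = {"sizeNormalizedDatastoreLatency", "maxQueueDepth"}
    metric_ids = [vim.PerformanceManager.MetricId(counterId=c.key, instance="*")
                  for c in perf.perfCounter
                  if c.groupInfo.key == "datastore" and c.nameInfo.key in wanted]
    names_by_id = {c.key: c.nameInfo.key for c in perf.perfCounter}

    # Grab one ESXi host by name (placeholder) and query its real-time stats.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")

    spec = vim.PerformanceManager.QuerySpec(
        entity=host, metricId=metric_ids,
        intervalId=20,    # real-time samples every 20 s
        maxSample=180)    # roughly the last hour
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            # One series per datastore instance; I believe the normalized
            # latency counter reports microseconds.
            print(names_by_id[series.id.counterId], series.id.instance,
                  "max =", max(series.value))
finally:
    Disconnect(si)
```

If someone has a better counter to watch for "proof" that SIOC is clamping the spikes, I'm all ears.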

For those of you running SIOC, how do you prove that it's working?

Is there any way to show what effect it is having on individual VMs, rather than just at the host or datastore level?
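For the per-VM angle, the virtualDisk counter group looks like it exposes read/write latency per virtual disk, so something like the following untested sketch (the VM name and credentials are placeholders again) might show what an individual VM experiences while SIOC is throttling:

```python
# Rough, untested sketch: per-VM virtual disk latency via pyVmomi.
# The VM name, vCenter hostname, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    perf = content.perfManager

    # Per-VM latency counters in the "virtualDisk" group (milliseconds).
    latency_ids = [c.key for c in perf.perfCounter
                   if c.groupInfo.key == "virtualDisk"
                   and c.nameInfo.key in ("totalReadLatency", "totalWriteLatency")]

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "bursty-vm")  # placeholder name

    spec = vim.PerformanceManager.QuerySpec(
        entity=vm,
        metricId=[vim.PerformanceManager.MetricId(counterId=i, instance="*")
                  for i in latency_ids],
        intervalId=20,   # real-time, 20 s samples
        maxSample=15)    # roughly the last 5 minutes
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            # One series per virtual disk, e.g. instance "scsi0:0".
            print(series.id.instance, series.value)
finally:
    Disconnect(si)
```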

We run ESXi 5 with NetApp storage over FC. All datastores are VMFS3 for now.

Thanks.
