> We think this is a huge drawback for NFS datastores (which are quite popular). Quite frankly, we are surprised that VMware doesn't seem to be interested in implementing this.
AFAIK, VMware is looking into this topic very closely right now; it has been recognized as a missing feature. And I see more and more customers moving towards NFS-based storage for various reasons. I'm not sure when exactly VMware will actually ship a solution, but I'd suggest raising this topic whenever you talk to a VMware representative (Sales, Professional Services, whoever) - I wish it were sooner rather than later, and you can help drive the priorities. Until then, I guess we have to live with workarounds.
I had a conversation last month at VMworld with John Blumenthal about this exact issue. He mentioned that VMware is well aware of customer concerns in this area and is working with some of the vendors (presumably NetApp and EMC, perhaps others) - but the implication was that any solution is >12 months out.
The best workaround I've come up with is monitoring I/O per volume and watching which host is talking the most to a specific NFS export; anything more fine-grained, like you said, requires Wireshark. I greatly miss esxtop every time I'm troubleshooting an NFS environment.
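A minimal sketch of that per-host approach: capture NFS traffic toward the filer and sum bytes per client IP. The interface name, filer IP, and sample traffic below are placeholders, not taken from this thread; the aggregation step is the only part shown running, on made-up sample data.

```shell
# Capture step (run against a mirror/SPAN port; eth1 and 192.0.2.10 are
# placeholders for your capture interface and NFS server IP):
#   tcpdump -i eth1 -nn -q 'port 2049 and host 192.0.2.10' > nfs.log
# Each tcpdump -q line ends with the TCP payload length, so after extracting
# "client-IP bytes" pairs you can aggregate. Sample extracted pairs:
sample='10.0.0.11 1448
10.0.0.11 8192
10.0.0.12 4096'
# Sum bytes per client and sort, busiest ESX host first.
echo "$sample" | awk '{bytes[$1] += $2}
  END {for (h in bytes) print bytes[h], h}' | sort -rn
```

This only tells you which *host* is the talker, not which VM - which is exactly the limitation being discussed here.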
Thanks for the replies!
Good to know that VMware is aware of the issue and at least seems to be working on a solution. Too bad it seems so far away, though.
I've been able to trace the file handles used in NFS LOOKUP requests/replies to specific files/directories (and thus determine the VMs). I've managed to filter out packets matching these file handles with tshark in a way that I can see (and measure) the specific traffic when a VM powers on/off. However, further operations to the "disk" in the OS of the VMs don't use these file handles. I can't trace the actual I/O, because most of that traffic is NFS FILE_SYNC writes (which have no human-readable payload showing the file or directory, like the LOOKUP packets have), and they use different file handles than the LOOKUP packets. Too bad. Maybe there are some NFS gurus around who can nudge me in the right direction? Or suggestions for other possible workarounds?
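For what it's worth, the per-handle measurement step can be scripted rather than done interactively. The sketch below assumes tshark's NFS dissector fields (`nfs.fh.hash`, `frame.len`); `capture.pcap` and the hash values are placeholders, and only the aggregation over made-up sample data is shown running.

```shell
# Extract (filehandle hash, frame length) pairs from a capture:
#   tshark -r capture.pcap -Y nfs -T fields -e nfs.fh.hash -e frame.len > fh.txt
# Sample fh.txt contents (hashes are invented for illustration):
sample='0x1a2b3c4d 214
0x1a2b3c4d 8348
0x9e8f7a6b 180'
# Sum bytes per file handle, busiest handle first. Handles you matched to a
# VM via LOOKUP tracing can then be mapped back by hand.
echo "$sample" | awk 'NF == 2 {bytes[$1] += $2}
  END {for (fh in bytes) print bytes[fh], fh}' | sort -rn
```

This hits the same wall described above: handles seen in guest I/O won't necessarily match the ones resolved via LOOKUP.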
Is there any methodology to identify the I/O-intensive VMs running on NFS datastores, or on both NFS and VMFS datastores? If there is, please let me know.
> Is there any methodology to identify the I/O intensive VMs that were running on NFS Datastores or on both NFS and VMFS datastores.
You should be able to analyze this through ESXTOP in the "v" view to see the IOs per VM.
Thanks, but how do we get those results into a CSV file? In batch mode I was able to generate a CSV file, but it contained approximately 1000 fields, which made it very difficult to find the ones that belong to the VMs' IOPS. If there is any way to do this, please let me know.
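One way to tame the ~1000 columns is to filter the batch CSV down to just the columns whose header mentions your VM. The header layout below is only sketched after esxtop's batch format; the VM names and counter names are made up for illustration.

```shell
# Tiny stand-in for an esxtop -b CSV: column 1 is the timestamp, and each
# counter header embeds the VM name (counter names here are invented).
csv='"time","\\host\Disk(vm-web)\Reads/sec","\\host\Disk(vm-db)\Reads/sec"
"10:00","120","4500"
"10:01","130","4700"'
vm='vm-db'
# Keep column 1 plus every column whose header contains the VM name.
echo "$csv" | awk -v vm="$vm" -F',' '
  NR == 1 {for (i = 1; i <= NF; i++) if (i == 1 || index($i, vm)) keep[i] = 1}
  {line = ""
   for (i = 1; i <= NF; i++) if (keep[i]) line = (line == "" ? $i : line "," $i)
   print line}'
```

The same filter applied to a real `esxtop -b` dump should leave you with a CSV containing only that VM's counters, which is much easier to chart.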
Have you considered vCenter Operations?
The good news is that as of vSphere 4.1, you should be able to see most, if not all, of the I/O data for NFS datastores inside vCenter. However, even then, I think we're all aware vCenter isn't the easiest solution for aggregating/monitoring storage performance data. These are probably the reports you're looking for:
If so, the latter is simply a drill-in from the former and both are default functionality in VKernel vOPS (feel free to abuse a 30-day free trial). Additionally, keep an eye out for a free tool in the near future which will also help tackle this problem area.
Full disclosure: I can't forget to let everyone know I'm a VKernel employee or else the powers that be will be unhappy with me.