SusantaDutta
Contributor

Slow NFS DataStore Performance

With ESX 5.1U1, I'm observing slow I/O performance from VMs on an NFS datastore exported from a NAS storage system. A 512KB sequential write workload inside a VM on an NFS datastore achieves approximately 50 MB/s.
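For reference, a workload of this shape inside a Linux guest could be generated with dd (a sketch only; the post does not say which tool was actually used, and the output path is illustrative):

# 2 GB of 512KB sequential writes, bypassing the guest page cache
dd if=/dev/zero of=/testfile bs=512k count=4096 oflag=direct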

I made an interesting observation: a VM on a local datastore provides a much higher data transfer rate (> 350 MB/s) and continues to provide high throughput even after being migrated to the NFS datastore, where it hits the limit of the NIC (1 Gbps in my case, i.e. close to 128 MB/s). If I stop the workload and re-trigger it, I still get the same throughput (close to 128 MB/s). But if I restart the VM and then re-run the same workload, throughput does not go beyond 50 MB/s.

What gets lost when a VM is restarted that makes it start performing slowly?

Based on this discussion, I also tried with ESX 5.0U1 and configured ESX per the "Best Practices for Running VMware vSphere on Network-Attached Storage" guide, but performance is the same.
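For context, that guide mostly tunes ESXi advanced settings for NFS; a sketch of how such settings are applied on an ESXi 5.x host with esxcfg-advcfg (values here are illustrative examples, not the guide's exact recommendations):

# Raise the TCP/IP heap so more NFS mounts can be supported
esxcfg-advcfg -s 32 /Net/TcpipHeapSize
esxcfg-advcfg -s 128 /Net/TcpipHeapMax
# Allow more NFS datastores than the default
esxcfg-advcfg -s 64 /NFS/MaxVolumes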


Regards
Susanta

2 Replies
TMac
Contributor

Susanta,  I don't have any answers but am experiencing the same issue.

NFS performance goes down the drain running on ESXi 5.x, with GAVG/rd and GAVG/wr breaking 1000 ms.
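For anyone who wants to check the same counters: GAVG/rd and GAVG/wr appear in esxtop's per-VM disk view (press 'v'), and a batch capture for offline review looks roughly like this (interval and sample count are arbitrary):

# 12 samples at 5-second intervals, saved for later analysis
esxtop -b -d 5 -n 12 > /tmp/esxtop-capture.csv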

We are currently migrating from ESX 4.0 to ESXi 5.1. In our Dev/Test environment we installed vCenter 5.1 with a cluster of ESXi 5.1 servers: 3 x IBM HS22, 8 cores and 96 GB RAM each. We have also migrated a few ESX 4.0 clusters to this vCenter.

We are getting complaints from developers about sluggish servers on ESXi 5.1. Checking the counters, we see disk latency through the roof, > 1000 ms.

As a test, on a Linux server we tar a local directory into the /tmp folder. Run on the ESXi 5.1 host, the test takes over 12 minutes to complete, with disk latency > 1000 ms. Move the guest to an ESX 4.0 host using the same datastore and the job runs in 1.5 minutes with normal disk latency. The ESXi 5.1 host is lightly loaded, with no CPU or memory contention.
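The test itself is nothing exotic; roughly the following, with the directory and archive path being illustrative rather than the exact ones we used:

# Tar a local directory into /tmp and time it inside the guest
time tar -cf /tmp/test.tar /usr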

Is anyone else experiencing this, or does anyone have suggestions? We have an open ticket with VMware but are still waiting on a response.

TMac
Contributor

Folks,

This issue is not resolved, but we have a workaround that led to a big leap in performance, though still not what it should be. Using the VMware IOAnalyzer appliance, a default ESXi 5.1 host was getting 2.3 MB/s on a 512k block, 100% read, 0% random test to an NFS mount over a 1 Gb link. After much testing we found that if we disable rx interrupt moderation, we could get 74 MB/s over the same link. These are the commands we used to modify the interrupt moderation:

ethtool -C vmnic5  rx-usecs 0 rx-frames 1 rx-usecs-irq 0 rx-frames-irq 0
ethtool -C vmnic4  rx-usecs 0 rx-frames 1 rx-usecs-irq 0 rx-frames-irq 0
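
One caveat worth adding (based on general ESXi 5.x behavior, so verify on your own build): ethtool changes like these do not persist across a host reboot. You can confirm the current values with ethtool -c and reapply the change at boot from the host's local startup script:

# Show the current coalescing settings for a NIC
ethtool -c vmnic5

# Append the ethtool -C lines above to /etc/rc.local.d/local.sh
# so they are reapplied after a reboot (script path per ESXi 5.x)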

http://www.odbms.org/download/vmw-vfabric-gemFire-best-practices-guide.pdf

Hope this helps someone else.

Terry
