VMware Cloud Community
esxi1979
Expert

iSCSI vs NFS

Hi

VMware has not released a new version of this paper since 2012 - https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/storage_protocol_compari... 

I went through some old posts & got the sense from them all that NFS is better, for the reasons below:

1. Easy setup (see the sketch after this list)

2. Easy to expand

3. UNMAP is an advantage on the iSCSI side

4. VMFS is quite fragile if you use thin-provisioned VMDKs. A single power failure can render a VMFS volume unrecoverable.

5. NFS datastores immediately show the benefits of storage efficiency (deduplication, compression, thin provisioning) from both the NetApp and vSphere perspectives

6. NetApp specific: the NetApp NFS Plug-In for VMware VAAI lets ESXi hosts use VAAI features with NFS datastores on ONTAP

7. NetApp specific: NFS volumes support autogrow

8. When using NFS datastores, space is reclaimed immediately when a VM is deleted

9. Performance is almost identical
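
For point 1, here is a minimal sketch of mounting an NFS datastore from the ESXi command line, assuming a hypothetical NAS at 10.0.0.10 exporting /export/vmware (all names are made up):

    # mount an NFS export as a datastore on this host
    esxcli storage nfs add -H 10.0.0.10 -s /export/vmware -v nfs-ds01
    # confirm the datastore is mounted
    esxcli storage nfs list

Compare that to iSCSI, where you typically configure the initiator, bind VMkernel ports, rescan, and then create a VMFS volume.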

Please point out anything I missed & share your comments

Thanks


4 Replies
scott28tt
VMware Employee

Moderator: Thread moved to the vSphere Storage area.



Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
NathanosBlightc
Commander
Accepted Solution

Based on your attached document, there are many other factors that show iSCSI is better than NFS in some cases, like the following:

1. VMware PSA multipathing and load balancing are available for iSCSI, FC & FCoE, but not for NFS.

2. iSCSI supports CHAP authentication, improving security (see the sketch after this list).

3. Raw Device Mappings (RDMs) are not supported on NFS, but they are on iSCSI.

4. Boot from SAN is possible via iSCSI, not NFS.
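
For point 2, a minimal sketch of enabling unidirectional CHAP on the software iSCSI adapter from the ESXi command line (the adapter name and credentials here are hypothetical):

    # require CHAP when this host logs in to its iSCSI targets
    esxcli iscsi adapter auth chap set --adapter=vmhba64 --direction=uni \
        --level=required --authname=esxi-initiator --secret=ExampleSecret123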

But regardless of protocol comparisons, in many situations you can get the benefits of both iSCSI and NFS. Ease of implementation and configuration is a good characteristic of NFS, but many of the advanced features have shortcomings, especially in the older NFS versions.

Please mark my comment as the Correct Answer if this solution resolved your problem
Exetus
Contributor

This appears to be a point of great confusion on several forums that I have visited. I was recently able to complete some testing (while attempting to investigate and resolve another issue) of disk I/O performance across iSCSI, NFS, and Local Storage.

All this testing was done on identical VMs, each running the latest Ubuntu LTS release. I used "dd if=/dev/zero of=test-disk-io.out bs=1G count=1 oflag=dsync" to write data to the virtual disk on each VM. Details on my setup are at the end of this post... Testing to remote datastores (iSCSI and NFS) used a 10GbE SFP+ network.
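
For reference, a repeatable version of that write test (a sketch; the loop count and file name are arbitrary):

    # run the synchronous 1 GiB write three times and keep dd's summary line;
    # oflag=dsync forces each write to be committed to disk before dd returns
    for i in 1 2 3; do
        dd if=/dev/zero of=test-disk-io.out bs=1G count=1 oflag=dsync 2>&1 | tail -n 1
        rm -f test-disk-io.out
    done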

Here is what I found:

  • Local Storage:  661 Mbps Write to Disk
  • iSCSI Storage:  584 Mbps Write to Disk
  • NFS:  240 Mbps Write to Disk

Based on this testing, it would seem (and make sense) that running a VM on the local storage is best in terms of performance; however, that is not necessarily feasible in all situations.  On top of this, the performance impact when moving from Local Storage to iSCSI is notable but limited.  For my purposes, iSCSI storage is a no-brainer as I can offload the storage of VMs to my NAS while at the same time only suffering a minor performance impact.

Of course the story changes entirely when comparing iSCSI (or even Local Storage) to NFS. While NFS does have its perks, the manner in which the NFS client protocol is implemented in ESXi causes greater overhead and a higher performance impact. Regardless of the sync settings on the NAS server, ESXi forces updates to NFS shares with O_SYNC - essentially reducing the overall performance that can be observed with NFS. This is not the case with iSCSI, which, being block storage, is formatted with the VMFS filesystem and managed exclusively by ESXi.

Because of this forced O_SYNC, all write operations to the NFS share are required to sync - even if sync settings are disabled on the NAS's end. A large amount of this performance impact can be mitigated if the NAS provides an SSD-based cache for the pool in which the NFS datastore is running.
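
To illustrate, here is a hypothetical Linux NAS export with async set; the ESXi NFS client still opens files with O_SYNC, so every write is committed to stable storage before it is acknowledged:

    # /etc/exports on a hypothetical Linux NAS
    /export/vmware  10.0.0.0/24(rw,async,no_root_squash)

    # re-export after editing; note 'async' does not change ESXi's O_SYNC writes
    exportfs -ra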

 

Ultimately this left me a little bit torn... I love the idea of NFS datastores for VMware and believe this is the future over iSCSI-based storage. However, the implementation of the NFS client in ESXi leaves a lot to be desired. While an NFS share is easier to deploy, manage, and maintain, it will require a rather beefy and performance-capable NAS - there is simply no good way to leverage NFS-based datastores unless your NAS has a large, high-speed cache. As for me, I still have (and use) all three options - depending on the situation at hand and the requirements of the VM I plan to deploy.

helm1
Contributor

When testing with dd, you should use a file of random data twice as big as all available RAM. Writing can be cached, so also test the read speed (important for some backups).
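
A sketch of that approach, assuming a machine with 8 GiB of RAM (adjust the count to twice your RAM; the file name is arbitrary):

    # create a 16 GiB file of random data (2x RAM) so caches cannot hold it
    dd if=/dev/urandom of=random-test.bin bs=1M count=16384
    # drop the Linux page cache so the read actually hits the disk (run as root)
    sync && echo 3 > /proc/sys/vm/drop_caches
    # measure the read speed
    dd if=random-test.bin of=/dev/null bs=1M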

The problem with NFS is that it is based on RPC (Remote Procedure Call), which is a synchronous protocol, so each file block is read or written synchronously over the network, while iSCSI works asynchronously with events. NFS also has no simple client-side caching.

As copying data within slow RAM also kills performance, it's better to run iSCSI between the guest and the iSCSI target, rather than between the host and the target, where the host has to translate between the virtual disk and the network.
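
A minimal sketch of that in-guest approach with open-iscsi on a Linux VM (the portal address and target IQN are hypothetical):

    # discover targets offered by the NAS portal
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    # log in to a discovered target; the LUN then appears as e.g. /dev/sdb
    iscsiadm -m node -T iqn.2000-01.com.example:vmstore -p 192.168.1.50 --login

The guest then formats and mounts the LUN itself, bypassing the hypervisor's virtual disk layer for that data.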

 
