VMware Cloud Community
Doc_Rice
Enthusiast

iSCSI vs. NFS for ESXi SAN

I currently use Openfiler as the iSCSI SAN for 6 ESXi hosts. It works well, and I have anywhere between 5 and 12 virtual machines per host. I believe my processing and memory capacity can accommodate many more per host, since my environment is generally testing / QA / development / research and usually low-usage. I've heard that the performance difference between NFS and iSCSI starts to even out when the number of virtual machines grows beyond 10-15.

What are the pros and cons of considering one over the other?

8 Replies
Jackobli
Virtuoso

It all depends on the network adapters.

If you are running iSCSI on appropriate hardware (an iSCSI HBA), there shouldn't be much performance loss...

If you are running software iSCSI, you are putting more and more load on the CPU.

But the most valuable arguments for NFS are:

  • Size of the datastore... with iSCSI you are on VMFS and restricted to 2 TByte.

  • Handling, especially backup. With iSCSI you get a physical LUN or flat files you cannot dive into. With NFS you get Openfiler's filesystem, from within which you may use whatever you like to back up or manipulate files.

  • As it's pure TCP/IP, TOE (TCP offload engine) should work and ease the CPU load of network transfers.
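To illustrate the second point: because the VM files sit on an ordinary filesystem, any standard tool can back them up. A runnable sketch of that kind of direct-file backup; the temp directories here are stand-ins for the real NFS export and backup target, and the VM name and file layout are hypothetical:

```shell
#!/bin/sh
# Simulate the NFS export's layout (one directory per VM) with temp dirs
# so the commands are runnable anywhere; on the real box, "store" would
# be something like the Openfiler export mounted on a backup host.
store=$(mktemp -d)    # stands in for the NFS export
backup=$(mktemp -d)   # stands in for the backup target

mkdir -p "$store/testvm"
echo "fake disk data"  > "$store/testvm/testvm-flat.vmdk"
echo "fake descriptor" > "$store/testvm/testvm.vmdk"

# Because the files live on an ordinary filesystem, plain tar just works:
tar -czf "$backup/testvm-$(date +%Y%m%d).tar.gz" -C "$store" testvm

ls "$backup"
```

With iSCSI, by contrast, the VMFS-formatted LUN is opaque to anything that isn't an ESX(i) host.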

christianZ
Champion

Just for info: if you search the forum for "Openfiler" you will find that it is not the fastest unit (software) here, especially for 6 ESXi hosts.

I would rather look at Dell (MD3000i), HP (MSA2000i) or Infortrend.

kooltechies
Expert

Hi,

One more thing to consider: NFS is a filesystem exported to ESX; it is not the same as VMFS, so it is not as fast as VMFS. Moreover, if you still want to go with NFS, I suggest you keep the virtual machine swap file on a VMFS volume, either local or on FC/iSCSI storage.
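If you do relocate the swap file that way, the per-VM override is a .vmx setting; a sketch, assuming a hypothetical local VMFS datastore name (I believe the option is sched.swap.dir, but check the documentation for your version):

```
# .vmx fragment -- datastore and directory names are placeholders
sched.swap.dir = "/vmfs/volumes/local-vmfs/myvm/"
```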

In my opinion, iSCSI is better than NFS any day.

Thanks,

Samir

P.S.: If you think the answer is helpful, please consider awarding points.

Blog : http://thinkingloudoncloud.com || Twitter : @kooltechies
Jackobli
Virtuoso

kooltechies wrote:
"One more thing that you should consider is that NFS is an exported filesystem to ESX this is not same as VMFS so it is not as fast as VMFS. Moreover if you still want to go with NFS then I will suggest you to keep the virtual machine swap file on some VMFS volume either Local or FC,iSCSI storage.

In my opinion iSCSI any day is better than a NFS."

Interesting opinion; you will find lots of posts claiming VMFS is not fast, especially under ESXi (I cannot claim knowledge of ESX).

On the other hand, NFS (v4) is able to cache files (on the OS side too, not only the controller).

You suggest swapping to local or FC storage... if the OP had (4 Gbit) FC and plenty of 15k FC spindles, I don't think he would have made this post.

On a quick search, I found this document: http://www.vmware.com/files/pdf/storage_protocol_perf.pdf


kooltechies
Expert

I don't think someone will need to spend a fortune on storing the swap files; their size is quite small, mostly in MBs.

Thanks,

Samir

Blog : http://thinkingloudoncloud.com || Twitter : @kooltechies
Jackobli
Virtuoso

Perhaps I was not writing clearly...

If he had fast FC, the OP wouldn't have asked; he would be using his FC SAN. But he is stuck with his Openfiler.

Keeping track of a different location for the swap files would add more complexity to the whole design, with 6 ESXi hosts each running 5-12 guests.

Doc_Rice
Enthusiast

Thanks for the comments so far. What I'm running is essentially a zero-cost, poor-man's virtualization solution (aside from the hardware, which we already had). There is no dedicated hardware HBA; I'm using the software initiator available in ESXi. To add to the list of performance bottlenecks, the disk array Openfiler is connected to is a really old Dell PowerVault 200S loaded with new 320 GB SCSI disks. If I do a clone via vmkfstools from one of the ESXi hosts (the template and target directory are both on the SAN), it takes a while; based on SNMP readings from the switch ports the physical machines are connected to, the clone / replication operation on the SAN tops out at roughly 30 Mbps. Pretty slow, but I'd guess this is a limitation of the PowerVault's SCSI backplane. It's still doable for us.
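For scale, that ~30 Mbps works out as follows; a rough back-of-the-envelope sketch, where the 20 GB disk size is just an illustrative assumption, not a figure from the thread:

```shell
#!/bin/sh
# A clone like the one described is a single vmkfstools invocation, e.g.:
#   vmkfstools -i /vmfs/volumes/san/template/template.vmdk \
#              /vmfs/volumes/san/newvm/newvm.vmdk
# At ~30 Mbit/s effective throughput, a hypothetical 20 GB vmdk takes:
size_gb=20
mbit_per_s=30
seconds=$(( size_gb * 1024 * 8 / mbit_per_s ))   # GB -> Mbit, then divide
echo "~$(( seconds / 60 )) minutes"              # roughly an hour and a half
```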

So yes, one of the reasons for asking about the iSCSI vs. NFS performance was regarding backups. As mentioned earlier, this isn't a demanding high-volume environment so I'm okay with what I have now. I'm just curious whether NFS would make a difference overall for my needs as I add more VMs down the line.

DSTAVERT
Immortal

I would like to also add a couple of big benefits to NFS.

Changes to iSCSI require an ESXi restart. Not so with NFS.

NFS stores can be attached to many ESXi hosts, so you can easily move a server from one ESXi host to another: pause it on one host, add the paused VM to another ESXi host's inventory, and unpause. This works as long as the hardware platforms aren't too different.
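A sketch of that move from the ESXi console using vim-cmd; the VM id (42) and the datastore/paths are placeholders, not details from the thread:

```
# On the old host: find the VM's id, suspend it, then unregister it
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.suspend 42
vim-cmd vmsvc/unregister 42

# On the new host: register the same .vmx from the shared NFS datastore
vim-cmd solo/registervm /vmfs/volumes/nfs-store/myvm/myvm.vmx
```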

NFS running on Linux gives you simple, direct access to the VM files; use whatever tools you like for copying, moving, etc.

Properly done NFS can be a real lifesaver.

-- David -- VMware Communities Moderator