VMware Cloud Community
jmmarton
Enthusiast

Better performance, iSCSI or NFS?

I'm about to deploy an ESX 3.5 server (VI3 Standard, not Enterprise) to our DR location. While for the most part it will be used as DR for virtual machines running in our main location, it will also be running a couple of production VMs for that DR location since it's an actual office as well as being DR for the main site. The storage I'm going to configure is an x86 server with a number of SATA drives in it that will be running SLES 10 SP2. I can either configure NFS storage or iSCSI, and if I use iSCSI I actually have an extra QLogic iSCSI HBA sitting around that I can install into the ESX host.
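In case it's useful, here's roughly what either option would look like on the SLES box (the paths, hostname, and IQN are just placeholders, not my actual config):

    # Option 1: NFS export (/etc/exports); no_root_squash is needed for ESX
    /srv/vmstore    esxhost.example.com(rw,no_root_squash,sync)

    # Option 2: iSCSI target via IET (/etc/ietd.conf)
    Target iqn.2008-06.com.example:vmstore
        Lun 0 Path=/dev/sdb1,Type=fileio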

So that being said, what would give me better performance, NFS or iSCSI using a hardware initiator?

Joe

20 Replies
one3cap
Contributor

To my knowledge, iSCSI would be the obvious choice. NFS is mostly used for things such as storing ISO files, and there is quite a bit of overhead in using it. I have heard of people using NFS storage for low-performance VMs because it is cheaper than FC or iSCSI.

aguacero
Hot Shot

Better performance will vary heavily depending on your overall setup. I've seen performance vary from site to site, with iSCSI beating NFS and NFS beating iSCSI. I personally would lean towards iSCSI over NFS just because it uses VMware's VMFS rather than the NFS format, and I expect future releases to be more heavily tied to VMFS than to NFS. But overall, at this point you won't lose much performance going one way or the other.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

squirrelking
Enthusiast

I also agree with the statements above that iSCSI will usually give better performance. One other thing to think about is support from VMware: they've been very reluctant to support VMs on NFS from what I've seen. Since yours is a DR situation they might be a little more understanding, but I wouldn't count on it. Just something I've noticed.

---- VCP3 VCP4 VTSP VSP VMware Solution Provider Premier Partner VMware Authorized Consultant [http://blog.lewan.com]
jmmarton
Enthusiast

I also agree with the statements above that iSCSI will usually give better performance. One other thing to think about is support from VMware: they've been very reluctant to support VMs on NFS from what I've seen. Since yours is a DR situation they might be a little more understanding, but I wouldn't count on it. Just something I've noticed.

Well, I'm not sure how much support I'll get anyway, since I'll be using the open source iSCSI target software (IET).

At any rate, thanks to everyone for the help and the tips! Looks like I'll just stick with iSCSI, then.

Joe

AndrewSt
Enthusiast

NFS!

iSCSI is a pain in the tush. NFS gets rid of the management overhead that iSCSI requires, and it scales per datastore much better as well.

I am running both in two environments, and I find NFS blows iSCSI away. We had some inadvertent network hiccups when the network admin had to reconfigure some spanning tree issues. NFS performed flawlessly; iSCSI blew up. It took 2 hours to recover the iSCSI-mounted VMs, and then another 8 hours to recover the Exchange server that was using iSCSI LUNs for its storage. Enough of a hassle that my boss sprang for a small SAN to get us off of iSCSI.

Enough about experience; just think about this:

  1. iSCSI works just like a Fibre Channel LUN, including the 32-command queue depth. It also places a higher load on the ESX host unless you use a dedicated iSCSI offload device (not a lot, but it adds up).

  2. NFS is just file access, which is handled by the storage device. Simple, and as long as the target can handle the load, it scales really well. NFS thin provisions by default; you can do it by hand on iSCSI, but why bother when you don't have to? (See the sketch below.)
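For what it's worth, a quick sketch of the thin-provisioning difference (the thin-import option comes from vmkfstools as I understand it; double-check the flags on your build):

    # On an NFS datastore, new VMDKs are thin by default - nothing to do.
    # On VMFS over iSCSI you have to ask for it, e.g. when cloning a disk:
    vmkfstools -i /vmfs/volumes/nfs-ds/vm1/vm1.vmdk \
        -d thin /vmfs/volumes/iscsi-ds/vm1/vm1.vmdk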

-Andrew Stueve

Remember, if you found this or any other answer useful, please consider using the Helpful or Correct buttons to award points.
jmmarton
Enthusiast

iSCSI is a pain in the tush. NFS gets rid of the management overhead that iSCSI requires, and it scales per datastore much better as well.

As far as management goes, I have no problems using iSCSI. We've been using it here at our main site for over a year now, though in the past few days we've switched to our new FC SAN.

I am running both in two environments, and I find NFS blows iSCSI away. We had some inadvertent network hiccups when the network admin had to reconfigure some spanning tree issues. NFS performed flawlessly; iSCSI blew up. It took 2 hours to recover the iSCSI-mounted VMs,

Sounds to me like you may have gotten lucky with the NFS storage there. Network hiccups can affect any sort of network storage, whether it's NFS or iSCSI.

  1. iSCSI works just like a Fibre Channel LUN, including the 32-command queue depth. It also places a higher load on the ESX host unless you use a dedicated iSCSI offload device (not a lot, but it adds up).

Since we just migrated from iSCSI to FC at our main site, I now have three spare QLogic iSCSI dual-port HBAs. I can easily pop one into this new ESX host in our DR site, negating the performance hit the software iSCSI initiator might introduce.

Joe

maishsk
Expert

I'll second what Andrew said; I would prefer NFS over iSCSI.

Reasons:

  • The performance difference between NFS and iSCSI is negligible.

  • An iSCSI LUN is formatted with VMFS (which in a way is a good thing, but...), meaning the file system is unreadable to anything besides an ESX server mapped to that iSCSI LUN.

  • Resizing a volume created in NFS is so simple that it is really not funny, and it is completely transparent to all the ESX servers immediately.

  • An iSCSI or SAN LUN will take up the full amount of allocated space from day 1 (you allocate 500GB, 500GB is in use); with NFS it is not so.

  • The capability of mounting the NFS share on other systems besides ESX (I can mount a VMDK from an NFS share and pull out a single file from the VM if need be; see the sketch after this list).

  • Storage snapshot capability: the ability to browse through previous snapshots from a defined number of hosts (not necessarily ESX).

  • Backup: this is a huge benefit for us; no need for a dedicated VCB server to back up VMs. Backup is done at the NFS storage level and from there to tape. We see backup times approx. 10x faster than anything we have with a local agent over the LAN with these NFS backups.

  • An iSCSI HBA costs almost the same as a Fibre Channel HBA, so if I were going to invest, I would go for Fibre Channel and not iSCSI; performance on FC would be much better.
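As an example of the mount-elsewhere point, here is a rough sketch from a plain Linux box (the 32256-byte offset assumes the classic single guest partition starting at sector 63; adjust to your guest's actual layout):

    mount -t nfs filer:/vol/vmstore /mnt/vmstore
    # a -flat.vmdk is just a raw disk image, so loop-mount its first partition
    mount -o loop,ro,offset=32256 /mnt/vmstore/vm1/vm1-flat.vmdk /mnt/guestdisk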

Maish

Systems Administrator & Virtualization Architect

Maish Saidel-Keesing • @maishsk • http://technodrone.blogspot.com • VMTN Moderator • vExpert • Co-author of VMware vSphere Design
jmmarton
Enthusiast

  • The performance difference between NFS and iSCSI is negligible.

Ok, that's what I was trying to find out. Hmm...

  • An iSCSI LUN is formatted with VMFS (which in a way is a good thing, but...), meaning the file system is unreadable to anything besides an ESX server mapped to that iSCSI LUN.

Not a problem since I only want to access this from ESX.

  • Resizing a volume created in NFS is so simple that it is really not funny, and it is completely transparent to all the ESX servers immediately.

Well, it may or may not be simple depending on how the partition was created on the Linux server. If LVM was used then sure, you can resize quite easily. Still, for me it's a moot point because the initial size of this volume (NFS or iSCSI) is going to be 2TB, of which maybe 10% is needed.
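For anyone curious, with LVM growing the underlying volume is basically just this (assuming ext3 and a kernel new enough for online resize; the VG/LV names are placeholders):

    lvextend -L +200G /dev/vg_store/lv_vmstore
    resize2fs /dev/vg_store/lv_vmstore    # grow ext3 to fill the LV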

  • An iSCSI or SAN LUN will take up the full amount of allocated space from day 1 (you allocate 500GB, 500GB is in use); with NFS it is not so.

Not a problem since I don't plan on trying to use this space for anything other than VMDKs.

  • The capability of mounting the NFS share on other systems besides ESX (I can mount a VMDK from an NFS share and pull out a single file from the VM if need be).

Not needed. Again all I need is for ESX to use it.

  • Storage snapshot capability: the ability to browse through previous snapshots from a defined number of hosts (not necessarily ESX).

Wouldn't be a bad thing to have, but eventually when I want to do stuff with snapshots I'm going to use vRanger/vReplicator, so the point is moot.

  • Backup: this is a huge benefit for us; no need for a dedicated VCB server to back up VMs. Backup is done at the NFS storage level and from there to tape. We see backup times approx. 10x faster than anything we have with a local agent over the LAN with these NFS backups.

For the most part I don't have Windows VMs, and the few I do have contain no data, so VCB isn't even an option for me. I plan on just doing plain agent backups.

  • An iSCSI HBA costs almost the same as a Fibre Channel HBA, so if I were going to invest, I would go for Fibre Channel and not iSCSI; performance on FC would be much better.

As I originally said, I have an extra iSCSI HBA. In fact, I have three of them due to our conversion from iSCSI to FC at our main location. That's why I asked for performance comparisons of NFS vs. iSCSI with an HBA rather than iSCSI with the VMware software initiator. If NFS shows performance gains in that comparison, I'm interested. If not, then iSCSI it is.

Joe

admin
Immortal

Here's a whitepaper that VMware recently published on this subject.

Here's a link from a NetApp blog that favors NFS over FC and iSCSI:

http://storagefoo.blogspot.com/search?updated-max=2007-09-11T17%3A02%3A00-05%3A00&max-results=5

Chris

JohnGibson
Hot Shot

This topic came up in a major multi-customer/VMware meeting yesterday.

It turns out that with the ESX software iSCSI initiator, NFS will typically outperform iSCSI, and the flexibility of being able to rapidly provision NFS to hosts also seemed to be a deciding factor for some large organisations choosing it over iSCSI. The ESX software initiator will be improved, but probably not until next year.
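Part of that flexibility is how little it takes to attach an NFS datastore to a host from the service console; something like this, with the filer name and export as placeholders:

    esxcfg-nas -a -o filer.example.com -s /vol/vmstore nfs-ds1
    esxcfg-nas -l    # list the NAS datastores configured on this host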

Of course the preferred solution for me would be Fibre-attached SAN disk, but I'm now re-evaluating my view on NFS-attached storage.

AntonVZhbankov
Immortal

IET iSCSI doesn't support multiple connections to one target, so if you want to use more than one ESX host, you have to choose NFS.

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
wila
Immortal

IET iSCSI doesn't support multiple connections to one target.

Are you sure this is still a problem? Sorry, but I think you are referring to a problem in an old version from over a year ago; please have a look here.

There have been patches for IETD 0.4.12, and IIRC it was addressed in 0.4.15, as you can also read here.

thanks,

Wil

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
AntonVZhbankov
Immortal

Thanks for the info, I'll check it. The last time I tried to set up an iSCSI datastore with IET was 6 months ago; multiple initiators per target were not supported at that time.

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
jmmarton
Enthusiast

Here's a whitepaper that VMware recently published on this subject.

I'll check it out, thanks.

Here's a link from a NetApp blog that favors NFS over FC and iSCSI:

Well, that's not surprising since NetApp's main strong point is NAS (i.e. NFS). I mean really, NFS over FC? :) I'm sure if you read a blog from LeftHand they would tell you iSCSI is better than NFS or FC. You have to take that stuff with a grain of salt.

Joe

jmmarton
Enthusiast

It turns out that with the ESX software iSCSI initiator, NFS will typically outperform iSCSI,

Yes, but how about with an iSCSI HBA? As I mentioned in my original post, I have an extra HBA. So my question is how NFS performance compares to iSCSI using a hardware initiator.

Joe

jmmarton
Enthusiast

IET iSCSI doesn't support multiple connections to one target, so if you want to use more than one ESX host, you have to choose NFS.

Sure it does. Up until last week, when we converted our main site from iSCSI to FC, we had been running three ESX hosts against a single box running IET 0.4.15 for about a year with no problems. VMotion, DRS, the works; it all worked absolutely fine. I believe it was 0.4.15 that introduced SCSI reserve/release support, making this possible. That version was released in April 2007.

Joe

systemmaster
Contributor

Here's a whitepaper that VMware recently published on this subject.

The VMware protocol performance paper is a lot of FUD and has nothing to do with real-world performance:

  • the tests use only sequential I/O; real-world ESX servers do mostly random I/O

  • the tests use very large block sizes (up to 512KB); real-world VMware I/O is mostly small block sizes (4K, 8K, 16K, 32K)

  • most of the tests use only one VM; real-world setups run many simultaneous VMs

  • the tests are run purely from cache, not from disk; again, totally unrealistic and not representative of real-world performance

All the paper proves is that Fibre Channel has higher bandwidth than Gbit iSCSI or NFS. But VM performance depends mostly on latency, not pure throughput.

If you test real-world performance (random I/O, multiple VMs, multiple I/O threads, small block sizes), you will see that NFS performance gets better and better as the number of VMs on a single datastore increases. At a certain point, NFS will outperform both hardware iSCSI and FC in a major way.
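As a rough starting point, a fio job like the one below run inside several test VMs (or an equivalent Iometer profile) is much closer to real-world load than a single sequential stream; the numbers are only suggestions to tune for your environment:

    [global]
    ioengine=libaio    ; requires libaio in the test VM
    direct=1           ; bypass the guest page cache
    rw=randrw          ; mixed random read/write
    rwmixread=70
    bs=8k              ; typical small-block VM I/O
    iodepth=16
    runtime=300

    [vmload]
    size=4g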

You won't find any independent iSCSI/NFS/FC performance tests on the internet because the VMware EULA prevents anybody from publishing performance data without their approval, so you have to test it yourself and see.

Of course on a low end storage server, it probably makes little difference.

admin
Immortal

Here's an unofficial storage performance thread on the VMware forums:

Chris

deploylinux
Enthusiast

We investigated both iSCSI and NFS for a new ESX cluster late last year, and we were initially leaning towards NFS since all our existing infrastructure is Unix-based, so NFS was a known quantity. iSCSI also seemed like just an additional headache for what was essentially network file access. Unfortunately, things are not so clear-cut.

What I think you will find is that NFS outperforms and is a better solution than iSCSI only in limited situations (e.g. if you have a NetApp or are planning to get one). Generic Linux servers running RHEL5 with multiple teamed gigabit links just didn't deliver the same scalability in network performance via NFS as purpose-built hardware does. So, for business use, if you want to go with NFS that means NetApp, and maybe Snap Appliance at the very low end, although I'm not sure I would ever recommend that.

VMware itself seems to have a heavy preference for iSCSI, and there are a lot more iSCSI vendors out there. Nearly all the recent Dell/EMC gear supports it extremely well, and on the high end, EqualLogic has all the features you could ever want. iSCSI is also going to get tremendously faster over time with 10GigE NICs, etc.

So, in summary, in most cases I think the solution depends more on your choice of hardware vendor than on performance. Performance-wise, each can essentially match the other within a reasonable budget. NFS may be conceptually better and more usable for other cases, and has some advantages in thin provisioning, but as I said, I honestly wouldn't deploy it for production unless I had a NetApp backend. You're also going to be depending more on NetApp than on VMware for support if something goes wrong.
