We are currently looking at implementing NFS in production instead of using FCP or iSCSI. The NFS volumes will reside on NetApp filers. In performance terms we are not seeing any loss compared to iSCSI, and the main benefit is the amount of space saved using NFS instead of normal LUNs. We are also looking at thin provisioning and using sparse vmdks. I know this is not best practice, but in real terms are there any issues with using sparse disks?
Just looking for any advice or any showstoppers, as rolling this out to a production environment just does not seem right to me.
Any advice or experience using NFS & sparse vmdks would be much appreciated.
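For reference, here is roughly what creating a thin (sparse) disk looks like from the ESX 3 service console with vmkfstools. This is a sketch only; the datastore name, VM name, and size below are made-up examples, not anything from this thread:

```shell
# Create a 10 GB thin-provisioned (sparse) vmdk on an NFS datastore.
# "netapp_nfs01" and "myvm" are hypothetical names.
vmkfstools -c 10g -d thin /vmfs/volumes/netapp_nfs01/myvm/myvm.vmdk

# See how much space the sparse disk actually consumes so far:
du -h /vmfs/volumes/netapp_nfs01/myvm/myvm.vmdk
```

The thin disk only consumes blocks as the guest writes to them, which is where the space savings over fully allocated LUN-backed disks come from.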
We run ~1000 VMs across 36 ESX servers. All on Netapp over NFS.
The reason you don't hear much about NFS is because there are few issues with NFS.
Point-in-time (PnT) snapshots are only just now showing up in the SAN world...
Go to Oracle in Austin... Oracle (block level) over NFS is huge. Oracle 11 will have an NFS client built in...
VMotion of a VM across storage devices (in seconds) is the holy grail. The first one to the table will be the storage choice for years to come. My bet is on an IP solution. NFSv5?
I don't have experience loading up NFS and using sparse disks, but I can say I had some serious issues with NetApp performance over NFS. It may have been that I had older F840s, but using iSCSI on the same boxes drastically mitigated the issues I was having with them.
Of course, YMMV, but I certainly would not use NFS for production systems if I could help it at all, especially on a NetApp that has iSCSI functionality built in already.
I personally see NFS as a "second class citizen" in the ESX world. The fact that you're restricted to file-level operations (rather than block-level) limits the functionality that is available to you. Today, the only thing you're really "missing" is VCB; which future features you'll miss out on is anybody's guess.
I would recommend the use of iSCSI rather than NFS...
Message was edited by:
ken.cline@hp.com to fix a "technical" error
Message was edited (again) by:
ken.cline@hp.com because SOMEBODY wants to keep me honest
I recently visited NetApp and watched their ESX on NFS demo; very impressed.
I think it's a "watch this space"!
I would recommend iSCSI over NFS...
Ken, you need to do another edit.
iSCSI is a block-service protocol and runs over IP.
NFS is a file-service protocol and runs over IP, too.
A grammatical error
It should have read "I would recommend the use of iSCSI rather than NFS..."
I think I'll go correct that
Ah ok, now I understand what you meant.
Sorry, English is neither my mother's nor my father's language.
Thanks for the explanation!
I recently had a meeting with NetApp, who declared that NFS runs faster than iSCSI. Their machines come with both, by the way. They claim it's down to VMFS (not knowing too much here on my account) and the fact that no real tests have been conducted by VMware using their technology.
Someone please come out of the closet and declare something on this, because I have 4 ESX servers with no storage system. I am currently looking at EVA or NetApp.
It's been about a year since I compared iSCSI vs NFS on the same NetApp, but I do remember that iSCSI was slightly slower. If I remember right, iSCSI required more CPU overhead on the ESX server than NFS.
VMFS is still required when using iSCSI, so that adds another layer. Basically the question is which performs better, VMFS or ONTAP. ONTAP has been around for 15 years; VMFS, about 3.
Overall, it's almost impossible to say which is faster. It all depends on how you set up your storage infrastructure.
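On the "it all depends on your setup" point: a crude way to sanity-check sequential write throughput from inside a guest (or any Linux box) is a dd run that forces data to disk before reporting a rate. The file path and size here are arbitrary examples:

```shell
# Write 64 MB and flush it to disk before dd reports throughput.
# conv=fdatasync makes the rate reflect storage, not the page cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync

# Confirm the test file is the expected 64 MB (67108864 bytes), then clean up.
ls -l /tmp/ddtest.bin
rm -f /tmp/ddtest.bin
```

This only measures one access pattern; real VM workloads are mostly mixed random I/O, so treat a number like this as a smoke test, not a verdict on NFS vs iSCSI.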
Someone please come out of the closet and declare something
OK...as I said above:
With NFS you cannot use the VMFS filesystem because ESX does not have block level access to the raw storage. With iSCSI or FC, you can use the VMFS filesystem.
Now, what does that mean? Well... it means that you can't use VCB with NFS. It also means that MSCS is not supported on NFS. As for performance, that's going to be highly implementation specific. It will depend on your HBAs (or SW initiator for iSCSI, or pNIC for NFS), your cable plant, your switching fabric (either FC or Ethernet), and your storage solution (how much cache, how many spindles, how many controllers, what interface, etc.).
I'll just about guarantee you that you can find an NFS solution that blows the doors off of another iSCSI or FC solution - and the same for the other technologies.
I think that all three technologies are viable (they're all being used in lots of places) and it's going to come down to what are your (IT) business priorities.
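To make the file-level vs block-level distinction concrete: with NFS, ESX mounts the filer's export directly and never formats it, whereas a block LUN (iSCSI/FC) gets a VMFS filesystem on top. A sketch from the ESX 3 service console; the filer hostname, export path, datastore label, and device name are all hypothetical:

```shell
# NFS path: mount an export from the filer as a datastore (no VMFS involved).
# "netapp1", "/vol/esx_nfs", and "nfs_ds1" are made-up examples.
esxcfg-nas -a -o netapp1 -s /vol/esx_nfs nfs_ds1

# List configured NFS datastores to confirm the mount.
esxcfg-nas -l

# Block path: an iSCSI/FC LUN would instead be formatted with VMFS, e.g.:
# vmkfstools -C vmfs3 vmhba32:0:0:1
```

Same filer, two different stacks on the ESX side, which is why the feature sets (VCB, MSCS) and performance profiles differ.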
Big-a
I will come out of the closet:
We have been rolling out an NFS-based solution on NetApp filers, and performance has not been an issue. Like all things, everything needs evaluating, and it annoys me when individuals make comments without any real grounding to do so. We have spent weeks going through the pros and cons, too many to list here, but it all boils down to business needs, exactly what Ken has said.
I personally think VMFS and MSCS aren't a reason to stay with iSCSI or FC storage; we can still implement iSCSI at any time with our solution, because NetApp supports NFS, iSCSI, and FC across all their storage units, and that's a big bonus. Provisioning and recovering VMs is a breeze.
Hope this helps a little.