VMware Cloud Community
gallopa
Enthusiast

NFS vs. iSCSI

Hi all,

I am currently designing a VMware pre-production environment for an investment banking client. The environment will be fairly small (40-50 VMs) but will host some fairly heavy-duty SQL databases. One of the purposes of the environment is to prove whether the virtual environment will be viable, performance-wise, for production in the future.

A decision has already been taken to use IBM x3850 M2 servers and NetApp storage. The question I have is in relation to the connection protocols to the storage. After meeting with NetApp, my initial thinking is to connect the virtual machine guests to the NetApp using NFS, with the databases hosted on the NetApp connected using iSCSI RDMs. The reason for using iSCSI RDMs for the databases is to potentially take advantage of NetApp snapshots, cloning, replication, etc. for the databases. Some of the database servers also host close to 1TB of databases, which I think is far too big for a VM (can anyone advise on suggested maximum VM image sizes?).
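
For what it's worth, my understanding is that the split I'm describing only takes a handful of commands on the ESX side. A rough sketch of what I have in mind, assuming classic ESX 3.x with service console access (the filer name, export path, device ID and datastore labels below are made up):

  # Mount the NetApp NFS export as the datastore holding the guest OS disks
  esxcfg-nas -a -o netapp01 -s /vol/vm_nfs vm_nfs_ds
  esxcfg-nas -l

  # Map the iSCSI LUN holding the database into the VM as an RDM (physical
  # compatibility mode shown). The small RDM pointer file has to live on a
  # VMFS volume, not on the NFS datastore.
  vmkfstools -z /vmfs/devices/disks/vmhba40:0:1:0 /vmfs/volumes/local_vmfs/sqlvm/sqldata_rdm.vmdk

I'd still want to confirm with NetApp whether their snapshot/SnapManager tooling expects physical (-z) or virtual (-r) compatibility mode for the database RDMs.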

The rationale for NFS over a fully iSCSI solution is:

  • NFS is easier to manage than iSCSI LUNs (this is the primary reason for leaning towards NFS. The client currently has no skilled storage techs, which is why I have moved away from an FC solution for the time being. I believe ease of management is a very important consideration for the storage infrastructure for this client)

  • Functions such as de-duplication, volume expansion, etc. are readily visible to VMware without the need for any admin changes to the storage infrastructure (there's a rough sketch of this after the list)

  • Tools such as UFS Explorer can be used to browse inside snapshots to recover individual files, etc., without the need to fully restore the image

  • NFS should perform no worse than iSCSI and may even see a performance benefit over iSCSI when many hosts are connected to the storage infrastructure
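
On the volume-expansion point above, this is roughly what I mean by "no admin changes" (volume name and size are made up, and the filer-side syntax assumes Data ONTAP 7-mode):

  # On the NetApp: grow the volume backing the NFS datastore
  vol size vm_nfs +100g

  # On the ESX host: the NFS datastore simply shows the new free space -- no rescan,
  # no VMFS extents to add. With an iSCSI LUN you would resize the LUN, rescan the
  # adapters and then grow the VMFS.
  vdf -h /vmfs/volumes/vm_nfs_ds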

What are everyone's thoughts? Is there anything in particular I can't do if we go down the NFS path? Apart from the fact that it is a less well-trodden path, are there any other reasons you wouldn't use NFS?

4 Replies
O_o
Enthusiast

Just my opinion, but I doubt that those "heavy duty SQL databases" will run OK on NFS or iSCSI. If there is one thing that would help them run at near-native speed, I think it's fast storage.

Texiwill
Leadership

Hello,

I generally lean towards iSCSI over NFS, as you get a true VMFS and VMware ESX would rather the VM be on VMFS. But since you are talking about RDMs: there are claims that Windows with an iSCSI initiator local to the guest is faster than using an RDM presented over iSCSI. Note that an RDM will not work over NFS; you will need to use a VMDK.
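
To make that last point concrete, a quick sketch with invented paths: the RDM mapping file can only be created on a VMFS volume, while on an NFS datastore you simply create an ordinary virtual disk:

  # RDM mapping file -- must sit on VMFS, even though the data lives on the iSCSI LUN
  vmkfstools -r /vmfs/devices/disks/vmhba40:0:1:0 /vmfs/volumes/local_vmfs/sqlvm/sqldata_rdm.vmdk

  # On an NFS datastore there is no RDM option; you create a normal VMDK instead
  # (ESX creates it thin-provisioned on NFS by default)
  vmkfstools -c 500g /vmfs/volumes/vm_nfs_ds/sqlvm/sqldata.vmdk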


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education, as well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
v01d
Enthusiast

Some things to consider.

1. Both the ESX software iSCSI initiator and NFS show good performance (often better) when compared to an HBA (FC or iSCSI) connection to the same storage when testing with a single VM. This performance comes at the expense of ESX host CPU cycles that should be going to your VM load. You are basically burning host CPU cycles for IO performance. The higher your IO load, the fewer host CPU cycles are available to your VMs (when they need them most).
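
One crude way to see point 1 for yourself (just a suggestion, numbers will vary): run esxtop on the service console while a representative SQL load runs, and compare the host CPU cost under each protocol:

  esxtop            # press 'c' for CPU, 'd' for disk adapter, 'n' for network
  # Watch %SYS and %RDY for the busy VMs and the overall PCPU utilisation.

  # Or capture batch output you can compare across NFS / software iSCSI / HBA runs
  esxtop -b -d 5 -n 120 > storage_test_nfs.csv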

2. Due to networking limitations in ESX, the most bandwidth you will get between an IP/PORT <-> IP/PORT pair (i.e. ESX host to NFS datastore, or ESX iSCSI software initiator to an iSCSI target) is limited to the bandwidth of the fastest single NIC in the ESX host. Even if you have ten 1 Gb NICs in your host, you will never use more than one at a time for an NFS datastore or the iSCSI initiator. This is why guest initiators can offer better performance in many cases: each guest initiator has its own IP, and thus the traffic from the guest initiators can be load-balanced over the available NICs. Unfortunately, using guest initiators further complicates the configuration and is even more taxing on host CPU cycles (see above).
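
If you do stay host-side, the usual workaround for point 2 is to create more than one IP/PORT pair yourself, e.g. a second VMkernel port and a second filer interface, and split datastores across them. Very rough sketch with made-up addresses and names:

  # Second VMkernel port group on a different subnet / physical NIC
  esxcfg-vswitch -A VMkernel-NFS2 vSwitch2
  esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 VMkernel-NFS2

  # Mount half the datastores via one filer interface and half via the other,
  # so the two IP pairs ride different physical NICs
  esxcfg-nas -a -o 10.0.1.5 -s /vol/vm_nfs_a vm_nfs_a
  esxcfg-nas -a -o 10.0.2.5 -s /vol/vm_nfs_b vm_nfs_b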

3. Most 10 Gb Ethernet cards cost more than an HBA.

4. Given a choice between iSCSI and FC using HBAs, I would choose FC for IO-intensive workloads like databases. Some ESX configurations still require FC (e.g. MSCS).

kjb007
Immortal

There have been other threads that state something similar to your view, that NFS on NetApp performs better than iSCSI. As Ed mentioned, though, iSCSI has its own benefits, and you won't be able to hold your RDMs on NFS; they will have to be created on a VMFS. Since you have to have iSCSI anyway, I would test out the difference in performance between the two. I weighed my options between FC and iSCSI when I set up my environment, and had to go to FC. Although I was able to push a lot of throughput with iSCSI, the latency over iSCSI was just unacceptable.

Now, with NFS, you can also use jumbo frames, which will help your throughput as well, so I might go with an NFS store until I had some concrete numbers to weigh the two.
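
For reference, enabling jumbo frames for the NFS VMkernel path on classic ESX is CLI-only and looks roughly like this (switch and port group names are placeholders, and the physical switches plus the NetApp interface need MTU 9000 as well):

  # Raise the MTU on the vSwitch carrying the NFS traffic
  esxcfg-vswitch -m 9000 vSwitch2

  # The VMkernel NIC must be created with the larger MTU (remove and recreate it if it already exists)
  esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 VMkernel-NFS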

Now, regarding load balancing: if you have multiple IPs on your NFS/iSCSI store, then you can spread that traffic over more than one NIC, similar to having software iSCSI initiators in your VMs. I've seen arguments for both, but I generally don't like to do anything special in my VMs; I have ESX abstract the storage from them and prefer to manage that storage on the host side.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise