VMware Cloud Community
ablej
Hot Shot

Thoughts on NFS vs FC

We are redesigning our VI3 environment and are considering NFS and FC (we are currently using software iSCSI). I just wanted to get the community's thoughts on NFS and FC. If you have any success or horror stories with either NFS or FC, they would be greatly appreciated.

If you find this information useful, please award points for "correct" or "helpful"

David Strebel www.david-strebel.com
7 Replies
vmroyale
Immortal

Hello.

I'm in the process of making this decision as well. I am pretty much at the end of it and waiting on final pricing for each solution. We will be using NetApp filers and HP DL38x servers. At my last employer, I set up FC with HP EVA and HP blades. FC is great. It can be more difficult to implement if you don't have the staff or know-how in-house. Performance-wise, it was great; never had any problems. That said, with this shop being a NetApp shop and the staff having very limited knowledge of storage and virtualization, I am hoping to implement NFS for its operational simplicity. I also plan on using a dedicated storage network for the NFS traffic, which the existing network team can maintain.
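
As a rough sketch of what that dedicated VMkernel setup looks like on an ESX 3.x host (runnable from the service console, or easily rewritten as plain esxcfg commands; the vSwitch, uplink, port group name, and addresses below are placeholders I made up):

    # Sketch: dedicated vSwitch + VMkernel port group for NFS traffic on ESX 3.x.
    # Every name and address here is an assumption; substitute your own.
    import subprocess

    VSWITCH   = "vSwitch1"        # assumed: a new vSwitch used only for storage
    UPLINK    = "vmnic2"          # assumed: NIC cabled to the dedicated storage switches
    PORTGROUP = "NFS_Storage"     # assumed port group label
    VMK_IP    = "192.168.50.11"   # assumed VMkernel IP on the storage VLAN
    VMK_MASK  = "255.255.255.0"

    def run(cmd):
        print("> " + " ".join(cmd))
        subprocess.check_call(cmd)

    run(["esxcfg-vswitch", "-a", VSWITCH])                  # create the vSwitch
    run(["esxcfg-vswitch", "-L", UPLINK, VSWITCH])          # link the dedicated uplink
    run(["esxcfg-vswitch", "-A", PORTGROUP, VSWITCH])       # add the port group
    run(["esxcfg-vmknic", "-a", "-i", VMK_IP, "-n", VMK_MASK, PORTGROUP])  # VMkernel interface NFS will use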

Here are some of the notes I presented to management, regarding FC vs NFS:

Fibre Channel Pros:

FC has the best throughput and the lowest processor utilization of all the storage protocols.

VMware feature sets are almost always supported on Fibre Channel first.

Supported by VMware and HP

Fibre Channel Cons:

The switches, host bus adapters, and Fibre Channel cabling that FC requires can be an expensive new investment if not already in place.

Fibre Channel requires a specific skill set to implement and operate (HBAs, zoning, fabric switches, etc.) and may require higher-level support in some cases.

NFS Pros:

NFS has throughput similar to iSCSI, with lower processor utilization than iSCSI.

NFS requires an Ethernet skill set and can be fairly easily implemented and maintained by the existing network staff.

NFS should be more efficient and easier to maintain from an operational perspective: no HBAs to fine-tune, diagnose, etc. Expanding, shrinking, and other disk operations are easier (a mount sketch is included at the end of these notes).

NFS Cons:

An expensive license (but there is supposedly plenty of margin in it).

Scaling NFS requires implementing more advanced networking, which the network staff should already know.

Other points to consider:

The published storage protocol comparisons tend to show overall performance differences within 7-10% of each other. They are all very close. It ultimately comes down to design complexity, up-front cost, and the ongoing operational cost of supporting the solution.

Definitely check out the Comparison of Storage Protocol Performance study.
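
To put the "operationally simpler" point above in concrete terms (this is the mount sketch referenced in the NFS pros): presenting an NFS export to an ESX 3.x host is a couple of commands, with nothing HBA-side to zone or tune. A minimal sketch; the filer address, export path, and datastore label are placeholders:

    # Sketch: mount an NFS export as a datastore on an ESX 3.x host.
    # The filer IP, export path, and label are assumptions.
    import subprocess

    FILER  = "192.168.50.20"        # assumed NFS interface on the NetApp filer
    EXPORT = "/vol/vm_datastore1"   # assumed exported FlexVol
    LABEL  = "nfs_datastore1"       # name the datastore will have in VC

    subprocess.check_call(["esxcfg-nas", "-a", "-o", FILER, "-s", EXPORT, LABEL])  # add the NFS datastore
    subprocess.check_call(["esxcfg-nas", "-l"])  # list NFS mounts to confirm it is there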

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
Jae_Ellers
Virtuoso

Sure. Are you running big numbers or just curious?

  1. What kind of storage are you using or considering?

  2. What kind of disks?

  3. What kind of RAID?

  4. What kind of systems are you running?

  5. How many hosts?

  6. How many VMs?

  7. What kind of bandwidth (GB/s)?

  8. What kind of IOPS?

  9. What's your network look like?

  10. Do you already have a FC infrastructure?

  11. If so what speed is your FC infrastructure?

A few notes:

If using NetApp & dedupe, consider NFS as a simpler way to recover saved space. FC/iSCSI requires extents to do so.

Either storage protocol is reliable and used every day by thousands of systems. I've had successes and issues with all of the above.

For the redesign, look at 10G and Cisco DCE with blades to reduce those expensive ports down to a very reasonable count and potentially eliminate the dedicated FC infrastructure.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
davidbarclay
Virtuoso

Have you seen/read this?

http://www.vmware.com/files/pdf/storage_protocol_perf.pdf

Each has its own pros and cons. Personally, I see mostly FC in the wild, but I am a big fan of NFS solutions. NFS has great advantages, but the disadvantages can deter customers.

How useful these protocols are can also depend on your storage vendor. NetApp is clearly an innovator in the NFS/VMware space.

Dave

ablej
Hot Shot

1. NetApp 3020 (will probably be implementing the 3100 series)

2. 300 GB FC disks

3. RAID-DP

4. All Windows 2003-2008. Very little SQL, maybe a couple of Exchange roles; mostly web and application servers.

5. Will be starting with 10

6. Currently 70; expecting 225 within the next year.

7. It hasn't really been measured yet (are there any recommended tools for this? A rough esxtop idea is sketched after this list).

8. Same as 7

9. I work for a university, and our network is managed by another group (so we don't have much control over the network), but we are currently using two Cisco 3550s.

10. No

11. N/A
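
For 7 and 8, the rough plan is to capture esxtop in batch mode from the service console (something like: esxtop -b -d 5 -n 120 > esxtop.csv) and summarize the disk counters afterwards. A sketch of the summarizing part; the counter-name substrings are from memory, so treat them as assumptions and check them against the CSV header:

    # Sketch: pull peak per-device command rates out of an esxtop batch-mode CSV.
    # The "Physical Disk" / "Commands/sec" substrings are assumed counter names.
    import csv

    KEEP = ("Physical Disk", "Commands/sec")

    with open("esxtop.csv") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = [i for i, name in enumerate(header) if all(k in name for k in KEEP)]
        peaks = dict((header[i], 0.0) for i in cols)
        for row in reader:
            for i in cols:
                try:
                    peaks[header[i]] = max(peaks[header[i]], float(row[i]))
                except (ValueError, IndexError):
                    pass

    for name, peak in sorted(peaks.items()):
        print("%-70s peak %8.1f cmds/sec" % (name, peak))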

If you find this information useful, please award points for "correct" or "helpful"

David Strebel www.david-strebel.com
williambishop
Expert

The biggest issue is going to be that you don't want to run your storage over shared switches. Horror stories with any IP-related storage almost invariably begin with "Our switch just seemed to start crawling, but they insisted it wasn't their VoIP or the 200 users on the switch at the time; it must be our storage array."

By the time you buy an independent network setup and storage arrays, and factor in the reduction in performance (the 7-10% turns out to be more when you read closer: add the 7-10% throughput penalty to the CPU penalty, which nets you fewer guests per host, plus several other negatives, and you'll find that FC and IP aren't that far apart), you realize it comes down to a small amount of cash. In some shops you have to insist on IP; FC is not for everyone, as it can be complex. The good thing, though, is that once it's set up, it's pretty much rock solid for years. That said, NetApp has a reputation for decent performance (even if their usable disk capacity ends up at only around 22% of raw, IIRC).

--"Non Temetis Messor."
ablej
Hot Shot

We already have two dedicated switches for our storage network, so there is no cost there. Also, our VMs use very little CPU, so the overhead does not worry us. To us, though, FC just seems like the safe way to go. NFS does have a lot of great features, which sparks our interest.

If you find this information useful, please award points for "correct" or "helpful"

David Strebel www.david-strebel.com
admin
Immortal

A properly designed network can alleviate many of the concerns about Ethernet storage protocols, and NFS is a perfect fit for VMware environments: low-latency, small-block random reads. Some things to consider on the NFS side, which I think were not mentioned, are the lack of a shared queue and the absence of SCSI-2 reservations on the entire datastore, as you have with VMFS over iSCSI or FC.

Operationally, there is no comparison: the bar swings clearly in favor of NFS.

  • Dynamically resize datastores on the fly. Just refresh the datastore in VC and the increased storage is automatically available to you (a rough sketch is at the end of this post). Experienced ESX admins will really appreciate this.

  • Volume- or datastore-level backups and single-VM or single-file restores. No need for volume resignaturing, presenting entire restored datastores back to ESX, and copying data back to its original location.

  • You can mount NFS datastores read-only, so DR is a little easier to deal with. Replicated NFS datastores can remain mounted in ESX.

  • Now add in the features of NetApp (deduplication on primary storage) and being able to actually USE the saved space, and there is no comparison.

Read NetApp TR-3428 on proper setup, as this is the key.
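
To make the resize bullet concrete (the sketch referenced above): the growth happens entirely on the filer, and ESX only needs a datastore refresh in VC. A minimal sketch using 7-mode "vol size" syntax; the filer name, volume name, and increment are placeholders:

    # Sketch: grow the FlexVol backing an NFS datastore, then refresh in VC.
    # The filer hostname, volume name, and size increment are assumptions.
    import subprocess

    FILER  = "filer1"            # assumed filer hostname
    VOLUME = "vm_datastore1"     # assumed FlexVol backing the NFS datastore
    GROW   = "+100g"             # grow by 100 GB (make sure the aggregate has room)

    subprocess.check_call(["ssh", "root@" + FILER, "vol size %s %s" % (VOLUME, GROW)])

    # No ESX-side commands: refresh the datastore in VC (or rescan storage)
    # and the additional space is available immediately.
    print("Volume grown; refresh the datastore in VirtualCenter.")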
