VMware Cloud Community
samui
Contributor

How many NFS volumes can I present to the host?

I just got a new SAN for my virtual environment. I will be using NFS connections between the ESX servers and the SAN.

I've been asked to find out how many NFS volumes can be presented to the ESX server. I've found conflicting information and wanted to get a second opinion. According to our SAN vendor's "Best Practices," I should not present more than 32 NFS volumes to the ESX host. However, according to some online info I found, I can present 256 LUNs/volumes to the ESX host.

The reason I ask is that we are looking at creating an individual NFS volume for each VM (approximately 30 VMs per ESX host) on the SAN. We would like to do this to take advantage of the snapshot/capture software that comes with the SAN. However, when you incorporate HA/DRS and VMotion into the ESX clusters, the number of volumes presented to each ESX server doubles.

So how many NFS volumes can I present to the ESX host?

RussellCorey
Hot Shot

"And if you use ethernet to get to that storage you still don't have a SAN, you have NAS. But you do have a storage device that can join the SAN fabric... no flogi.. no SAN.[/i]"

SNIA defines a SAN as any network on which storage resources are accessed. This would include NFS SANs, iSCSI SANs, FCP SANs, or even CIFS SANs.

http://www.snia.org/education/storage_networking_primer/san/what_san

In short, the assertion that a SAN requires fibre channel is widely regarded as wrong among consultants and engineers who specialize in SAN design/implementation.

edit:

To clarify further, NAS is a form of SAN.

edit2:

Most of NetApp's product lines support every major storage networking protocol in a given chassis. It's all in what devices you install. My lab for example uses NetApp filers to test iSCSI, FCP, and NFS specifically with VMware.

samui
Contributor

Russell,

Thank you for the information. And I would like to throw two more cents into this scenario for all the world to read:

1) VMware calls a "Network File System" (NFS) volume a NAS volume.

2) In a layman's world, a SAN is centralized storage - doesn't matter if it is FC or iSCSI.

We are using a NetApp FAS3040, and we purchased the rights to do both iSCSI and NFS. We did away with the notion of FC, as it is not a feasible option: our server room is approximately 2,000 sq ft (really friggin' big), the ESX servers are scattered throughout the room, and with fiber already running all over the place we didn't want to add even more to the picture. Based on white papers stating that NFS has better performance, we have chosen NFS volumes. My experience with VMware and SANs in the past has all revolved around FC and iSCSI; in this scenario, if there are problems with NFS, we can always fall back to iSCSI.

Our setup is quite complex and quite redundant. We have two FAS3040s (active/active) that are connected via an interconnect and will fail over to one another if there are problems. Each controller has one 2.3TB shelf. (If you would like to see a Visio diagram of the setup, I would be more than happy to email it.)

In planning the carving of the volumes, I decided to group the VMs by their location on the network (e.g., DMZ, intranet) and then split those groups further by I/O usage. So going into the carving, I am looking at eight volumes (six at 300GB, two at 500GB). Once the VMs are moved to the SAN, we will dedupe them.

What would be the advantages/disadvantages of using iSCSI over NFS?

Thanks,

Sam

samui
Contributor

Flyinverted,

We should clarify a few things:

1) NFS = Network File System, not Network File Share.

2) SANs can be connected in a number of ways; they are not limited to Fibre Channel:

  • "iFCP"[1] or "SANoIP"[2] mapping SCSI over Fibre Channel Protocol (FCP) over IP.

  • iSCSI, mapping SCSI over TCP/IP.

  • iSCSI Extensions for RDMA (iSER), mapping iSCSI over InfiniBand (IB).

  • HyperSCSI, mapping SCSI over Ethernet.

  • FICON mapping over Fibre Channel (used by mainframe computers).

  • ATA over Ethernet, mapping ATA over Ethernet.

  • Fibre Channel over Ethernet (http://open-fcoe.org/)

3) VMware calls an NFS volume a NAS volume.

I'm sorry if I confused you with how I used the terminology.

DeeJay
Enthusiast

I'm not sure if I can add to this debate, but I'll try:

In terms of terminology, NetApp provides Fibre Channel SAN (FC SAN) connectivity on some of their filers. In that case the server reaches the storage over a dedicated Fibre Channel fabric rather than a general-purpose network, so SAN is the accurate term. NetApp's bread and butter has historically been 'NAS' devices, the 'Network' part of which implies a protocol like iSCSI, NFS or CIFS is used between server and storage over an Ethernet network.

NetApp sells the concept of a single platform that does Fibre Channel, iSCSI, CIFS, NFS, and soon Fibre Channel over Ethernet. I suppose that makes it either NAS, FC SAN or both. In terms of naming the storage, they generally use the term aggregate for a set of disks, on which you create a FlexVol, also referred to as a volume.
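
To make the naming concrete, here's a minimal sketch of carving that hierarchy over SSH from an admin host (Data ONTAP 7-mode console commands; the filer name, aggregate name, volume name, sizes, and ESX host names are invented for illustration):

    import subprocess

    def ontap(cmd):
        # Run a Data ONTAP 7-mode console command on the filer via SSH.
        subprocess.check_call(["ssh", "filer1"] + cmd.split())

    ontap("aggr create aggr1 14")               # aggregate = a set of 14 disks
    ontap("vol create nfs_vm_vol1 aggr1 300g")  # FlexVol carved from the aggregate
    ontap("exportfs -io rw=esx1:esx2 /vol/nfs_vm_vol1")  # export over NFS for ESX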

We've got 11 filers (6070s and 3040s) and use SnapManager for VI, NFS, dedupe, FlexClone and SnapMirror.

To use the technology to provide online backups for VMs, you'd need to (a rough sketch in script form follows these steps):

1) Quiesce all VMs on a volume, effectively placing each VM in hot-backup mode so the VMDKs are in a consistent state. To do this you'll need VirtualCenter, VMware Tools on all guests, and a script to perform the work.

2) Snapshot the volume

3) Take all the VMs out of hot-backup mode
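
A rough sketch of steps 1-3, assuming it runs somewhere with the ESX 3.x service console's vmware-cmd on the PATH and SSH access to the filer. The .vmx paths, filer name, volume name, and snapshot names are placeholders, not a supported NetApp/VMware tool:

    import subprocess

    FILER = "filer1"            # hypothetical filer hostname
    VOLUME = "nfs_vm_vol1"      # hypothetical FlexVol backing the NFS datastore
    VMX_PATHS = [
        "/vmfs/volumes/nfs_vm_vol1/vm01/vm01.vmx",
        "/vmfs/volumes/nfs_vm_vol1/vm02/vm02.vmx",
    ]

    def run(args):
        subprocess.check_call(args)

    # 1) Quiesce: take a VMware Tools-assisted snapshot of each VM
    #    (quiesce=1, include memory=0) so the VMDKs are consistent.
    for vmx in VMX_PATHS:
        run(["vmware-cmd", vmx, "createsnapshot", "hotbackup",
             "filer snapshot in progress", "1", "0"])

    # 2) Snapshot the volume on the filer while everything is quiesced.
    run(["ssh", FILER, "snap", "create", VOLUME, "esx_hotbackup"])

    # 3) Release the VMs from hot-backup mode by removing the VMware snapshots.
    for vmx in VMX_PATHS:
        run(["vmware-cmd", vmx, "removesnapshots"])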

You can then use:

1) SnapMirror to create a mirror of the volume containing the consistent snapshot

2) FlexClone to create a space-efficient read/write clone from the existing snapshot on the volume

NetApp has a free (but unsupported) tool called VIBE that uses their technology to do the above. Failing that, SnapManager for Virtual Infrastructure is a supported tool that does a similar thing, with more bells and whistles.

In reference to the volume debate: best practice is to group VMs with similar backup requirements (retention details, etc.) onto the same volume, obviously setting a realistic maximum VM count per datastore. As has been said, you can restore individual VMs from volume snapshots even though other VMs were snapshotted on the same volume at the same time. One VM per volume seems excessive and will certainly add to the management time of the storage and VM infrastructure.

Darren

tom_millar
Contributor

samui,

First, I would recommend reading NetApp's documentation on providing storage for ESX; Google "NetApp Tech Report TR3749".

To summarise: NetApp best practice is to group similar VMs in the same volume for best results with dedupe.

For example, if you used a Windows VM template to create 100 Windows VMs, they will generally consume the space of the template plus any delta, often 10-15%. So 100 20GB Windows boot VMDKs in the same volume with dedupe enabled would use no more than about 25GB, instead of the 2TB they would use if each were in its own volume.
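
As a back-of-the-envelope check on those figures (this reads "template plus a 10-15% delta" as the physical space remaining after dedupe; the delta fraction below is an assumption, and real savings depend on how much the clones diverge over time):

    # Rough dedupe capacity estimate for 100 VMs cloned from one template.
    template_gb = 20        # size of the template's boot VMDK
    vm_count = 100
    delta_fraction = 0.15   # unique data surviving dedupe, relative to the template

    logical_gb = vm_count * template_gb               # 2000 GB (2 TB) as written
    physical_gb = template_gb * (1 + delta_fraction)  # ~23 GB, i.e. "about 25GB"
    print("logical: %d GB, physical after dedupe: ~%d GB" % (logical_gb, physical_gb))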

As others have stated, ESX 3.5 supports up to 32 NFS datastores per host (increased to 64 in ESX 4.0). NetApp recommends that up to 250 VMDKs can be allocated to the same NFS datastore.
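
Note that 32 is a ceiling, not the default: ESX 3.x ships with NFS.MaxVolumes set to 8, and NetApp's ESX guidance pairs the higher limit with larger TCP/IP heap settings (the heap values need a host reboot). A sketch of the change from the service console:

    import subprocess

    def advcfg(*args):
        # Thin wrapper around the esxcfg-advcfg utility on the service console.
        subprocess.check_call(["esxcfg-advcfg"] + list(args))

    advcfg("-g", "/NFS/MaxVolumes")           # print the current limit (default 8)
    advcfg("-s", "32", "/NFS/MaxVolumes")     # allow up to 32 NFS datastores
    advcfg("-s", "30", "/Net/TcpipHeapSize")  # heap sizing per NetApp's ESX
    advcfg("-s", "120", "/Net/TcpipHeapMax")  # NFS best-practice guidance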

For data that doesn't dedupe as well, such as application databases, use RDMs or one NFS datastore per application instance for maximum snapshot and restore flexibility. ESX supports a maximum of 256 LUNs per host for RDMs. NetApp recommends allocating up to 16 VMDKs to each VMFS datastore.

Snapshots and FlexClones are very useful for provisioning and short-term data recovery, but the data is stored on the same disks as the original VM, so for disaster recovery, snapshots should be replicated to a secondary storage system to cover for failure of the primary storage system.
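
A minimal sketch of seeding such a mirror, again as Data ONTAP 7-mode console commands driven over SSH (the filer and volume names are invented):

    import subprocess

    def ontap(filer, cmd):
        # Run a Data ONTAP 7-mode console command on the named filer via SSH.
        subprocess.check_call(["ssh", filer] + cmd.split())

    # The destination volume must be restricted before the baseline transfer.
    ontap("filer2", "vol restrict vm_vol_mirror")
    # Baseline: initialize the mirror from the primary (run on the destination).
    ontap("filer2", "snapmirror initialize -S filer1:nfs_vm_vol1 filer2:vm_vol_mirror")
    # Subsequent updates ship only changed blocks.
    ontap("filer2", "snapmirror update filer2:vm_vol_mirror")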

Regards

Tom
