VMware Cloud Community
samui
Contributor

How many NFS volumes can I present to the host?

I just got a new SAN for my virtual environment. I will be using NFS connections between the ESX servers and the SAN.

I've been asked to find out how many NFS volumes can be presented to the ESX server. I've found conflicting information and wanted to get a second opinion. Our SAN vendor's "Best Practices" guide says that I should not present more than 32 NFS volumes to the ESX host. However, according to some online info I found, I can present 256 LUNs/volumes to the ESX host.

The reason this is a question is because we are looking at creating individual NFS volumes for each VM (approx 30 VMs per ESX host) on the SAN. We would like to do this to take advantage of the SAN "snap/capture" software that it comes with. However, when you incorporate HA/DRS and VMotion into the ESX clusters, the number of volumes presented to each ESX server doubles.

So how many NFS volumes can I present to the ESX Host?


Accepted Solutions
wpatton
Expert

32 NFS mounts is currently the maximum for ESX because of the overhead each mount requires. I am certain they are working on ways to increase that ceiling, but right now it is fixed at 32.

Make sure you change two settings to present 32 mounts:

Net.TcpipHeapSize to 30 and NFS.MaxVolumes to 32

After a reboot of the host, you will be able to have 32 mounts and reasonable overhead.

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

*Disclaimer: VMware Employee*

24 Replies
weinstein5
Immortal

Welcome to the forums - By default the maximum number of NFS mounts is 8. I am not sure where you change the parameter to get more than 8. Also, I do not think the number of NFS volumes would double, since all ESX hosts would see the same NFS datastores.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
samui
Contributor

After a little bit of poking around in the ESX server, I found this advanced settings dialog box (attached picture). I can see where they are getting the maximum of 32 now. Do you know if it can go higher?
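For anyone following along, that same value can also be read from the service console; a minimal sketch, assuming ESX 3.x and the esxcfg-advcfg utility (the path may differ on other releases):

    # Show the current NFS datastore limit (defaults to 8, with a hard ceiling of 32)
    esxcfg-advcfg -g /NFS/MaxVolumes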

I anticipate that I will be going beyond 32 volumes presented to the ESX Server cluster. That is, if we wind up going with one NFS volume per VM.

For example, if one ESX server has 28 VMs, I would have 28 NFS volumes/datastores. On the second ESX server clustered with it, I would have to add those same 28 NFS volumes so that I can take advantage of VMotion, HA/DRS, etc. And if the second ESX server has 18 NFS volumes of its own (again, one volume per VM), then it would have 46 NFS volumes in total, and those 18 volumes would also have to be added to the first server, giving it 46 as well.
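Whatever count we end up at, every host in the cluster has to mount the same exports under the same datastore names for VMotion/HA/DRS to work. A rough sketch of what that looks like from the service console (the filer name, export path, and datastore label below are made up):

    # Run on each ESX host in the cluster so they all see the same datastore
    esxcfg-nas -a -o netapp01 -s /vol/vm_web01 vm_web01

    # List the NFS datastores currently mounted on this host
    esxcfg-nas -l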

weinstein5
Immortal

In my experience, if a limit is identified on the advanced settings page it is a hard limit, so 32 is the max. I now understand how you would go past the 32 limit. And the sole reason you are doing this is to take advantage of your NAS snapshot functionality - who is the manufacturer of your NAS device?

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
wpatton
Expert

32 NFS mounts is currently the maximum for ESX because of the overhead each mount requires. I am certain they are working on ways to increase that ceiling, but right now it is fixed at 32.

Make sure you change two settings to present 32 mounts:

Net.TcpipHeapSize to 30 and NFS.MaxVolumes to 32

After a reboot of the host, you will be able to have 32 mounts and reasonable overhead.
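If you prefer the service console to the VI Client's Advanced Settings dialog, the same change looks roughly like this (a sketch assuming ESX 3.x and the esxcfg-advcfg utility; a reboot is still required afterwards):

    # Raise the TCP/IP heap so the additional NFS mounts have enough buffer space
    esxcfg-advcfg -s 30 /Net/TcpipHeapSize

    # Raise the NFS datastore limit from the default of 8 to the maximum of 32
    esxcfg-advcfg -s 32 /NFS/MaxVolumes

    # Verify both values, then reboot the host
    esxcfg-advcfg -g /Net/TcpipHeapSize
    esxcfg-advcfg -g /NFS/MaxVolumes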

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

*Disclaimer: VMware Employee*
markdjones82
Expert

I am also wondering what exactly the reasoning is for making an NFS share for each VM. I would think this is not a very efficient way to set things up. I could be wrong, but it might slow down your VMotion and HA as well.

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
samui
Contributor

It's a NetApp SAN, and the "snapping" product is their FlexClone software.

samui
Contributor

The sole reason we would be breaking the volumes up per VM is to use FlexClone. It was thought that we could sell FlexClone as a backup service to the departments that use our VMware environment. FlexClone takes a snapshot at the volume level, and it's easier to schedule and restore VMs when the volume holds only one VM. If a volume holds multiple VMs, you wind up combing through the snapshot looking for the individual VM's files, much like trying to find a single file on a tape backup.

As far as performance goes, I don't know how this affects it.

wpatton
Expert

You may want to look over some of the older conversations; a good one I saw in the past was with RParker, from these VMTN forums, on another blog site:

Basically, just as with FC or iSCSI, you should not be presenting one LUN/volume per VM. I have never done it that way, and haven't seen others do it. We usually try to keep it to 20-30 VMs per datastore; others have pushed much higher, but 20-30 has always performed well for us.

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

*Disclaimer: VMware Employee*

RParker
Immortal

>>> It's easier to schedule and restore VMs when the volume holds only one VM. If a volume holds multiple VMs, you wind up combing through the snapshot looking for the individual VM's files.

That's not totally true. You can FlexClone an ENTIRE volume and STILL restore only 1 VM out of it. It handles the file deltas on the back end; you don't need 1 VM per volume. That's a waste. You can snap an entire volume and clone 1 or ALL of the VMs, or any number in between - they are just files sitting on the volume.

They also have a VC add-in that makes this feature seamless, so you don't even have to know or see the volume at all. You can click on a VM, "snapclone" that VM (or whatever the add-in calls it), and clone a VM in about 10-15 seconds.

NetApp demonstrated this, and they cloned 250 VMs in under 2 minutes.

samui
Contributor

I think you missed my point. Upper management wants to sell FlexClone as a backup service for individual VMs, which means I would be backing up most of the VMs (probably not all) using FlexClone. I wouldn't be using FlexClone to increase my number of usable VMs.

So, if I backed up an entire volume of 40+ VMs and needed to restore only one VM, I would have to comb through the FlexClone, just like a tape backup, to find the one VMDK to restore. To alleviate that headache, it was suggested that I find out whether we can do one volume per VM. That's why I was tasked with looking at creating individual volumes per VM.

I agree it's a complete waste of space and management time when it comes to handling that many volumes, but I had to ask.

As for the NetApp demonstration of the 250 VMs: I saw it on YouTube, and my only question is how they handle the duplicate SIDs that are created.

T3Steve
Enthusiast

Don't want to be a stick in the mud here, but let's get some things straight.

A "SAN" is one or more servers connected to storage via a fibre channel switch.

An EMC Clarrion, HP EVA6000, IBM Shark are not a SAN, those are just storage enclosures.

A server with an HBA, a fiber switch and a storage enclosure comprise a SAN.

Communication on a SAN consists of an initiator and a target speaking SCSI commands over Fibre Channel. Most Fibre Channel implementations use optical fiber cabling; however, the Fibre Channel spec does include copper cabling (I have never seen it used, though).

An NFS (Network File System) share is not a SAN. NFS connectivity is to a NAS. Network-attached storage uses TCP/IP and does not use SCSI commands the way Fibre Channel does.

To confuse this even further, an NFS NAS share can reside on a SAN. This can happen when a storage vendor's NFS appliance is added in front of SAN storage (EMC Celerra, for example). The appliance does the TCP/IP-to-Fibre-Channel translation and is transparent to the host.

With NFS there's an 8-share default limit on VMware.

With Fibre Channel there's a 1024-path limit across all LUNs on a single host. In my environment we're getting close to that limit with redundant paths and lots of LUNs presented to the hosts.

Which NetApp SAN do you have that allows you to do NFS?

VCP3|VCP4|VSP|VTSP
mcowger
Immortal

The vast majority of the NetApp product line supports NFS - it was their bread and butter long before they supported iSCSI or FC.

--Matt VCDX #52 blog.cowger.us

weinstein5
Immortal

You are correct that a SAN is more than just the drive enclosure. But to expand on your statement that "a SAN is one or more servers connected to storage via a fibre channel switch": it is a network, either FC or iSCSI, that connects one or more servers to a pool of storage. This network carries SCSI commands to give the servers block-level access to the storage - in an FC frame for FC and in an IP packet for iSCSI. Also, I have not heard of connecting to an NFS device via FC unless the FC network is configured for TCP/IP - and for ESX, NAS/NFS is only supported as NFS version 3 carried over TCP/IP.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
T3Steve
Enthusiast

>>> The vast majority of the NetApp product line supports NFS - it was their bread and butter long before they supported iSCSI or FC.

I know that, but none of their "FC-SAN" models do NFS. Only the NAS products do NFS. NAS and SAN - there is a difference.

The OP mentioned he had a SAN and was doing NFS. The point I was making is that it's possible to present storage via NFS from an FC-SAN-based storage enclosure, but it's VERY unlikely, and the OP had the NAS/SAN terminology mixed up.

VCP3|VCP4|VSP|VTSP
mcowger
Immortal

Ermmm - no.

Take the FAS200 series. Those devices will do NFS, CIFS, HTTP, FCP and iSCSI all from the same box and same set of disks. I've done it personally.

--Matt VCDX #52 blog.cowger.us

T3Steve
Enthusiast

And if you use Ethernet to get to that storage you still don't have a SAN, you have NAS. But you do have a storage device that can join the SAN fabric... no FLOGI, no SAN.

VCP3|VCP4|VSP|VTSP
mcowger
Immortal

Sure you do - you don't honestly believe that you MUST have FCP to have a SAN, do you? How about iSCSI? Does that not count? It's a Storage Area Network by any reasonable definition of the term. Does FICON not count? It doesn't do FLOGIs either.

At the end of the day, you insisted on being pedantic and asserted that "none of their FC-SAN models do NFS," which is what I was correcting.

--Matt VCDX #52 blog.cowger.us

weinstein5
Immortal

SAN and NAS both provide network access to storage - it is not whether the transport is Ethernet that determines SAN versus NAS; the difference is how the data on the storage is accessed:

  • SAN - block-level access to the data using SCSI commands. The SAN fabric carries SCSI from the server to the SAN storage device: on an FC network the SCSI command travels inside an FC frame, and with iSCSI the IP packet contains the SCSI command.

  • NAS - file-level access to the data. The network carries file-system commands, either NFS or CIFS - typically a TCP/IP network whose packets carry a CIFS or NFS command to the NAS device. In regard to ESX, only NFS version 3 over TCP is supported.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
RussellCorey
Hot Shot

Samui,

Some considerations before you sell FlexClone as the next big thing:

1. Multiple VMs per volume is not a bad thing. You should at the very least departmentalize them and FlexClone entire departments, or whatever logical division you decide to use.

2. With one volume per VM and the 32-volume limit, you are effectively limiting yourself to a maximum of 32 virtual machines per cluster/datacenter. This will cause ESX host sprawl as your capacity needs grow.

3. Make sure you either use SnapManager for VMware or remember to take VMware snapshots (quiesced through VMware Tools), or your FlexClones will be worthless as backups.

4. "Combing" through a flexclone is trivial if your operations team properly names virtual machines. For you to restore an individual VM is as simple as mounting the clone and finding the one directory. With 1 VM per volume you're just sorting through volumes instead of virtual machine directory names so I'm not entirely sure where you're picking up any sort of benefit.

5. Selling this as a service for individual VMs when you can cover the entire environment at effectively no additional cost seems a disservice. FlexClones don't work by literally copying the data; they reference a given point in time, so you get no benefit from FlexCloning a volume with 1 VM in it over a volume with 1000 VMs in it.

6. You can take certain virtual machines that have low I/O requirements and run A-SIS (NetApp deduplication) on them to reclaim some storage. This is done on a per-volume basis, so at 1 VM per volume you won't see much benefit.

To deal with SIDs, you FlexClone a volume containing sysprepped virtual machines.
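To make point 4 concrete, here is a rough sketch of restoring a single VM from a volume-level FlexClone. The commands follow Data ONTAP 7-mode and ESX 3.x service-console conventions, but the volume, snapshot, host, and VM names are made up and the exact syntax may vary by release:

    # --- On the NetApp filer: clone the volume from an existing Snapshot copy ---
    vol clone create vm_restore_clone -s none -b vm_datastore01 nightly.0

    # Export the clone to the ESX host that will do the restore
    exportfs -io rw=esx01,root=esx01 /vol/vm_restore_clone

    # --- On the ESX host: mount the clone as a temporary NFS datastore ---
    esxcfg-nas -a -o netapp01 -s /vol/vm_restore_clone restore_tmp

    # Copy just the one VM's directory back to the production datastore,
    # then unmount and destroy the clone when finished
    cp -r /vmfs/volumes/restore_tmp/web01 /vmfs/volumes/vm_datastore01/
    esxcfg-nas -d restore_tmp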
