VMware Cloud Community
AllBlack
Expert

VM per NFS Datastore in vSphere 5.1

Hi,

Early next year we will be upgrading to vSphere 5.1.

I would like to get an opinion on how many VMs one places per NFS datastore.
With NFS, more datastores is said to be better, but I'd like to get a feel for the number of VMs that others place on one datastore.
With iSCSI, for example, we used 10 as a guideline.

Cheers

Please consider marking my answer as "helpful" or "correct"
6 Replies
Rumple
Virtuoso

NFS uses file-level locking, so technically there is no limit on how many VMs you can put on a single NFS volume; you don't suffer from the SCSI reservation locks that you do with iSCSI.

However, there are things to consider that really depend on the type of NFS server you are running. If you do snapshots at the SAN level, consider how many VMs will be in snapshot mode at the same time if you use a tie-in with vCenter.

For instance, if you are running NetApp and using VSC to do backups, it takes snapshots at the storage level (i.e. per NFS volume) and will run through, adding and then deleting a snapshot on every VM on that particular volume. Having large numbers of VMs can therefore impact storage space and how long the backup cycle takes when you have a hundred or so VMs on a single NFS mount (not that "I" would ever make that mistake... twice) :smileyshocked:
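To make that concrete, here is a rough back-of-envelope sketch of how VM count stretches a per-volume backup cycle. All of the per-VM timings are hypothetical placeholders, not VSC figures; measure your own environment before relying on anything like this.

```python
# Back-of-envelope estimate of a VSC-style backup cycle on one NFS volume:
# create a VMware snapshot per VM, take one storage-level snapshot of the
# volume, then delete each VMware snapshot. All timings are made up.

def backup_window_minutes(vm_count, snap_create_min=0.25,
                          snap_delete_min=0.25, storage_snapshot_min=2.0):
    """Total cycle time grows linearly with the VM count on the volume."""
    per_vm = snap_create_min + snap_delete_min
    return vm_count * per_vm + storage_snapshot_min

for vms in (10, 50, 100):
    print(f"{vms:>3} VMs -> ~{backup_window_minutes(vms):.0f} min")
```

With these placeholder numbers the window grows linearly: roughly 7 minutes at 10 VMs but roughly 52 at 100, which is exactly the "backup cycle drags on large mounts" effect described above.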

jrmunday
Commander

I have a relatively high consolidation ratio with regard to the number of VMs per NFS datastore. On paper the ratios look like they should be a problem, but in reality they work very well.

We have 40 TB of NFS storage located on NetApp FAS3240 hardware in a MetroCluster configuration, split evenly between datacenters that are ~40 miles apart, with synchronous replication.

[Screenshots: NFS-DSC1.png (Site A), NFS-DSC2.png (Site B)]
** I relocated over 50 VMs last night (~6 TB) for some planned weekend work, so these figures are actually higher.

Latency is generally under 10 ms, and throughput is sufficient, with svMotion operations going up to 450Mbs.
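As a sanity check on those numbers, the overnight relocation of ~6 TB can be estimated directly. The "450Mbs" in the post is ambiguous, so this sketch works it both ways:

```python
# Rough svMotion relocation-time estimate for a given data size and
# throughput. The "450Mbs" quoted above is ambiguous, so we try both
# interpretations (megabytes vs. megabits per second).

def transfer_hours(data_tb, throughput_mb_per_s):
    """Hours to move data_tb terabytes at throughput_mb_per_s MB/s."""
    seconds = data_tb * 1_000_000 / throughput_mb_per_s  # 1 TB ~ 10^6 MB
    return seconds / 3600

print(f"{transfer_hours(6, 450):.1f} h if 450 MB/s")       # ~3.7 h
print(f"{transfer_hours(6, 450 / 8):.1f} h if 450 Mbit/s") # ~29.6 h
```

At megabytes per second the ~6 TB move fits comfortably in a night; at megabits it would not, which suggests the quoted figure was MB/s.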

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
mcowger
Immortal

This is the way to go... only YOU can determine what's right (with input from your vendor).

There is no hard number, and the number is primarily based on:

1) What can your system support from an IO perspective?

2) How comfortable are you with the size of the failure domain?

3) Anything else that might limit you (replication options, tiering options, etc.).
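Those first two factors can be folded into a trivial sizing rule of thumb: cap the VM count by both the datastore's IO budget and the failure domain you can tolerate. Everything below is a hypothetical illustration, not a vendor formula.

```python
# Cap VMs per datastore by IO headroom AND acceptable failure-domain size.
# All figures are hypothetical placeholders.

def max_vms_per_datastore(datastore_iops_budget, avg_vm_iops,
                          failure_domain_cap):
    """Take the stricter of the IO limit and the failure-domain limit."""
    io_limit = datastore_iops_budget // avg_vm_iops
    return min(io_limit, failure_domain_cap)

# A volume good for ~5000 IOPS, VMs averaging 100 IOPS each, and a
# tolerance for losing at most 40 VMs if the mount disappears:
print(max_vms_per_datastore(5000, 100, 40))  # IO allows 50; failure domain caps it at 40
```

Whichever constraint bites first sets the ceiling; in this example the failure domain, not IO, is the limiting factor.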

--Matt VCDX #52 blog.cowger.us
jrmunday
Commander

Yep, and also take the features that your storage array supports into the design planning - for example dedupe, snapvol, etc. It's not obvious from the images I posted, but I actually have ~46 TB provisioned (excluding NetApp snapshots) across the 40 TB of NFS storage, and still have over 20 TB free.
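Those capacity figures are easy to sanity-check as a thin-provisioning overcommit ratio (the numbers below are taken straight from the post above):

```python
# Sanity check on the capacity figures quoted above.
provisioned_tb = 46   # logical capacity provisioned (excl. NetApp snapshots)
physical_tb = 40      # raw NFS capacity
free_tb = 20          # physical space still free

used_tb = physical_tb - free_tb
overcommit = provisioned_tb / physical_tb
print(f"overcommit: {overcommit:.2f}:1, physical used: {used_tb} TB")
# -> overcommit: 1.15:1, physical used: 20 TB
```

A 1.15:1 overcommit with half the physical space still free shows how dedupe and thin provisioning let the logical total exceed the raw capacity safely.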

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
Rumple
Virtuoso

Overall, with 210 VMs and 3 NFS mounts, we've had no real performance problems at all, except that it takes about 45 minutes for the entire backup process to run per NFS mount when integrated with VSC.

We typically have 75 VMs all sitting with a snapshot at the same time for about 30 minutes (start to finish) while NetApp does its thing...

We had one instance where we ran a mount out of space because the volume filled up due to some really heavy transaction processing that chewed through space like crazy and almost brought down the environment, but the NetApp admin was able to grow the volume in time...

With a smaller number of VMs per LUN, the impact if it had stunned the VMs would have been much smaller... but that's the balancing act...

AllBlack
Expert

Thanks for the answers.

Pretty much what I expected. I will have a deeper look from the storage side of things and draw some conclusions from there.

Cheers

Please consider marking my answer as "helpful" or "correct"