vSphere Storage Appliance

  • 1.  Datastore Sizing on NFS

    Posted Sep 19, 2017 02:37 PM

    I'm using VMware on NetApp All Flash FAS over NFS, where there is an option to autogrow volumes instead of creating a new volume and VMware datastore when I run out of space.

    Traditionally I've been using a standard 4TB datastore size.

    Is it better to keep a standard datastore size of 4TB and create new volumes when I run out of space or should I use Autogrow and let volumes become different sizes as autogrow grows them?
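    For reference, on ONTAP autogrow is configured per volume with the `volume autosize` command, which lets you cap how far a volume can grow. A minimal sketch (the SVM name, volume name, sizes, and threshold below are illustrative, not taken from this thread):

    ```
    # Enable autogrow on the volume backing the datastore, capped at 8TB.
    # -mode grow only grows; grow_shrink would also shrink it back down.
    volume autosize -vserver svm1 -volume vmware_ds01 -mode grow \
        -maximum-size 8TB -grow-threshold-percent 85

    # Verify the current autosize settings for the volume
    volume autosize -vserver svm1 -volume vmware_ds01
    ```

    Setting a sensible -maximum-size is what keeps autogrow from consuming the aggregate uncontrolled.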



  • 2.  RE: Datastore Sizing on NFS

    Posted Sep 22, 2017 03:33 PM

    Both configurations and designs are supported; it depends on your requirements. If you have dynamic content running in your environment, it's good to have an autogrow configuration. For static content you can create different volumes manually.



  • 3.  RE: Datastore Sizing on NFS
    Best Answer

    Posted Oct 02, 2017 10:23 AM

    Another consideration here is whether you want more control or less maintenance overhead. There's a possibility of running out of space on the array if volumes grow uncontrolled. So unless it's a highly dynamic environment, I typically take the risk-averse approach, which is the manual expansion.



  • 4.  RE: Datastore Sizing on NFS

    Posted Oct 02, 2017 02:21 PM

    Nick_Andreev That's well said - thanks. So the risk management here is really balancing the risk of running out of space at the array level vs. running out of space at the volume level. Running out of space at the volume level is still a risk, of course - that datastore could go offline, VMs could be corrupted, etc.

    However, running out of space at the array level is a risk with much greater impact and severity if it eventuates.

    So what you are saying is accept the risk of an individual volume running out of space and don't accept the risk of the whole array running out of space.

    In this environment capacity planning related to tracking growth of volumes is rather immature, so it would be just a matter of ramping up our trending/forecasting/monitoring of volume/datastore growth as much as possible.
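    As a starting point for that trending/forecasting, a simple linear projection over recent usage samples already gives a rough "days until full" figure per datastore. A minimal sketch (the sample numbers and 4TB capacity are hypothetical; real inputs would come from your monitoring system):

    ```python
    # Minimal capacity-forecast sketch: fit a least-squares linear trend to
    # daily used-GB samples for one datastore and project days until full.

    def days_until_full(used_gb_samples, capacity_gb):
        """Return projected days from the last sample until capacity is hit,
        or None if usage is flat or shrinking."""
        n = len(used_gb_samples)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(used_gb_samples) / n
        var_x = sum((x - mean_x) ** 2 for x in xs)
        cov_xy = sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(xs, used_gb_samples))
        slope = cov_xy / var_x           # GB of growth per day
        if slope <= 0:
            return None                  # no growth trend to project
        intercept = mean_y - slope * mean_x
        full_day = (capacity_gb - intercept) / slope
        return full_day - (n - 1)        # days remaining after last sample

    # Hypothetical week of samples for a 4TB (4096 GB) datastore:
    samples = [3000, 3025, 3050, 3075, 3100, 3125, 3150]
    print(round(days_until_full(samples, 4096), 1))  # → 37.8
    ```

    Running this per volume and alerting when the projection drops below, say, 30 days is a cheap first pass at the maturity gap described above.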



  • 5.  RE: Datastore Sizing on NFS

    Posted Oct 11, 2017 08:25 AM

    Exactly right. Datastores in VMware don't go offline and VMs don't get corrupted, though. The normal behaviour is to pause a VM until space is available. That will cause an outage for the application/service running on the VM, but the failure domain in this case is rather small.



  • 6.  RE: Datastore Sizing on NFS

    Posted Oct 23, 2017 09:44 PM

    It's been a while, but in environments with Fibre Channel LUNs and/or NFS volumes I do remember some cases where the datastore filled up and the LUN/volume went offline as a result. I'm not sure whether that was the fault of ESXi or of the array itself, and I can't give a 100% accurate description of the exact conditions, but I think I've seen it.



  • 7.  RE: Datastore Sizing on NFS

    Posted Oct 24, 2017 05:56 AM

    You can get that on a storage array. For example, if you have a volume on a NetApp storage array containing a thin LUN that is bigger than the volume itself, then when the LUN fills up the whole volume and there's no space left to write data, the LUN will go offline.

    That's not the case with datastores, though. If you have thick LUNs/volumes on the array and thin .vmdks, the behaviour when you run out of space on the datastore is different: the VM is paused instead of the datastore being taken offline.

    Hope that makes sense.