I need some advice on best practices for setting up datastores, in particular NFS datastores. What I'm trying to figure out is whether it's best to set up datastores for different types of data or one universal datastore type. For example, we have VMs with virtual disks for the OS and additional virtual disks for their data. Would it be best to set up one datastore for OS virtual disks and separate datastores for the data virtual disks, or would it be better just to create many datastores that each hold a mix of OS and data virtual disks?
Also, what's the best practice for the maximum size of a datastore, and how many virtual machine disks should be on one datastore?
Are there any best practices for setting up datastores?
NFS datastores do not suffer from many of the caveats that block storage protocols do, mainly because the storage array has visibility into the specific VM a read or write is being issued against.
Assuming the storage array uses a single volume to host the NFS exports, I typically configured NFS datastores for specific types of workloads with replication, backup, and deduplication in mind, but not performance. Since the same back-end spindles were used, any number of NFS exports on the same volume would share the same performance pool anyway.
I still like splitting the OS partition out from other application / backup partitions within the guest, mainly by way of unique VMDKs. You might split out the NFS datastores this way, too, but again, this would mainly be for backup and replication: for example, creating a policy that backs up the OS export on a different schedule than an application export.
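As a sketch of that layout, assuming a vSphere environment, each export can be mounted as its own datastore with `esxcli` from the ESXi shell. The NFS server address, export paths, and datastore names below are hypothetical examples; substitute your own:

```shell
# Mount each NFS export as a separate datastore, so backup/replication
# policies can target OS and data workloads independently.
# nas01.example.com and the /vol/... paths are made-up placeholders.
esxcli storage nfs add --host=nas01.example.com --share=/vol/vm_os   --volume-name=DS-OS
esxcli storage nfs add --host=nas01.example.com --share=/vol/vm_data --volume-name=DS-Data

# Confirm both datastores are mounted:
esxcli storage nfs list
```

Since both exports live on the same back-end volume in this scenario, the split buys you policy granularity (snapshot/replication schedules per export), not extra performance.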
Also, refer to your storage vendor's documentation. It typically contains their recommendations on storage volume / export layout, along with maximum volume sizes.