The storage team created one RAID group of 4TB. Now, if I want to map this storage to one of the ESXi hosts, what should I do:
map two LUNs of 2TB each and create two datastores, or
map one LUN of 4TB and create one datastore?
Need your assistance on the same.
It would be better to go with two LUNs of 2TB each. That said, it always depends: a single big LUN and multiple smaller LUNs both have their positives and negatives.
A single big LUN would be easier to manage, but it will host a larger number of VMs, so there would be more locking contention. The usual suggestion is somewhere around 15 to 20 VMs per LUN (server grade), though that varies with VM size and workload. Also, with a single big LUN, your IOPS are limited to the backing disks behind that one LUN.
More, smaller LUNs, on the other hand, are more complex to manage, but they result in less locking, and IOPS are distributed across the LUNs.
That said, a 2TB LUN with 15 to 20 VMs per datastore seems to be the sweet spot for most cases.
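The 15-20 VMs per LUN rule of thumb above is just arithmetic, and can be sketched quickly. This is a back-of-the-envelope helper, not anything official; the VM count of 150 is an illustrative assumption.

```python
import math

def datastores_needed(vm_count, vms_per_datastore=20):
    """Estimate how many datastores a fleet needs at a given
    consolidation ratio (e.g. the 15-20 VMs/LUN rule of thumb)."""
    return math.ceil(vm_count / vms_per_datastore)

# Hypothetical fleet of 150 VMs, sized at both ends of the range:
print(datastores_needed(150, 15))  # conservative end -> 10 datastores
print(datastores_needed(150, 20))  # aggressive end   -> 8 datastores
```

Running the numbers at both ends of the range gives you a bracket to plan LUN counts around before you involve the storage team.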
I would like to chip in on this.
It is mentioned that more, smaller LUNs have the benefit of less wasted space.
But my experience is different (or I am just confused, hence my post here).
We have a datastore cluster with around 15 datastores, each 2.1TB.
The total free space on the datastore cluster is around 4TB.
I want to deploy a VM with a 2TB disk, but I cannot, because the biggest chunk of free space on any single datastore is around 400GB, and I do not want to spread the VMDKs over different datastores.
So I have 4TB of free space that I cannot effectively use.
Moving VMs around to clear out a datastore is not possible either; no VM fits anywhere else.
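The scenario above can be made concrete with a small sketch. The per-datastore free-space figures below are made up for illustration, chosen only to be consistent with the numbers stated: roughly 4TB free in total across 15 datastores, with no single datastore holding more than about 400GB.

```python
# Hypothetical free-space map (GB) for 15 datastores; the split is
# an assumption, the totals mirror the scenario described above.
free_gb = [400, 350, 300, 300, 280, 280, 270, 260, 250, 250,
           240, 230, 230, 220, 210]

vmdk_gb = 2048  # the 2 TB disk we want to place

total_free = sum(free_gb)        # plenty in aggregate (~4 TB)
largest_chunk = max(free_gb)     # but no datastore has more than 400 GB

# A VMDK kept on one datastore needs a single chunk of free space
# big enough for it, so total free space alone is not sufficient:
can_place = any(chunk >= vmdk_gb for chunk in free_gb)

print(f"total free: {total_free} GB, largest chunk: {largest_chunk} GB")
print("fits on one datastore:", can_place)  # False: space is fragmented
```

This is exactly the fragmentation problem: the aggregate free space looks healthy, but no single datastore can take the disk without spreading VMDKs around.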
My only way to solve this (and to improve things for the future) is to create a much larger datastore and add it to the datastore cluster.
The idea is to completely get rid of the smaller datastores and replace them with just a few big ones.
This came to mind after seeing an exam question, and I still can't get my head around it.
Does this make sense?
If I were to redesign the datastore clusters, I would prefer 3x5TB over 5x3TB.
You're always going to have "wasted" space no matter what size you standardize upon. Always. But to determine what general sizes work for you (and there may not be just one), you need to weigh things like how many VMs you run, how large their disks are, and how much I/O they generate.
As with all things, there is no "one size fits all" approach here. There are multiple factors to consider. What works best for me in my environment may not work well for you in yours.