You can go bigger. With that said, unless you have a VAAI-capable array, I would tend to stay at the 400 - 600GB LUN size. Bigger LUNs usually mean more VMs, and therefore more SCSI reservations.
I'm not very familiar with SAN architectures and how one would go about upgrading them, but I can tell you that vSphere 5 supports volumes of up to 64TB thanks to the new VMFS5 filesystem. If you're also using RDMs, it can handle up to 2TB RDMs in virtual compatibility mode, and up to 64TB in physical compatibility mode.
Actually, you can even upgrade VMFS3 to VMFS5 online.
We’re upgrading our MSA 2012i to G3 controllers – my understanding is that those now have VAAI support but not VASA support.
We have vSphere Enterprise fortunately.
Initial review of the KB and other material I’ve seen suggests it’s better to recreate all the datastores as brand-new VMFS-5 datastores rather than upgrading… having to keep 20-25% free slack space per LUN seems wasteful when I could reduce the number of LUNs to 2 or 3.
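To put rough numbers on the slack-space point: the percentage reserved is the same either way, but free space cannot be pooled across VMFS datastores, so with many small LUNs the headroom is fragmented into small, stranded chunks. A quick sketch (the 25% reserve, LUN counts, and ~2 TB total are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope slack-space comparison (hypothetical sizes).
# Assumption: a flat 25% free-space reserve is kept on every datastore.

RESERVE = 0.25  # fraction of each LUN kept free

def per_lun_headroom(total_gb: float, lun_count: int) -> float:
    """Free space reserved on each LUN, which caps the largest new VMDK
    you can place without relocating anything, since free space cannot
    be pooled across separate VMFS datastores."""
    lun_size = total_gb / lun_count
    return lun_size * RESERVE

total = 2000.0  # roughly the 2 TB of storage mentioned in this thread

# Many small LUNs: the same 25% total reserve, fragmented eight ways.
print(per_lun_headroom(total, 8))   # 62.5 GB free per 250 GB LUN
# A few big LUNs: identical total reserve, usable in much larger chunks.
print(per_lun_headroom(total, 2))   # 250.0 GB free per 1 TB LUN
```

Either layout reserves 500 GB in total, but with eight LUNs the largest VMDK you can place without a Storage vMotion shuffle is only ~62.5 GB.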
We presently have <50 VMs, but I must watch storage space carefully – the new thin-provisioning storage reclamation features would help us a LOT. Hopefully those will be working better by next year, when we can upgrade to vSphere 5…
Thank you, Tom
Yes, although you can upgrade an online VMFS-3 datastore, and I have done it in our lab environment, I would present new LUNs if you can (I would hate to corrupt a production LUN if the "unprecedented" happened). This way you can also use the unified block size. And just enabling VAAI will at least let you take some storage-related tasks off the hypervisor and offload them to the storage device, where they belong.
As for LUN sizes, it will depend. We were thinking about 3TB once we roll to vSphere 5.
I'd have to agree with Troy. The problem previously was not that you couldn't do larger LUNs; it was a balancing act between the IO on the LUN versus its size and the number of VMs contained within it.
That being said, I used to use extents and combine several smaller LUNs to form a larger VMFS volume, to get the best of both worlds, but that comes with its own set of risks.
As of 4.1 though, I now use 1 TB LUNs, but have way too many datastores to manage, as opposed to the fewer larger volumes I had to keep track of previously.
Our total storage is about 2 TB, so I am thinking the same as you, like so:
1) learn more about the controller upgrade etc., hopefully this plus redoing the LUNs can all be done in one day
2) before #1 above, move all the VMs to either DAS (reconfigured for vmfs-5 online from vmfs-3) or NFS storage for a while and not let people log in etc. for one day
3) do #1 above, make the vdisk all 1 LUN, make it a vmfs-5 LUN, move all the VMs back onto it…then redo the local storage…
Doesn’t seem over-complicated yet, really depends on what HP support tells me about #1 above etc.
For anything more storage-wise, we'd have to get an MS70 or whatever HP calls it for daisy-chaining storage, or even a whole different SAN… time will tell. I work for a non-profit and must be careful money-wise, even with HP GEM and other similar arrangements.
Thank you, Tom
I would strongly recommend against making 1 big LUN. If you do, you end up with only 1 queue to your storage and all your VMs competing for it, leading to reduced performance and higher latency.
Instead, keep 3-4 LUNs.
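To make the queue argument concrete, here is a rough sketch (the per-LUN queue depth of 32 is an assumption for illustration – a common HBA default, but the real value depends on your HBA and array – and the VM count is taken from earlier in the thread):

```python
# Rough illustration of per-LUN queue contention (hypothetical numbers).
# Assumption: a per-LUN device queue depth of 32, a common HBA default.

QUEUE_DEPTH = 32

def outstanding_io_slots(lun_count: int) -> int:
    """Aggregate outstanding-I/O slots across all LUNs on one host."""
    return QUEUE_DEPTH * lun_count

def slots_per_vm(lun_count: int, vm_count: int) -> float:
    """Average queue slots available to each VM, assuming even load."""
    return outstanding_io_slots(lun_count) / vm_count

vms = 50  # roughly the VM count mentioned earlier in the thread
print(slots_per_vm(1, vms))  # one big LUN: 32 slots shared by 50 VMs
print(slots_per_vm(4, vms))  # four LUNs: 128 slots, ~2.56 per VM
```

With one LUN, 50 VMs share 32 outstanding-I/O slots (well under one each); spreading the same VMs across 4 LUNs quadruples the aggregate queue depth without changing anything on the array side.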
The more comments and suggestions the better…Hopefully more people will either join into this thread or people will blog on this topic in the coming months.
Thank you, Tom