We are running vSphere 6.0 on a new HPE MSA 2052 SAN. We have created a single Storage Pool of around 14 TB.
Just would like to see whether:
1) We should create a single LUN on the Storage Pool, OR
2) We should create 2 Storage Pools (7 TB each) and create 2 LUNs?
A single storage pool is fine, but the sizing of the LUNs depends on various factors: the types of VMs in the environment, the workloads they will be running, and so on. If you want to isolate the SQL and Exchange servers from the other non-critical VMs, you might want to go with a larger number of smaller LUNs. Personally, I'm not a fan of one big datastore.
Only if you have a need for a 14TB VMDK.
Reasons for multiple LUNs/datastores:
- VMFS corruption would not kill all of your VMs at once
- SCSI reservations and other kinds of bottlenecks, such as queue contention
- On your storage, each LUN is assigned to a physical controller module as its primary owner, which means that controller does all the work for it. With a single LUN, the other controller just sits there doing nothing. Utilizing all controller modules in your storage will give you better performance
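To make the last point concrete, here is a minimal sketch (not MSA tooling; the function names are hypothetical) of how one-owner-per-LUN plays out: with a single LUN, only one controller module ever does I/O, while two or more LUNs spread the work across both.

```python
# Illustrative sketch, assuming the array assigns each LUN exactly one
# owning controller module (as the MSA does) and that LUN ownership is
# balanced round-robin across the two modules.

def assign_owners(num_luns, controllers=("A", "B")):
    """Round-robin LUN ownership across controller modules."""
    return {f"LUN{i}": controllers[i % len(controllers)] for i in range(num_luns)}

def active_controllers(ownership):
    """Controller modules that actually serve I/O for some LUN."""
    return sorted(set(ownership.values()))

# One big LUN: only controller A ever does work.
print(active_controllers(assign_owners(1)))   # ['A']
# Two LUNs: both controller modules share the load.
print(active_controllers(assign_owners(2)))   # ['A', 'B']
```

The point is purely about ownership, not capacity: splitting 14 TB into two 7 TB LUNs is what lets the second controller module participate at all.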
Yes, that is what I propose. However, the consultant who set up our new system suggests creating a single LUN.
What concerns me more is that if we take a snapshot of the LUN, it will be a snapshot of the whole VMFS volume containing all VMs (which is not desirable).
There are other reasons why it may still be beneficial to break these out into smaller chunks, the physical I/O path notwithstanding. For example, one gigantic VMFS datastore is a single failure domain even when it comes to logical corruption of the VMFS metadata. If that goes down, all VMs are down versus smaller ones where such logical corruption would be contained. There are lots of factors which feed into this decision and, consequently, there's no "one size fits all" approach.
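The failure-domain argument comes down to simple arithmetic. A back-of-the-envelope sketch (illustrative only; the even-spread assumption is mine, not from the thread):

```python
# If logical corruption of VMFS metadata takes out one datastore, the
# blast radius is the share of VMs living on that datastore. Assumes
# VMs are spread evenly across datastores.

def blast_radius(total_vms, num_datastores):
    """Number of VMs lost if one datastore suffers logical corruption."""
    return total_vms / num_datastores

print(blast_radius(100, 1))  # 100.0 -> every VM is down
print(blast_radius(100, 4))  # 25.0  -> three quarters keep running
```

The same arithmetic applies to a LUN-level snapshot restore: restoring one of four smaller datastores rolls back a quarter of the VMs instead of all of them.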
What will be the downside if we create two Storage Pools instead of one?
I am new to the MSA. May I ask whether we need to create two Virtual Disks (instead of one) if we create two Storage Pools?
We use multiple 10 TB LUNs (100 TB in use) on vSphere 6.5 with an IBM V7000 SAN, and we have not seen any performance issues.
If you only have the one single LUN then definitely split, but, certainly for us, the size would not be an issue (split to share I/O across different paths and different controllers, etc.).
Having said that, the bottleneck in all the issues I have seen has always been the back-end spindles (unless flash), never a lack of paths or controllers being overloaded or queue depths (wherever they may be; there are three to begin with just at the VMware level: the per-VM OS queue, the per-HBA physical adapter queue, and the device driver queue).
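For what it's worth, the queue-depth side of the argument is also just multiplication. A rough illustration (the depth of 32 is an assumed example, not a vendor spec; actual ESXi device queue depths vary by driver and configuration):

```python
# Each LUN/device gets its own device queue on the host, so more
# datastores means more aggregate outstanding commands before I/O
# starts queueing at the hypervisor level.

def aggregate_outstanding_io(num_luns, device_queue_depth=32):
    """Total outstanding commands the host can have in flight
    across all devices, assuming one device queue per LUN."""
    return num_luns * device_queue_depth

print(aggregate_outstanding_io(1))   # 32
print(aggregate_outstanding_io(4))   # 128
```

As the post says, though, this rarely matters in practice if the back-end disks saturate first.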
We have multiple 11 TB and multiple 22 TB VMFS 5 datastores and have never had any issues, but you should check with your storage vendor. Our storage is all solid-state from Pure; they have a detailed VMware technical white paper and they basically do not care about size. VMFS corruption (which sounds extremely rare) and the rare chance of needing a snapshot restore (if a VM was not included in backups or a restore fails) are both better handled with smaller datastores.
We are migrating to VMFS 6 and I am wondering if we are better off with smaller datastores, but I am not hearing any details or examples of why you should be if your storage vendor does not care.
Does anyone have any experience with VMFS 5 or VMFS 6 datastore corruption? I was searching through Google and it sounds extremely rare.
> Does anyone have any experience with VMFS 5 or VMFS 6 datastore corruption?
Yes - I try to make a living from that kind of problem, but it hardly pays the bills.
No - Thanks a lot. I am just wondering what the odds of this actually happening are. I read something about people having VMFS issues after creating dozens of snapshots on the same VM, etc., but I'm starting to think that corruption of VMFS 5 or 6 is extremely rare, to the point that it is not as big a concern as it was 10 years ago with VMFS 3, etc. If someone responded saying "Yes - it happened to me with VMFS 5 and it was a disaster," then I would feel better about making decisions around avoiding those problems, but I'm not finding much.
I know it happens, but I think it is extremely rare and mostly seen with older SANs. Still looking for more examples, but if anyone has any recent stories from the past few years with newer hardware/VMFS, I would like to hear about it.