VMware Cloud Community
TonyJK
Enthusiast

Should we create a single LUN 14 TB in size?

Hi,

We are running vSphere 6.0 on a new HPE MSA 2052 SAN. We have created a single Storage Pool around 14 TB in size.

Just would like to see whether:

1) We should create a single LUN from the Storage Pool, OR

2) We should create two Storage Pools (7 TB each) and create two LUNs?

Thanks

SupreetK
Commander

The storage pool can be one, but the sizing of the LUNs depends on various factors: the type of VMs in the environment, the workloads they will be running, etc. If you want to isolate the SQL and Exchange servers from the other, non-critical VMs, you might want to go with a larger number of smaller LUNs. Personally, I'm not a fan of one big datastore :)
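A rough, back-of-the-envelope sketch of that kind of split (the VM names, sizes, and headroom factor below are entirely made up, just to illustrate sizing a critical tier separately from a general tier):

# Hypothetical inventory: which VMs go on the "critical" LUN(s) vs. the rest
vms = {
    "sql01":      {"size_tb": 1.5, "tier": "critical"},
    "exchange01": {"size_tb": 2.0, "tier": "critical"},
    "web01":      {"size_tb": 0.5, "tier": "general"},
    "file01":     {"size_tb": 3.0, "tier": "general"},
}

SLACK = 1.3  # assumed ~30% headroom for VM swap files, snapshots and growth

for tier in ("critical", "general"):
    used = sum(v["size_tb"] for v in vms.values() if v["tier"] == tier)
    print("{:<8} tier: {:.1f} TB used, provision roughly {:.1f} TB".format(
        tier, used, used * SLACK))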

Making LUN Decisions

Cheers,

Supreet

MikeStoica
Expert

It depends a lot on what environments you are going to run on them, but I'd rather have more LUNs of a smaller size, which also gives you more flexibility.

EricChigoz
Enthusiast

I would say why not?

Though a lot depends on your environment and the need for the final LUNs.

Find this helpful? Please award points. Thank you!
daphnissov
Immortal

A general rule of thumb (especially if you aren't doing your own calculations) is to create VMFS datastores between 1 and 4 TB in size.
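If you want to see where an existing environment stands against that rule of thumb, here is a minimal pyVmomi sketch (the vCenter address and credentials are placeholders) that lists VMFS datastores and flags anything over 4 TB:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

TB = 1024 ** 4
ctx = ssl._create_unverified_context()            # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != "VMFS":
            continue                               # skip NFS/vSAN and other types
        size_tb = ds.summary.capacity / TB
        flag = "  <-- above the 4 TB rule of thumb" if size_tb > 4 else ""
        print("{:<30} {:6.2f} TB{}".format(ds.summary.name, size_tb, flag))
finally:
    Disconnect(si)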

IRIX201110141
Champion

Only if you have a need for a 14 TB VMDK.

Reasons for multiple LUNs/datastores:

- VMFS corruption would not kill all of your VMs at once

- SCSI reservations or other kinds of bottlenecks, such as queues

- On your storage, a LUN is assigned to one physical controller module, which is the primary owner and does all the work; the other controller just sits there and does nothing. Utilizing all CMs in your storage will give you better performance (see the sketch below).
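For example, a minimal pyVmomi sketch (the vCenter address, credentials, and the choice of host are placeholders) that prints each SCSI device's paths and their states on one ESXi host, so you can see whether I/O is actually spread across both controller modules or funnelled through a single owner:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                            # first host, purely for illustration
    storage = host.configManager.storageSystem.storageDeviceInfo

    # Map each ScsiLun key to its canonical name (the naa.* identifier)
    luns = {lun.key: lun.canonicalName for lun in storage.scsiLun}

    for mp_lun in storage.multipathInfo.lun:
        print(luns.get(mp_lun.lun, mp_lun.lun))
        for path in mp_lun.path:
            # On an ALUA array, "standby"/non-optimized paths usually point
            # at the controller that does not own the LUN.
            print("    {:<45} {}".format(path.name, path.state))
finally:
    Disconnect(si)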

Regards,

Joerg

TonyJK
Enthusiast

Yes, that is what I proposed. However, the consultant who set up our new system suggests creating a single LUN.

What concerns me more is that if we take a snapshot of the LUN, I think it will be a snapshot of the whole VMFS volume that contains all the VMs (which is not desirable).
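For comparison, if per-VM recovery is the real requirement, a VM-level snapshot or backup is far more granular than an array snapshot of the whole LUN. A minimal pyVmomi sketch of that (the VM name and connection details are placeholders):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sql01")   # placeholder VM name
    # Snapshot only this VM, not the whole VMFS volume an array LUN snapshot would cover
    vm.CreateSnapshot_Task(name="pre-change",
                           description="before maintenance",
                           memory=False,
                           quiesce=True)
finally:
    Disconnect(si)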

TonyJK
Enthusiast

No.  Definitely not.

continuum
Immortal

Maybe interesting:
What are some considerations for selecting datastore size?


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

TonyJK
Enthusiast

The consultant's argument: there is only a single Storage Pool, so all disks are controlled by a single Storage Controller Module.

IRIX201110141
Champion

Your consultant is right. If he only creates a single pool in the MSA 2052, then only one CM is utilized.

Regards,

Joerg

daphnissov
Immortal

There are other reasons why it may still be beneficial to break these out into smaller chunks, the physical I/O path notwithstanding. For example, one gigantic VMFS datastore is a single failure domain even when it comes to logical corruption of the VMFS metadata. If that goes down, all VMs are down versus smaller ones where such logical corruption would be contained. There are lots of factors which feed into this decision and, consequently, there's no "one size fits all" approach.

TonyJK
Enthusiast

What would be the downside if we created two Storage Pools instead of one?

I am new to the MSA. May I ask whether we need to create two Virtual Disks (instead of one) if we create two Storage Pools?

Thanks

bigvern2
Contributor

We use multiple 10 TB LUNs (100 TB in use) on vSphere 6.5 with an IBM V7000 SAN, and we have not seen any performance issues.

If you only have the one single LUN then definitely split, but, certainly for us, the size would not be an issue. (Split to share I/O across different paths and different controllers, etc.)

Having said that, the bottleneck in all the issues I have seen is always the back-end spindles (unless flash), never a lack of paths, overloaded controllers, or queue depths (wherever they may be, there are three to begin with just at the VMware level: the per-VM OS queue, the per-HBA physical adapter queue, and the device driver queue).
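As a back-of-the-envelope illustration of that last point (the numbers below are hypothetical), here is how a single shared device queue divides up across busy VMs on the same LUN:

device_queue_depth = 64     # assumed per-device (LUN) queue depth; varies by HBA driver
busy_vms_on_lun = 20        # assumed number of VMs issuing I/O concurrently on that datastore

effective_per_vm = device_queue_depth / busy_vms_on_lun
print("~{:.1f} outstanding I/Os per VM before the device queue fills".format(effective_per_vm))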

drheim
Enthusiast

We have multiple 11 TB and multiple 22 TB VMFS 5 datastores and have never had any issues, but you should check with your storage vendor. Our storage is all solid-state from Pure; they have a detailed VMware technical white paper, and they basically do not care about size. VMFS corruption (which sounds extremely rare) and the rare chance of needing a snapshot restore (if a VM was not included in backups or a restore fails) are both better handled with smaller datastores.

We are migrating to VMFS 6 and I am wondering whether we are better off with smaller datastores, but I am not hearing any details or examples of why you should be if your storage vendor does not care.

Does anyone have any experience with VMFS 5 or VMFS 6 datastore corruption? I was searching through Google and it sounds extremely rare.

continuum
Immortal

> Does anyone have any experience with VMFS 5 or VMFS 6 datastore corruption?
Yes - I try to make a living from that kind of problem, but it hardly pays the bills.



drheim
Enthusiast

No - thanks a lot. I am just wondering what the odds of this actually happening are. I read something about people having VMFS issues when they created dozens of snapshots across the same VM, etc., but I'm starting to think that corruption of VMFS 5 or 6 is extremely rare, to the point that it is not as big a concern as it was 10 years ago with VMFS 3. If someone responded saying, "Yes - it happened to me with VMFS 5 and it was a disaster," then I would feel better about making decisions around avoiding those problems, but I'm not finding much.

I know it happens, but I think it is extremely rare and mostly affects older SANs. I'm still looking for more examples, but if anyone has any recent stories from the past few years with newer hardware/VMFS, I would like to hear about them.
