VMware Cloud Community
Borat_Sagdiev
Enthusiast

Smaller LUNs or Bigger LUNs?

Hello all....

Can anyone recommend the advantages and disadvantages of using more, smaller LUNs versus fewer, larger LUNs in a VI-3, NetApp 3050 environment? For example: should I carve up the SAN into 3 x 2 TB LUNs or 5 x 1 TB LUNs?

I have to make a decision soon and would like some peer input if possible.

Thanks!

27 Replies
MR-T
Immortal - Accepted Solution

I don't like lots of small LUNs - too much to manage in terms of zoning and monitoring.

My preference is to use LUNs between 300 and 500 GB.

By having a number of medium-sized LUNs, you get good consolidation but don't suffer from SCSI reservation issues or too much admin overhead.

oreeh
Immortal

If you don't have VMs which require LUNs that big, use smaller ones - maybe even 10 x 500 GB.

Too many VMs on a LUN will degrade performance.

But as usual, it all depends ;-)

RParker
Immortal

Size isn't the issue, it's spindles.

It doesn't matter HOW big a LUN is as long as it sits across multiple spindles (14 is the sweet spot). The SIZE isn't important now - it WAS important years ago, because the drives were much smaller.

A RAID group built from 18 GB or even 9 GB drives took many more spindles than it does now; do the math and 14 drives at 9 GB is only 126 GB, while 18 GB drives yield 252 GB.

Now, if you size your LUN on capacity alone you will KILL the performance. One big LUN isn't a problem if it follows the same spindle formula for speed it did years ago - so what if it's bigger?

The biggest consideration is the number of drives: you don't want too many, since that increases your risk of failure, but you need 12-14 drive RAID groups to give you the best performance.

14 drives x 300 GB = 4.2 TB. We are talking about the same number of drives, so SPACE is irrelevant. You just have more capacity to work with than you did with the drives of years ago.
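To make the spindle arithmetic above concrete, here is a minimal back-of-the-envelope sketch (Python; the 14-spindle RAID group and the drive sizes come from the post, and RAID parity/hot-spare overhead is deliberately ignored):

```python
# Same 14-spindle RAID group, different drive generations.
# Raw capacity only - parity and hot-spare overhead are ignored here.

SPINDLES = 14  # the "sweet spot" cited above

for drive_gb in (9, 18, 300):
    raw_gb = SPINDLES * drive_gb
    print(f"{SPINDLES} x {drive_gb:>3} GB drives -> {raw_gb} GB raw")
```

That gives 126 GB, 252 GB, and 4200 GB (the 4.2 TB quoted above) from exactly the same spindle count.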

oreeh
Immortal

Size does matter:

Create a 2 TB LUN and put 100 VMs on top of it.

Start the VMs and see if you can do something useful.

You can't - due to SCSI reservation problems.

You are right - spindles matter too.

RParker
Immortal

Well, I'm not sure about the reservation problem - ALL 100 VMs can still be on SCSI channel 0.

So where do you see what the current SCSI reservations are? Because I have a 2 TB LUN - I don't have 100 VMs, but I probably have 50...

MR-T
Immortal

You're right, although I should have given more detail in my answer.

The reason 300 - 500 GB is a sweet spot in most cases is purely based on rough VM sizes.

I wouldn't want to put more than 15 VMs in a single LUN, and usually a VM uses around 30 GB (for example). So I'd only get about 8 or 9 on a 300 GB LUN (as I need space for vswap, vmx files, snapshots, etc.).

So following this through, I'd get around 15 max with these sizes on a 500 GB LUN.

It's not an exact science.

The other thing I should have mentioned is workload. Mixing low-I/O and high-I/O machines on the same LUN will give a better balance. Try to keep high-workload machines on separate LUNs.
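As a rough illustration of the sizing logic above, here is a minimal sketch (Python). The ~30 GB average VM size and the 300/500 GB LUN sizes come from the post; the 15% allowance reserved for vswap, vmx files, and snapshots is an assumed placeholder chosen to roughly reproduce the figures quoted above:

```python
# Rough VMs-per-LUN estimate following the reasoning above.
# avg_vm_gb and the LUN sizes come from the post; the overhead
# fraction reserved for vswap/.vmx/snapshots is an assumption.

def vms_per_lun(lun_gb, avg_vm_gb=30, overhead_fraction=0.15):
    usable_gb = lun_gb * (1 - overhead_fraction)
    return int(usable_gb // avg_vm_gb)

for lun_gb in (300, 500):
    print(f"{lun_gb} GB LUN -> roughly {vms_per_lun(lun_gb)} VMs")
```

That lands at roughly 8 VMs on a 300 GB LUN and 14 on a 500 GB LUN, in line with the 8-9 and ~15 figures above; adjust the numbers to your own VM footprints.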

oreeh
Immortal

> so where do you see what the current SCSI reservation is, because I have a 2 TB LUN, I don't have 100 VM, but I have probably 50...

AFAIK you can't - but ESX will populate the logs with these messages.
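For example, a minimal sketch of scanning the service console log for these messages (Python). Both the /var/log/vmkernel path and the "reservation conflict" search phrase are assumptions here - adjust them to whatever your hosts actually log:

```python
# Count suspected SCSI reservation conflict messages in a host log.
# The log path and the search phrase are assumptions - adjust as needed.

import sys

LOG_PATH = "/var/log/vmkernel"      # assumed service console log location
PATTERN = "reservation conflict"    # assumed message text

def count_conflicts(path=LOG_PATH, pattern=PATTERN):
    hits = 0
    with open(path, errors="replace") as log:
        for line in log:
            if pattern in line.lower():
                hits += 1
    return hits

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else LOG_PATH
    print(f"{count_conflicts(path)} suspected reservation conflicts in {path}")
```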

RParker
Immortal

"Operations that require getting a file lock or a metadata lock in VMFS result in

short‐lived SCSI reservations. SCSI reservations lock an entire LUN. Excessive SCSI

reservations by a server can cause performance degradation on other servers accessing

the same VMFS.

Examples of operations that require getting file locks or metadata locks include:

! Virtual machine power on.

! VMotion.

! Virtual machines running with virtual disk snapshots.

! File operations from the service console requiring

opening files or doing[/b]

metadata updates. (See “Metadata Updates” on page 33.)

There can be performance degradation if such operations are happening frequently on

multiple servers accessing the same VMFS. For instance, it is not recommended to run

many virtual machines from multiple servers that are using virtual disk snapshots on

the same VMFS. Limit the number of VMFS file operations that are executed from the

service console when many virtual machines are running on the VMFS."

RParker
Immortal

We don't generally use snapshots, and once a machine is running, VMotion and power-on aren't a problem. So it would appear that unless you have 100 VMs (which would have very small footprints) on a 2 TB LUN, SCSI reservation isn't a problem there either....

oreeh
Immortal

I know the statement from the manual.

The 2 TB / 100 VMs was just an example.

Remember that DRS uses vMotion to balance the load.

RParker
Immortal

True enough, but if an entire LUN fails you are worried about restoring the LUN. However, one very important point:

Say you format a hard drive: you have to wait until the format is finished before you can start putting files on it.

With VMFS, each file gets restored in turn, and you don't have to wait for ALL of them to be restored before you can power the first one on.

I brought up all these issues with my team, and they all said the same thing you said. Eventually they went quiet, once they figured out I had thought this through thoroughly.

Everyone agreed later that the OLD way of doing things is what they based their initial reactions on; smaller LUNs are a thing of the past.

More drives = better performance - that's documented by just about every NAS company.

VMware even recommends keeping LUNs small, but I question what they are basing that on... Especially since they support a 2 TB LUN - if there were a problem in the design, why would they allow it? I think the engineers are still of the old mindset that keeping things smaller is better, but I have to disagree, especially when you consider that backups are easily accomplished, and it's not a single drive you are restoring - they are individual files....

It's all about risk: if a LUN goes down, you lose X number of VMs, that's true. But aren't ALL VMs important?

I realize losing one or two may not impact the company, whereas more would, but like I said, you restore the LUN, restore the VMs one at a time, and go from there.

For my purposes, it's much easier to manage than to worry about sheer numbers of LUNs, not to mention differences in configuration, etc. That is a nightmare in and of itself.

MR-T
Immortal

Although you don't use snapshots, most of the backup utilities available for backing up an entire vmdk first place it into REDO mode - hence a snapshot.

Keeping the number of VMs on a LUN to a minimum is just good practice.

If you can run 50 VMs on a 2 TB LUN without any performance issues, then excellent job.

Until you start testing these things, it's hard to define a one-size-fits-all solution.

RParker
Immortal

Only if you use automatic mode! I turn that off - I have it set to partial. I don't need the console dictating when and where my VMs go.. That takes the fun out of it!

MR-T
Immortal

Regarding large LUN sizes, that's not in dispute.

I've set up VMs with 1 TB LUNs to store lots of files, but I personally wouldn't create a 2 TB LUN and fill it with lots of VMs.

RParker
Immortal

"Until you start testing these things, it's hard to define a one size fits all solution."

Truer words were never spoken!

I will be the guinea pig - sign me up. I tested it and it's fine; it's been this way for 8 months with no problems yet!

RParker
Immortal

Well, I know you and oreeh are excellent VM admins - I have seen your posts..

Just playing devil's advocate; I know you know what you are doing, and you have some very good things to say. I don't doubt your knowledge at all :-)

MR-T
Immortal

Good man.

If I had more time on my hands I'd love to find a perfect fit for most situations, but for now I'll go with my 300/500 rule, keeping between 10 and 15 VMs per LUN.

Nice talking to you though.

oreeh
Immortal

> If I had more time on my hands

Oh yes.

> I'd love to find a perfect fit for most situations, but for now I'll go with my 300/500 rule, keeping between 10 and 15 VMs per LUN.

I wholeheartedly agree :-D

Borat_Sagdiev
Enthusiast

Thanks for the info guys... MR-T, I am going to follow your recommendation as it fits well with some other client requirements.
