jeronimoniponi
Contributor

LUN size

We are currently planning a deployment for ESX. I was wondering what the best strategy is for VMFS deployment:

1. One big LUN for, let's say, 30 virtual machines.

Or

2. One LUN per virtual machine.

The average size for a virtual machine will be 16 GB. I am aware of the LUN size spreadsheet provided by Ron Oglesby.

We are planning to run 30 virtual machines on one ESX host. One farm of 4 to 8 ESX hosts will be created (VMotion should be possible between all ESX hosts).
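To put rough numbers on it, here is a quick back-of-the-envelope sketch in Python; the 25% headroom figure is just my assumption for snapshots/swap/growth, not a requirement:

```python
# Rough capacity estimate for the planned deployment.
vm_count = 30          # VMs per ESX host (from the plan above)
avg_vm_size_gb = 16    # average VM size (from the plan above)
headroom = 1.25        # assumed 25% allowance for snapshots/swap/growth

total_gb = vm_count * avg_vm_size_gb * headroom
print(f"Capacity to provision per host: {total_gb:.0f} GB")  # ~600 GB
```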

MR-T
Immortal

Somewhere in between might be best. I wouldn't create individual LUNs for each VM; that would be an administrative nightmare.

We have a standard 500GB policy when creating most of our LUNs.

I wouldn't put all 30 VMs on a single LUN; maybe split your 30 into 3 groups.

If you had 30 machines on the same LUN, you could see heavy SCSI reservation activity when using REDO logs.
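As a sanity check on those numbers, a minimal sketch (the 16GB average is from the original post; the 3-way split is the grouping suggested above):

```python
# How full a 500GB LUN gets with a third of the 30 VMs on it.
lun_size_gb = 500            # our standard LUN size
vms_per_lun = 30 // 3        # 30 VMs split into 3 groups
avg_vm_size_gb = 16          # average VM size from the original post

used_gb = vms_per_lun * avg_vm_size_gb
print(f"{used_gb} GB used of {lun_size_gb} GB "
      f"({used_gb / lun_size_gb:.0%} full)")  # 160 GB, 32% full
```

That leaves plenty of free space on each LUN for REDO logs and growth.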

DoratheExplorer
Enthusiast

I agree with MR-T, and if you are running Prod/Dev VM guests I would also suggest separating these onto different LUNs.

You're more likely to be powering DEV VMs up and down more often, and during the day, and therefore to start putting SCSI reservations on the disks.

mreferre
Champion

http://www.redbooks.ibm.com/redpapers/pdfs/redp3953.pdf

Appendix C.

No rocket science in it... just common sense. As others said, "in medio stat virtus" (virtue lies in the middle).

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
DoratheExplorer
Enthusiast

Nice book.

That will give me something different to read on the train tonight.

Thanks.

shill
Enthusiast

Generally I agree with MR-T; however, the safe answer is "it depends". We run nearly 25 VMs across 5 ESX servers on a single 400GB LUN. By the look of the SAN analyzer, I can add a few more to the LUN. Even if I maxed out the space, I would probably create a second LUN for more machines rather than expand it.

I agree one LUN for each VM is a management nightmare.

So use some measurement tools and know your environment.


douwe
Contributor

An extra LUN prevents future stress on the network links (MirrorView).


douwe
Contributor

Now in English: I would recommend creating an extra LUN for the pagefile (swap) for every virtual server.


DoratheExplorer
Enthusiast

Why do you recommend a separate LUN for your pagefile?

Surely you'd then be creating many smaller VMDKs just for the pagefile.

Is that not bad disk management?

I can't see any benefits there.

We have a base build of 8GB for a W2003 guest, which includes the OS, pagefile, and application.

Only when additional storage is required, e.g. for a database, do we add another VMDK to the guest.

AMcCreath
Commander

OMG! Do not try to create separate VMDKs for pagefiles....

What a nightmare...

:-(

shill
Enthusiast

That is not a very manageable solution. You want to give your guests enough RAM to avoid paging if at all possible. We have fine performance and have never done this kind of thing.

Ken_Cline
Champion

"Now in English: I would recommend creating an extra LUN for the pagefile (swap) for every virtual server."

Put your flame-proof armor on...

I think that's a very bad practice. First, it would be an administrative nightmare to have to manage a separate LUN for each VM - just for a swapfile that will (hopefully) never be used! Keep in mind, also, that you can only present a MAXIMUM of 128 LUNs to an ESX host. This is not typically a problem, but with a dedicated swap LUN per VM, you could quickly exhaust your quota.

I realize that - on physical hardware - you often create a separate swap partition. That's fine, and you can do the same thing in a virtual environment, if you want. You can even create a separate .vmdk, if you're so compelled - but DON'T create a dedicated LUN for your swapfile.

Take advantage of your virtual environment. Create a VMFS volume or two, drop .vmdk files on them to support 10-20 VMs per volume, and relax. :-)
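To make the LUN-quota point concrete, a minimal sketch (the shared-VMFS count is an assumption of mine, purely for illustration):

```python
# With one dedicated swap LUN per VM, the 128-LUN limit becomes the cap
# on farm size, since VMotion needs every LUN visible on every host.
MAX_LUNS = 128           # ESX per-host LUN maximum
shared_vmfs_luns = 8     # assumed LUNs holding the OS/data .vmdk files

max_vms = MAX_LUNS - shared_vmfs_luns   # one swap LUN each
print(f"Hard ceiling: {max_vms} VMs per farm")  # 120 VMs, farm-wide
```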

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
Bovine
Hot Shot

Through trial and testing, I've standardised on LUNs of 128GB each.

I started with a 512GB LUN, then quickly broke it into 2x256GB but still saw locking contention, so 128GB is my sweet spot. Our VMDKs are between 4GB and 30GB each.

Paul_B1
Hot Shot

While we don't have a separate LUN for every VM, we do have a separate LUN for each partition. For instance:

LUNxx-Prod-C-1 (system)

LUNxx-Prod-D-1 (data)

LUNxx-Prod-E-1 (logs)

LUNxx-Prod-S-1 (swap)

Our largest LUNs are the ones containing the Data volumes, which are at 200GB. It may look a bit strange, but when we implemented ESX we had limited SAN storage available, so this is the layout we worked out for ease of management. We haven't had any issues thus far.
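If it helps, the placement rule boils down to something like this sketch (names are illustrative, not our actual configuration):

```python
# Route each guest volume type to its matching LUN group.
lun_groups = {
    "system": "LUNxx-Prod-C-1",
    "data":   "LUNxx-Prod-D-1",
    "logs":   "LUNxx-Prod-E-1",
    "swap":   "LUNxx-Prod-S-1",
}

def place(volume_type: str) -> str:
    """Return the LUN group a guest volume of this type lives on."""
    return lun_groups[volume_type]

print(place("data"))  # LUNxx-Prod-D-1
```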


Ken_Cline
Champion

Paul,

While this will work, it won't scale well. ESX has a limit of 128 LUNs, so if you have four LUNs per VM, you've effectively limited yourself to something around 30 VMs. And since you need all the LUNs for all VMs to be visible to all hosts in your farm (if you want to use VMotion...) - that's 30 VMs per farm, not per host.
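The arithmetic, spelled out (a sketch of the worst case, assuming a strict one-LUN-per-partition-per-VM design):

```python
# Four LUNs per VM (system, data, logs, swap) against the ESX limit.
MAX_LUNS = 128
luns_per_vm = 4

print(MAX_LUNS // luns_per_vm)  # 32 VMs - and that's per farm,
                                # since VMotion needs all LUNs on all hosts
```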

This approach also introduces significant management overhead, which can be a big deal in some shops.

If your approach is working for you, that's great! I'd just hate to try to implement it in an environment with hundreds or thousands of VMs.

KLC

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
Paul_B1
Hot Shot

I may not have been clear about exactly how our LUNs are used.

On any given LUN we may have multiple VM disk files for various different VMs. We just keep everyone's "C:\" volume on one of our designated "C" LUNs and everyone's "D:\" volume on one of the designated "D" LUNs, and so on. Maybe that clears things up a bit. We DO NOT use a one-to-one LUN-to-VM scenario, which as you have pointed out would become 1. a pain to manage and 2. a limiting factor in our deployment. We will have over 300 VMs once our consolidation project is complete.

Ken_Cline
Champion

Whew! Thanks for clearing that up...I've run across others who do the same, and in some ways it makes sense (you can tune your LUN to its intended purpose)...

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
Paul_B1
Hot Shot

Hahaha... :-) Yeah, I can see how the confusion could arise; sorry about that. We had a consulting company come in on our initial deployment, and this was their recommended setup for ease of use and to help manage disk I/O and LUN usage long term. So far it's worked really well for us.

Seraph
Contributor

Hi,

We have one LUN for each VM.

I cannot say which is better, but I cannot agree with those who say it is an administrative nightmare;

making a new LUN and making it visible to the ESX servers takes about 10 minutes, which is no problem, I think.

And if a VM is removed, you can remove the LUN and use the space on your filer for something else.

Cheers,

paulo_meireles

"We have one LUN for each VM. I cannot say which is better, but I cannot agree with those who say it is an administrative nightmare."

You must have very few VMs. We have a bit over 200 VMs in our infrastructure, across 8 physical servers, on our EVA 5000. The EVA 5000 has 2 controllers, and every server has 2 HBAs. If we had one LUN per VM, we would have 800 paths to the LUNs (every LUN would be visible 4 times: once per HBA per EVA controller), and the 200+ LUNs alone would be well beyond the 128-LUN hard limit on ESX.
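The arithmetic, for anyone checking (all figures are from our setup as described above):

```python
# Path count for a one-LUN-per-VM design on our EVA 5000.
vms = 200                  # one LUN each
hbas_per_server = 2
eva_controllers = 2

paths = vms * hbas_per_server * eva_controllers
print(paths)               # 800 paths per server
print(vms > 128)           # True: the LUN count alone exceeds ESX's limit
```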

We have some 256GB LUNs and some 512GB LUNs, and distribute the VMs among them. Lighter machines go to bigger LUNs (more machines per LUN). We don't create and delete LUNs all the time; we set them up some time ago, then added some, but the layout stays relatively unchanged. Avoid change whenever possible: I never heard of "spontaneous" problems arising in stable machines; however, upgrades and changes tend to be troublesome... Or, as they say: if it works, don't "fix" it.

We also give every machine its own .vmdk just for paging, and have LUNs specifically for holding the .vmdk files with pagefiles. We tend to give machines a bit more memory than strictly needed, so pagefiles are there to give machine owners that "warm fuzzy feeling", as they're seldom used (if ever). Indeed, we're considering putting these .vmdk files on some lower-grade storage and keeping the real stuff in the EVA. Another reason why we have dedicated .vmdk files for pagefiles is backups: pagefiles tend to be placed on the C: drive, but we have a policy of keeping C: as small as possible so we can keep restore times low. Pagefiles are not backed up, so they need not be restored.

Having different LUNs for different kinds of volumes (one LUN for system drives, another for pagefiles, etc.) also allows optimizations on the storage level - like using vraid1 for databases and vraid5 for system drives and pagefiles.

Paulo
