VMware Cloud Community
Stu_McHugh
Hot Shot
Jump to solution

1 LUN per VM/guest? surely not?!

Can anyone point me to best practices, or advise me on why not to do this? I'm finding it hard to convince another team here not to do it.

They are most concerned about the IO overhead when snapshots are taken via IPStor. I have said not to use one LUN per guest as it's wasteful, and the VM snapshots could fill up a LUN and cause the guest to fall over, but I don't know how to address their concern about IO.

Anyone else got any solid information I can throw at them?

Many Thanks

Stuart
------------------------------------------------
Please award points to any useful answers.
0 Kudos
9 Replies
dburgess
VMware Employee
Jump to solution

This is a common problem. I think there are two good approaches:

1) An analysis-type approach: monitor for bad citizens, and if there are any, you may have to concede that they need dedicated VMFS volumes or RDMs.

2) You could also create a quarantine area for new VMs.

So create a VMFS/LUN big enough to hold, say, 10-15 VMs, and a second VMFS to hold incoming VMs. Once they are established and running OK, monitor their disk activity for a while; if you are confident they are not I/O monsters, svmotion them onto the shared LUN and keep monitoring. Worst case, you can shuffle them back to a dedicated VMFS/LUN. Repeat the process and you effectively sort the guests into good shared candidates and not-so-good ones. A guest that is so I/O intensive it causes problems on the shared LUNs will usually need a special setup in any case.
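To make the sorting step concrete, here is a minimal sketch of that triage logic in plain Python (not VMware tooling; the VM names and the 500 IOPS cut-off are assumptions you would tune to your own array):

    # Sort quarantined guests into shared-LUN candidates and
    # dedicated-VMFS/RDM candidates based on observed disk activity.
    SHARED_LUN_IOPS_LIMIT = 500  # assumed per-VM ceiling, tune per array

    def triage(observed_iops):
        """observed_iops maps VM name -> average IOPS measured while
        the VM sits in the quarantine VMFS."""
        shared, dedicated = [], []
        for vm, iops in observed_iops.items():
            (dedicated if iops > SHARED_LUN_IOPS_LIMIT else shared).append(vm)
        return shared, dedicated

    shared, dedicated = triage({"web01": 40, "sql01": 1200, "file01": 300})
    print("svmotion to shared LUN:", shared)        # ['web01', 'file01']
    print("keep on dedicated VMFS/RDM:", dedicated) # ['sql01']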

msemon1
Expert
Jump to solution

I have seen various recommendations on LUN sizing, from 300GB to 600GB. Ours are actually larger; we have some that are around 1.5TB (I didn't configure them). Having enough space for 10-15 VMs is, I think, the recommendation to avoid I/O issues and SCSI reservation problems. What is your reason for 1 VM per LUN? Are you doing some type of LUN snapshotting?
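As a back-of-envelope illustration of how 10-15 VMs per LUN lands in that 300-600GB range (the per-VM disk size and the reserve percentages below are assumptions, not official figures):

    vms_per_lun = 12           # within the 10-15 guideline above
    avg_vmdk_gb = 30           # assumed average virtual disk size
    snapshot_reserve = 0.20    # assumed space for snapshot growth
    free_space_reserve = 0.10  # assumed headroom so the VMFS never fills

    lun_gb = vms_per_lun * avg_vmdk_gb * (1 + snapshot_reserve + free_space_reserve)
    print(f"suggested LUN size: {lun_gb:.0f} GB")  # ~468 GB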

Mike

azn2kew
Champion
Jump to solution

The only reason I could see for a 1:1 ratio is a Hyper-V cluster without Cluster Shared Volumes, where failover clustering requires one LUN for each VM. That is ridiculously wasteful, and the SAN admin will not be happy provisioning a 1:1 ratio for every single VM in the cluster. Imagine if you have 200+ VMs; that's not good practice. It would be reasonable to use RDMs for some servers, but not 1:1 for everything.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

VMware vExpert 2009

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

0 Kudos
beyondvm
Hot Shot
Jump to solution

As others have said, it's very wasteful and will be a management nightmare. Also, 255 VMs per cluster would then be your upper limit, assuming you aren't using any RDMs or other LUNs (255 is the maximum number of LUNs one ESX host can see), which is low considering the growing power of host machines. Your team members' concerns are valid, sure, but in my opinion the benefits of not doing it their way far outweigh the risks.
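Just to put numbers on that ceiling (a quick sketch; the 12-VMs-per-LUN density is an assumed figure for comparison):

    max_luns = 255  # per-host LUN visibility limit; cluster hosts see the same LUNs
    print("1 LUN per VM:  ", max_luns, "VMs per cluster maximum")       # 255
    print("12 VMs per LUN:", max_luns * 12, "VMs per cluster maximum")  # 3060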

---

If you found any of my comments helpful please consider awarding points for "Correct" or "Helpful". Thanks!!!

www.beyondvm.com

0 Kudos
JohnADCO
Expert
Jump to solution

Is this for a data center type application?

If not, there really is nothing wrong with 1 LUN per VM. I prefer it overall.

Administration doesn't seem any tougher with it.

The most VMs we run on a given host is only 24, and that is probably why administration isn't so tough for us.

0 Kudos
jbogardus
Hot Shot
Jump to solution

This is VMware's whitepaper on storage performance planning that addresses this issue:

Putting a small number of VMs in each LUN does make management more difficult. I can't imagine a general policy of 1 LUN per VM making sense for any environment, and I don't think a policy needs to be taken to that extent even for the busiest VM. To begin with, consider that the C:/root drive of most VMs is not very heavily utilized, so as a general policy several of them can share a LUN to ease management. It generally makes sense to place data in separate virtual disks from the OS; that is where you can start placing fewer virtual disks together within one LUN/VMFS volume, or on an RDM, based on the expected IO load of each data virtual disk.

0 Kudos
sketchy00
Hot Shot
Jump to solution

There are many people far more qualified to comment on this than I am, but I'll add my two cents. There are also many great resources on the matter; one definitely worth reading is: Many others will be able to point you to the specific benefits and caveats, so my thoughts are more on the practical application in a production environment.

Assuming one falls within the I/O thresholds, it seems to come down to manageability. Conceptually, a pool of something is often easier to manage than a bunch of little granules, which is the direction so many IT-related things are going. (Think of an old physical server: would you rather have a bunch of drives with disparate content and available space, or one pool of storage?) Of course, if analysis shows performance is being affected by too many VMs on a specific LUN, move a few out of the pool and see if you fall back within your thresholds. I think it's simply easier to manage and allows for easy adjustment later, with no need to work out whether you've created a LUN large enough for the VMDK plus an occasional temporary snapshot or two.

By the way, if we have some VMs that I know will be pulling a lot of I/O (e.g. SQL or Exchange), I use guest iSCSI initiators with MPIO to handle the bandwidth.

One of the benefits of everything being virtual is that I have been taking a "have a need, prove it" approach. We have many developers here who insisted they needed a machine to run some app of theirs, and it needed 16GB of RAM, tons of horsepower, loads of I/O, etc. When I demonstrated it ran fine with 768MB, the silence was deafening. I simply never had this kind of visibility into resources before. It's wonderful. Hats off to VMware.

0 Kudos
JohnADCO
Expert
Jump to solution

As stated, there is no real reason not to do it for the typical SMB model. With a SAN, the one LUN per VM, one virtual disk per LUN approach makes SAN redundancy much easier to manage.

0 Kudos
jbogardus
Hot Shot
Jump to solution

Also think about how you will manage resizing your data volumes in the future. My earlier suggestion about putting the OS and data into separate virtual disks is really important for this purpose. One of the really nice management benefits of VMware is being able to resize a data disk as needed, allocating its initial size only to what you think is needed in, say, the next year. If you make 1 LUN per VM and put both OS and data in one LUN/datastore (or worse, for VMs with expanding data sets, in one virtual disk), then the amount of expansion you can do on the data volume is limited by the size of the datastore. If you instead put the OS in a shared datastore and the data volumes on RDMs, there is no such limitation. There are also thin provisioning options in vSphere for overallocating data volume sizes without initially dedicating the storage space, but even then, if you put OS and data in one LUN/datastore per VM, you will probably find that future resizing or adjustment of VM drives is unnecessarily difficult.
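A small illustration of that limitation, with made-up numbers just to show the arithmetic:

    datastore_gb = 100         # assumed per-VM LUN size chosen up front
    os_vmdk_gb = 20            # OS virtual disk
    data_vmdk_gb = 60          # data virtual disk, expected to grow
    snapshot_reserve_gb = 10   # assumed space held back for snapshots

    headroom_gb = datastore_gb - os_vmdk_gb - data_vmdk_gb - snapshot_reserve_gb
    print(f"the data disk can only grow by {headroom_gb} GB")  # 10 GB
    # With the data volume on an RDM (or in a shared datastore that can be
    # extended), growth is limited by the array, not by this up-front choice.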

0 Kudos