VMware Cloud Community
jeronimoniponi
Contributor

LUN size

We are currently planning an ESX deployment. I was wondering what the best strategy for VMFS deployment is:

1. One big LUN for, let's say, 30 virtual machines.

Or

2. One LUN per virtual machine.

The average size of a virtual machine will be 16 GB. I am aware of the LUN size spreadsheet provided by Ron Oglesby.

We are planning to run 30 virtual machines on one ESX host. One farm of 4 to 8 ESX hosts will be created (VMotion should be possible between all ESX hosts).
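
To put rough numbers on the two options, here is a quick back-of-the-envelope sketch in Python (the 20% per-VM headroom for snapshots/swap is an assumption for illustration, not a figure from this thread):

```python
# Rough capacity sketch for the two LUN strategies above.
# The 20% headroom figure is an assumption, not a recommendation.

vm_count = 30        # VMs per ESX host
avg_vmdk_gb = 16     # average virtual disk size quoted above
headroom = 0.20      # assumed extra space per VM (snapshots, swap, growth)

per_vm_gb = avg_vmdk_gb * (1 + headroom)

# Option 1: one big shared VMFS LUN for all 30 VMs
big_lun_gb = vm_count * per_vm_gb
print(f"Option 1: one LUN of ~{big_lun_gb:.0f} GB shared by {vm_count} VMs")

# Option 2: one LUN per VM
print(f"Option 2: {vm_count} LUNs of ~{per_vm_gb:.0f} GB each")
```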

AMcCreath
Commander

Just a bit of feedback regarding CLARiiONs and ESX best practices from EMCWorld 2006.

EMC recommends that each meta LUN presented to ESX should not exceed 250GB.

Would they tell me why when I ask? Would they heck-as-like! (Brit talk for I-don't-think-so)

When the presentation is published I'll send a link.

Seraph
Contributor

Hi,

You are right to say that we have a small environment, about 30 VMs.

But when you use one (or more) large VMFS partitions with your VMs on them, do you let different ESX servers run the VMs from them at the same time?

What I mean is: when you have multiple ESX servers reading and writing to the same VMFS, can this not create problems? And if that is not the case, could you also do it with an ext3 partition to which all servers write at the same time?

Cheers,

paulo_meireles

You are right to say that we have a small environment, about 30 VMs.

I thought so... :-)

But when you use one (or more) large VMFS partitions with your VMs on them, do you let different ESX servers run the VMs from them at the same time?

Precisely.

What I mean is: when you have multiple ESX servers reading and writing to the same VMFS, can this not create problems (...)

VMFS is a concurrent filesystem, meaning it is designed to be accessed from several hosts at the same time. If your environment is that small, I suggest you create a few big LUNs and present every LUN to every ESX server. This way you will be able to use VMotion (if, of course, you have VirtualCenter). Be sure to read the requirements regarding virtual switch names.

and if that is not the case, could you also do it with an ext3 partition to which all servers write at the same time?

Unfortunately not, as ext3 is not a concurrent filesystem... You would end up with filesystem corruption after a few filesystem write operations.

Paulo

Seraph
Contributor

We already use vmotion and it works great.

One of the reasons we used one LUN/volume per VM is that we wanted to implement snapshots of the VMs. This is done at the volume level.

But snapshots have not worked out to be a good solution for disaster recovery/backup.

Thx for the info.

paulo_meireles

One of the reasons we used one LUN/volume per VM is that we wanted to implement snapshots of the VMs. This is done at the volume level.

You can do it at the file level, by creating redo logs. This can be done while the VM is running.

Paulo

yorkie
Hot Shot

Just a bit of feedback regarding CLARiiONs and ESX best practices from EMCWorld 2006.

EMC recommends that each meta LUN presented to ESX should not exceed 250GB.

Would they tell me why when I ask? Would they heck-as-like! (Brit talk for I-don't-think-so)

When the presentation is published I'll send a link.

Hey Mr McCreath, did they specify if this was the CX500 or CX700? Did they also provide any data (doubt it). I know you would ask the right questions...

I'll use my contacts to find out why this is, but I think this will be the usual case of all conjecture and no data :-(

Cheers

Steve

asteel
Contributor

Question-

If I have (for example) two 250GB LUNs that each have a VMFS volume, and VM A is running on VMFS A, can I use VMotion to move it to a host accessing VMFS B?

northwest
Enthusiast

Regardless of where VM A is running, the VMDK file will still reside on VMFS A when you vMotion. VMFS A would have to be presented to both ESX servers that you intend to vMotion between.

An ESX server can read and write to multiple VMFS volumes at the same time. Multiple VMFS volumes may be presented to multiple ESX servers at the same time.

asteel
Contributor

Beautiful- so the issue is not which VMFS volume a VM is on, but whether both ESX hosts in the VMotion move can see both VMFS volumes....

Thanks!

northwest
Enthusiast

Both ESX servers need to be able to see the VMFS volume on which the vmdk file for the VM to be vMotioned resides.

Until you decide to move it, the vmdk file stays on the VMFS volume you created it on. Moving the VMDK to another datastore (VMFS volume) requires powering off the VM and copying (or migrating) it.

You can migrate a running VM between ESX servers that use the same datastore, or shut down the VM to move the VMDK between datastores.
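
The rule can be stated very simply: a live migration only works if the datastore holding the VM's .vmdk is presented to both the source and the destination host. A minimal sketch, with made-up datastore names:

```python
# VMotion visibility rule: the VM's datastore must be visible to both hosts.
# Datastore names here are invented for the example.

host_a_datastores = {"vmfs_a", "vmfs_b"}
host_b_datastores = {"vmfs_b", "vmfs_c"}

def can_vmotion(vm_datastore: str) -> bool:
    """A running VM can move between the two hosts only if its datastore
    is presented to both of them."""
    return vm_datastore in host_a_datastores and vm_datastore in host_b_datastores

print(can_vmotion("vmfs_b"))  # True  - both hosts see vmfs_b
print(can_vmotion("vmfs_a"))  # False - only host A sees it
```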

Hope this helps.

Chris

joepje
Enthusiast

@Paulo: How are you going to upgrade your environment to VI3 while your VM files are distributed across multiple LUNs?

I am interested in your migration scenario.

phyberport
Enthusiast

This thread worries and confuses me. We are a somewhat small shop with 30 VMs. We have 3 VMFS LUNs that we use for the VMDK files. All our VMDK files are for guest "C:\" drive. For any server that needs a "D:\" drive, we create a LUN and attach it to the VM as a System LUN/Disk.

I'm in the process of upgrading to VI3, Virtual Center, Vmotion... Are we limited in the way we have set this up? I'd like to know that if I wanted to, I could go from 30 VMs to 300.

Do most people run their "D:\" drives in a VMDK file? Our file server has a 240 Gig LUN attached to the VM as a System LUN/Disk. Should I be running a 240 Gig VMDK file instead?

joepje
Enthusiast

Well, the point is that all VMs on a VMFS2 LUN have to be powered off while it is upgraded to VMFS3.

paulo_meireles

All our VMDK files are for guest "C:\" drive. For any server that needs a "D:\" drive, we create a LUN and attach it to the VM as a System LUN/Disk.

There's a limit on the number of LUNs that can be visible to an ESX host, so this is not a scalable strategy.

Are we limited in the way we have set this up? I'd like to know that if I wanted to, I could go from 30 VMs to 300.

No, you won't be able to scale that far with one extra LUN per VM.
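
To see why, it helps to put numbers on it. ESX caps the number of LUNs a host can see; the exact limit depends on the version, so the 256 below is only an assumed figure (check the configuration maximums for your release):

```python
# Why one dedicated LUN per VM stops scaling: ESX limits how many LUNs a
# host can see. The 256 below is an assumed, version-dependent figure.

max_luns_per_host = 256    # assumed per-host LUN limit
shared_vmfs_luns = 3       # the shared VMFS volumes holding the C: vmdks
vms = 300                  # the growth target mentioned above

total_luns = shared_vmfs_luns + vms   # one dedicated D: LUN per VM
print(f"{total_luns} LUNs needed vs. a limit of {max_luns_per_host}")
print("fits" if total_luns <= max_luns_per_host else "does not fit")
```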

Do most people run their "D:\" drives in a VMDK file?

Can't speak for others, but we only use RDMs (Raw Device Mappings) when strictly necessary - which means "clusters", which, by the way, we avoid like the plague. We end up using .vmdk files for pretty much everything.

Our file server has a 240 Gig LUN attached to the VM as a System LUN/Disk. Should I be running a 240 Gig VMDK file instead?

We have one machine whose data disks are a 500GB and a 300GB .vmdk file...

Paulo

phyberport
Enthusiast

Thanks for your reply, Paulo. I guess our initial thinking was from a performance angle: VMDK files would have slightly more overhead than running the disks natively.

One other question. Do these large vmdk files sit on one LUN? If one is supposed to only make the LUN a certain size for ESX, then it would seem you'd have to create one LUN per vmdk??

paulo_meireles

Thanks for your reply, Paulo. I guess our initial thinking was from a performance angle: VMDK files would have slightly more overhead than running the disks natively.

The VMFS filesystem is highly optimized performance-wise. The performance hit is negligible.

One other question. Do these large vmdk files sit on one LUN? If one is supposed to only make the LUN a certain size for ESX, then it would seem you'd have to create one LUN per vmdk??

I understand your concern, but we intend to put those two .vmdk files on a big LUN (at least 1TB, maybe more) along with other .vmdk files. We have carved our EVA 5000 into 256GB and 512GB LUNs for VMFS, but we now intend to scrap these and create 1TB LUNs instead - of course, carefully migrating the .vmdk files from the old LUNs to the new ones. I have yet to see a machine that definitely needs its own LUN for performance reasons. Even then, I'd have it share a VMFS volume with other, less hammered .vmdk files.
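
As a rough planning aid for that kind of consolidation, a small first-fit sketch can estimate how many 1TB VMFS volumes a given set of .vmdk files would need (the file sizes below are invented purely for illustration):

```python
# First-fit estimate of how many large VMFS LUNs a set of .vmdk files needs.
# The .vmdk sizes below are invented purely for illustration.

lun_size_gb = 1024                         # target LUN size (~1 TB)
vmdk_sizes_gb = [500, 300, 240, 100, 60, 40, 20, 20, 16, 16]

luns = []                                  # free space remaining on each LUN
for size in sorted(vmdk_sizes_gb, reverse=True):
    for i, free in enumerate(luns):
        if size <= free:
            luns[i] -= size                # place the file on an existing LUN
            break
    else:
        luns.append(lun_size_gb - size)    # otherwise open a new LUN

print(f"{len(luns)} LUN(s) of {lun_size_gb} GB hold {sum(vmdk_sizes_gb)} GB of vmdk files")
```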

Paulo

m8trixdg
Enthusiast

Hi Guys,

I felt like I needed to add my two cents.

When I work with my customers I follow a set of guidelines that they can tweak. I mainly focus on EMC hardware but also deal with NetApp, HP and IBM.

This is my best practice / thought process:

I create a filesystem with the following naming conventions

vmfsos01 - For the C: drives

vmfsdata01 - For D: drives or data volumes smaller than 100GB

vmfspage01 - For Pagefiles or P: drives

vmfsother01 - For Templates or anything else that is needed

When the admin needs more space they just increment the number to something like vmfsos02

This naming convention helps the admins to very easily and quickly add vmdk files in VC without searching for workbooks.

Now for the sizing. Based on VMware whitepapers, EMC docs and my own testing, I have set my number of VM C: drives per VMFS volume at 10 per LUN.

So to size my vmfsos01 I have used a standard C: size of 20GB.

So 20GB per C: x 10 VMs = 200GB. Now I need to account for snap space and swap space in ESX 3.0, so I add 50GB for snap space and 2GB per VM for swap. The final calculation is:

(20GB per C: x 10 VMs) + (2GB for swap x 10 VMs) + 50GB for snap = 270GB

There is obviously some leeway, but I like to make ALL of my vmfsos0x volumes the same size.

I make the vmfsdata0x volumes anywhere from 300-500GB. This is because if a VM needs more than 50-100GB for data, it will probably want a physical LUN / RDM.

For vmfspage01 I make it 2GB x 10 VMs = 20GB, plus 10GB for snap = 30GB.

So in the end, MY best practices that I teach my customers are:

vmfsos01 - 270GB For the C: drives

vmfsdata01 - 300GB For D: drives or data volumes smaller than 100GB

vmfspage01 - 30GB For Pagefiles or P: drives

vmfsother01 - 500GB For Templates or anything else that is needed
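
Brad's arithmetic is easy to turn into a small reusable calculation. The defaults below are the figures from his post (10 VMs per volume, a 20GB C: drive, 2GB of swap/pagefile per VM, and 50GB/10GB of snap headroom):

```python
# Reproduces the VMFS volume sizing arithmetic from the post above.

def vmfs_os_gb(vms=10, c_drive_gb=20, swap_gb_per_vm=2, snap_gb=50):
    """OS volume: C: vmdks + ESX 3 per-VM swap + snapshot headroom."""
    return vms * c_drive_gb + vms * swap_gb_per_vm + snap_gb

def vmfs_page_gb(vms=10, pagefile_gb_per_vm=2, snap_gb=10):
    """Pagefile volume: P: vmdks + snapshot headroom."""
    return vms * pagefile_gb_per_vm + snap_gb

print(vmfs_os_gb())    # 270 GB -> matches vmfsos01
print(vmfs_page_gb())  # 30 GB  -> matches vmfspage01
```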

Some other reasons for this best practice are based on replication, DR and backups.

The fewer VMs per VMFS volume, the easier replication will be and the more granular you can make it. Also, you can tier your backup and replication schemes for each VMFS, so having a smaller number of VMs per volume is beneficial.

The last thing to note is that IF any of my customers want to do replication at the VMFS level, then I sometimes change my naming scheme to include the replication type they are using.

i.e.

vmfsosMV01 - Added MV for MirrorView

vmfsdataSC01 - Added SC for SAN Copy

vmfspageNO01 - Added NO for none

I hope this helps. If you have any questions just PM me.

Thanks,

Brad

Tweet: @bmaltz Certs: EMC Solution Expert,VCP 3 & 4, VTSP, vExpert 2010 and VCDX #36
paulo_meireles

Your detailed description made me understand a very important variable: the number of servers in the same farm. We have 8 HP DL585s, all working together and seeing the same LUNs. As these servers will eventually accommodate about 500 VMs (they're currently running a bit over 200 VMs at about 40% utilization - this is under ESX 2.5), we would have an awful lot of LUNs to deal with if we broke things down as you do.

On the other hand, your way of doing things is very nice for smaller environments. Sometimes scalability is not an issue.

In our case, we could do two things: either break the servers in 2 farms of 4 servers each, or have bigger LUNs. We're going for the big LUNs (about 1TB each); others may be more comfortable with more but smaller LUNs.

In the end, it's more a matter of personal choice, as I guess performance will not be much different.

Paulo

m8trixdg
Enthusiast

Hi. This does scale for larger customers, obviously. I have some customers with >400 VMs. In those instances I have reworked the numbers while still maintaining the same methodology for laying out the VMFS volumes. You can just scale the LUNs to be larger.

Again though, you need to take tiering of the servers into account at a certain point.

Brad

Tweet: @bmaltz Certs: EMC Solution Expert,VCP 3 & 4, VTSP, vExpert 2010 and VCDX #36
mreferre
Champion

Paulo,

What you are saying makes sense. However, your setup makes me think of a mainframe thing called Parallel Sysplex, which is a shared-data, tightly coupled type of cluster. Now, I know for sure that even in these high-end architectures there can be problems that may eventually force you to bring all nodes offline, because of pervasive points of contention in the tightly coupled cluster. If these things happen there (rarely, fortunately), I guess they might happen (certainly less rarely) on the x86 platform.

I can't detail the exact situations where things might go wrong (perhaps a "bastard lock" or something like that), but whenever I hear about all ESX nodes bundled into one tight cluster layout, I tend to feel uncomfortable...

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info