VMware Cloud Community
DirtySouth
Contributor

Multiple VMFS Volumes on Same Spindles

I've been searching the web and EMC/VMware documentation for a definitive answer on this, so forgive me if this has been addressed already. We are planning to virtualize a large percentage of our servers. At this time we have 4 ESX hosts (blades) attached to an NS-120 via 4Gb Fibre Channel, and I've got a shelf with 14 x 1TB SATA (7200 RPM) spindles. I understand that VMFS uses a SCSI locking mechanism to prevent file corruption, so multiple hosts can access the same LUN. My concern is what issues could arise from having multiple VMFS volumes on the same spindles.

For example, if we create 4 x 500GB VMFS volumes, each VMFS (LUN) will be on 7 spindles (RAID 5). If I dedicate the spindles to the LUN, then I will end up wasting a large amount of disk space. At the same time, I don't want to over-subscribe the IO on each spindle. Does anyone have any recommendations on how to balance the two, or anything else I should consider? Thanks in advance.
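To make the capacity-versus-IOPS tradeoff concrete, here is a back-of-the-envelope sketch in Python. The per-spindle IOPS figure, the RAID 5 write penalty, and the read/write mix are rule-of-thumb assumptions, not measured values from this array:

```python
# Rough sizing for a shelf of 7.2k SATA spindles carved into RAID 5 groups.
# All figures below are assumptions you'd replace with your own measurements:
#   - ~75 front-end IOPS per 7.2k SATA spindle
#   - RAID 5 write penalty of 4 (read data, read parity, write data, write parity)

SPINDLE_IOPS = 75          # assumed per-disk IOPS for a 7.2k SATA drive
RAID5_WRITE_PENALTY = 4    # each host write costs 4 back-end I/Os

def raid5_frontend_iops(spindles, read_pct):
    """Host-facing IOPS a RAID 5 group can sustain at a given read percentage."""
    backend = spindles * SPINDLE_IOPS
    write_pct = 1 - read_pct
    # back-end I/O per front-end I/O = read_pct * 1 + write_pct * penalty
    return backend / (read_pct + write_pct * RAID5_WRITE_PENALTY)

# One 7-spindle RAID 5 (6+1) group at a hypothetical 70/30 read/write mix:
group = raid5_frontend_iops(7, 0.70)
print(f"~{group:.0f} front-end IOPS per 7-disk RAID 5 group")
```

Running the numbers this way shows why the spindle count, not the LUN layout, sets the ceiling: however the group is sliced into datastores, the workload shares the same few hundred IOPS.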

7 Replies
RParker
Immortal

The only stipulation is that you cannot have more than 1 VMFS per LUN. You can have as many VMFS datastores per volume as you like; we run 2 to 3 VMFS per volume because the LUNs are smaller, and we take advantage of a large number of spindles.

DirtySouth
Contributor

Thanks... that does help. I may be confusing the terms, though. When you say "volume", are you referring to the RAID group (RAID 5)? I was referring to each VMFS as its own volume.

In our case, we could potentially have 10 VMFS datastores, i.e. 10 LUNs, per RAID group. To me, that seems like it would be over-subscribing the IO greatly. If I were to limit myself to only 3-4 VMFS, I should have plenty of IO available, but a lot of unused disk space. Does that make sense?

joshuatownsend
Enthusiast

I generally shoot for no more than 4 LUNs (each formatted with a single VMFS) per RAID group on my Clariion installs, but I am typically using 15k FC drives, not 7.2k SATA.

Simply having more LUNs on a RAID group shouldn't oversubscribe the disks - the workload of the VMs in each LUN will do that. You'll only be able to drive so many concurrent IOPS to the spindles in the RAID group before you overrun them.
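A quick sketch of that point, with an assumed ceiling for the whole group: carving out more LUNs divides the same spindle budget, it doesn't grow it.

```python
# The ~275 IOPS ceiling here is an assumed figure for a 7-disk 7.2k SATA
# RAID 5 group, not a measured one -- substitute numbers from your own array.

GROUP_IOPS = 275  # assumed front-end IOPS ceiling for the whole RAID group

def per_lun_budget(lun_count):
    """Average IOPS left per LUN if load spreads evenly across the group."""
    return GROUP_IOPS / lun_count

# 1, 4, or 10 LUNs on the group -- the aggregate never changes:
for luns in (1, 4, 10):
    print(f"{luns:>2} LUN(s): ~{per_lun_budget(luns):.0f} IOPS each "
          f"({GROUP_IOPS} total either way)")
```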

Josh

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

Please visit http://vmtoday.com for News, Views and Virtualization How-To's

Follow me on Twitter - @joshuatownsend

DirtySouth
Contributor

Thanks, Josh. That also helps. I realize that everyone's environment is different, but how many VMs do you typically run on a VMFS? I'm trying to get as many opinions as possible so I can better gauge where to start.

If anyone knows of any good documentation on this particular subject, could you please point me to it? Thanks in advance!

vmroyale
Immortal

Hello.

Eric Siebert provides some excellent information here.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
joshuatownsend
Enthusiast

I'll run 15-20 VMs per VMFS depending on their workloads. Some of the SCSI reservation/locking issues are improved with vSphere, so densities can increase a bit.
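As a sanity check on that density, here's a hypothetical calculation; both numbers are assumptions you'd replace with esxtop/perfmon measurements from your own VMs:

```python
# How many "average" VMs one datastore's RAID group can sustain.
# Both inputs are assumed, not measured:

GROUP_IOPS = 275      # assumed front-end IOPS for a 7-disk 7.2k SATA RAID 5 group
PER_VM_IOPS = 15      # hypothetical average per-VM load

max_vms = GROUP_IOPS // PER_VM_IOPS
print(f"~{max_vms} VMs before the RAID group's IOPS budget is exhausted")
```

With these assumed inputs the answer lands in the same 15-20 VM range, but a few IO-heavy guests (databases, mail servers) can consume the whole budget by themselves.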

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

Please visit http://vmtoday.com for News, Views and Virtualization How-To's

Follow me on Twitter - @joshuatownsend

DirtySouth
Contributor

Thanks guys, this is all good info. It looks like in our scenario we will end up not using all of our disk space, since we'll probably run out of IO before bits. I'd rather burn disk before IO, though.
