VMware Cloud Community
nabeelsayegh
Contributor

SAN Allocation Best Practice - 1 LUN or Multiple LUNS

I have some questions about allocating (and ultimately using) SAN storage in an ESXi 4.1 cluster:

To set up a conceptual example for you guys to comment on: let's say I have a 4-node cluster with redundant 4 Gb/s Fibre Channel connections to a SAN (Cisco MDS switches and a mix of DMX, CLARiiON and Xiotech arrays). Say I have two pools of storage in my DMX. Pool 1 will support my boot OS volumes (C:\) and Pool 2 will hold the 'data' volumes (D:\). Pool 1 has 500 GB of usable storage and Pool 2 has 2 TB of usable storage. Assume I have used all the available storage in Pool 1 to create a single 500 GB LUN, allocated it to the ESX cluster, and that this datastore is where I park all my VMs' boot (C:\) disks. I create 10 VMs, and each VM needs an additional 200 GB from Pool 2.

For Pool 2, which configuration would be more desirable?

  1. Create one big 2 TB LUN and allocate it to the ESX cluster, then for each VM add a 200 GB virtual disk carved from that 2 TB datastore, or...
  2. Create individual 200 GB LUNs on the SAN and allocate them separately to each VM.

I know this is very simplistic and that things like workload, RAID type, etc. can influence the answer. What I am looking for is whether there is anything ESX does under the covers that gives one approach an obvious performance advantage over the other.

(Sorry if this is too similar to other posts...just wanted to give a more specific example.)
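Just to lay out the arithmetic behind the scenario, here is a quick back-of-the-envelope sketch in plain Python. The small VMFS metadata overhead figure is my own rough assumption; the other numbers come straight from the description above.

# Rough capacity check for Pool 2 in the scenario above (sizes in GB).
POOL2_USABLE_GB = 2048       # "2 TB of usable storage" (treating 1 TB as 1024 GB)
VM_COUNT = 10
DATA_DISK_GB = 200
VMFS_OVERHEAD_GB = 1         # assumed metadata overhead if Pool 2 becomes one big datastore

requested = VM_COUNT * DATA_DISK_GB
headroom = POOL2_USABLE_GB - VMFS_OVERHEAD_GB - requested
print(f"Requested: {requested} GB of {POOL2_USABLE_GB} GB usable")
print(f"Headroom left if Pool 2 is carved into a single datastore: {headroom} GB")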

a_p_
Leadership

That won't work. The current LUN size limit for ESX is 2 TB minus 512 bytes.

Unless it's really needed, I usually don't put the OS and data disks on different LUNs. If you do, you will have to set the block size of the VM's base datastore (usually the one with the OS disk) to the same block size needed to store the larger data disks. (http://kb.vmware.com/kb/1012384)

André

EDIT: Just saw that you edited your post from 5 TB LUNs to 2 TB LUNs.

nabeelsayegh
Contributor

Considering I have no intention of taking snapshots of, or using Storage vMotion on, any of the VMs in question, I'm not sure I would be concerned about a VM having VMDKs created on storage with a different block size than the OS (boot) VMDK.

The underlying question here is: does ESX handle I/O better with multiple (small) LUNs or with fewer (large) LUNs?

a_p_
Leadership

You should be concerned about the block size. ESX(i) will not allow you to attach, e.g., a 400 GB virtual disk to a VM created on a VMFS datastore with a 1 MB block size.
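For reference, on VMFS-3 the block size chosen at format time caps the maximum file (and therefore VMDK) size on that datastore. Here is a minimal sketch of the documented mapping, just to make the 400 GB example concrete; the fits() helper is purely illustrative, not a VMware API.

# Documented VMFS-3 block size -> maximum file size on that datastore.
# The 8 MB figure is really 2 TB minus 512 bytes; rounded here for simplicity.
VMFS3_MAX_FILE_GB = {
    1: 256,    # 1 MB block size
    2: 512,    # 2 MB block size
    4: 1024,   # 4 MB block size
    8: 2048,   # 8 MB block size
}

def fits(block_size_mb, vmdk_gb):
    """Illustrative check: can a VMDK of this size be created on a VMFS-3
    datastore formatted with the given block size?"""
    return vmdk_gb <= VMFS3_MAX_FILE_GB[block_size_mb]

print(fits(1, 400))   # False -> too big for a 1 MB block size datastore
print(fits(2, 400))   # True  -> fine with a 2 MB (or larger) block size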

As for performance: IMO ESX(i) does not perform differently on small versus large datastores. However, depending on the applications you run, you may benefit from using different storage tiers, e.g. by putting a database's DB and log files on faster RAID 10 LUNs.

André

nabeelsayegh
Contributor

Andre,

Thanks for the replies. To clarify, we actually have this configuration in place, and ESXi will absolutely let you do it. We have a 40 GB boot VMDK created on a 500 GB datastore with a 1 MB block size, plus an additional 500 GB VMDK allocated from 1 TB datastores configured with a 2 MB block size. One thing I neglected to mention is that the 500 GB disk is attached to a secondary PVSCSI controller. Now, what you can do with the VM (regarding snapshots and Storage vMotion) is a different story, but this config does indeed work.
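For what it's worth, here is the same kind of sketch applied to the layout described above, using the documented VMFS-3 per-datastore limits (sizes taken from this post; illustrative only, not a VMware tool).

# Check the disks described above against the VMFS-3 per-datastore limits
# (block size in MB -> maximum file size in GB).
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

disks = [
    # (description, VMDK size in GB, block size in MB of its datastore)
    ("40 GB boot VMDK on the 500 GB / 1 MB block size datastore", 40, 1),
    ("500 GB data VMDK on a 1 TB / 2 MB block size datastore", 500, 2),
]

for desc, size_gb, bs_mb in disks:
    ok = size_gb <= VMFS3_MAX_FILE_GB[bs_mb]
    print(f"{desc}: {'fits' if ok else 'exceeds the limit'}")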
