VMware Cloud Community
jonnysideways
Contributor

Datastore Configuration on a 4.7TB Dell EqualLogic PS4100X SAN

We have just configured our new Dell EqualLogic PS4100X SAN. We have 4.7TB available to our ESX 5 environment.

Assuming we just want to provision the storage to the ESX environment in the form of datastores, does anyone have any advice on how to configure the storage? As in, how many volumes to create, and how many datastores to have on them?

cheers

John

10 Replies
alamosajimmy
Contributor

Did you get some advice on this? I'm facing the exact same setup (and yes, I currently have the same MD3000i you used to have). If you'd like to work together on your setup and mine, I'm available anytime at jbelknap@ci.alamosa.co.us

mcowger
Immortal

I'd shoot for 1 VMFS per volume, and 6-10 volumes (depending on your needs).

Don't do:

1 giant datastore

1 datastore per VM.
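For example, once the volumes are presented, each one should show up as a single device on the hosts. From the ESXi shell (a sketch; the names in the output will be your own):

# esxcli storage core device list | grep "Display Name"

And each datastore should sit on exactly one of those devices:

# esxcli storage vmfs extent list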

--Matt VCDX #52 blog.cowger.us
alamosajimmy
Contributor

Very interesting, because of course I was going for one volume per datastore (one for each VM). Can you help me understand your storage a bit more? I'm still struggling with the overall idea of SANs.

I'm running about 15 virtual servers at the moment: several SQL and one Exchange. So by your advice I'd do something like set up, say, a RAID 10 VMFS of 2 TB with 7 volumes (one for each VM), and then another RAID 5 VMFS of 2 TB with 8 volumes (for the other 8 VMs)? Adding to that, maybe a RAID 5 VMFS of 2 TB for snapshots and another for backups?

mcowger
Immortal

Note: volume is the wrong term in this context. A 'volume' is basically a LUN here. A single VMFS will contain multiple virtual disks (VMDKs), which is what you mean when you refer to a volume.
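To make the terminology concrete from the ESXi shell (a sketch; the datastore name SQL-DS01 is a placeholder):

# vmkfstools -P /vmfs/volumes/SQL-DS01

...shows the single LUN backing that VMFS, while the virtual disks for many VMs live inside it:

# ls /vmfs/volumes/SQL-DS01/*/*.vmdk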

So, what I'd recommend is something like this (with the additional detail you've added):

2 RAID 10 LUNs exported to your hosts, each with a VMFS.  Put 3-4 of your SQL/Exchange VMs on each one of these.  Size of the LUN should be dictated by how much space you need.

2-3 RAID5 LUNs, each with a VMFS on it.  Evenly split your other VMs across these filesystems.

No need for a snapshot LUN (snapshots are generally kept WITH the VM), but one for backups could be used if you want.

The point here is to strike a balance between ease of management (not having TOO many LUNs) and good performance (1 giant LUN would be bad for performance).
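If you build the filesystems from the command line rather than the vSphere Client "Add Storage" wizard, each LUN gets one VMFS along these lines (a sketch; the naa ID is a placeholder, and the LUN needs a partition on it first):

# vmkfstools -C vmfs5 -S RAID10-DS01 /vmfs/devices/disks/naa.6090a028xxxxxxxx:1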

--Matt VCDX #52 blog.cowger.us
alamosajimmy
Contributor

It's starting to finally make sense! As the fog lifts, I see another question arise. What do you think of a separate LUN for large amounts of user data?

With your advice, I think I'll end up like this:

2 RAID 10 LUNs  each with a VMFS.  Put 3-4 SQL/Exchange VMs on each one of these.

2-3 RAID5 LUNs, each with a VMFS on it.  Evenly split other VMs across these filesystems.

1-2 RAID 5 LUNs, each with a VMFS on it. Evenly split for large user data drives and backups.

You've been a GREAT help! Want a stab at our physical layout? :) It's attached as we are now. Going to be adding an EqualLogic PS4100X, 14.4 TB raw, and changing the PowerEdge 2900 servers over to two R720s.

jonnysideways
Contributor

Looking at your setup, it's very similar to ours.

We have 3 PowerEdge R710s running through a stack of two PowerConnect 6224s: 4 NICs per server on the VM network vSwitch, and 4 NICs on the iSCSI vSwitch.

The thing I can't get my head around is this: I have set my environment up so that the iSCSI traffic is on vSwitches that are physically on a separate network from the VM traffic. But I am told presenting the iSCSI volumes directly to the guest OS is the way to go, and this means having iSCSI and the VM network sharing a vSwitch, which I thought would cause more traffic?

Have you any thoughts?
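For reference, the dedicated iSCSI vSwitch side of this can be built from the ESXi shell roughly like so (a sketch; the vmnic/vmk numbers, IP address, and vmhba37 are placeholders for your environment):

# esxcfg-vswitch -a vSwitch2
# esxcfg-vswitch -L vmnic4 vSwitch2
# esxcfg-vswitch -A iSCSI1 vSwitch2
# esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1

Then bind the VMkernel port to the software iSCSI adapter (ESXi 5.x):

# esxcli iscsi networkportal add -A vmhba37 -n vmk1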

So, from reading up and taking advice from Dell SAN technical support, they suggest the following:

Create datastores, not too large, and have VMs on them for the C: drive only. Then add other drives to those VMs using direct presentation to the OS, i.e. Windows in my case.
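On the Windows side, that direct presentation is just the in-guest iSCSI initiator logging into the EqualLogic volume (a sketch using the built-in iscsicli tool; the group IP and target IQN are placeholders):

C:\> iscsicli QAddTargetPortal 10.10.10.5
C:\> iscsicli ListTargets
C:\> iscsicli QLoginTarget iqn.2001-05.com.equallogic:0-8a0906-xxxxxxxxx-data-vol01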

again, still some fog ;0)

dwilliam62
Enthusiast

My suggestion is a balance between the number of volumes and the number of VMs. One volume per VM is a bit excessive for most configurations.

Another benefit of multiple volumes is SCSI Command Tag Queuing (CTQ). Each SCSI device negotiates a CTQ "depth", the number of simultaneous I/Os it can handle. Once that queue is filled, I/O to that disk is paused until the queue starts to drain. With multiple servers accessing one or very few volumes, CTQ depth can become a bottleneck on your SAN. Typical CTQ depth values are 16-64, with 32 being very common. Each volume has its own CTQ depth, so when one volume is busy the other volumes can still be doing I/O.

Also, with ESX v4.1 and above you should leverage VAAI. That's an API that allows the host to offload a number of storage functions to the array. For full feature support you need to be running EQL FW v6.0.1 or greater; the minimum FW revision is 5.2.5. A very important VAAI feature is an enhanced locking mechanism called ATS, which reduces the need for SCSI reservations on volumes for certain functions (starting a VM, creating a VM, creating a snapshot, updating metadata, etc.). When a volume is reserved, only the node holding the reservation can access it; until it is released, all other nodes must wait. That is another potential bottleneck, and another reason to have multiple volumes: while one is reserved, the others may not be and can keep processing I/O.
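You can check both from the ESXi shell (a sketch; the naa ID is a placeholder for one of your volumes):

# esxcli storage core device list | grep -E "Display Name|Queue Depth"

And to confirm VAAI/ATS support on a device:

# esxcli storage core device vaai status get -d naa.6090a028xxxxxxxx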

This also carries forward into the VM itself. When you have multiple VMDKs or Raw Device Mapped (RDM) disks, by default VMware uses just one virtual SCSI controller to access them all. This can quickly become a bottleneck: the OS only has one queue to handle all the disks, and the SCSI initiator (controller) can only talk to one disk at a time. ESX allows you to have up to four virtual SCSI controllers per VM, so you can ADD controllers to reduce the count of disks per controller. The OS then has up to 4x the paths to reach its disks, each disk has its own CTQ as well, and more I/O can be in flight at once. So your C: drive I/O won't block your SQL DB and log disks.
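As an illustration, adding a second virtual SCSI controller comes down to a few lines in the VM's .vmx file, or the equivalent in the vSphere Client by setting a disk's virtual device node to SCSI (1:0). A sketch, with a hypothetical disk filename:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "sql_data.vmdk"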

Also remember that VMFS volumes always need free space for logs, swap space and snapshots. I like to pad my basic volumes by at least 100GB, in case someone creates a VMware snapshot and forgets to delete or merge it. That snapshot will grow over time and can consume all free space.

That 100GB is subjective and appropriate for my needs. Your mileage may vary.
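Free space per datastore is easy to keep an eye on from the ESXi shell:

# esxcli storage filesystem list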

With EQL arrays, disabling Delayed ACK and Large Receive Offload (LRO) can improve performance.

I just posted some info on that today.

http://communities.vmware.com/message/2128529#2128529

This blog post, which has info agreed upon by VMware, Dell/EQL, etc., has more detail:

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

You should also disable Delayed ACK and Large Receive Offload (LRO).

How to Disable Delayed ACK

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100259...

HOWTO: Disable Large Receive Offload (LRO) in ESX v4/v5
From the ESX(i) shell, the following command will query the current LRO value:

# esxcfg-advcfg -g /Net/TcpipDefLROEnabled

To set the LRO value to zero (disabled):

# esxcfg-advcfg -s 0 /Net/TcpipDefLROEnabled

NOTE: a server reboot is required.
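On ESXi 5.x the same setting can also be read and changed via esxcli (equivalent to the esxcfg-advcfg commands above):

# esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
# esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0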


Info on changing LRO in the guest network:

http://docwiki.cisco.com/wiki/Disable_LRO

Hope this helps!

Regards,

Don

Josh26
Virtuoso

James Belknap wrote:

2-3 RAID5 LUNs, each with a VMFS on it.  Evenly split other VMs across these filesystems.

1-2 RAID 5 LUNs, each with a VMFS on it. Evenly split for large user data drives and backups.

Placing "VMs" and "User data" on different LUNs doesn't make sense. Your data is inside those VMs.

Placing backups on the same physical SAN you are backing up would also be a worry.

lobo519
Contributor

Since when can you have multiple RAID types on a single EqualLogic array?

You have to pick a single RAID type for each enclosure.

dwilliam62
Enthusiast

You can't have different RAID types in one physical member array.
