VMware Cloud Community
glee9
Contributor

Best Practices for HDS AMS SAN, LUN Sizing, Multipathing, etc.

Hi all,

I've got 11 ESX hosts connected to an HDS AMS 500 via Emulex LP-11000e HBAs on ESX 4.0 Build 164009, and am running into major latency issues when doing moderate to heavy I/O on a 200GB LUN.

I'm only running one VM on the host, and have increased the LUN queue depth to 64, though ACTV shows only 32. QUED=0, DAVG=300+, KAVG=0.00, QAVG=0.00.
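In case the details matter, this is roughly how I've been checking and setting the queue depth from the service console. The module name (lpfc820) and parameter name are what I believe apply to the Emulex driver on ESX 4.0, so treat them as assumptions:

    # Confirm which Emulex module is loaded (the name can differ by driver version)
    vmkload_mod -l | grep lpfc

    # Show the current module options, then set the per-LUN queue depth (reboot to apply)
    esxcfg-module -g lpfc820
    esxcfg-module -s "lpfc0_lun_queue_depth=64" lpfc820

    # Allow the VMkernel to keep the same number of requests outstanding per LUN
    # (this setting only matters once more than one VM shares the LUN)
    esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding

    # Then watch DQLEN/ACTV/QUED/DAVG per device in esxtop (press 'u' for the device view)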

I've tried all three multipathing methods (MRU, Fixed, Round Robin) with the same results. My SAN admin is insisting that smaller LUNs (20GB, concatenated) would be better, but my understanding from the forums and documentation is that spanning a VMFS volume across multiple extents means a greater risk of data loss if any one of those extents fails (particularly the first extent). Also, 50 LUNs concatenated to create a 1TB VMFS volume seems like an administrative headache.
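For reference, switching the path selection policy from the service console looks roughly like this on ESX 4.0; the naa device ID below is only a placeholder:

    # See which path selection policy each device is currently using
    esxcli nmp device list

    # Switch a LUN to Round Robin (substitute the real device ID)
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

    # Optionally make Round Robin switch paths more often than the default 1000 IOPS
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1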

Here is the info I've received about the SAN config:

> Are the paths through the same ports on the array?

There are 2 paths, primary and secondary; each path has 2 interfaces to the SAN, which operate in a round-robin manner.

> Are the LUNs on two different storage processors? (and they are Active/Active, yes?)

The LUNs are all on a single controller; the AMS 500 does not share load between both processors. However, we split RAID groups between both controllers to achieve some rough load balancing.

> How many disks (spindles)?

RAID 5 5+1 SATA

> How many RAID arrays?

1

> Caching enabled/disabled? (read and/or write?)

There's tons of cache.
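On the ESX side, I can at least list every path and the array target port it goes through, which should show whether both interfaces really are being used:

    # Long listing of all paths: adapter, array target port, LUN, and path state
    esxcfg-mpath -l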

This article looked interesting, but seemed to pertain to the many-small-LUNs vs. fewer-large-LUNs scenario.

http://communities.vmware.com/message/1206061#1206061

Also, there are other non-ESX hosts attached to the same array, using the many-concatenated-LUNs setup, and they are doing well with I/O.

I'm very new to the SAN world, and would very much appreciate any opinions or suggestions as to best practices.

Thanks, all!

-g

3 Replies
MHAV
Hot Shot

Well, first of all, congratulations on having an AMS, because technologically it's probably one of the best midrange SANs. It's not as performant as an AMS2000, but it's good.

Usually you use LUNs of around 300-500 GB for ESX servers. If you need a larger volume, for example for a VM that hosts a large database, you use a raw device mapping for that particular VM. The other possibility is to use the Add Extent function within the datastore configuration, where you add a couple of your 300-500 GB LUNs together into a larger volume (but not bigger than 2 TB).
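Creating the RDM pointer for such a VM can be done from the service console with vmkfstools, roughly like this; the device name and paths below are only placeholders:

    # Virtual-compatibility RDM that points at the raw LUN, stored on an existing VMFS datastore
    vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
        /vmfs/volumes/datastore1/dbvm/dbvm_rdm.vmdk

    # Use -z instead of -r if you need physical compatibility mode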

RAID 5 (5+1) is OK, and a single RAID group is fine as well, as long as you have enough spindles within it.

We recommend, as do many other people I've heard from, not putting more than 8 ESX hosts on one LUN (because of SCSI reservations).

Regards

Michael Haverbeck

If you find this information useful, please award points for "correct" or "helpful".

Check out my blog: www.the-virtualizer.com
glee9
Contributor

Hi Michael - good info, thanks!

My SAN guy is telling me that 300GB is too large, and that multiple 20GB LUNs would be better. His reasoning on I/O makes sense, in that it spreads the queues out over several LUNs as opposed to overloading a single LUN. Thing is, I'd like to stick with large LUNs (less admin overhead... for me :-> ). How do folks get the performance they need with single large LUNs?
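If I follow his logic, it comes down to outstanding commands; assuming a per-LUN queue depth of 32 (which matches the ACTV I'm seeing), the back-of-the-envelope numbers would be:

    # Same spindles behind both layouts, only the number of queues changes
    echo $(( 10 * 32 ))   # ten small LUNs -> up to 320 commands in flight from the host
    echo $((  1 * 32 ))   # one large LUN  -> capped at 32 commands in flight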

Cheers,

NicholasFarmer
Enthusiast

I've had the same question in my mind for a while.

If a 20 GB LUN spans 14 disks in the array and another 300 GB LUN spans the same 14 disks, how is there any increase in performance when it's the same disks, all through the same SAN interface?

I could see an increase if the LUNs were on different spindles, thus increasing the disk count, but depending on how the SAN admin spans the LUNs, I don't get how you gain performance by having a lot of small LUNs across the same disks as one larger LUN.

Anyone know?

I guess the total solution is for everyone to have ten racks of Hitachi Tagma storage.
