VMware Cloud Community
apconsultant
Contributor

HP MSA SAN Help

I need to get an ESX VI3 Cluster up and running quickly for testing next week, and around here, test boxes become production. So I want to configure the SAN and disks correctly the first time before we are 'stuck'.

We already purchased two MSA1500cs SAN kits. The dual-fabric SAN is up and running with dual paths and default fixed zoning. The MSAs have the new firmware and are running in Active/Active mode, each connected to two disk cages. Preferred Path Mode in the ACU is set to Manual, with the arrays in each disk cage on separate controllers. The LUNs on the ESX hosts use the Fixed path policy, with the preferred paths matching the MSA settings.

I've gotten this far thanks to vmtn and the VMware Links page. So is this a valid setup? Any potential problems with this configuration?

We will start off with 2 ESX hosts and expand that to 4 as we virtualize the 40 servers in our datacenter.

Thanks!

7 Replies
piacas
Enthusiast

I've done quite a few VI3 installs on MSAs. They work well for low-I/O VMs. When you say default zoning, does that mean you set up zoning, or is it left wide open? I would always zone the fabric. Also, are you using SSP? Make sure to enable that and set the host type to Linux. How are your disks configured? Did you take, say, 14 disks, put them in one array, and then carve LUNs from that, or did you create multiple smaller arrays with one LUN each? You can do it either way, but if you create fewer, larger arrays and then carve multiple LUNs from them, you spread the I/O over more disks.
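The "more disks behind each LUN" point can be seen with rough spindle math. This is only an illustrative sketch: the ~150 random IOPS per 10K SCSI disk figure is an assumption, not a number from this thread or from HP.

```python
# Rough, illustrative spindle math: why fewer, larger arrays can help.
# Assumption (not from the thread): ~150 random IOPS per 10K SCSI disk.
IOPS_PER_DISK = 150

def array_iops(disks: int) -> int:
    """Aggregate random-I/O ceiling available to LUNs carved from the array."""
    return disks * IOPS_PER_DISK

# Option A: two 7-disk arrays, one LUN each -> each LUN tops out at 7 spindles.
# Option B: one 14-disk array, two LUNs -> a busy LUN can draw on all 14.
print(array_iops(7))   # per-LUN ceiling with small arrays
print(array_iops(14))  # per-LUN ceiling with one large array
```

The totals are the same either way; the difference is that with one large array a single hot LUN isn't capped at half the spindles.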

apconsultant
Contributor

Thanks for the reply. Default Zoning is a setting on the HP SAN switch; I believe it means everything plugged into the switch can communicate with everything else. I use SSP on the MSAs to control which HBAs see which array.

The disk configuration is what I need help with. This will be a medium-to-high I/O SAN environment where I have much more disk space than needed, so I would like to configure the arrays and LUNs for the best performance possible.

Current setup:

One Active/Active MSA1500 is attached to two drive cages, and each drive cage is connected to a separate controller/channel (?) of the MSA via SCSI. Each drive cage is filled with 14x 300GB disks. I currently have four arrays configured on the MSA like this...

Array1 (Cage1 Disks 0-6)

Array2 (Cage1 Disks 7-14)

Array3 (Cage2 Disks 0-6)

Array4 (Cage 2 Disks 7-14)

In VMware they show up as LUN1, LUN2, LUN3, and LUN4 respectively, so in my setup a LUN corresponds to a whole array.

It's not too late for me to reconfigure this.

mitchellm3
Enthusiast

You may want to think about spreading each array across both shelves instead of keeping it on only one. This will spread your I/O across two SCSI channels instead of just one.

5474
Enthusiast

It also gives you some redundancy if half a shelf fails.

apconsultant
Contributor

Like this? Does spreading the array increase I/O performance, or is it only for redundancy?

Array1 (Cage1 Disks 0,7 Cage 2 Disks 0,7)

Array2 (Cage1 Disks 1,8 Cage 2 Disks 1,8)

...

Also, are disks 1-6 and 7-14 of a shelf on separate SCSI channels?

apconsultant
Contributor

Spreading I/O across multiple shelves appears to hurt performance a bit. Maybe this is due to the Active/Active firmware on the MSAs; I don't know.

What does appear to increase performance is larger arrays with smaller LUNs carved out. I've had the best results with a 14-disk array (one shelf) and 300-500GB LUNs.

I will probably use RAID 6 (ADG) with a spare for arrays that large.
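For comparison, here is the usable capacity and write overhead of RAID 6 (ADG) versus RAID 10 on one 14-disk shelf with a hot spare. This is standard RAID arithmetic; the write-penalty numbers are the usual rule-of-thumb values, not measurements from this MSA.

```python
# Usable capacity for a 14-disk shelf of 300 GB drives with 1 hot spare.
DISK_GB, SHELF, SPARES = 300, 14, 1
data_disks = SHELF - SPARES  # 13 disks left for the array

# RAID 6 / ADG: two disks' worth of capacity lost to parity.
raid6_usable = (data_disks - 2) * DISK_GB
# RAID 10: mirrored pairs need an even disk count, so one disk sits idle.
raid10_usable = (data_disks // 2) * DISK_GB

# Rule-of-thumb back-end I/Os per random host write:
raid6_write_penalty = 6   # read + rewrite data and both parity blocks
raid10_write_penalty = 2  # write both mirror copies

print(raid6_usable, raid10_usable)  # capacity in GB
```

RAID 6 gives you far more space, but every random write costs roughly three times as many back-end I/Os as RAID 10, which is why it can struggle under a write-heavy VM load.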

nobleone
Contributor

You'll probably find that RAID 6 doesn't provide adequate performance to support 4 hosts and 40 guests. We run RAID 10 on ours and it works pretty well. We have 6 hosts and about 35-40 guests.

Definitely keep the LUN size down to keep the disk queue length low. I'd recommend 200-250 GB in size.
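The queue-length argument comes down to how many busy VMs share one LUN's command queue. A rough sketch, assuming the common default HBA queue depth of 32 per LUN on ESX 3.x; the per-VM outstanding-I/O figure is an illustrative assumption, as it is entirely workload dependent:

```python
# Why smaller LUNs (fewer VMs each) help: all VMs on a LUN share
# that LUN's HBA command queue (typically depth 32 on ESX 3.x).
HBA_LUN_QUEUE_DEPTH = 32
AVG_OUTSTANDING_IO_PER_VM = 4  # assumption; depends heavily on workload

def max_busy_vms_per_lun(queue_depth: int, per_vm_io: int) -> int:
    """How many moderately busy VMs fit before commands start queueing."""
    return queue_depth // per_vm_io

print(max_busy_vms_per_lun(HBA_LUN_QUEUE_DEPTH, AVG_OUTSTANDING_IO_PER_VM))
```

Under these assumptions roughly eight busy VMs saturate one LUN's queue, which lines up with keeping LUNs small enough that only a handful of VMs land on each.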
