VMware Cloud Community
erickmiller
Enthusiast

MSA2000 feedback

Anyone used the MSA2000 yet? I know the availability is quite low. I didn't know it was a re-branded Dot Hill unit.

Thought I'd pass on a link to a couple people who have used it (not necessarily with ESX):

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1231639

Eric K. Miller, Genesis Hosting Solutions, LLC
http://www.genesishosting.com/ - Lease part of our ESX cluster!
15 Replies
Modrus
Contributor

Hi Eric

I've just installed one here in the UK and it's working really well. The customer is using two ESX hosts connected directly to the unit, with around 1.8TB of shared storage.

VMotion is not configured as they are using Foundation licensing, but SP failover works 100% - you just have to enable 'Interconnected' under 'Host Port Configuration' so that both hosts can see all of the storage via both controllers. The install is pretty quick, although it did take about 10 hours to build the RAID5 (7 x 300GB SAS disks). You can get basic stats (IOPS and bandwidth) out of the array, and performance seems good. No problems so far, although I agree with the HP post that the rail kit is awful.
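For what it's worth, the numbers in the post line up: RAID5 keeps one disk's worth of parity, so with 7 disks you get 6 disks of usable space. A quick sketch (plain arithmetic, assuming no hot spare and ignoring formatted-capacity overhead):

```python
def raid5_usable_gb(disks: int, disk_gb: int) -> int:
    """RAID5 stores one disk's worth of parity, so usable space is (n-1) disks."""
    if disks < 3:
        raise ValueError("RAID5 needs at least 3 disks")
    return (disks - 1) * disk_gb

# 7 x 300GB SAS disks, as in the post above
print(raid5_usable_gb(7, 300))  # 1800 -> the ~1.8TB of shared storage mentioned
```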

Simon

jaygriffin
Enthusiast

I am using a dual-controller 2012i with 7 ESX servers and 48 virtual servers. I have HA and DRS configured, and I have used VMotion and SVMotion extensively. Works well.

J-D
Enthusiast

Modrus,

do you have experience with the MSA 2000 behind SAN switches? In your previous post you used directly attached ESX hosts, and in that case you need to enable the host port interconnects.

I am using the MSA 2000 with SAN switches, and I have a problem when I disconnect all cables to one controller: there is just no failover!

I'll create a new post for this... The weird thing is that I was expecting to see 4 paths to the same LUN, but I only see two. I have 2 LUNs; the paths shown are vmhba1:0:1 and vmhba2:0:1 for the first LUN, and vmhba1:1:2 and vmhba2:1:2 for the second, so it's as if one controller has taken ownership of one LUN and the other controller of the other.
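To make that pattern easier to see, here's a small illustrative sketch (plain Python, nothing ESX-specific; the path names are the ones from the post) that groups ESX-style path names vmhbaA:T:L by LUN number:

```python
def group_paths_by_lun(paths):
    """Group ESX-style path names (vmhbaA:T:L) by their LUN number."""
    by_lun = {}
    for p in paths:
        hba, target, lun = p.replace("vmhba", "").split(":")
        by_lun.setdefault(int(lun), []).append(p)
    return by_lun

paths = ["vmhba1:0:1", "vmhba2:0:1", "vmhba1:1:2", "vmhba2:1:2"]
print(group_paths_by_lun(paths))
# {1: ['vmhba1:0:1', 'vmhba2:0:1'], 2: ['vmhba1:1:2', 'vmhba2:1:2']}
# => each LUN shows only 2 paths (one per HBA), not 4
```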

O_o
Enthusiast

We're using it in conjunction with a c7000 blade enclosure from HP, and it works well - 6 blades running ESX 3.5. We had some issues with power supplies, but they're all fixed now (it could have been the datacenter's power grid, though).

J-D
Enthusiast

Are you using SAN switches? Have you tried unplugging one MSA controller to check redundancy?

How many paths do you see to your LUNs?

O_o
Enthusiast

The switches are in the blade enclosure, yes - Brocade switches. I haven't tried unplugging one MSA controller; I'll see if I can test that. I still need to check the paths, but I can't do that right now - I'm at another customer, and they aren't using any of this setup, just local storage.

J-D
Enthusiast

Just found this in the manual:

Using Host Port Interconnects

When the internal connections between host ports are enabled through SMU, host port 0 on each controller is internally connected to host port 1 on the other controller. This provides redundancy in the event one controller fails (failover) by making volumes owned by the failed controller accessible on the surviving controller.

Enable port interconnects when controller enclosures are attached directly to hosts and high availability is required, or when switch ports are at a premium and fault tolerance is required but highest performance is not.

When ports are not interconnected, volumes owned by a controller are accessible from two of its host ports only. Use this default setting when controller enclosures are attached through one or more switches, or when they are attached directly but performance is more important than availability.

=> So I interpret this as "never use interconnect ports" when using SAN switches, which is confirmed by more info in that PDF.

I did try enabling this and did a rescan, but the rescan seemed to get into a loop.

With an EVA I see 4 paths, and I thought I would see the same here. With an MSA 1500 there are only 2, but that's normal since each controller there has only one fibre cable.


J-D
Enthusiast

I think I know why I only see 2 paths. An MSA 2000 (and an MSA 1500) with 2 controllers in an active/active setup is not really active/active; the controllers share the load. So if you have 2 LUNs, one controller takes ownership of one LUN and the other controller owns the other.

If you unplug both fibres to one controller, this will not prompt the second controller to take over the LUNs "owned" by its colleague. Only a failure of a controller (like unplugging the controller itself) will cause this transfer of ownership.

I tested this and can confirm that unplugging a controller didn't cause a crash; the VMs kept running happily. Unplugging 2 fibre cables, however, is something you shouldn't do. In a real-life scenario, though, I think chances are higher that a controller or SAN switch fails than that 2 fibre cables break.
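The behaviour described above can be sketched as a toy model (an assumption-laden illustration, not MSA firmware logic): LUN ownership only moves when the owning controller fails, not when its host ports merely lose their cables.

```python
class Controller:
    def __init__(self, name):
        self.name = name
        self.alive = True            # controller itself is up
        self.ports_connected = True  # fibres to its host ports are plugged in

def reachable_luns(ownership, controllers):
    """A LUN is reachable if its owning controller is alive and cabled, or if
    the owner has *failed* (not just lost cables) and the partner took over."""
    reachable = set()
    for lun, owner in ownership.items():
        partner = next(c for c in controllers if c is not owner)
        if owner.alive and owner.ports_connected:
            reachable.add(lun)
        elif not owner.alive and partner.alive and partner.ports_connected:
            reachable.add(lun)  # failover: partner takes ownership
        # cables unplugged but controller alive: no failover, LUN unreachable
    return reachable

a, b = Controller("A"), Controller("B")
ownership = {"LUN1": a, "LUN2": b}

a.ports_connected = False  # unplug both fibres to controller A
print(sorted(reachable_luns(ownership, [a, b])))  # ['LUN2'] - no failover!

a.ports_connected = True
a.alive = False            # pull controller A itself
print(sorted(reachable_luns(ownership, [a, b])))  # ['LUN1', 'LUN2'] - failover
```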

I hope the above is correct...

J-D
Enthusiast

I have to revise my reply above with bad news... When I removed one controller, the one LUN it was controlling didn't house VMs but an RDM volume, and I didn't check whether that volume was still reachable after unplugging.

Last week, on a test setup with an MSA 2000, we unplugged one controller and the LUN went away. We didn't see it come back.

Then yesterday the original customer with an MSA 2000 (fibre controllers: 2012fc) had a hardware problem with the secondary controller. The RDM volume disappeared and didn't return automatically on the secondary controller. We had to shut everything down and reboot.

HP arrived with a new controller. We configured it, plugged it in, and the RDM volume disappeared again. Instead of shutting down, this time we did a rescan at the vmhba level (in the GUI, right-click on the HBA and rescan there); the top-level rescan didn't work. After the rescan the RDM LUN became visible again.

I don't think this is an RDM issue but something with the MSA, as we encountered this on a test setup with only VMFS LUNs. HP said it was VMware's fault for not being able to wait long enough to see the new path; the HP technician said every OS should be able to last 30 seconds without storage and cache accordingly.

Either VMware isn't rescanning fast enough or isn't able to cope with this, but this MSA is on the supported list! What is going wrong here?

We were using the default path policy, which is Fixed. Is MRU better? I read this somewhere but don't know why it would be. Fixed is explained as "use the preferred path when available"; MRU just uses the most recently used path.
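To illustrate the difference (a hedged sketch of the general concept, not VMware's actual pathing code): with Fixed, I/O fails back to the preferred path as soon as it returns; with MRU, I/O stays on whatever path it last used until that path fails. That automatic failback is why MRU is commonly suggested for arrays where ownership moves between controllers - Fixed can bounce I/O back to a path the array no longer prefers.

```python
def next_path(policy, current, preferred, available):
    """Pick the path for the next I/O under a Fixed or MRU policy.

    Fixed: go back to the preferred path whenever it is available.
    MRU:   keep using the current (most recently used) path until it fails.
    """
    if policy == "fixed":
        if preferred in available:
            return preferred        # automatic failback to preferred path
        return current if current in available else sorted(available)[0]
    if policy == "mru":
        if current in available:
            return current          # stick with the last-used path
        return sorted(available)[0] # fail over to any surviving path
    raise ValueError(policy)

# preferred path (vmhba1:0:1) failed earlier, I/O moved to vmhba2:0:1,
# and now the preferred path has come back:
avail = {"vmhba1:0:1", "vmhba2:0:1"}
print(next_path("fixed", "vmhba2:0:1", "vmhba1:0:1", avail))
# 'vmhba1:0:1' - Fixed fails back to the preferred path
print(next_path("mru", "vmhba2:0:1", "vmhba1:0:1", avail))
# 'vmhba2:0:1' - MRU stays where it is
```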

Can someone else please try out the redundancy and unplug one of the controllers? You'd better wait 30 seconds before concluding that the LUN is gone.

Thanks in advance for feedback.

Krede
Enthusiast

Hi J-D

Did you manage to get failover to work?

ChristianS
Contributor

Hi,

there is an issue with RDMs and SAS disks on the MSA2000. The problem is that ESX does not know whether a SAS RDM is local or shared storage, so you need to do a special configuration.

see link:

Note: This procedure is unique to SAS implementations. ESX cannot determine if the SAS LUN is local or shared, and therefore it must be configured through the CLI.

CLI commands

For a Physical Compatibility Mode RDM:

vmkfstools -z /vmfs/devices/disks/vmhba#:#:#:0 /vmfs/volumes/VOLNAME/VMNAME/DISKNAME.vmdk

Note: Be certain to use the Canonical Path (vmhba4:8:3:0) instead of the Physical Path (vmhba4:10:3:0).

marcin8
Contributor

Hi. I have a quick question. You have been using the 2012i for a while, so I'm guessing you have tried to update the firmware. I will start using my box next week, and before that I would like to update it. I found the newest firmware, J210P12-01 (16 Mar 2009) - the instructions are simple and straightforward - but I also found this: "Online ROM flash component for Linux - HP SW 2000 Modular Smart Array Drive Enclosure I/O Module":

http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=uk&prodTypeId=12...

Do you know what it is, whether it applies to the 2012i box, and whether (and how) I should install this upgrade?

Thanks /Marcin.

A13x
Hot Shot

Does the MSA2000 still have this issue with vSphere? It doesn't seem right that in order to map a LUN you must make a VMDK mapping file and map the LUN to an existing VMDK. What happens if the LUN has data on it - must that data be copied to a VMDK and then mounted? Do you get VMDK performance or direct SAN performance?

I assume this is the MSA2000 G1 and not the G2.

MHAV
Hot Shot

We just bought 7 MSA2324fc units - after upgrading the BIOS they look fine. Each unit has 2 controllers that we connect directly to the 2 FC cards in a DL380 G5.

We use 300GB SAS drives in a RAID10. The only bad part is that you can't use more than 16 of the maximum 24 disks to build a RAID group.

Regards

Michael Haverbeck

Check out my blog: www.the-virtualizer.com
wennit
Contributor

Does the RDM issue still apply with the G2 series or has it now been sorted?

I need to create large 5TB/10TB arrays, and I think the best way is through RDM.
