VMware Cloud Community
golemb
Contributor

LSI Logic SAS3041E-R

Hello

Quick question: the LSI Logic 3041E SAS controller is listed on the HCL. Does this also cover the 3041E-R? The LSI product brief says it uses the 1064E controller, which is also on the HCL. Has anyone used this card for an ESX installation? I just want to get RAID 1 support for the ESX OS, as my onboard Host RAID controller is not supported.

Thanks

16 Replies
saltyhacker
Contributor

Did you ever find out if this worked? I too am thinking about using this controller for its RAID 1 functionality. I know ESX wants SCSI, and the SAS chip presents itself as SCSI even if the drives on the back end are SATA (I believe?). I want to take advantage of the SAS-to-SATA II interoperability and save some $$ on the pair of drives.

Henno
Contributor

Me too.

dilidolo
Enthusiast

I have the SAS3041X-R, which is the PCI-X version. I didn't use it in ESX, but I have many LSI SAS/SCSI cards. From my experience, I can say it will work, as LSI integrated RAID is firmware-level, which does not require additional RAID drive other than SCSI/SAS drive. If the OS has the MPT driver for the chipset, in this case the 1064, the array will be recognized.

Henno
Contributor

Sorry but what do you mean by "does not require additional RAID drive other than SCSI/SAS drive"?

Also, I couldn't find anything sensible about MPT from Google. Is MPT some generic RAID controller? And is "1064" your chipset?

Excuse me for being ignorant. :)

dilidolo
Enthusiast

MPT is Fusion-MPT, which is what LSI calls their SCSI/SAS driver interface; the 1064 is the chipset on the SAS3041.

For non-hardware RAID, you need a RAID driver in the OS to make the array appear as a single disk. With LSI, if the OS has the SAS/SCSI driver for the chipset on the controller, it will see the array as a single disk.

Henno
Contributor

I was under the impression that SAS3041 is a hardware-based RAID controller. Is it not?

Henno
Contributor

You're probably saying that the SAS3041 is a hardware-based RAID controller and doesn't need an additional RAID driver beyond the chipset driver. Am I correct?

Henno
Contributor

I now get what you meant in your earlier post. You missed the last "r" in the word "driver", and I thought you were actually talking about a drive. :D

dilidolo
Enthusiast

It's firmware-based RAID, so it only supports RAID 0, 1 and 10.

Anyway, it will work.

dilidolo
Enthusiast

I run Openfiler with 4 x 1TB SATA disks connected to the card, and export datastores to ESX using NFS and iSCSI. It runs great.

Performance-wise, reads hit over 200MB/s and writes are about 80MB/s; good enough for my test lab.

Henno
Contributor

I was thinking of setting up something similar. Would it work with ESXi?

Henno
Contributor

I mean, would ESXi support NFS or iSCSI?

dilidolo
Enthusiast

You need to check the HCL for ESXi to see if the chipset is listed there; I think it's the 1064 for the PCI-X version and the 1064E for the PCIe version?

admin
Immortal

I was thinking of setting up something similar. Would it work with ESXi?

The SAS3041E-R "works" without any hacks or tweaks in ESXi -- that is to say, it works out of the box as of 3.5.0 (build-123629). I have the PCI Express version of this controller attached to two Hitachi 1TB (3.0Gb/s) drives.

I have noticed the following caveats:

When I install ESXi onto a RAID-1 (mirror) configuration, where ESXi resides on the first several hundred megabytes of the disk, it seems that every three or four reboots, the array is degraded and/or out-of-sync.

The system health indicators in VirtualCenter detect the alarm condition on the card, warning me that the array is degraded, and which disk is out of sync -- so it's at least talking to the controller and picking up on that.

Syncing the array, as far as I can tell, needs to be done in the controller's BIOS menu -- takes about 8 or 9 hours to sync the two disks offline. I have read other reports that you must also go into the controller's BIOS menu to replace a failed mirror disk, and that it takes about as long.
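Those rebuild times are consistent with simple arithmetic. As a back-of-the-envelope sketch (the 35 MB/s sustained copy rate is an assumed figure picked to match the reported 8-9 hours for a 1TB mirror, not a measured spec for this card):

```python
# Back-of-the-envelope estimate of RAID-1 resync time.
# Assumption: ~35 MB/s sustained copy rate, chosen only to line up with
# the reported 8-9 hour sync of a 1TB mirror; real rates vary by drive
# model, firmware, and concurrent load.

def rebuild_hours(capacity_gb: float, rate_mb_s: float) -> float:
    """Hours needed to copy the full capacity of one mirror member."""
    seconds = (capacity_gb * 1000) / rate_mb_s  # GB -> MB, then divide by MB/s
    return seconds / 3600

print(round(rebuild_hours(1000, 35), 1))  # roughly 7.9 hours for 1TB at 35 MB/s
```

The estimate scales linearly with capacity, so a larger mirror takes proportionally longer to resync offline in the controller BIOS.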

There are drivers and software utilities for Windows and Linux, which will monitor / repair the arrays, but these are obviously not included in stock ESXi, and I haven't dug further (yet) to see whether they can be installed in the service console under ESX....

(Yes it was a new install with no VMs. I had nothing invested in its data, but I did want to know how long this would take in the "real world" -- It's a real pain in the a** when it does this randomly, and frequently for no apparent reason).

I updated my system BIOS (Dell Poweredge 830, now has rev A04) and the firmware on the LSI card. I have made the following observations:

1) If I install ESXi on USB or the PERC (SCSI), dedicating my entire SATA RAID-1 mirror to the VMFS (instead of having BOTH ESXi AND my VMFS reside on the SATA), then it seems to unmount / sync the LSI cleanly, on a consistent basis.

(This is all purely anecdotal of course. I suspect that when ESXi lives elsewhere, it unmounts the VMFS and syncs it, then initiates shutdown -- it appears to shut down the USB or PERC considerably more cleanly than it seems to shut down and sync the SATA....)

2) If I delete the RAID configuration and use it as a plain non-mirrored disk, it works, syncs, and unmounts fine.

3) Since I went ahead and did THAT, I have the option (from LSI) of flashing RAID firmware, or Initiator-Target firmware, onto the controller. The card has a significant performance increase with the Initiator-Target firmware in place, and this is how I am presently using it.

(Of course, since it isn't a redundant array, this raises the question: why the heck don't I just do a crude hack to use the 830's on-board SATA?)

To that point -- yes, I am probably going to get a couple of IDE flash cards and hack ESXi to boot off of those, diskless. If I can't figure out a driver/software solution in the Service Console (this is not mission critical; I could pick up an Adaptec 3805 for $250, heh), then I might drop this controller and the disks into an iSCSI target running... well, you say OpenFiler (Linux), I say FreeNAS (FreeBSD) -- but, whatever. At least on the iSCSI target, I can run the appropriate drivers and monitoring utilities to stay on top of the mirror's integrity as it's exported to ESX.

re iSCSI: just FYI, I'm crossing over a couple of Compaq SX-1000 (e1000) fiber-optic NICs between ESX and the iSCSI target specifically for the mount -- it's almost as good as the real thing.

Hope that gives you some info to start off with.

-R

sdguero
Contributor

rgrodevant et al,

I'm seeing similar behavior from a small percentage of VMware 3.5.0 servers with the LSI SAS3041E-R (FW v1.28.02) running RAID-1 at a customer site. We are receiving reports of RAID desynchronization, degradation, and even failure (after a few days of being degraded).

Does anyone know if there have been any updates on this issue? Does it happen in ESX 4.0 as well?

I don't see any more information in the forums or in any VMware documentation. Any help is appreciated.

Ryan

dilidolo
Enthusiast

I'm not an expert on this, but here is my guess: it has nothing to do with the OS or even the controller.

What type of disks are used? Consumer SATA, enterprise SATA, single-port SAS, or dual-port SAS? Disks make a big difference in a RAID array. Enterprise SATA and SAS have better error detection and recovery, but SATA is half-duplex while SAS is full duplex, meaning SATA can only read or write at any one time, while SAS can read and write at the same time.

Because SATA is half-duplex and single port, there is no way to talk to the drive to see if it is still functional while it is attempting to re-read the data. This can result in the drive being marked as “bad” or “missing” and taken offline, even if the drive was fully functional.
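A toy model of that failure mode (the timeout values here are typical illustrative numbers, not specifications for the SAS3041E-R or any particular drive):

```python
# Illustrative model of the timeout mismatch described above.
# Assumed, typical figures: a consumer SATA drive can spend on the order
# of minutes retrying a bad sector internally, while many RAID firmwares
# mark a drive failed after only a few seconds of silence.

def drive_survives(recovery_secs: float, controller_timeout_secs: float) -> bool:
    """True if the drive answers before the controller gives up on it."""
    return recovery_secs <= controller_timeout_secs

CONTROLLER_TIMEOUT = 10    # assumed controller patience, in seconds
CONSUMER_RECOVERY = 120    # deep internal retries, no time limit
ENTERPRISE_RECOVERY = 7    # time-limited error recovery gives up early

print(drive_survives(CONSUMER_RECOVERY, CONTROLLER_TIMEOUT))    # False: dropped from the array
print(drive_survives(ENTERPRISE_RECOVERY, CONTROLLER_TIMEOUT))  # True: stays online
```

This is the usual argument for enterprise drives with time-limited error recovery: they give up on a bad sector quickly and report the error upstream, instead of going silent long enough for the firmware to mark a healthy drive as failed.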
