VMware Cloud Community
mavermc
Contributor

Dell PERC 6/i not showing up?

I've got a RAID 6 array on a Dell PERC 6/i in a Supermicro H8DGU,

running a trial of ESXi 5.5 U2.

No matter what I try, I can't get the Dell RAID card to show up under storage adapters. I'm also not seeing any devices available for passthrough, if that means anything.

I've got AMD-V enabled, and once the fast initialization finished (it's doing the background initialization now) the array popped up in the installer as a possible drive to install ESXi on. That tells me ESXi does see the card and the RAID, but once I'm in vSphere nothing shows up. I haven't found many posts about people unable to pick this card up, so I'm guessing the problem is just me. Any ideas of things to check?

19 Replies
cykVM
Expert

The PERC 6/i should be based on the LSI SAS 1078 chipset. You need to install the LSI CIM providers for basic health data to show up in ESXi.

Could you take a screenshot of your "storage adapters" view and post it here?

mavermc
Contributor

ScreenClip.png

The next step was going to be adding the extra drivers for health status, but for now I was just trying to get the card to show up so I could start using it, because I have a feeling ESXi isn't detecting it. The adapters in the screenshot look like they belong to the 6 SATA ports on the mobo (not sure why the naming convention is out of whack). I'd imagine, like other systems I've set up, that a whole new controller should show up whether or not the RAID was ready, but out of all the BIOS settings I've tested (and 1 or 2 settings in the RAID BIOS) I've never gotten it to show up here.

I did have to do some screwing around to get it to show up during the ESXi install. I wasn't planning on using the RAID for my datastore, but I figured the first step would be to get the BIOS to properly hand off to the OS, and during the install procedure it looked like that happened.

cykVM
Expert

Was the PERC 6/i installed ("plugged in") when you installed VMware on that server? Have you checked whether the driver is loaded, e.g. on the CLI/ssh with "esxcfg-scsidevs -a"?

Otherwise you may have to install the megaraid_sas driver; see for example the VMware Compatibility Guide: I/O Device Search.
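As a sketch of what to look for in that command's output (the sample below is made up for illustration; the adapter names, PCI addresses, and descriptions on a real host will differ, but "megaraid_sas" is the driver name in question):

```shell
# Made-up sample of "esxcfg-scsidevs -a" output; on the host you would
# pipe the real command instead. Columns: adapter, driver, link, UID, description.
sample='vmhba0 sata-ahci link-n/a sata.vmhba0 (0:0:17.0) AMD SB700 AHCI
vmhba1 megaraid_sas link-n/a unknown.vmhba1 (0:4:0.0) Dell PERC 6/i'

# Print adapter -> driver pairs, then flag whether megaraid_sas claimed one.
printf '%s\n' "$sample" | awk '{print $1, "->", $2}'
if printf '%s\n' "$sample" | grep -q 'megaraid_sas'; then
  echo "an adapter is claimed by megaraid_sas"
else
  echo "no megaraid_sas adapter -- driver not loaded or card not seen"
fi
```

If only the onboard SB700 ports show up here, the driver never claimed the card.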

cykVM
Expert

Additionally, you may check whether the firmware of the PERC needs an update.

mavermc
Contributor

Firmware is up to date at 3.5a

I ran the command you suggested over SSH, and all that came back was the same 6 SB700 devices. I'm looking up how to install the megaraid_sas drivers now. Will these drivers still work OK with my RAID being SATA?

cykVM
Expert

SATA drives should be ok.

For driver installation you may refer to: VMware KB: Installing async drivers on VMware ESXi 5.x and ESXi 6.0.x

mavermc
Contributor

I ran the command and got this back:

/tmp/drivers # esxcli software vib install -v /tmp/drivers/scsi-megaraid-sas-06.803.52.00-1vmw.550.0.0.1331820.x86_64.vib

Installation Result

   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

   Reboot Required: true

   VIBs Installed: VMware_bootbank_scsi-megaraid-sas_06.803.52.00-1vmw.550.0.0.1331820

   VIBs Removed: VMware_bootbank_scsi-megaraid-sas_5.34-9vmw.550.2.33.2068190

   VIBs Skipped:

All looked OK, but after the reboot there wasn't any difference; it still doesn't show up in storage adapters. I'm still thinking this is an issue in the BIOS. Or could it be I have a RAID card whose BIOS works and can build a RAID, but that is broken and can't talk to an OS? Before I set up ESXi I set the BIOS to optimal defaults; the only thing I changed was turning off the SATA-to-IDE emulation and changing the SATA mode from native IDE to RAID (DotHill), but that was all for the internal controller, which should have nothing to do with the Dell card, right?
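As a side note, the target release is encoded in the VIB file name itself, so you can sanity-check which ESXi version a package is built for. A small parsing sketch (the file name is the one from the output above; the parsing and the interpretation of the "vmw.550" token as ESXi 5.5.0 are my reading of VMware's VIB naming):

```shell
# VIB file name verbatim from the install output above.
vib='scsi-megaraid-sas-06.803.52.00-1vmw.550.0.0.1331820.x86_64.vib'

# The digits after "vmw." name the target ESXi release: 550 = ESXi 5.5.0,
# so this particular package appears to be built for 5.5, not 6.0.
rel=$(printf '%s' "$vib" | sed -n 's/.*vmw\.\([0-9][0-9]*\)\..*/\1/p')
echo "target release token: $rel"
```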

cykVM
Expert

Yes, all those BIOS settings are for the onboard stuff. Does the PERC show up on boot and give you the CTRL+R message to go into the RAID config tool?

cykVM
Expert

Another thing: is there a battery installed on the controller, and do you have a chance to test another OS to see whether the controller is actually working?

mavermc
Contributor

I guess the next step is to try it in another computer. It has a battery (I don't know the charge status, but it's been plugged in for a few days).

And the Dell PERC BIOS shows up; that's how I set up the RAID 6 and got it initialized. I have another Supermicro board (dual Xeon) that I can try it on. Does anyone know for sure whether the Dell PERC will show up in storage adapters even if it doesn't have an active RAID configured? While I can test it on another system, I'd rather not go through the hassle of moving all the cabling and drives to another server if I don't have to.

cykVM
Expert

It should definitely show up under "storage adapters" and on the CLI with the above command, even if no disks are connected to it.

But for testing you may set up a RAID 0 or JBOD config with just one disk, if you have a spare one.

Another question: is the RAID 6 volume listed in POST (after the CTRL+R message is shown)?

mavermc
Contributor

I couldn't find a way to pause the POST screen to check, and it goes by too fast. But when I enable the switch in the RAID controller BIOS that says something like "Enable Boot Support", my RAID shows up as a boot device in the Supermicro BIOS, and it shows my RAID 6 in that menu. I guess if I installed ESXi to the RAID it would be forced to recognize it, haha.

cykVM
Expert

But I guess even with "Boot Support" enabled on the PERC it still doesn't show up in ESXi?

I just checked your motherboard's specs at Supermicro. It uses riser card(s) for the PCIe slots. Just to rule that out, because there's a lot of cabling on the PERC: is the card seated well in the riser port?

You may check whether ESXi sees the device at all by running "lspci -vvv" on the CLI/ssh. This lists the devices seen by the kernel; if the PERC is not listed there, I would consider the PERC defective, or suspect some strange PCIe connection issue.

I have a PERC 6/i running fine at a customer site in an older Dell PowerEdge 2950 with Dell-customized VMware 5.5 Update 2, but on SAS disks. That's all Dell on the software and hardware side, though, and I presume the Dell-customized VMware image only runs on Dell systems.

I found several reports that the PERC 6/i can act up if you are using disks not certified by Dell. Also, the PERC 6 controllers don't seem to support drives larger than 2TB (see for example: HDD Support for 2.5TB, 3TB Drives and Beyond - TechCenter Extras - Wiki - TechCenter Extras - Dell C...).

So you could run the test of disconnecting all your drives first and removing the RAID config from the PERC's firmware, then see whether ESXi detects and lists it. If that doesn't help either, I would test on different hardware/mobo.
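To narrow the lspci output down, something like the following could be used (the sample line is made up for illustration; the PCI address and the exact device string on your host will differ):

```shell
# Hypothetical single line of "lspci" output for the PERC; on the host:
#   lspci -vvv | grep -i raid
sample='0000:04:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078'

if printf '%s\n' "$sample" | grep -qi 'raid'; then
  echo "controller is visible on the PCI bus"
else
  echo "controller not seen by the kernel"
fi
```

If nothing RAID-related turns up even here, the problem sits below the driver layer.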

mavermc
Contributor

I got it, thanks to you! I was suspicious of the riser card, because it isn't the riser that came with the mobo; it's a Supermicro riser, but I pulled it out of an Intel box. It seemed secure to me, but just to test I put the RAID card in the 2nd slot, and it booted up fine, and the controller and RAID are showing up in ESXi.

Since you have experience working with this, I wanted to ask for one last piece of advice.

My application isn't high performance, mostly sequential reads. So for the sake of keeping it simple, I was just going to run something like Openfiler to deliver the RAID NFS-style to my other CentOS installs on the VM. I don't really care to set up a whole iSCSI thing for this small application. My question is: I want emails sent to me if something happens with the RAID, since this system mostly won't be monitored. I noticed the LSI health status isn't showing up in ESXi, which may have something to do with the wrong driver being installed (notice from the output I posted above, I think I installed an ESXi 6 driver on the 5.5 machine). Will Openfiler or some other file-serving OS be able to peek at the RAID? Or does ESXi have some kind of support for notifications?

(FYI, I'm still getting "host does not support passthrough configuration", if that is part of the possible solution.)

Thanks, man, you've been a huge help. I'm happy to see the RAID finally come to life.

cykVM
Expert

Glad I could help.

First of all, personally I wouldn't use Openfiler anymore; there has been no update or active development for some years, and support in their communities is pretty much non-existent now.

There are some other options like StarWind or maybe even FreeNAS. Another community member listed some of the options here: Re: openfiler configuration.

For the PERC monitoring you may try LSI's MegaRAID Storage Manager (MSM), but I'm not sure whether it works without issues on the PERC. You will have to install the CIM providers from LSI for MSM to work.

cykVM
Expert

I'll look into the passthrough thing later; I think that might be due to hardware restrictions. I have no experience with Opteron systems. With Intel boxes you have to enable VT-d to pass a PCI device through; on AMD systems I think it's called AMD-Vi/IOMMU. Maybe passthrough even doesn't work with a RAID 5 or RAID 6 on the PERC 6/i and only with RAID 0 or RAID 1.

For the health status in the host view you also need the CIM providers installed; that has nothing to do with the drivers. But anyway, I would probably switch back to the previously installed driver.

mavermc
Contributor

Finally got health status! This is great, except it says one of my 8 drives is already bad; but besides that, it's great to see the health status working. I tried to set up MegaRAID Storage Manager on a fresh Windows 2012 install, and I got it to discover the host (which I hear is the hardest part), but when the software opens it must not be looking at the right device, because there's no device, just "bus 3, dev 0", and it doesn't show any virtual or physical drives; zeros across the board.

So it could be a driver issue, or it could be more complicated than that. But because the data does seem to be there in the health status, I'm going to start by trying to troubleshoot the storage manager. First I figure I'll set up a script on the ESXi host to email me about hardware health changes, so at least I can get up and running with monitoring the RAID.
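For the record, that email-on-change script can follow a generic diff-the-last-state pattern. The sketch below is mine, not something from the thread: get_health is a placeholder (on ESXi the real sensor data comes via CIM, not a simple CLI command), and actually sending mail from the host needs extra tooling, so only the change-detection skeleton is shown:

```shell
# Compare the current health snapshot against the last saved one and
# flag a change. get_health is a placeholder for the real query.
STATE=/tmp/raid_health.last
get_health() { echo "RAID status: Optimal"; }   # placeholder query

get_health > /tmp/raid_health.now
if [ -f "$STATE" ] && ! cmp -s "$STATE" /tmp/raid_health.now; then
  echo "health changed -- trigger the notification email here"
fi
mv /tmp/raid_health.now "$STATE"   # remember the snapshot for next run
```

Run from cron, the "health changed" branch is where the mail call would go.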

Without passthrough (I'm starting to doubt that it's possible on my motherboard), my only option would be to create a datastore on the RAID and hand it to a guest, correct? I haven't found a distro yet for providing that storage to the rest of the network; maybe I'll just set up a minimal CentOS and share via NFS that way.

cykVM
Expert

I think it's not easily possible to mail the health status directly from the ESXi host. Usually vCenter is used for that (triggered alarms etc.), but that needs a license.

MSM is kind of tricky to install; sometimes it needs a certain driver version, and other times only an older version of MSM works correctly.

It's also my guess that passthrough does not work with your mobo/CPU. What you could do is create a datastore on the RAID and put one large virtual disk (or several smaller ones) on that datastore. You can't pass through the datastore itself.

MaxCraft
Enthusiast

StarWind also offers HA NFS storage in their free version, which is a great thing to have for free. :)
