During a fresh installation of ESXi 7, the installer complains about incompatible hardware with PCI ID 1000:0073 1028:1f51.
I can complete the install.
The incompatible device it complains about is the PERC H310 embedded RAID controller, which is on the compatibility list under the exact same ID.
The driver is loaded:
lsi-mr3 Broadcom Native MegaRAID SAS driver for vmkernel 7.712.50.00-1vmw.700.1.0.15843807 VMW Tuesday, April 07, 2020, 17:20:48 +0200
The commands esxcli storage core adapter list and esxcli storage nmp device list do not list the adapter.
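For anyone hitting the same symptom, a few commands run from an SSH session on the affected ESXi host can show whether the controller is at least visible on the PCI bus and whether a driver module is loaded. This is just a diagnostic sketch; the PCI ID below is the one from my system.

```shell
# Check whether the controller shows up on the PCI bus at all,
# and which vmkernel module (if any) is bound to it.
lspci -p | grep -i 1000:0073

# Show every storage adapter ESXi has actually brought up.
esxcli storage core adapter list

# Check whether the native MegaRAID driver module is loaded.
esxcli system module list | grep lsi_mr3
```

If the device appears in lspci but never in the adapter list, the driver is loading but refusing to claim or initialize the card.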
Any suggestions are very much appreciated.
I have a lot of Dell R620s with varying PERC controllers.
Neither the stock vSphere 7 image nor the Dell customized image discovers local storage on the PERC H310 Mini controller (firmware version 20.13.3-0001).
Both the stock vSphere 7 image and the Dell customized image do discover local storage on the PERC H710 Mini controller (firmware version 21.3.5-0002).
I would suggest checking with VMware as well as Dell about this issue, as I already suggested in a different thread on the same problem.
I purchased an H710 Mini off eBay; confirmed, it does work, and I can see my storage now.
This was utilising the Dell customised image and the same firmware level.
You may find a number of surplus H310s on eBay soon!
I have almost the same problem here: the Cisco UCSC RAID SAS 2008M-8i, listed as "supported", is not recognized after upgrading to ESXi 7.0.
So both cards are listed as supported and yet not working, which is odd because they are both OEM versions (hence the different SVID/SSID) of the LSI MegaRAID SAS 9240-8i, which is known to work. The driver changed from the legacy vmklinux megaraid_sas in 6.7 U3 to the native lsi_mr3 in 7.0, and the driver map simply does not list all supported models.
I would say this is a bug. I cannot open a support ticket, as I use VMUG Advantage.
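One way to see whether the native driver claimed the OEM variant at all is to dump the PCI device table on the 7.0 host and look at the module associated with the card. A sketch, assuming a standard ESXi 7.0 build; the exact output field names may vary between builds.

```shell
# List all PCI devices with vendor/device and subsystem (SVID/SSID)
# IDs plus the driver module ESXi associated with each entry;
# find the RAID controller and check its "Module Name" field.
esxcli hardware pci list | less

# Narrow it down by the LSI vendor ID (1000).
esxcli hardware pci list | grep -i -B 2 -A 12 "Vendor ID: 0x1000"
```

An empty or missing module name on the card's entry would be consistent with the driver map not covering that SVID/SSID combination.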
Is there a firmware upgrade available for the H310? If so, I would definitely try that first. I say this because I've encountered similar problems with ESXi 7 and HP P420 & P420i controllers. Once the cards were up to the latest firmware, ESXi 7 was all good with them.
When I worked at Dell a few years back, we always had to have the latest firmware loaded on every device in the PowerEdge inventory.
I'm on an R630 using a PERC H730 Mini running firmware version 25.2.2.0005. The Compatibility Guide says this should work on ESXi 7, but Skyline Health for vSAN says the device is not compatible with version 7. I'm thinking of rolling back my ESXi hosts to 6.7; I haven't found a way to make Skyline happy yet.
I also could not get my PERC H310 to work. The driver does see the device but reports a firmware incompatibility, which can be seen by SSHing into the ESXi host and running the dmesg command.
Since the H310 HBA had 20.13.3 firmware, which I believe is the latest available, I concluded it was not going to work and gave up trying.
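For completeness, the check described above is just a matter of filtering the kernel log for the driver's messages; the exact wording of the firmware complaint varies by build.

```shell
# From an SSH session on the ESXi host: look for lsi_mr3 messages
# about the controller (including any firmware rejection notice).
dmesg | grep -i -e lsi_mr3 -e megaraid
```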
Swapped the HBA for a PERC H710 (firmware 21.3.5) based on reports I saw in this thread; I'm not sure the H710 firmware version actually matters. The adapter is recognized, and I can confirm the volumes are seen in ESXi. The H710's CacheCade feature seems a useful replacement for the vFlash Read Cache that was removed in 7.0, since SSD caching is desirable when using slow "capacity" rotating disks.
I am doing this in a Dell R710 server. My R710 has a deprecated CPU, so I also had to apply the allowLegacyCPU=true workaround after swapping the CPUs for L5640s. These chips use less power and extend the instruction set beyond that of the original CPUs; they have the encryption support instructions demanded by some software-defined networking products.
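For anyone else trying this, the workaround is applied as a kernel boot option. A sketch, assuming a standard 7.0 installer; the boot.cfg edit is the commonly reported way to make it persist, and your bootbank path may differ.

```shell
# At install/boot time: press Shift+O at the ESXi boot screen and
# append this option to the existing boot line:
#   allowLegacyCPU=true

# To make it persistent on an installed host, append the option to
# the kernelopt= line in the boot configuration, e.g.:
grep kernelopt /bootbank/boot.cfg
# then edit that line so it reads something like:
#   kernelopt=autoPartition=FALSE allowLegacyCPU=true
```

Note this is explicitly unsupported; it only suppresses the CPU compatibility halt, it does not make the CPU supported.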
So after a few cheaper-than-a-new-server hardware upgrades, my R710 seems to still be usable for 7.0, even if in an unsanctioned, unsupported, good-enough-for-a-homelab mode.
VMware has learned from Dell that the H310 and H710 devices are not supported in 6.7 or 7.0:
These cards are supported with 12th-generation servers and below.
Neither 6.7.x nor 7.0.x supports any 12G servers.
Neither the H310 nor the H710 is listed as a supported RAID card in the 6.7.x guide.
We are in the process of removing all H310 and H710 listings from the HCL for ESXi 6.7 and beyond, as they are not supported.
I ran into this issue myself when I attempted to use an H310 in my lab environment, and so based on your experience combined with my own, I decided to open a bug report in order to determine whether the VCG/HCL entry for the H310 was an oversight. It turns out it was, and so it has now been removed.
That being said, as many have noted, the H710 does work. This is not an accident: although the H310 was never tested with vSphere 7.0, the H710 was. So, assuming you are like me and simply looking for a card that works correctly in your own lab environment, you can rest assured that the H710 was tested successfully with vSphere 7.
However, as slackandsteel noted above, you will not see the H710 on the VCG/HCL, because the latest servers Dell supports the H710 with are 12th-generation PowerEdge servers, which were not tested with nor supported by vSphere 7.
Hope this helps!
I can confirm that the PERC H710 is a drop-in replacement for the H310 in a Dell PowerEdge T420 and works with ESXi 7.0. Picked one up for $95 on Amazon.
A couple of things to keep in mind if you're going to do this:
- Since the H710 does not support non-RAID drives (that's considered a low-rent feature), if you use any non-RAID drives with your H310 you'll need to either copy the datastores off them in advance or ensure you have some way to attach the drives to your host that doesn't involve the H710. This is especially problematic if your non-RAID drives are SAS rather than SATA, since you can't easily attach them via USB.
- When you boot ESXi for the first time, don't panic when the drive devices are recognized but your datastores don't appear. Because the controller hardware changed, the device IDs of your drives no longer match the signatures in the datastores, so ESXi considers them LUN snapshots rather than targets, and snapshots don't get mounted automatically. You have two options: 1) CLI into the host and mount them persistently using esxcfg-volume -M, or 2) resignature them with esxcfg-volume -r. Google "VMFS resignaturing" to help you decide.
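The snapshot-volume handling above goes roughly like this on the host's CLI; the UUID shown is a placeholder, so substitute the value from the -l output on your own system.

```shell
# List VMFS volumes that ESXi has detected as snapshots/replicas.
esxcfg-volume -l

# Option 1: mount a snapshot volume persistently, keeping its
# existing signature (use the UUID or label from the -l output;
# this UUID is a made-up placeholder).
esxcfg-volume -M 5e8f1c2a-xxxxxxxx-xxxx-xxxxxxxxxxxx

# Option 2: write a new signature so ESXi treats it as a new
# volume (it will be remounted under a new "snap-..." name).
esxcfg-volume -r 5e8f1c2a-xxxxxxxx-xxxx-xxxxxxxxxxxx
```

Resignaturing is the cleaner long-term option if the old controller is never coming back, but anything that references the datastore by its old name or UUID (VM registrations, scripts) will need updating afterwards.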