VMware Cloud Community
Tekvektor
Contributor

Adaptec 6805 + ESXi 6 + Maxview

I've been putting together a whitebox ESXi 6 host for my lab at home, and was attempting to get Adaptec's maxView set up so that I can use the utility to grow the RAID 6 array as I slot more drives into the case. I followed the provided instructions, having already folded the latest drivers for the Adaptec card into the ISO as part of the install, so the array was visible during the installation.

esxcli software vib list | grep aacraid gets me the following:

scsi-aacraid               6.0.6.2.1.41024-1OEM.600.0.0.2494585  Adaptec_Inc                                                       VMwareCertified 2016-01-26

I disabled the watchdog server

and set the software acceptance level

esxcli software acceptance set --level=CommunitySupported

and I successfully installed the VIBs downloaded with the maxView VMware install package (msm_vmware_v2_00_21811):

esxcli software vib install --no-sig-check -v /tmp/vmware-esx-provider-arcconf.vib

esxcli software vib install --no-sig-check -v /tmp/vmware-esx-provider-arc-cim-provider.vib

and rebooted per the documentation. However, after the reboot the storage controllers are no longer recognized and most of the GUI is unavailable.

Access to the shell via SSH is much slower and specific commands may or may not run.

If I reinstall ESXi and leave the datastores alone, they come back on the post-install reboot.

Any help would be appreciated...
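
One recovery path I could try short of reinstalling ESXi is removing the maxView VIBs again from the shell. A minimal sketch (the VIB names here are assumed from the install package; they should be confirmed with the list command first):

```shell
# Sketch: roll back the maxView provider VIBs instead of reinstalling ESXi.
# VIB names are assumed from the msm_vmware package; confirm the exact names first.
esxcli software vib list | grep -i arc          # find the exact installed names
esxcli software vib remove -n arc-cim-provider  # assumed name
esxcli software vib remove -n arcconf           # assumed name
reboot
```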

20 Replies
Lennier
Contributor

I use a 71605E and ESXi 5.5, but have had similar problems.

A complete power cycle should bring up the storage again.

Also, you really need the latest firmware and drivers, otherwise the Adaptec stuff will never work.

robkin
Contributor

I had the same problem with the Adaptec 6805 + ESXi 6 + Maxview.

Previously I had older firmware (build 19147) and the newer maxView v2.00.21811, but I was unable to read my Adaptec storage, and after some time VMware crashed with a purple screen of death.

I solved it by using the following combination:

ESXi 6.0 Update 1

Firmware on the 6805 updated to 5.2.0 Build 19176

AACRAID Driver v1.2.1-41024 for VMware

maxView Storage Manager downgraded to v1.08.21375 for VMware

Everything seems to run without problems now (of course it can still fail in a long-term test, but that needs time to evaluate).

Rob

briandenmark
Contributor

Hi,

I have experienced the exact same problem since I upgraded from ESXi 5.5 to 6.0 Update 2. I have an Adaptec 6805 controller.

Yesterday I tried to update the following:

Adaptec - Adaptec Driver: Microsemi Adaptec RAID 6805 Firmware/BIOS Update Ver. 5.3.0 Build 19198 Do...

Adaptec - Adaptec Driver: Adaptec RAID Driver v1.2.1-52011 for VMware Download Detail

Adaptec - Adaptec Driver: maxView Storage Manager for VMware 5.x and 6.x Download Detail v2.02.22404: http://storage.microsemi.com/en-us/speed/raid/storage_manager/msm_vmware_v2_02_22404_zip.php

Still the same problem. When installing vmware-esx-provider-arcsmis.vib, the controller is no longer recognized. If I only install vmware-esx-provider-arcconf.vib, I can see the controllers and start the VMs, but if I try these commands in a VM everything freezes:

arcconf.exe SETVMCREDENTIAL "HOST" 5989 root "PASSWORD"

arcconf.exe getconfig 1

I have not tried with an old version of arcconf and arcsmis. A complete powercycle does not help.
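
Before blaming arcconf itself, it may be worth confirming from the guest that the host's CIM service is reachable at all. A quick sketch (HOST is a placeholder for the ESXi host address; 5989 is the CIM-over-HTTPS port used by setvmcredential above):

```shell
# Quick reachability check of the host's CIM service port from the guest.
# HOST is a placeholder; any HTTP(S) response means the port is open,
# while a connect failure suggests a firewall or stopped CIM broker.
curl -k --connect-timeout 5 -s -o /dev/null https://HOST:5989/ \
  && echo "port 5989 reachable" \
  || echo "port 5989 not reachable"
```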

Are you still running without problems? Have you tried updating to ESXi 6.0 Update 2, and have you updated maxView, arcconf and arcsmis?

Regards

Brian

briandenmark
Contributor

Hi,

I got it working with the following:

Adaptec - Adaptec Driver: Microsemi Adaptec RAID 6805 Firmware/BIOS Update Ver. 5.3.0 Build 19198 Do...

Adaptec - Adaptec Driver: Adaptec RAID Driver v1.2.1-52011 for VMware Download Detail

Adaptec - Adaptec Driver: maxView Storage Manager v1.08.21375 for VMware 5.x and 6.x Download Detail

I have tried with the latest maxView Storage Manager and the latest arcconf and arcsmis, but it does not work. So it seems to come down to the maxView Storage Manager version, since everything else can be upgraded.

Regards

Brian

Manservic
Contributor

Hi. I have the same problem. I have several Adaptec 6405 and 6405E controllers. With version 1.08 it works well, although the maxView console does not always seem to refresh properly.

We have wanted to test the new drivers for more than a year, but we have not managed to update them and make them work on any of our hosts.

The console stalls and does not respond. We cannot browse the datastores, and in the end we see a purple screen of death.

These controllers are installed in ESXi 6 Update 2 servers (kernel build 3620759).

We have installed maxView Storage Manager v2.02.22404 on ESXi, along with arcconf and the ARC CIM provider.

The driver used is scsi-aacraid 6.0.6.2.1.41024-1OEM from the Adaptec website.

The BIOS version of the Adaptec controller is 19198.

We've never gotten maxView 2 to work on our hosts.
We use ASUS TS300/RS300 servers with iKVM 7/8 and HP ProLiant ML310.

Has anyone got it to work?


I opened a thread but no one answered: Adaptec Maxview crash with version 2 or higher. Arcconf not work in ESXI 6


Regards

Vme332c
Enthusiast

(I reposted from the other relevant thread linked, so that this info will be available to all, as Google often directs you to this thread, not the other.)

I think you guys are in for a rude awakening: after following this thread's excellent info/steps to get this working, maxView ALWAYS shows Optimal, even when you pull drives and the ESXi web interface -> Manage shows degraded, and even after a rescan in maxView (ALWAYS Optimal). Also, I was never able to get arcconf to work under any guest OS (Windows or Linux); it always showed a CIM error of some kind. I've confirmed via curl and a web browser that CIM port 5989 is accessible.

I spent days fighting with these Adaptec RAID cards to get monitoring working (Series 6 and Series 8 cards, a 6805 and an 8805), and found this great thread, which is finally the only known solution (all other CIM providers get you a PSOD 20-30 minutes after boot). Of course, all you need is the Adaptec VIB to mount an Adaptec datastore (easy), but what good is a RAID card without the ability to MONITOR its status??

It was nice of Adaptec support to answer the phone, but they were useless; the lady said they have never heard of any issue with VMware on any of their cards. The only option is to file a case with them, which she warned would take several months and most likely offer no solution (nice).

So Adaptec is a dead end on VMware.

Edit/update: I WAS able to get arcconf working. You won't believe this, but the solution is to use the LATEST guest-OS arcconf version (from the 8805 card, version v2_06_23164): https://storage.microsemi.com/en-us/speed/raid/storage_manager/msm_vmware_v2_06_23164_zip.php

At that link (again, just arcconf for win64; it's in a folder named cmdline in the archive linked below - maxView from this version won't show any card found):
Adaptec - Adaptec Driver: maxView Storage Manager v2.06.23164 for VMware Download Detail

So to recap (and this is abhorrent, Adaptec):
On ESXi 6.0 or 6.5, you need to run the old driver and CIM .vibs (and the old arcconf.vib) listed in the replies above (the 1.08 versions), and then use the latest arcconf.exe (for guest OS win64, link a few lines up), which will give you accurate data from your RAID card (both 6805 and 8805).

No matter what you do with MSM, best case you will not get any valid data from maxView Storage Manager (MSM); worst case, a PSOD 10-30 minutes after boot (and Alt-F12 filled with SCSI timeouts once ESXi boots - the newer CIM VIBs must be crashing the card's OS is all I can think of).

(maxView is still a no-go in terms of useful data/working.) I have also tried every Adaptec driver/arcconf.vib/arc-cim.vib combination (very old versions, as well as the latest April 2018 versions for the 8805 card).
horace_ng
Contributor

I was able to use remote arcconf 1.08 in ESXi 5.5, but lost the ability to do that once I upgraded to ESXi 6.5U2.

I tried with Adaptec - Adaptec Driver: maxView Storage Manager v2.06.23164, but it doesn't have the 'arcconf setvmcredential' command. Is there something that I missed?

Vme332c
Enthusiast

horace, I seem to recall having this issue at some point (i.e. newer versions of arcconf in the guest OS not having the SETVMCREDENTIAL command). I don't recall exactly how I got around it; it might have been to use the installer/setup.exe (in the Windows guest OS, which during setup will ask you to provide your ESX credentials).

Some other questions: first off, which Adaptec card are you testing this with? Also, in the 6.5U2 web interface, under Monitoring, are you able to see some health parameters from the Adaptec card (i.e. green checkmarks related to your Adaptec card)? If not, this may indicate an issue with your CIM VIB or its version (not the guest OS nor arcconf).

Below are my notes of exactly what works (I set up a second separate host to confirm this works, and it did; I'm about to do a third in the next week or two). Note: I don't *think* the scsi-aacraid version matters, but I'm going to keep it consistent anyway.

I WAS able to get arcconf working by using the OLD CIM VIBs + the current win64 arcconf (from msm_vmware_v2_06_23164.zip).

Versions you want (VMware ESXi 6.5.0 Update 2 - below is correct on the final setup I will use - Jun 23 2018):

scsi-aacraid         6.0.6.2.1.56009-1OEM.600.0.0.2494585  Adaptec_Inc VMwareCertified

arc-cim-provider        1.08-21375                            Adaptec VMwareAccepted

arcconf                   1.08-21375                            Adaptec PartnerSupported

On the guest OS (I'm using a Win7 64-bit guest), set the login/pass arcconf will use (this creates the VIMCREDxxxx.txt file in the arcconf folder on the guest OS):

arcconf SETVMCREDENTIAL 192.168.1.158 5989 esxiUSER esxiPASS
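
To double-check that a host matches these notes, the installed versions can be listed on the ESXi side. A minimal sketch (run in the ESXi shell; the sfcbd-watchdog service check is an assumption about where the CIM broker status lives on this build):

```shell
# Confirm the driver and Adaptec VIB versions on the host match the notes above
esxcli software vib list | grep -iE 'aacraid|arc'

# The CIM broker should also be running for the guest-side arcconf to connect
# (assumed service name on this ESXi build)
/etc/init.d/sfcbd-watchdog status
```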

mcisar
Contributor

Has anyone tried (or hopefully had success) getting this royal concoction working on ESXi 6.7?

I've gone through all the steps of the working 6.5 configuration, which I have used successfully before. No luck there, so I tried using the latest maxView/arcconf versions instead; no PSOD (so far), but arcconf on the client won't connect to the server ("connection failed").

Port 5989 on the server seems to be open and responding to a basic TCP (i.e. telnet) connection. I have tried resetting the passwords just to be on the safe side.

Thoughts?

ucola
Contributor

Same issue on ESXi 6.7U2 - has anyone got it running?

_Arda_
Contributor

Hi,

I managed to run MSM on Server 2019 for my Adaptec 72405 on ESXi 6.7U3 without passthrough.

Arcconf and CIM provider version 3.02-23600. Driver version 6.0.6.2.1.58012. MSM version 3.02.00 (23600) (not GOS).

The main problem seems to be a missing libcmpiCppImpl.so called by libarcRAIDProvider15.so. libcmpiCppImpl.so is part of the CMPI library; however, ESXi uses SFCBD (maybe a variant of it) instead. So there is no libcmpiCppImpl.so in ESXi, and this is the problem.

When I check /var/log/syslog.log I got this warning : "doLoadProvider: dlopen(/usr/lib/cim/libarcRAIDProvider15.so) failed, dlerror: libcmpiCppImpl.so.0: cannot open shared object file: No such file or directory". If you have the same error, probably this thread will help you.

Even though I cannot install the CMPI library (rpm), I tried something different: libarcRAIDProvider15.so should call libcmpisfcc.so instead of libcmpiCppImpl.so (libcmpisfcc.so should be part of the SFCBD library and should provide the same CIM service as CMPI).

I put a symbolic link in the same folder (using ssh of course):

ln -s /usr/lib/libcmpisfcc.so.1 /usr/lib/cim/libcmpiCppImpl.so.0

With this symbolic link, I am pointing libarcRAIDProvider15.so to libcmpisfcc.so instead of the missing libcmpiCppImpl.so. Then libarcRAIDProvider15.so works without any issues. If everything is properly configured in ESXi and Server 2019, log in to MSM and add the system with management protocol ESXi and port 5989.

Now it seems to work fine. However, after a reboot of the host the symbolic link has to be recreated. I should try to create this symbolic link persistently.

Please forgive me if I made a mistake in this message, as I am not an expert on these topics. It's a dirty solution, but it seems to work. Until Adaptec corrects the arcconf and CIM provider VIBs, I will use this method to monitor and control my arrays. Kind regards...
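
One possible way to recreate the symlink at every boot is via ESXi's boot-time local script. This is a sketch only: it assumes /etc/rc.local.d/local.sh exists and is persisted across reboots on this build, and the line must land before the script's final 'exit 0' (anything appended after 'exit 0' would never run).

```shell
# Insert the symlink creation into /etc/rc.local.d/local.sh BEFORE its final
# 'exit 0', so the link is recreated on every boot. Back up local.sh first;
# confirm your build's sed supports the 'i' insert command used here.
sed -i '/^exit 0$/i ln -sf /usr/lib/libcmpisfcc.so.1 /usr/lib/cim/libcmpiCppImpl.so.0' /etc/rc.local.d/local.sh
```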

kagurazakakotor
Contributor

I managed to use Adaptec RAID 71605 on ESXi 6.7U3 build-15160138, including arcconf and maxView Storage Manager.

My 71605 is flashed to the latest firmware build 32118.

The driver, arcconf and MSM were downloaded from the Adaptec RAID 81605ZQ support page; all of them are the latest versions. Though none of them claim to support Series 7, all of them work perfectly.

At the time I posted this reply, the driver version is v1.2.1-58012 and the storage manager version is v3.04.23699.

Hope this will offer some help.

-------- Updated 2020-04-21 --------

If you download the latest driver for Windows and open the .ini file, you will find it does support older cards like Series 5, 6, 7, even Series 2. So I think using the latest driver is totally OK.

Dazztee
Contributor

Adaptec 71605 + ESXi 6.7

OMG, fixed! Thank you kagurazakakotor. I used the MSM and drivers from https://storage.microsemi.com/en-us/support/raid/sas_raid/asr-81605zq/

I had to change to "arc-cim-provider_3.04-23699" from "arc-cim-provider_3.07-23850" from within the pack in my case, and fully remove MSM from the Win10 PC by deleting leftover files in C:\Program Files, then install the newer MSM.

My setup: ESXi 6.7U3 (USB install), 71605 controller, 8x 6TB WD Reds in RAID 6, 2x 500GB SSDs in RAID 1 for VMs, Xeon 2692 v2.

My RAID 6 array is set up as one storage drive for the main Ubuntu home-server VM, with various other SSDs for other VMs, etc.

All my drivers were installed afterwards, not via a custom ISO:

esxcli software vib install -f -v /tmp/vmware-esx-provider-arc-cim-provider.vib --no-sig-check

Hope this helps someone, as I spent two days on this.

 

 

Dreizza
Contributor

Has anyone managed to get the Adaptec 81605ZQ to function in passthrough? I've tried, yet the VM it is attached to fails to power on. I have reserved memory. By the way, does the firmware/driver have something to do with it?

Dreizza
Contributor

So after posting this question, I did some thumbing around in posts about video card passthrough and applied it to my issue. After some trial and error (and some minor hair-pulling and screaming) I finally got it to work. The following is what I altered to get it to successfully boot.

Install the ESXi driver for the Adaptec 81605ZQ = not required, as the default works

Virtual Hardware:
            CPU:
                   hardware virtualization > checked (required; this was the last thing I did)
                   cpu/mmu virtualization > hardware cpu/mmu

VM Options: (Options below being questionable)
                   Advanced > Edit Config:
                                   vvtd.enable = false
                                   pciPassthrough0.deviceId > 0x028d
                                   pciPassthrough0.vendorId > 0x9005
                                   pciPassthrough0.systemId > 61fa4526-0768-e2fa-8c09-0cc47a943940

Edit Passthru.map: (this might be the same as the "pciPassthrough0.****" lines, but I did it anyway)
                    vi /etc/vmware/passthru.map
                                  # Adaptec 81605ZQ
                                  9005 028d d3d0 default

Reply
0 Kudos
Dreizza
Contributor

Amending my prior update post:

I upgraded from ESXi 6.5 to 7.0 in hopes that everything would "just work" and, well, it didn't. In the process, the error message said to add "vhv.allowPassthru=TRUE" under "Advanced > Config" and then remove the following:

Virtual Hardware:
            CPU:
                   hardware virtualization > uncheck

You have to uncheck the above before the "vhv.allowPassthru=TRUE" setting can be added.

So for all of you who have an Adaptec card and don't want to fight with configuring it within ESXi, this is a viable solution.

 

 

 

Dreizza
Contributor

Also, something I just learned: if you do a re-install, you will have to remove it from the config and then re-add it, or else it will fail to power on.
