daBuddha
Contributor

PROBLEM: Areca ARC-1883i upgrade on an ESXi 5.5 machine

Having problems with a new ARC-1883i install under VMware ESXi 5.5. Hoping you can help.
 
This is an upgrade: we are replacing the Supermicro X9DRD-xxx's integrated LSI 2308 controller, running in IT mode (the JBOD model), with the Areca running JBOD. We are hoping for a significant performance bump.
 
The machine has 25 SATA III disks (16x2TB, 9x256GB SSD). We installed the card, moved all but the system disk to the Areca 1883i, installed the latest VMware drivers, and brought the machine back up. At that point the only disks visible to VMware were those that had previously been handled raw, plus the system disk (which is on a motherboard SATA port). None of the previous disks that contain VMware datastores were visible at all (15x2TB).
 
We upgraded to 5.5 U1 hoping to solve the issue; no luck. Verified the Areca driver and firmware are the most recent; all up to date. The Areca BIOS sees all the disks, no issue. As a voodoo debugging step, we also switched between JBOD and individual disk pass-thru; no difference.
 
We disabled and removed the MPT2SAS driver, imagining some conflict; no luck.
 
We are hoping this is a known issue when upgrading from an integrated controller to a discrete Areca controller under VMware.


Any idea what is going on here?

daBuddha
Contributor

Thanks for the response.

Yes, we are running the latest driver. The motherboard is running the latest 3.0a BIOS. All but the system disk are on the SAS bus. The LSI 2308 under VMware sees all disks; the Areca only sees those that had been raw.

In your case, was the Areca BIOS seeing all disks? In our case it does.

ESXi 5.5 is seeing some disks on the Areca (8 of 25), just not the ones associated with established VMs; the ones that are raw and didn't have VMFS on them are the ones the controller sees.

Were you seeing some disks but not others under ESXi?

The motherboard we are running:

Supermicro | Products | Motherboards | Xeon® Boards | X9DRD-7LN4F

daBuddha
Contributor

We are going to take another shot at this on Wednesday (it is a production machine). We've gotten a beta replacement driver from Areca (1.20.00.20), and we will also try removing from inventory the datastores currently attached via the LSI controller (at best a Hail Mary).

I find it hard to believe no one else has encountered this problem; this is a simple replacement of a disk controller, from one brand to another. Got to believe folks have done this repeatedly, especially given the numerous (so numerous as to be meaningless) log warnings found in 5.5:

Device xxxx performance has deteriorated. I/O latency increased from average value of yyy microseconds to zzz microseconds.

and the advice found in other threads concerning the error, recommending an upgrade to a caching controller. Upgrading from an integrated LSI controller to a supposedly superior Areca controller seemed the obvious move.
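For anyone trying to gauge how noisy these warnings actually are, here is a quick sketch that tallies them from a vmkernel.log excerpt. The device names and latency values below are made-up examples, and the parsing is mine, not anything from VMware:

```python
import re

# Regex for ESXi's latency-deterioration warning; the sample log lines
# below are invented for illustration, not output from a real host.
pattern = re.compile(
    r"Device (\S+) performance has deteriorated\. I/O latency increased "
    r"from average value of (\d+) microseconds to (\d+) microseconds"
)

sample_log = """\
Device naa.6001111 performance has deteriorated. I/O latency increased from average value of 1200 microseconds to 48000 microseconds.
Device naa.6002222 performance has deteriorated. I/O latency increased from average value of 800 microseconds to 24000 microseconds.
"""

for m in pattern.finditer(sample_log):
    dev, old_us, new_us = m.group(1), int(m.group(2)), int(m.group(3))
    # Print each device with how many times worse its latency got
    print(f"{dev}: {old_us} us -> {new_us} us ({new_us // old_us}x worse)")
```

Pointing the same loop at a real `/var/log/vmkernel.log` would show which devices the warnings cluster on.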

Anyone else have advice on increasing the probability of a successful upgrade?

daBuddha
Contributor

The 1.20.00.20 driver made no difference.

Did get an anemic response from Areca support that makes little sense:

"Sorry for the late reply. I just learned that JBOD mod has proprietary signature and format. Thus, LSI and Areca are not compatible in format. "


Not sure what this means. Labeling maybe?


This fails to explain why the raw/pass-thru disks that were LSI-controlled are visible to the Areca and ESXi; only the ones that have VMFS on them aren't visible. Also, all disks are visible to the Areca BIOS.


LSI sees all disks:


[Screenshot: LSI Sees.JPG]


Areca sees a subset:


[Screenshot: Areca Sees.JPG]

We are really starting to regret going with Areca: more than 3 days wasted on what should be a no-brainer upgrade, and we're unable to get any real support for a product they just pushed as certified.


Anyone done an upgrade like this?


gregsn
Enthusiast

Like Areca support said, the LSI signature and format on the drives are not compatible with the Areca controller. It is usually possible to upgrade between controllers from the same manufacturer, but taking drives that have been signed and formatted under one brand and moving them to a completely different brand is unlikely to work. Changing hardware RAID controller brands typically requires writing new signatures to the drives (and most likely reformatting them).

daBuddha
Contributor

Here's my confusion, then: all disks are JBOD (both Areca and LSI). The pass-through/raw disks are visible to the Areca, just not those disks with VMFS on them. If the signature/format is not compatible, why are those disks from the LSI controller visible and able to be worked with?

Why the subset? Shouldn't it be all or nothing?

daBuddha
Contributor

Just heard from Areca support; they sent me an update to the 1883i firmware, with this note:

"This firmware resolves a problem on VMware. VMware reads information inside each HD and generates a unique ID for each single HD. Our firmware read this ID incompletely, so the old firmware treated each HD on the RAID as identical (same ID), and only the first one could be recognized. The new firmware reads enough of the ID that each hard disk can be distinguished."
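If Areca's explanation is right, the failure mode is a plain identifier collision: truncate the per-disk ID that ESXi derives, and distinct disks look identical, so only the first survives deduplication. A toy sketch of that logic (the IDs and the 16-character cutoff are invented for illustration, not Areca's actual format):

```python
def visible_disks(disk_ids, id_chars):
    """Return the disks a host would keep if it deduplicates on the
    first `id_chars` characters of each disk's unique ID."""
    seen = set()
    kept = []
    for disk_id in disk_ids:
        short = disk_id[:id_chars]  # firmware only reads a prefix of the ID
        if short not in seen:
            seen.add(short)
            kept.append(disk_id)
    return kept

# Invented IDs sharing a long common prefix, as same-model disks often do.
ids = [f"naa.600d0231000a{n:08d}" for n in range(4)]

print(len(visible_disks(ids, id_chars=16)))  # truncated ID: all collide -> 1
print(len(visible_disks(ids, id_chars=32)))  # full ID: all distinct -> 4
```

That would also square with the symptoms here: disks whose IDs happen to differ within the truncated window stay visible, while the rest vanish.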

Scheduling a window now to test whether this resolves the issue.

gregsn
Enthusiast

That's a good point. You would think JBOD disks should be visible through any controller. It will be interesting to see if the firmware fixes it.
