VMware Cloud Community
geforce20111014
Contributor

Adaptec 6405 poor write performance

Hi,

My System specs are as follows:

CPUs: 2x Intel Xeon E5620

Motherboard: Intel S5520HC
Ram: 12x Kingston KVR1066D3D4R7S/4GI for a total of 48GB
USB: 4GB Kingston DataTraveler for booting ESXi 4.1
RAID Controller: Adaptec 6405 (firmware build 18301) with AFM-600 NAND FLASH MEMORY BACKUP for 6 SERIES
HDD: 4x Seagate Constellation ES SAS 1TB ST31000424SS (firmware 0006)

After suffering poor write performance the first time around, I went ahead and did a fresh install of ESXi 4.1 Update 1 onto the USB stick listed above.

I then installed the VMware ESX/ESXi 4.1 Driver CD for PMC Sierra aacraid from http://downloads.vmware.com/d/details/dt_esx41_pmc_aacraid_11728000/ZCV0YnR0aipiZColcA==

After installing and restarting ESXi I could then see the raid array and create a datastore.

Currently the array is configured as RAID 10; I have also tried RAID 6 with the same poor write performance. Write cache is forced on and the ZMM is at optimal charge!

I have been testing performance with Crystal Disk Mark running in a Windows 2008 R2 x64 VM.

Sequential Read is 286MB/s and Sequential Write is 28MB/s.
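As a cross-check independent of CrystalDiskMark, sequential write speed can also be timed from a Linux VM on the same datastore with dd. This is just a sketch (the file path and size are arbitrary); `conv=fdatasync` makes dd flush to stable storage before reporting a rate, so a write-back cache cannot inflate the number for runs larger than the cache:

```shell
# Sequential write test: 1 GiB in 1 MiB blocks, flushed before dd reports
# its MB/s figure (fdatasync is included in the timing)
dd if=/dev/zero of=/tmp/seqwrite.bin bs=1M count=1024 conv=fdatasync

# Clean up the test file afterwards
rm -f /tmp/seqwrite.bin
```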

On another system (an i7 950 with an Adaptec 5805 with BBU, running ESXi 4.1 with the same hard drives but configured as RAID 6):

Sequential Read is 246MB/s and Sequential Write is 207MB/s.

I would like to get the 6405 writing at least 5x faster than it currently does instead of sending it off as e-waste!

Thank you in advance for any assistance you can offer me.

Update: Tried the Paravirtual SCSI controller and a new vdisk on the Windows VM for a bit of fun. The test results are 325MB/s sequential read and 28MB/s sequential write.

37 Replies
idle-jam
Immortal

I think the slow performance is due to the absence of a BBU on the Adaptec 6405.

geforce20111014
Contributor

The BBU has been replaced by the AFM-600 NAND flash memory backup for the 6 series, which is zero-maintenance and doesn't require battery replacement, as it uses capacitors to maintain charge while the cache is written to NAND flash.

znebzuga
Contributor

Hello,

I'm thinking of upgrading my controller (mine is a 3ware 9690SA with 512MB cache).

So, before buying, I'd like to see how this new Adaptec 6000 series performs, and the unique feature that caught my attention was the ZMCP approach. I have a BBU on my 3ware and it's a hassle (like other BBU-based solutions): constant monitoring, long delays before the cache is protected, etc.

Then I accidentally discovered this thread... oh man, performance is a headache on your system?

Do you recommend your Adaptec 6405?

Is the firmware still immature?

I like my 3ware 9690. I know it's old tech, made for HDDs only, but I have 2x Corsair F120 SSDs attached to it and they work reasonably well. It's not very quick, because the firmware and controller are designed for sequential workloads, not random ones. The cache speed, though, is around 1GB/s sequential read and write.

What do you think about switching from 3ware to Adaptec?

Regards and thanks for everything.

Rumple
Virtuoso

Did you check to ensure that write-back caching was enabled in the card BIOS?
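Besides the card BIOS, the same cache settings can be read from a running OS (bare metal, or a VM with the card passed through) using Adaptec's arcconf CLI. A sketch, assuming arcconf is installed and the controller enumerates as number 1; the exact output field names vary between arcconf versions:

```shell
# Logical drive view -- look for the "Read-cache mode" / "Write-cache mode"
# lines to confirm write-back is actually in effect
arcconf getconfig 1 ld

# Physical drive view -- look for each disk's own "Write Cache" state
arcconf getconfig 1 pd
```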

geforce20111014
Contributor

I have been in contact with Adaptec support about this.
The 6405 is currently nowhere near as mature as the 5 series, and the 6 series is not officially supported in ESXi until 5.0, so the 4.1 driver that has been posted is currently a mystery.

I'm still waiting to hear more on the progress of this.

geforce20111014
Contributor

Write back cache was most certainly enabled!

I had no performance issues when using the 6405 with PCI passthrough to a 2008 R2 VM.

znebzuga
Contributor

It's very strange, only 28MB/s...

Look this: http://hardforum.com/showthread.php?t=1611613

OK, it's a different situation because it's not through VMware, but the results there aren't bad for a 6405...

Those two people don't have ZMCP on their controllers, though.

I can't be sure you haven't got a setting wrong somewhere, but I don't believe that's it, so I think it's a firmware bug...

I hope Adaptec resolves this.

Do you have performance problems with Windows installed directly on the controller?

amstim
Contributor

Hi Geforce,

Did you ever figure this out?

I am about to deploy a 6805 with 8x SAS 15k drives attached.  25MB/s is not going to cut it.

I've not tested yet; however, like you, I assumed it would work since there is a driver...

Perhaps buying 5805s is the way to go?

geforce20111014
Contributor

I have tested the 6405 in passthrough to a Windows 2008 VM and the performance is not limited to 28MB/s there.

geforce20111014
Contributor

I have now retested the card with the RAID array that was built by the 5805, and performance is better, around the 140MB/s write mark, but I still get over 240MB/s write with a 5805.

From talking with Adaptec support, the 6405 firmware is not as mature as the 5805's, and in RAID 10 scenarios the 5805 is quicker, from what I have been told.

Also, the current driver doesn't show the device name correctly in the Storage Controllers list; it's just listed as AACRAID (the name of the driver), so it's still early days yet.

With 8 SAS drives you would be better off with the 5805, just to have less of a headache trying to figure out what's going on. Unless, of course, you are acquiring the hardware for a test rig.

amstim
Contributor

I am planning on running RAID 5, so perhaps that will make a difference. Odd that an array created with the 5805 but running on the 6805 would perform better...

Is the stripe size the same as when created with the 6805? No setting should really be able to slow you to 28MB/s...

Damn, well I guess I will give it a go with the 6805 and see what happens.

With 8x 300GB 15k SAS drives I was expecting quite a lot, even with RAID 5... As long as I can pull over 100MB/s I can limp along until ESXi 5.0.

amstim
Contributor

Btw, the device name is wrong running 4.1 with my 3405 too... so I'm not sure that's a great indicator.

amstim
Contributor

Ok now I am confused....

I am prepping a new server for deployment to a customer. Not ESXi, no VMs, just bare metal running 2008 R2 (64-bit). Adaptec 6405 with ZMM, 2x Cheetah 15.7 300GB 15k drives, RAID 1.

Read speed 273MB/s, write speed 29MB/s... sound familiar? *Sigh* At least we now know it's not an issue with just ESXi.

Both the drive cache and the controller cache are on, and ZMM status is normal.

Nice work, Adaptec... the 6 series sure is cool, man oh man.

DSTAVERT
Immortal

Have you confirmed that write caching is enabled?

It's always unfortunate that new technology takes a while to catch up to the marketing department's claims. :(

-- David -- VMware Communities Moderator
amstim
Contributor

Yep, it's on for sure. You can easily tell by running a test of less than 500MB: since the controller has 512MB onboard, any write test under 500MB nets about 1200MB/s.
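The effect of the onboard cache on small benchmarks can be sketched with a toy model. The numbers are just the ones quoted above (512MB cache, ~1200MB/s into cache); `disk_mbps` is an assumed sustained-write figure for illustration:

```python
def apparent_write_speed(test_mb, cache_mb, cache_mbps, disk_mbps):
    """Toy model: writes that fit in the controller's write-back cache are
    acknowledged at cache speed; the overflow drains at sustained disk speed."""
    cached = min(test_mb, cache_mb)
    spilled = test_mb - cached
    seconds = cached / cache_mbps + spilled / disk_mbps
    return test_mb / seconds

# A 400 MB run fits entirely in the 512 MB cache: reports ~1200 MB/s
print(round(apparent_write_speed(400, 512, 1200, 205)))   # ~1200
# A 4 GB run mostly overflows the cache: reports something near disk speed
print(round(apparent_write_speed(4096, 512, 1200, 205)))  # ~229
```

In other words, any benchmark smaller than the controller cache mostly measures the cache, not the disks.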

I think I've cracked it, though.

Get this: turning OFF the individual hard drives' write cache brought the write speed up to 205MB/s.

So the magic crazy formula is:

Adapter read and write cache ON

Hard drive cache OFF

I tried all the combinations, and every one other than the above hit the ~28MB/s write cap.

Buggy Buggy.
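For reference, the cache combinations above can be toggled from the CLI as well as the card BIOS. A sketch with Adaptec's arcconf, assuming controller 1, logical drive 0, and a drive at channel 0 device 0; the exact SETCACHE keywords vary by arcconf version, so check the tool's built-in help first:

```shell
# Controller-level (logical drive) cache: wb = write-back, wt = write-through
arcconf setcache 1 logicaldrive 0 wb

# Individual drive write cache: wt disables it, wb enables it
# (some arcconf versions use wce/wcd keywords for this instead)
arcconf setcache 1 device 0 0 wt
```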

Update/Edit:

Just for fun I tried turning the DRIVE cache back ON. Performance is fine now... I even power cycled to be sure.

Can't explain it... too similar to Geforce's problem to be a coincidence, though.

DSTAVERT
Immortal

Write caching at the drive level bypasses the safety provided by the backup cache, so it's not a good idea to have it enabled. Is it turned on by default?

-- David -- VMware Communities Moderator
amstim
Contributor

Indeed, it does expose the 32MB (or whatever) on each drive to the possibility of data loss during a power interruption; the ZMM or backup battery only protects the adapter's write cache. In an environment with very few power interruptions, I've found it quite safe to leave enabled. We always deploy an APC UPS with servers too, which helps. To be honest, through many power failures I've never really lost data with the drive cache on, and that's across a deployment of over 150 servers. Still, I agree with you in principle that it's not a good idea, especially if you don't know the risks.

Having the drive cache on does give a nice speed boost, though. In the old days of generation-1 Raptors you pretty much had to have it on or your performance was horrid, or you had to go from RAID 1 to RAID 10, which doubles your cost and failure chance.

Yes, the drive cache is turned on by default. I can confirm that's the default on many Adaptec products, including the 1220SA, 2405, 3405, 5405, and 6405. They will bark at you quite a bit when creating the array, however, that you should probably turn it off. ;)

DSTAVERT
Immortal

32MB is, or could be, a significant loss, since it could be spread over several VMs.

-- David -- VMware Communities Moderator
DSTAVERT
Immortal

Forgot to add: nice find! :)

-- David -- VMware Communities Moderator