Hello
I have an ESXi 5.1 host with an Adaptec 5405 card. First of all, the status/health of the storage is not displayed in the vSphere Client.
Copying files within the datastore takes too long.
The ESXi host is a Xeon Sandy Bridge E3-1245 V2 with 32GB RAM
Adaptec 5405
4x 500GB SATA Drives
On one of the guests, which is configured with 4GB RAM, I ran this command to compare with other users' results, and this is what I'm getting.
xxx@xxxx [~/]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 200.147 s, 5.4 MB/s
Copying files on my single-drive desktop PC is faster than that.
I have found an Adaptec 5405 driver for ESXi 5, but I haven't installed it because I don't know if it is compatible with 5.1.
The Adaptec 5405 is listed as compatible on the VMware HCL, and the Adaptec site says the driver will be installed automatically by VMware ESXi.
Do you guys recommend installing that Adaptec driver on my ESXi 5.1 host?
I'll really appreciate your comments.
Note: There is no load on the server. There are only 3 VMs, which are used for Windows development purposes.
Thanks.
Welcome to the Community,
I think this is less a driver issue than an issue with a missing battery-backed write cache. ESXi itself does not do any write caching and relies on the RAID controller's capabilities. RAID controllers without a battery-backed write cache operate in write-through mode by default (for data-safety reasons) and only allow write-back mode with the battery attached.
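One way to confirm whether the controller is actually stuck in write-through mode is Adaptec's `arcconf` CLI, if it is available on the host. A minimal sketch, where the controller number is an assumption:

```shell
# List the logical-device configuration for controller 1 and show the
# current read/write cache settings (controller number is an assumption).
arcconf getconfig 1 LD | grep -i cache
```

If the write cache is reported as write-through, the missing battery is the likely cause of the slow writes.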
André
I think you summed up the issue yourself: you need a new card! I've had good success with the LSI 9260-4i.
Thank you so much for your reply.
Well, it is true that there is no BBU on the adapter.
Is there a way or configuration on the adapter to improve this?
What do you recommend?
I was running ESXi on single drives, and because ESXi doesn't support software RAID, I told my hosting provider to upgrade my server with a RAID card.
After a long day transferring the VMs from the backup I had made, I found that I get less write speed than when I was running on single-drive datastores. I feel ashamed of those kinds of speeds... I feel like I shouldn't have upgraded at all.
@jeremyyy
I think that there must be something that can be done before upgrading to a new card.
5MB/s?
Not a huge fan of Adaptec in general. Not being a hardware guy, I am not sure if the BBU is the key for these cards, but you never know.
What disks are you running in your RAID 10? Can you verify the speed of the channel?
To get the hardware status you just need to load the CIM providers; after a quick Google, the LSI card I suggested just pops up. 😃
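For the missing health status specifically, vendor CIM providers on ESXi 5.x can usually be installed as an offline bundle with esxcli. A sketch, where the bundle path and filename are hypothetical:

```shell
# Install the vendor's CIM provider offline bundle (path and filename
# are hypothetical), then reboot so the health sensors are loaded.
esxcli software vib install -d /vmfs/volumes/datastore1/cim-provider-bundle.zip
reboot
```

After the reboot, the storage status should appear under the host's Hardware Status tab in the vSphere Client.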
Nothing you do short of replacing that card is going to improve performance. No BBWC or flash cache means you are in write-through mode, and it's going to be slower than dirt.
You may be able to enable write-back in the BIOS, but you risk losing your data if your system crashes without a clean shutdown.
Bite the bullet and get a card with a BBWC on it...
They are all 4x SATA2 Hitachi 500GB 7200rpm drives,
not SAS.
@rumple
If I'm understanding right, you mean that the correct way to improve performance on that RAID card is by adding the BBU module?
I hope so, because I really don't want to start backing up and restoring all the VMs... (it was a pain for me), nor change hosting providers.
The battery backup and write-back mode should dramatically improve performance.
Adaptec Battery Module 800T: supports a tethered battery, which allows for remote mounting inside the server chassis.
Adaptec Battery Module 800: supports mounting the module directly on the RAID card.
The drives' own cache is not what you want enabled under the controller configuration. I would disable that; it can cause data loss.
In the raid 4.png file it does look like write caching is enabled (which is write-back).
That's the fastest performance option you have.
With those settings enabled you shouldn't have horrible performance (although with 2 SATA drives it won't be Porsche-fast, you shouldn't have too many problems running overall).
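If you'd rather flip these settings from the command line than from the BIOS utility, `arcconf` can set the two caches separately; a sketch, where the controller, logical drive, and channel/device numbers are all assumptions:

```shell
# Controller (logical drive) write cache: write-back is the fast option,
# but is only safe with a BBU attached.
arcconf setcache 1 logicaldrive 1 wb

# Physical drive's own on-disk cache: write-through, since it has no
# battery protection at all.
arcconf setcache 1 device 0 0 wt
```

The split matters because the controller cache can be protected by a battery, while the disks' on-board caches never are.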
I performed the test again, and this is what I got now that a faulty drive has been replaced and the array has been rebuilt.
# dd if=/dev/zero of=test bs=64k count=16k
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 52.8942 s, 20.3 MB/s
There is some progress compared to the 5MB/s I was getting before, but compared to software RAID it is slow.
I used to have the same server spec with a software RAID 10, and I was getting 100+ MB/s.
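Note that the two dd runs in this thread were measured differently: the first used `conv=fdatasync`, which forces the data to disk before dd reports, while this second run did not, so part of the 20.3 MB/s may just be the guest's page cache. A comparable re-run would be:

```shell
# Write 1 GiB and flush it to disk before reporting, matching the
# earlier benchmark (output path is just an example).
dd if=/dev/zero of=/tmp/ddtest bs=64k count=16k conv=fdatasync
```

Without `conv=fdatasync` (or `oflag=direct`), dd partly measures RAM, not the array.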
Per the instructions in the 5405 manual,
http://download.adaptec.com/pdfs/user_guides/adaptec_raid_controller_iug_5_2012.pdf
Chapter 6: Creating a Bootable Array, page 56,
this is what they recommend:
Read Caching Press Enter to use the default (Yes).
Write Caching Press Enter to use the default (Enable always).
Create RAID via Press Enter to use the default (Build/Verify).
MaxCache Read Press Enter to use the default (Enable Read)
I really think that what I'm getting right now on a hardware RAID 10 is below average. I understand that I'm not using SAS drives, but at least something in the 100s is expected.
Never was a fan of adaptec hardware...
amaurib12 wrote:
Hello
I have an ESXi 5.1 host with an Adaptec 5405 card. First of all, the status/health of the storage is not displayed in the vSphere Client.
Copying files within the datastore takes too long.
The ESXi host is a Xeon Sandy Bridge E3-1245 V2 with 32GB RAM
Adaptec 5405, 4x 500GB SATA drives
Do you guys recommend installing that Adaptec driver on my ESXi 5.1 host?
I'll really appreciate your comments.
Note: There is no load on the server. There are only 3 VMs, which are used for Windows development purposes.
Thanks.
It's not the card, and it's not the missing battery-backed cache. It's SATA.
That's your problem. SATA drives are good for capacity, NOT performance.
You also only have 4 of them.
RAID 10 means a stripe across mirrors, so 2 drives form one mirror; you essentially have ONLY 2 drives' worth of performance with RAID 10. That's your problem.
You need at least 8 SAS drives (not SATA). Besides, RAID 10 is no longer the ultimate performance RAID; RAID 5 is, but you still need more spindles. The storage you waste with RAID 10 to eke out about 10% more performance than you COULD get otherwise isn't worth it.
RAID 5 is good, great even, when the drives, the card (which in your case is fine), and the OS are properly specced. Therefore the problem is your drive choice and number of spindles.
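The spindle argument can be put into rough numbers; a sketch where the per-disk sequential rate is an assumed figure, not a measurement:

```python
drives = 4
mirror_pairs = drives // 2      # RAID 10 stripes across mirrored pairs
per_disk_mb_s = 100             # assumed sequential rate of one 7200rpm SATA disk

# Sequential throughput scales with the number of stripe members (pairs),
# not with the total drive count, since each mirror writes the same data twice.
print(mirror_pairs * per_disk_mb_s)   # → 200
```

On this model, doubling to 8 drives doubles the pairs and, roughly, the sequential throughput, which is the "more spindles" point above.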
amaurib12 wrote:
They are all 4x SATA2 Hitachi 500GB 7200rpm drives,
not SAS.
That's why SAS is superior to SATA: the spindle speed is double... that's the main problem.
jeremyyyy wrote:
Never was a fan of adaptec hardware...
The rest of the world uses Adaptec. Not sure who you like, but there is no better SCSI card than Adaptec. Models may vary, but you will not beat Adaptec on vendor performance.
You might come close, but not better than Adaptec.
Mark Hodges wrote:
The drives' own cache is not what you want enabled under the controller configuration. I would disable that; it can cause data loss.
That's not true; cache will ALWAYS give a benefit. Data loss isn't caused by the cache; the OS has the biggest involvement in maintaining the drives. ESX has been perfect (for lack of a better word) when it comes to managing data.
My RAID 10 with 4 RE4 2TB disks performs better than this, running off an old 2850 II box.
I am in no way saying I'm a hardware guy, but this sure looks to be a card issue, not a drive issue. 20MB/s? You can't blame that on the drives when a single drive does better than that. It's not software RAID.
You misunderstand.
There are very few cases where you would enable the hard drive's own on-disk cache.
Enabling the RAID controller's cache, on the other hand, is a case of always enabling it.