Hi,
after successfully integrating my SRCS28x controller, I've configured a RAID 5 array with six 250GB hard disks. The initialization went fine and the status of the array is "optimal". Another 250GB hard disk is connected to the onboard SATA port; it is my ESXi boot drive and the first datastore for images and other stuff.
The RAID 5 array is successfully detected by ESXi, and I configured VMFS3 with a 2MB block size - total space on the array is ~1.2TB.
Now I've migrated the first machines from my VMware Server and everything looks fine.
The next step was to create a new virtual machine as a file server. I configured some big virtual hard disks (100GB, 200GB, 50GB, 50GB) and assigned the user rights for them.
Now, when I try to copy files to these disks, the performance of the array is very slow. Copying 80GB from a stand-alone server to the VM starts at ~10MB/s, and after a few minutes the speed drops to 2-3MB/s. The server is connected to a 100Mbit switch.
Some details about the hardware:
Intel SE7520BD2 Mainboard
Intel SRCS28x RAID controller with 6x Seagate 250GB SATA HDDs in RAID 5, datastore "RAID5"
1x Seagate 250GB connected to the onboard SATA (boot drive, datastore "BootHD")
2x Intel Xeon 3.0GHz
4GB DDR2-400 RAM
Intel Pro/1000 network adapter, connected to a 10/100Mbit switch
I've tried a second network card, but without any improvement. I don't think the network is the problem - it seems to be slow performance from the controller.
Any hints?
If you run resxtop from the RCLI, or esxtop at the console, what sort of numbers do you see when you press d (for the disk counters)? Does the controller support a battery-backed write cache? And what sort of performance do you see if you copy to the file server VM from another VM?
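If it helps, esxtop can also log those counters non-interactively so you can look at them later. A minimal sketch using esxtop's documented batch mode (the interval and sample count here are just example values):

```shell
# Capture 30 samples at 5-second intervals in batch (CSV) mode,
# then inspect the disk adapter/device columns offline.
esxtop -b -d 5 -n 30 > esxtop-stats.csv
```

The resulting CSV can be opened in a spreadsheet or fed to perfmon-style tools to spot latency spikes over time instead of watching the live screen.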
Hi Dave,
yesterday I played around with the cache settings of the controller and enabled caching for all of the hard disks (there is a 2200VA UPS). The situation is a little better now, but still too slow for a RAID 5.
Copying from VM to VM (an 800MB file) starts at 4.5MB/s and then varies around 7.5MB/s; copying from a VM to a physical PC starts at 7.5MB/s and then varies around 9-10MB/s (normal for 100Mbit networking). Copying from a physical PC to a VM varies around 3.5-5MB/s.
I'll check the disk counters this evening.
I've also ordered the intelligent battery backup unit for the controller; at the moment there is no battery board installed.
Greetings,
Sebastian
Hey Sebastian,
Just came across your post while looking for information on the SRCS28x with ESX and thought I might chime in with a thought for you, or anyone else who comes across this post, from my experience with the SRCS28x and VMware Server.
We installed VMware Server on an Intel server system with an added SRCS28x card (ESX had not yet been released for free). While testing and monitoring the system, we noticed the same poor RAID performance you did. After hours of searching and tweaking, I finally stumbled upon the cause: the SRCS28x does not come with the RAID battery as standard. Because of this, the card is almost always in "write-through" cache mode, which results in very slow transfers, since the OS has to wait for all the data to be written to disk before the write is considered complete. Judging by the pics you attached, which show a VERY high I/O wait %, you probably don't have the battery either (or your card is in write-through mode). That statistic was what clued me in to looking at the cache problem in the first place.
If you purchase the optional battery, you can then use the web-based RAID tools to enable "write-back" mode for the card, which allows the OS to continue processing as soon as the card receives the data in its cache memory.
One more note: I would not enable "write-back" without the battery, or you run a very high risk of data or RAID corruption if the server loses power before the cache memory is synced to disk.
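To make the difference concrete, here's a toy Python sketch of the two cache modes. The latency numbers are made-up illustrative assumptions, not SRCS28x specs; the point is only that write-through stalls every write on the disk, while write-back only waits on the controller's RAM:

```python
# Toy model: per-write latency under write-through vs write-back caching.
# Both latency constants are illustrative assumptions, not measurements.

DISK_LATENCY_MS = 8.0    # assumed time to commit one write to the platters
CACHE_LATENCY_MS = 0.1   # assumed time to land one write in controller RAM

def total_time_ms(num_writes: int, write_back: bool) -> float:
    # Write-through: the OS waits for the disk on every single write.
    # Write-back: the OS only waits for the controller cache; the card
    # flushes to disk in the background, overlapping with new writes.
    per_write = CACHE_LATENCY_MS if write_back else DISK_LATENCY_MS
    return num_writes * per_write

if __name__ == "__main__":
    n = 10_000  # e.g. a burst of small file-server writes
    print(f"write-through: {total_time_ms(n, False) / 1000:.1f}s")
    print(f"write-back:    {total_time_ms(n, True) / 1000:.1f}s")
```

With these made-up numbers the same burst of writes finishes roughly 80x faster in write-back mode, which matches the kind of throughput collapse you're seeing, and also shows why the battery matters: everything "completed" but not yet flushed lives only in the card's RAM.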