VMware Cloud Community
clicker666
Contributor

Horrible disk performance....

My system is as follows:

Supermicro X7DWN+ Mainboard

Dual Xeon E5410 CPUs

24 GB RAM

8 WD500ABYS SATA II HDD

HP e200i RAID card

2 onboard gigabit nics and 2 Intel gigabit NICs

There are two 1 TB arrays with 4 disks each, in RAID 5 with a hotspare. Both arrays are on separate channels. I am running 1 Win XP VM, three Win2k3 Server VMs, and 1 Linux VM. All perform small tasks, none are resource intensive.
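As a sanity check on the layout described above, here is a small sketch (my own arithmetic, not from any tool in this thread) of the usable capacity of a 4-disk RAID 5 group with a hot spare, assuming 500 GB drives:

```python
# Hypothetical capacity calculation for the arrays described above,
# assuming 500 GB drives and one hot spare inside each 4-disk group.

def raid5_usable_gb(disk_count: int, disk_gb: int, hot_spares: int = 0) -> int:
    """RAID 5 spends one disk's worth of space on parity; hot spares hold no data."""
    data_disks = disk_count - hot_spares - 1  # minus parity, minus spares
    return data_disks * disk_gb

# Each 4-disk group: 2 data disks + 1 parity disk + 1 hot spare.
per_array = raid5_usable_gb(4, 500, hot_spares=1)  # 1000 GB, matching "two 1 TB arrays"
total = 2 * per_array
```

This agrees with the "two 1 TB arrays" figure, so the hot spare must be counted inside each 4-disk group.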

Whenever I attempt to do anything disk intensive, everything suffers. I attempted to build a Linux 32-bit server with a 500 GB drive and had to halt it after 20 minutes because it had not finished formatting and was killing my other VMs. I tried to run IOMeter on a 20 GB Win2k3 VM and had to stop after 10 minutes because it was crushing the other VMs.

From what I've read, the problem appears to be that the RAID controller isn't very good. If that's correct, is there a RAID controller that users here would recommend?

8 Replies
Dave_Mishchenko
Immortal

Your post has been moved to the Performance forum.

Dave Mishchenko

VMware Communities User Moderator

Does the controller have a battery backed write cache? If not, then that would explain the poor performance. If you search the performance forum for BBWC, you'll find other examples showing the difference that it will make to performance.
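For anyone wanting to check this themselves, a rough sketch of how to query the cache configuration with HP's Array Configuration Utility CLI (assuming the controller is in slot 0 and `hpacucli` is installed; the exact slot number and tool name on your box may differ):

```shell
# Show full controller configuration, including cache board,
# battery status, and cache ratio:
hpacucli ctrl slot=0 show config detail

# Or just the controller summary, which lists the cache status:
hpacucli ctrl slot=0 show
```

Look for the cache board size, battery status, and whether the cache is enabled in the output.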

Craig_Baltzer
Expert

Does the e200i have a battery backed write cache enabler (BBWC) on it? In the e200i configuration is write caching enabled for the arrays?

The lack of write cache can cause significant performance issues. The e200i is available without the BBWC enabler which would make it a slow card. You can buy a BBWC for the e200i as an add-on if you bought the card without it...

clicker666
Contributor

Yes, it has a 128MB BBWC.

Craig_Baltzer
Expert

I think you may be correct in terms of the limitations of the e200i controller. From what I've read, drive write caching can only be enabled on the intermediate/higher-end HP controllers such as the P400/E500/P800, not with the e200. While we don't use e200i controllers, I know that performance is awful with our P400 in the lab when write caching is disabled...

clicker666
Contributor

I actually managed to get in and turn on disk write caching. It was disabled. I'll see how that works and report back.

clicker666
Contributor

Controller settings were 50% read 50% write acceleration, drive acceleration on, and block size is 64k.

I was able to max out the controller at 43300 KBps while writing an 8 GB disk image from my local machine to a FreeNAS VM over SMB. Going VM to VM netted a top speed of about 30000 KBps.
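To put those numbers in context, here is a quick back-of-the-envelope conversion (my own arithmetic; I'm assuming the tool reports KBps as kilobytes per second with 1 KB = 1000 bytes) comparing them against the raw gigabit Ethernet ceiling:

```python
# Rough sanity check: convert the reported KBps figures to MB/s and
# compare against a gigabit Ethernet wire-speed ceiling.

GBE_WIRE_MBS = 125.0            # 1 Gbit/s = 125 MB/s raw, before protocol overhead

def kbps_to_mbs(kbps: float) -> float:
    # Assumption: the tool reports kilobytes per second, 1 KB = 1000 bytes.
    return kbps / 1000.0

smb_copy = kbps_to_mbs(43300)   # ~43.3 MB/s for the SMB copy
vm_to_vm = kbps_to_mbs(30000)   # ~30 MB/s VM to VM
utilization = smb_copy / GBE_WIRE_MBS   # fraction of raw gigabit wire speed
```

So the SMB copy ran at roughly a third of raw gigabit wire speed, which suggests the disk subsystem (or SMB overhead), not the network, is still the ceiling.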

The system seems a lot better, but how would others rate this performance?

williambishop
Expert

There's a thread in here somewhere that has different numbers for different systems and should give you a decent idea of what to expect.

http://communities.vmware.com/message/906821#906821

--"Non Temetis Messor."
TimCortex
Contributor

The short answer: ENABLE the write cache BUT set the write portion of the cache ratio to 0% via the HP CLI [ctrl slot=0 modify cacheratio=100/0 | ctrl slot=0 modify dwc=enable]. Turn it on, but don't use it, go figure!
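Restating the two commands quoted above in order, as a sketch (assuming the controller is in slot 0 and the HP CLI is available as `hpacucli`; adjust the slot number for your system):

```shell
# 1. Enable drive write cache, as quoted in the post above:
hpacucli ctrl slot=0 modify dwc=enable

# 2. Set the cache ratio to 100% read / 0% write:
hpacucli ctrl slot=0 modify cacheratio=100/0

# 3. Verify the settings took effect:
hpacucli ctrl slot=0 show
```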

I just set up an Open-e DSS V6 server on an HP ML350 G5 with an E200i RAID controller, 128MB BBWC, and 6 x 1.5 TB 7200 RPM drives in a RAID 10 array. I was getting very bad performance: less than 100 IOPS on an iSCSI file I/O volume with a high initialization rate. I have similar configurations using 3Ware 9550 cards producing well over 1000 IOPS, so I was trying to figure out what was going wrong.

A bit of research directed me here and to an Experts-Exchange article (http://www.experts-exchange.com/Storage/Hard_Drives/Q_24947953.html). It would appear the RAID 5 processor and write cache are troublesome components of this controller. I wasn't using RAID 5, so I experimented with the read/write cache through the HP controller CLI that Open-e conveniently includes in their software.

The CLI was running VERY SLOWLY; commands took a minute to respond. Finally the commands above made the array perform closer to expectation. I am now seeing almost 1000 IOPS and the CLI is responding normally. Yes, you have to enable the write cache and set it to 0%. No other combination seems to work. Even using the disable write cache option failed to provide acceptable performance.
