I have a Supermicro X10SL7-F motherboard, which includes an integrated LSI 2308 SAS/SATA controller, running a fresh install of ESXi 5.5 Update 1 patch rollup 2. I have updated the LSI 2308 firmware to v19 and installed the latest VMware driver (version 19, available here: VMware vSphere 5: Private Cloud Computing, Server and Data Center Virtualization). As a reminder: this LSI controller is on VMware's supported device list.
I use two Western Digital Caviar Black 2TB drives in a RAID 1 configuration on this controller, and the write performance is absolutely terrible. Out of the box, write performance was about 4 MB/s; after contacting Supermicro support and setting Disk Cache Policy = Enabled, I get about 13 MB/s write throughput. (FYI, this disk cache policy setting is not accessible via the BIOS menu; you HAVE to use LSI's MegaRAID Storage Manager software to set it, which is a huge headache all its own.) The same RAID 1 configuration delivers over 100 MB/s read performance in ESXi, which is perfectly acceptable.
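For anyone who wants to reproduce my numbers, this is a rough sketch of the kind of sequential-write test I ran inside a guest (the file path and sizes are arbitrary choices of mine; I fsync each chunk so the guest's own page cache stays out of the measurement):

```python
import os
import time

# Minimal sequential-write benchmark (sketch). PATH and the sizes are
# arbitrary; point PATH at a file on the virtual disk under test.
PATH = "/tmp/writetest.bin"
CHUNK_BYTES = 1024 * 1024        # write in 1 MiB chunks
TOTAL_CHUNKS = 1024              # 1 GiB total

buf = os.urandom(CHUNK_BYTES)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL_CHUNKS):
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # force each chunk to disk, not the page cache
elapsed = time.time() - start
os.remove(PATH)

print("write throughput: %.1f MB/s" % (CHUNK_BYTES * TOTAL_CHUNKS / elapsed / 1e6))
```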
Before people jump all over me for using an integrated controller with no onboard RAM cache, I temporarily ran Fedora Linux and was able to demonstrate over 135 MB/s reads and writes to the same RAID 1 configuration. As a further test, I temporarily reconfigured the array as RAID 0 (stripe), and read and write performance in ESXi was just fine, exceeding 100 MB/s. To me, this clearly indicates that the write performance problem lies with VMware's driver and is not a limitation of the controller or the underlying hard drives.
I have scoured the internet, and all I see are people re-flashing the controller to IT mode, then using passthrough to hand the drives to a storage VM (e.g. a ZFS appliance), which re-shares the storage back to the hypervisor via NFS or iSCSI. I really don't want to do this if I don't have to!
Please, does anyone have words of wisdom to make RAID 1 performance on an LSI 2308 controller not suck under ESXi? And does anyone know whether this same problem exists when running a RAID 10 configuration?
Welcome to the community.
Performance depends on the combination of RAID level, disk RPM, how the disks are connected, and overall utilization.
You will always get better performance from Fibre Channel; since you are using SATA, try reducing the number of VMs running on it.
"...Before people jump all over me for using an integrated controller with no onboard RAM cache, I temporarily ran Fedora Linux and was able to demonstrate over 135 MB/s reads and writes to the same RAID 1 configuration..."
Truly, your onboard controller (based on the LSI SAS2308 SoC) has no cache, and that *is* the problem. And I'm going to jump all over you:
ESXi DOES NOT DO DISK CACHING!
This has been said here a zillion times, and yet some people do not want to face the hard reality. You cannot compare the results you got from Linux, because Linux (like any other modern OS) does disk caching: it reserves part of RAM (possibly several GB) for I/O buffers and disk cache. ESXi does not. It does not sacrifice a single byte of RAM to cache disk I/O; instead, it counts on the RAID controller to do that. And if your RAID controller has no cache of its own, you are effectively running your arrays uncached.
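You can demonstrate this to yourself. Here is a small sketch (the file paths and sizes are arbitrary) that writes the same amount of data twice on Linux: once letting the page cache absorb the writes, and once fsync-ing every chunk. The first number will look like your 135 MB/s; the second is what the array can actually sustain:

```python
import os
import time

def write_test(path, total_mb=512, sync=False):
    """Write total_mb MiB sequentially; with sync=True, fsync every MiB
    so the OS page cache cannot absorb the writes."""
    buf = os.urandom(1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(buf)
            if sync:
                f.flush()
                os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

# The buffered run finishes "fast" because most of the data is still
# sitting in RAM when the timer stops; the synced run shows the real
# on-disk write rate.
print("buffered: %.0f MB/s" % write_test("/tmp/buffered.bin"))
print("synced:   %.0f MB/s" % write_test("/tmp/synced.bin", sync=True))
```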
What LSI told you to do was turn on the DISK cache (the cache inside the hard drives themselves; IIRC the WD Black has 64 MB). But that is terribly small compared to the 512 MB-2 GB you will find on any real RAID controller, and it is not shared between drives (each drive has its own).
I'm afraid you cannot do a lot here, except get another controller (one with onboard cache), or maybe SSDs.
Check the following URL. Removing the lsiprovider VIB (IIRC the article does this with esxcli software vib remove -n lsiprovider) may work around the problem until ESXi 5.5 Update 2 is released: http://www.virtuallifestyle.nl/2014/08/update-smi-s-provider-causing-latency-issues