VMware Cloud Community
DKramkowski
Enthusiast

VMware Fibre Channel SAN Performance Woes

Hopefully someone here can help me get some idea what's causing my performance issues.

I have a Promise VTrak E610fD SAN linked to my VMware server using two Emulex LPe1150 HBAs at 4Gb/s.

My main datastores at this point are a four disk RAID10 and a two disk RAID1.

Promise told me that the best performance would come from a RAID5 array, and to be honest I didn't believe them, since everything I've ever read indicates RAID0/RAID10 should be the best performer. I had some extra time and some spare drives, so I decided to do some testing.

All tests were run on a Windows 7 64-bit VM using ATTO Disk Benchmark, and only after the array in question had finished building.
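For reference, a rough cross-check of ATTO's sequential numbers can be done from inside the guest with a short script. This is only a minimal Python sketch with a placeholder test path and a fixed 1MB block size, not a replacement for ATTO's full sweep of transfer sizes:

# Minimal sequential-throughput check, run inside the Windows guest.
# Assumptions: TEST_FILE sits on the volume under test and ~1GB is free.
import os, time

TEST_FILE = r"D:\bench.tmp"   # placeholder path on the array being tested
BLOCK = 1024 * 1024           # 1MB writes
TOTAL = 1024 * BLOCK          # 1GB total

buf = os.urandom(BLOCK)

# Sequential write, synced so the guest OS cache doesn't inflate the number
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_mb_s = (TOTAL / (1024.0 * 1024.0)) / (time.time() - start)

# Sequential read; the OS may still serve part of this from cache, so a
# test file larger than the VM's RAM gives a more honest figure
start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
read_mb_s = (TOTAL / (1024.0 * 1024.0)) / (time.time() - start)

os.remove(TEST_FILE)
print("write: %.1f MB/s   read: %.1f MB/s" % (write_mb_s, read_mb_s))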

I started with two 146GB 15K SAS drives in RAID1, presented to the VM as an RDM. Testing that config gave me pretty poor results, with the highest speed being roughly 120MB/s at a 16K transfer size.

I then removed the RDM mapping, deleted the array, added a third drive, and created a RAID5 array. After that finished syncing, I ran the exact same test again. The results this time were much more consistent and much better, with all write speeds for transfer sizes over 16K in excess of 142MB/s and all read speeds for the same transfer sizes in excess of 235MB/s. Needless to say, I was shocked that RAID5 blew away RAID1 using the exact same drives.

I then removed the mapping, deleted the array, created a RAID50, again using the 146GB 15K SAS drives, and waited for that to build.

Testing of the RAID50 gave me even better results, with all write speeds for transfer sizes over 128K being in excess of 235MB/s and (almost) all read speeds for the same being in excess of 360MB/s.

I then removed the RAID50 and built an array using three 1TB 7200RPM SATA drives, allowed that to build, and tested it.

The results were obviously a bit lower than the 15K SAS drives in a RAID5 array, but still quite decent with most read speeds being in excess of 220MB/s and most write speeds being in excess of 128MB/s.

Now, here's why I'm asking here instead of going to Promise: I then deleted the partition on the array, removed the RDM, created a VMware datastore on it and moved the VM I was using for testing over to the new RAID5 array. Absolutely nothing was changed on the SAN or the RAID5 array.

When I ran the same tests again on the VM, this time against its C drive, which now resides on the RAID5 array, the results were significantly lower than what I got from the exact same array when it was RDM'd to the same VM, with the absolute best read speed barely touching 128MB/s. It 'feels' as if VMware has a cap on transfer speeds to VMware datastores.

Is there something I'm missing in my config? It simply does not make sense that there is such a huge performance drop just from switching from an RDM to a VMware datastore.
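One thing worth ruling out, though I can't say it's the cause, is partition alignment: if the guest's NTFS partition (or the VMFS partition underneath it) doesn't start on a boundary that matches the array's stripe size, sequential throughput can drop quite a bit, and a raw RDM volume and a fresh VMDK on a datastore can easily end up aligned differently. A quick check of the guest side, sketched in Python (run inside the Windows VM as administrator; the 64K stripe size below is an assumption, substitute whatever the VTrak array is actually set to):

# Check guest partition starting offsets against an assumed stripe size.
import subprocess

STRIPE_BYTES = 64 * 1024  # assumed stripe size; use the array's real value

out = subprocess.check_output(
    ["wmic", "partition", "get", "Name,StartingOffset"]
).decode("utf-8", "ignore")

for line in out.splitlines()[1:]:
    parts = line.split()
    if not parts or not parts[-1].isdigit():
        continue
    offset = int(parts[-1])
    name = " ".join(parts[:-1])
    status = "aligned" if offset % STRIPE_BYTES == 0 else "NOT aligned"
    print("%s  offset=%d  -> %s" % (name, offset, status))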

The other issue I have been seeing during this testing, which may or may not be related, is with copying large files over my LAN, such as a 4GB ISO, to or from the VM that sits on the array. The copy starts out at a bit over 100MB/s, which is 'acceptable' on a 1Gb LAN considering overhead and such, but after a seemingly random amount of time it seems to pause for a short bit, then picks back up at a much slower pace, typically half of what it started at. This change in transfer speed shows up in the VM's disk performance monitor as well as in Windows.
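To pin down where that slowdown kicks in, something like this rough sketch could log throughput in one-second buckets while copying a large file (the source and destination paths are placeholders):

# Copy a large file in 1MB chunks and print MB/s once per second,
# to see exactly when the transfer rate drops (e.g. a cache filling up).
import time

SRC = r"\\fileserver\share\big.iso"   # placeholder: source on the LAN
DST = r"C:\temp\big.iso"              # placeholder: destination on the RAID5 datastore
BLOCK = 1024 * 1024

copied = 0
window_start = time.time()

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        chunk = src.read(BLOCK)
        if not chunk:
            break
        dst.write(chunk)
        copied += len(chunk)
        now = time.time()
        if now - window_start >= 1.0:
            print("%.1f MB/s" % (copied / (1024.0 * 1024.0) / (now - window_start)))
            copied = 0
            window_start = now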

Does anyone have any thoughts on what's going on, tips, or possibly some settings I could change to help performance?

Josh26
Virtuoso

I make this point repeatedly.

If a vendor designs a product and tunes it around RAID5 deployments, insisting that RAID10 should be faster is theorycraft that doesn't apply to the real world.

I'm aware this doesn't help with your RDM issue.
