The reason for your lower scores is the fact that you used a 64 queue depth on your 3.0.1 tests, but a 1 queue depth on your 3.0.2 tests.
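For anyone else trying to get an apples-to-apples comparison, the queue depth needs to be pinned explicitly in the benchmark rather than left at the tool's default. A rough sketch using fio (assuming fio is available on the Linux/guest side; /dev/sdX is a placeholder for your test LUN):

```shell
# Sequential read at an explicit queue depth of 64.
# --ioengine=libaio is needed for --iodepth to actually take effect;
# --direct=1 bypasses the page cache so you measure the array, not RAM.
fio --name=qd64-read --filename=/dev/sdX --direct=1 \
    --ioengine=libaio --iodepth=64 --rw=read --bs=64k \
    --runtime=30 --time_based

# Same test at queue depth 1, for comparison with the earlier numbers.
fio --name=qd1-read --filename=/dev/sdX --direct=1 \
    --ioengine=libaio --iodepth=1 --rw=read --bs=64k \
    --runtime=30 --time_based
```

Run both against the same LUN under the same load and the throughput gap between QD1 and QD64 should be obvious.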
Outstanding catch, Morten. This test was run by another admin, and I assumed she used the same test configuration. Just goes to show the truth in the old saw about assume.
I'll retest soon and post results, but I bet you're right.
I just wish I knew how to get performance like that.
We run HP BL465c blades with QLogic HBAs, hooked up to an EVA8000 at 2 Gbps.
While the Windows blades are able to get nearly 400 Mbytes/sec (with 2-port load balancing), VMware ESX 3.0.2 and RHEL 4 only manage around 80 Mbytes/sec. We have this problem generally with Linux machines on the SAN.
esxtop shows a queue of 0 on the HBAs, and 0 load (?!). The guest OS runs at about 80% iowait.
I suspect the QLogic Linux driver, though I hear that Emulex cards have the same problem in Linux.
Any suggestions would be greatly appreciated.
You should be getting performance like that. The test platform is a DL380 G3 with a single-port QLA2340 2Gb card. We're running 2x EVA 8k as well, w/ 4Gb Brocade.
How much other stuff is happening on the EVA? What are your IOPS & data rates on your disk groups overall?
The EVA is pretty much idle, and the Windows test and the Linux/VMware tests are done under the same conditions, on the same hardware. That is what leads me to believe it can only be something driver/OS-specific, not hardware/load-specific.
Do you have any options set for the QLA driver in VMware, like queue options? I run without any options.
I set the following now, but it was vanilla when my original performance tests were run.
esxcfg-advcfg -s 0 /Disk/UseDeviceReset   # don't do full device resets on error
esxcfg-advcfg -s 1 /Disk/UseLunReset      # use LUN-level resets instead
service mgmt-vmware restart               # restart the management service to pick up the change
/usr/sbin/esxcfg-module -s "qlport_down_retry=14 ql2xmaxqdepth=64" qla2300_707   # port-down retry + max queue depth 64
/usr/sbin/esxcfg-module -g qla2300_707    # show the options currently set on the module
/usr/sbin/esxcfg-boot -q vmkmod | /bin/grep qla2300_707   # confirm the module options made it into the boot config
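One thing worth double-checking (this is an assumption about your setup, not something I can confirm from here): on ESX 3.x, module options set with esxcfg-module don't take effect until the boot configuration is rebuilt and the host is rebooted, along the lines of:

```shell
# Rebuild the ESX boot configuration so the new module options persist,
# then reboot the host for the qla2300_707 settings to actually load.
/usr/sbin/esxcfg-boot -b
reboot
```

Without that step, the -g query can show the options as set even though the running driver never picked them up.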
Check my blog: