Working with our EVA4000 SAN (16 x 145 GB disks + 8 x 500 GB), connected to 4 x DL585s and 4 x DL385s running ESX 3.5, we are currently seeing 64 MB/s for reads and 55 MB/s for writes on the virtual machine we use for our database server. Is this typical for other people out there?
Run began: Tue Aug 12 15:30:19 2008
Using Minimum Record Size 64 KB
Using Maximum Record Size 512 KB
Using minimum file size of 1048576 kilobytes.
Using maximum file size of 5242880 kilobytes.
Command line used: /opt/iozone/bin/iozone -a -i 0 -i 1 -y 64 -q 512 -n 1G -g 5G
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
      KB  reclen    write  rewrite    read  reread
 1048576      64   119840   106420   44654   54505
 1048576     128   105871    98496   44604   52392
 1048576     256   109246    98330   46923   49939
 1048576     512   108612    97433   45398   49688
 2097152      64    79077    72022   46774   48420
 2097152     128    77208    72996   49515   51446
 2097152     256    79036    72971   46667   50004
 2097152     512    80351    67250   47873   50124
 4194304      64    68539    53477   43285   44356
 4194304     128    67747    54020   47642   41784
 4194304     256    67272    50594   34582   45051
 4194304     512    67742    54265   43592   45637
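For reference, the flags on the iozone command line above select the tests and the size ranges swept in the table (this reading is based on the standard iozone option set; double-check against your version's man page):

```shell
# -a       full automatic mode
# -i 0     include test 0 (write / rewrite)
# -i 1     include test 1 (read / re-read)
# -y 64    minimum record size, in KB
# -q 512   maximum record size, in KB
# -n 1G    minimum file size
# -g 5G    maximum file size
/opt/iozone/bin/iozone -a -i 0 -i 1 -y 64 -q 512 -n 1G -g 5G
```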
No, it's not normal; in fact, it's far from normal.
Have your storage folks check whether you have a damaged fiber or errors on the FC switch, at least for starters.
Do you have other machines connected to that same EVA4K?
The values in the Excel file don't really add up to the values you mention.
1) Find a non-ESX machine with a connection to the storage and run Iometer or a similar tool to measure the throughput of your storage, to see whether you get comparable results.
2) Check the cabling; a "damaged" fiber is enough to send your I/O down the drain.
3) Check the FC switch ports for sync errors.
What version of ESX are you using, and what policy are you using on the ESX boxes running disks on the EVA4K? And by the way, is its firmware supported?
PS: I might continue to follow up on this in the morning, in case I fall asleep in the chair.
My apologies. The spreadsheet is accurate. In my rush, I had accidentally typed kbps instead of mbps in the original question. I have edited it to read correctly and reflect the spreadsheet. I will run additional tests and ask our hosting provider to supply the additional information about ESX.
Your graphical data shows up to 70000 KB/s, which, roughly converted (unless I made a mistake), is 70 MB/s, so there has to be some sort of glitch in your system.
Are you running Linux OSes as guests?
If so, please try the following: copy a file onto one of the virtual disks, then run dd if=/<path>/<file> of=/dev/null to get a rough idea of the actual read speed, and then dd if=/<path>/<file> of=/<path>/<file2> for a combined read/write test.
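A minimal sketch of that dd check, assuming a Linux guest (the /tmp paths and the 256 MB size are placeholders for illustration; point them at the virtual disk you actually want to test):

```shell
# Create a sample file to read back (replace /tmp with a path on the
# virtual disk under test; 256 MB is an arbitrary size).
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256
sync

# Sequential read test: dd reports elapsed time and throughput.
dd if=/tmp/ddtest of=/dev/null bs=1M

# Combined read/write test: copy the file to a second one.
dd if=/tmp/ddtest of=/tmp/ddtest2 bs=1M

rm -f /tmp/ddtest /tmp/ddtest2
```

Note that the read may be served from the guest's page cache if the file fits in guest RAM, so use a file larger than memory for an honest number.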
Did you check my other recommendations?
I'm not familiar with iozone (I tend to use Iometer), but if these numbers are for sequential I/O, then yes, 70 MB/s is a very low number when measured from within a VM. Typically such a benchmark should run at or near the speed of the medium (~200 MB/s for 2 Gb fibre), at least on present-day CPUs. For random I/O, however, your bottleneck is usually the number of I/Os per second your disks can service, and the measured throughput in MB/s may even fall into single digits.
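To put back-of-the-envelope numbers on the random-I/O point (the per-disk IOPS figure and block size below are rough assumptions for 10k RPM FC spindles, not measurements from this system):

```shell
disks=24             # 16 + 8 spindles in the EVA described above
iops_per_disk=150    # rough random-IOPS figure per 10k RPM drive (assumption)
block_kb=8           # typical database random-I/O block size (assumption)

total_iops=$((disks * iops_per_disk))
throughput_mb=$((total_iops * block_kb / 1024))
echo "~${total_iops} IOPS -> ~${throughput_mb} MB/s at ${block_kb} KB blocks"
```

With fewer spindles behind the LUN, or other hosts sharing those spindles, the figure drops quickly toward single digits.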
If the EVA also hosts data for non-vmware servers, there could potentially be a high load on the storage system itself and this might bottleneck your test.
Anyway, if you think the way you would for a physical box, you can also achieve some degree of improvement.
For instance, if you create two vSCSI controllers, spread your disks across the two, and then, as you create the PVs and form the VG, add them in the right order and use striping, you can get better results.
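A sketch of that layout inside a Linux guest (the device names /dev/sdb and /dev/sdc, the volume-group name, and the sizes are hypothetical; they stand for one virtual disk on each vSCSI controller — this needs root and real block devices, so adapt before running):

```shell
# One disk per vSCSI controller (hypothetical device names).
pvcreate /dev/sdb /dev/sdc

# Add both PVs to one VG in a known order.
vgcreate datavg /dev/sdb /dev/sdc

# -i 2 stripes the LV across both PVs; -I 64 sets a 64 KB stripe size,
# so I/O alternates between the two controllers.
lvcreate -i 2 -I 64 -L 50G -n datalv datavg

mkfs.ext3 /dev/datavg/datalv
```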
But strangely enough, according to EMC the values you described aren't really all that bad ... all in all, I just find it a tad weird.
Get your results and I'll run those same tests on an EVA3K, an EVA8K, and the CX3-80 I have at work, to see if we can draw a comparison.
Francisco Cardoso, Logica PT - VCP
Thanks for the help! I have passed all of the really helpful suggestions on to the hosting provider. It may take some time for them to come back to me with the information. They have suggested using RDM instead of VMFS; however, I understand from reading the documentation that RDM is not suitable for databases. What specific policies would you like to see (e.g. resource policies)?
Depending on what configuration you have, cluster or non-cluster, you might find yourself compelled to choose one over the other.
Bear in mind that you can use both with databases, and I really see no reason to prefer RDM over virtual disks in most scenarios, unless of course the database guys are RDM weirdos ;).
Francisco Cardoso, Logica PT - VCP