Hello everybody,
the old thread has grown very long - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.
Oliver will make a few links between the old and the new one and then he will close the old thread.
Thanks for joining in.
Regards,
Christian
Hi JonT,
I have tested my 2008 R2 VM with the same test configuration, but with Jumbo Frames disabled on both virtual NICs only (left enabled on the vSwitch, the VMkernel iSCSI ports, the physical switches, and the SAN).
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........10.7937..........4402.22.........137.57
RealLife-60%Rand-65%Read......12.5362..........3717.64.........29.04
Max Throughput-50%Read.........8.5457..........4344.71.........135.77
Random-8k-70%Read..............13.2489..........3766.60.........29.43
As you can see, the results are worse than in the former tests with Jumbo Frames enabled (throughput 5.8% - 17.7% lower, response times 21% - 38% higher CPU utilization).
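For reference, the percentage deltas quoted above are simple relative changes. A quick sketch; note the jumbo-frame baseline value below is a made-up placeholder, since the earlier post's exact figures aren't reproduced here:

```python
# Relative-change helper for comparing two IOMeter runs.

def pct_change(new, old):
    """Percent change of `new` relative to `old` (negative = lower)."""
    return (new - old) / old * 100.0

# Hypothetical baseline (assumed MB/s with jumbo frames enabled):
jumbo_mbps = 150.0
# MB/s from the jumbo-frames-disabled table above:
no_jumbo_mbps = 137.57
print(f"{pct_change(no_jumbo_mbps, jumbo_mbps):.1f}% throughput change")
# prints: -8.3% throughput change
```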
We use the following configuration for ESXi networking:
Four VMkernel ports on a single vSwitch (vSwitch1), used for iSCSI traffic only. Each VMkernel port uses a dedicated physical vmnic (no standby adapter).
All of these ports are bound to the software iSCSI vmhba with the Round Robin MPIO policy.
In addition, there are four Virtual Machine Port Groups attached to the same vSwitch1; each port group uses a dedicated physical vmnic (no standby adapter). These Virtual Machine Port Groups were used for the tests mentioned above.
vmk1 -> vmnic1
vmk2 -> vmnic2
vmk3 -> vmnic3
vmk4 -> vmnic4
Virtual Machine Port Group 1 -> vmnic1
Virtual Machine Port Group 2 -> vmnic2
Virtual Machine Port Group 3 -> vmnic3
Virtual Machine Port Group 4 -> vmnic4
We don't use link aggregation on the physical switch ports, because the general recommendation is to use only MPIO for managing paths to iSCSI storage.
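A sketch of how that port binding is done with the ESX 4.x CLI; the adapter name `vmhba33` and the `naa.` device ID are placeholders for your own software iSCSI adapter and volume:

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter
# (vmhba33 is a placeholder -- verify with `esxcli swiscsi nic list`).
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic add -n vmk3 -d vmhba33
esxcli swiscsi nic add -n vmk4 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33

# Set the Round Robin path policy on the volume (device ID is a placeholder)
esxcli nmp device setpolicy --device naa.<your-device-id> --psp VMW_PSP_RR
```

A reboot is not required for the bindings, but the software iSCSI adapter must already be enabled before adding the vmk ports.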
So I have tested the same VM as above, but now with 4 iSCSI sessions from 2 virtual NICs. These are my new results, which are great! What do you think?
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........8.3057..........6544.23.........204.51
RealLife-60%Rand-65%Read......12.4962..........3686.39.........28.80
Max Throughput-50%Read.........7.1141..........7587.90.........237.12
Random-8k-70%Read..............13.1056..........3809.72.........29.76
And that is great to hear. It actually makes sense: if I remember correctly, your vSwitch configuration uses separate pNICs, so you should now have 2 vNICs going to separate physical switches, right? Also, your 2K8R2 guest is now truly using its MPIO to balance traffic across the multiple connections, which should have contributed to your gain. Sorry I didn't even think to suggest adding another vNIC, but I rarely need to use iSCSI; we have an all-Fibre-Channel SAN in all our locations.
Glad to hear!
JonT
Here is my result from the newly set-up EQL PS6000XV. I noticed the hard disks are Seagate Cheetah 15K.7 (6Gbps), even though the PS6000XV is a 3Gbps array.
(I originally thought they would ship me Seagate Cheetah 15K.6 drives.)
I've also spent half a day today conducting the test on different generations of servers, against local storage, DAS, and SAN.
The results make sense and look reasonable if you dig into them:
RAID10 > RAID5, SAN > DAS >= local, and the EQL PS6000XV rocks, despite warnings that all 4 links were 99.9% saturated during the sequential tests. (That's because I increased the workers to 5; that run isn't in the results below but in a separate Max Throughput-100%Read test.)
Finally, I wonder why there aren't many results from Lefthand, NetApp, 3PAR and HDS?
Enjoy,
Jack
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM on ESX 4.1 with EQL MEM Plugin
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: Dell PE R710, 96GB RAM; 2 x Xeon 5650, 2.66 GHz, 12 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K) / 14+2 600GB 15K Disks (Seagate Cheetah 15K.7) / RAID10 / 500GB Volume, 1MB Block Size
SAN TYPE / HBAs : ESX Software iSCSI, Broadcom 5709C TOE+iSCSI Offload NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........5.4673..........10223.32.........319.48
RealLife-60%Rand-65%Read......15.2581..........3614.63.........28.24
Max Throughput-50%Read..........6.4908..........4431.42.........138.48
Random-8k-70%Read.................15.6961..........3510.34.........27.42
EXCEPTIONS: CPU Util. 83.56, 47.25, 88.56, 44.21%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 1
HOST TYPE: Dell PER610, 12GB RAM; E6520, 2.4 GHz, 4 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage, PERC H700 (LSI), 512MB Cache with BBU, 4 x 300 GB 10K SAS/ RAID5 / 450GB Volume
SAN TYPE / HBAs : Broadcom 5709C NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........2.7207..........22076.17.........689.88
RealLife-60%Rand-65%Read......50.4486..........906.69.........7.08
Max Throughput-50%Read..........2.5429..........22993.78.........718.56
Random-8k-70%Read.................55.1896..........841.89.........6.58
EXCEPTIONS: CPU Util. 6.32, 6.94, 5.95, 6.98%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: Dell PE2450, 2GB RAM; 2 x PIII-S, 1.26 GHz, 2 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage, PERC3/Si (Adaptec), 64MB Cache, 3 x 36GB 10K U320 SCSI / RAID5 / 50GB Volume
SAN TYPE / HBAs : Intel Pro 100 NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........44.1448..........1326.03.........41.44
RealLife-60%Rand-65%Read......93.1499..........456.88.........3.57
Max Throughput-50%Read..........143.9756..........269.51.........8.42
Random-8k-70%Read.................80.27..........502.63.........3.93
EXCEPTIONS: CPU Util. 23.33, 13.23, 11.65, 12.51%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DIY, 3GB RAM; 2 x PIII-S, 1.26 GHz, 2 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage, LSI Megaraid 4D (LSI), 128MB Cache, 4 x 300GB 7.2K SATA / RAID5 / 900GB Volume
SAN TYPE / HBAs : Intel Pro 1000 NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........15.1582..........3882.81.........121.34
RealLife-60%Rand-65%Read......60.2697..........499.05.........3.90
Max Throughput-50%Read..........2.8170..........2337.38.........73.04
Random-8k-70%Read.................152.8725..........244.40.........19.1
EXCEPTIONS: CPU Util. 16.84, 18.79, 15.20, 17.47%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: Dell PE2650, 4GB RAM; 2 x Xeon, 2.8 GHz, 2 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage, PERC3/Di (Adaptec), 128MB Cache, 5 x 36 GB 10K U320 SCSI / RAID5 / 90GB Volume
SAN TYPE / HBAs : Broadcom 1000 NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........33.9384..........1743.55.........54.49
RealLife-60%Rand-65%Read......111.2496..........310.62.........2.43
Max Throughput-50%Read..........55.7005..........518.47.........16.20
Random-8k-70%Read.................122.5364..........317.95.........2.48
EXCEPTIONS: CPU Util. 7.66, 6.97, 7.78, 9.27%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DIY, 3GB RAM; 2 x PIII-S, 1.26 GHz, 2 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage (DAS), PowerVault 210S with LSI Megaraid 1600 Elite (LSI), 128MB Cache with BBU, 12 x 73GB 10K U320 SCSI split into two channels, 6 disks each / RAID5 / 300GB Volume x 2, fully utilizing the RAID card's two U160 interfaces.
SAN TYPE / HBAs : Intel Pro 1000 NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........28.9380..........3975.19.........124.22
RealLife-60%Rand-65%Read......30.2154..........2913.15.........84.17
Max Throughput-50%Read..........31.0721..........3107.95.........97.12
Random-8k-70%Read.................33.0845..........2750.71.........78.00
EXCEPTIONS: CPU Util. 23.91, 22.02, 26.01, 20.24%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DIY, 4GB RAM; 2 x Opteron 285, 2.4GHz, 4 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage, Areca ARC-1210, 128MB Cache with BBU, 4 x 73GB 10K WD Raptor SATA / RAID 5 / 200GB Volume
SAN TYPE / HBAs : Broadcom 1000 NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........0.2175..........10932.45.........341.64
RealLife-60%Rand-65%Read......88.3245..........393.66.........3.08
Max Throughput-50%Read..........0.2622..........9505.30.........296.95
Random-8k-70%Read.................109.6747..........336.66.........2.63
EXCEPTIONS: CPU Util. 14.11, 7.04, 13.23, 7.80%;
##################################################################################
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: Tyan, 8GB RAM; 2 x Opteron 285, 2.4GHz, 4 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Storage, LSI Megaraid 320-2X, 256MB Cache with BBU, 4 x 36GB 15K U320 SCSI / RAID 5 / 90GB Volume
SAN TYPE / HBAs : Broadcom 1000 NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........0.4261..........7111.26.........222.23
RealLife-60%Rand-65%Read......30.1981..........498.56.........3.90
Max Throughput-50%Read..........0.5457..........5974.71.........186.71
Random-8k-70%Read.................42.7504..........496.88.........3.88
EXCEPTIONS: CPU Util. 29.71, 24.51, 27.74, 32.93%;
Hi poweredge 2010,
could you please explain how many NICs (physical and virtual) you were using for the tests from the VM to the EQL PS6000XV, which MPIO policy, which guest OS and iSCSI initiator, and so on? Did the VM access the LUN directly via the guest's iSCSI initiator?
Sorry for asking so many questions about your setup, but your results are quite fantastic and I want to compare them with my former results. My environment should be similar to yours.
- Physical: The ESX 4.1 host has 4 Broadcom 5709C NICs for iSCSI (for redundancy, I used 2 from the LOM, 1 from the Riser 1 quad-port 5709C, and 1 from the Riser 2 quad-port 5709C). Those 4 NICs are connected to 2 PowerConnect 5448 switches (2 NICs on each switch), and the 4 ports from the EQL PS6000XV also connect to both switches (2 ports on each switch). Jumbo Frames and Flow Control are enabled along the whole path: on the switches, on the server, and inside ESX.
- Virtual: The testing VM is W2K3 (I didn't even bother to install any Service Pack) with 2GB RAM, hardware version 7, a Paravirtual disk controller, and just one NIC (VMXNET3), which shows up as a 10Gbps NIC. So IOMeter is really testing the ESX host; this VM is just a helper. I did not enable Jumbo Frames in the guest or use RDM, as the VM's file system sits on a VMFS volume the ESX host maps to an EQL volume. The 5709C does support a hardware iSCSI HBA mode, but since that mode doesn't support Jumbo Frames (someone's tests suggested Jumbo Frames matter far more than the hardware HBA), I used the ESX built-in software iSCSI instead. Finally, I didn't install the HIT Kit on this VM.
- On the EQL PS6000XV, I've just upgraded to the latest FW 5.0.2 and installed EQL's MEM plugin to enable storage hardware acceleration and VAAI, so the default ESX MPIO policy is DELL_PSP_EQL_ROUTED, which is basically an add-on on top of Round Robin.
- My results actually agree with those of many other EQL users, especially those running a PS6000XV or PS5000XV in RAID10 configuration.
Btw, one thing I don't quite get: in the results I saw someone else also using a PS6000XV or PS5000XV, but they claim the random IOPS from RAID10 are almost the same as from RAID50. That can't be right; I thought RAID10 gives at least 50-70% more random IOPS than RAID50 in an ESX environment (or was that figure for sequential I/O? I can't remember whether the 50-70% advantage of RAID10 over RAID50 applies to random or sequential). Can anyone confirm this?
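On the RAID10 vs RAID50 question: a common back-of-envelope model charges each random write a RAID write penalty (2 disk I/Os for RAID10, 4 for RAID5, and likewise 4 for RAID50, whose stripes are RAID5 sets). A sketch, with an assumed ~180 IOPS per 15K spindle:

```python
# Back-of-envelope random-IOPS model (a rough sketch, not EqualLogic's
# actual behavior): reads cost 1 disk I/O, writes cost `write_penalty`.

def host_iops(spindles, disk_iops, read_frac, write_penalty):
    """Host-visible random IOPS for a given read fraction."""
    # One host I/O consumes read_frac*1 + (1-read_frac)*penalty disk I/Os.
    cost = read_frac + (1 - read_frac) * write_penalty
    return spindles * disk_iops / cost

# 14 data spindles of 15K disks (~180 IOPS each assumed), 65% read mix:
r10 = host_iops(14, 180, 0.65, 2)   # RAID10 write penalty: 2
r50 = host_iops(14, 180, 0.65, 4)   # RAID50 (RAID5 stripes) penalty: 4
print(f"RAID10 ~{r10:.0f} IOPS, RAID50 ~{r50:.0f} IOPS, "
      f"ratio {r10 / r50:.2f}x")
# prints: RAID10 ~1867 IOPS, RAID50 ~1229 IOPS, ratio 1.52x
```

At a 65% read mix this predicts roughly 1.5x the random IOPS for RAID10 over RAID50, in line with the 50-70% figure; controller write-back cache can mask much of that gap in short benchmarks, which may explain the similar results some posters saw.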
Thanks,
Jack
I've just performed another test using the R610, this time against the EQL PS6000XV, also using 2 workers to push it to its limit and see where it tops out.
7197.69 IOPS for RealLife-60%Rand-65%Read is really, really high for a single array with 14 15K SAS spindles!
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 1
HOST TYPE: Dell PER610, 12GB RAM; E6520, 2.4 GHz, 4 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K), / 14+2 600GB 15K Disks (Seagate Cheetah 15K.7) / RAID10 / 500GB Volume, 1MB Block Size
SAN TYPE / HBAs : Broadcom 5709C NICs with 2 paths only (ie, 2 physical NICs to SAN)
Worker: Using 2 workers to push the PS6000XV to its IOPS peak!
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........14.3121..........6639.48.........207.48
RealLife-60%Rand-65%Read......12.8788..........7197.69.........150.51
Max Throughput-50%Read.........11.3125..........6837.76.........213.68
Random-8k-70%Read..............13.7343..........6739.38.........142.22
EXCEPTIONS: CPU Util. 25.99, 24.10, 28.22, 25.36%;
##################################################################################
I admit it's not fair to use 2 workers, and in fact SAN HQ was complaining about 1% TCP retransmits when I pushed the box that hard, so the following is with the normal 1 worker again.
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 1
HOST TYPE: Dell PER610, 12GB RAM; E6520, 2.4 GHz, 4 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K), / 14+2 600GB 15K Disks (Seagate Cheetah 15K.7) / RAID10 / 500GB Volume, 1MB Block Size
SAN TYPE / HBAs : Broadcom 5709C NICs with 2 paths only (ie, 2 physical NICs to SAN)
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........8.7584..........5505.30.........172.04
RealLife-60%Rand-65%Read......12.5239..........4032.84.........31.51
Max Throughput-50%Read.........6.8786..........6455.76.........201.74
Random-8k-70%Read..............14.96..........3435.59.........26.84
EXCEPTIONS: CPU Util. 19.37, 10.33, 18.28, 9.78%;
Impressive Equallogic PS6000XV IOPS result
I just performed the test again 3 times and confirmed the following. This is with the default 1 worker only, IOMeter testing the VM's VMFS directly (no MPIO direct mapping to the EQL array); the VM is hardware version 7, the disk controller is Paravirtual, and the NIC is VMXNET3.
SERVER TYPE: VM on ESX 4.1 with EQL MEM Plugin, VAAI enabled with Storage Hardware Acceleration
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: Dell PE R710, 96GB RAM; 2 x Xeon 5650, 2.66 GHz, 12 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K), / 14+2 600GB Disks / RAID10 / 500GB Volume, 1MB Block Size
SAN TYPE / HBAs : ESX Software iSCSI, Broadcom 5709C TOE+iSCSI Offload NIC
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........4.1913..........13934.42.........435.45
RealLife-60%Rand-65%Read......13.4110..........4051.49.........31.65
Max Throughput-50%Read.........5.5166..........10240.39.........320.01
Random-8k-70%Read..............14.1525..........3915.15.........28.95
EXCEPTIONS: CPU Util. 67.82, 38.12, 56.80, 40.2158%;
##################################################################################
RealLife-60%Rand-65%Read at 4051 IOPS is really impressive for a single array with 14 15K RPM spindles!
I think what really helped are:
- the VM being hardware version 7 with a Paravirtual disk controller
- VAAI enabled with storage hardware acceleration
Hi guys,
Is the Google spreadsheet still being filled with all the results?
The original one is a bit old (http://spreadsheets.google.com/pub?key=p2IFgyUF_v5Jn-7QobgY9Fw)...
TIA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM
CPU TYPE / NUMBER: CPU / 1
HOST TYPE: R610, 72GB RAM; 2 x Xeon X5680, 3.33 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: AMS 2100 / 5+1 disks (15K SAS) / RAID5
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........5.2631..........11254.........351
RealLife-60%Rand-65%Read......8.1451..........4243.........33.14
Max Throughput-50%Read.........9.5291..........5470.........170.94
Random-8k-70%Read..............6.052..........5180.........40.48
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
SERVER TYPE: VM
CPU TYPE / NUMBER: CPU / 1
HOST TYPE: R610, 72GB RAM; 2 x Xeon X5680, 3.33 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: AMS 2100 / 5 x 8+1 disks (15K SAS, 45 disks total) / RAID5
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........5.347..........11104.........347
RealLife-60%Rand-65%Read......6.067..........5581.........43.60
Max Throughput-50%Read.........7.636..........7428.........232.13
Random-8k-70%Read..............4.657..........5990.........46.79
EXCEPTIONS: CPU Util.-XX%;
Here are my results. From reading the other results, I think mine are pathetic.
Please shed some light so I can go to the Storage Team and "share" the light.
FYI, I ran the same test on a physical server and the results were almost the same... (4 MBps on the RealLife 60% test.)
SERVER TYPE: VMware ESX 3.5 U5
GUEST OS / CPU / RAM: Win2K3 SP2, 1 vCPU, 2GB
HOST TYPE: HP DL380 G6, 72GB RAM, 2 x Intel X5570, 2.93GHz, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP StorageWorks XP24000, RAID-5 14+2, 15K RPM drives, 10 x 90GB LUNs on the datastore used by the VM
SAN TYPE / HBAs : FC EMULEX LPe11000 on Brocade switches
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################
Max Throughput-100%Read....5.6884..........10216......... 319.26 CPU=28.02%
RealLife-60%Rand-65%Read...13.0071........500.23........ 3.908 CPU=4.747%
Max Throughput-50%Read......1.3554 .........2302........... 71.967 CPU=9.53%
Random-8k-70%Read.............12.11...........493.92......... 3.858 CPU=4.32%
##################################################################
The SAN is also running 100 other VMs ...
Alex
Consultant - VMware Specialist
Where are your 10 x 90GB LUNs joined, at the SAN frame or on the host? If the SAN team has created a larger volume for you using the 10 x 90GB devices, you are fine, but performance will generally be terrible if you use extents on the host like that. Creating a large datastore on the host by concatenating 10 devices will only use a certain number of spindles at a time on the frame, typically all from the same array group. My recommendation is to use one single LUN, sized anywhere between 200-500GB, from the storage frame to your cluster of hosts, so that your datastore is a single volume on a single LUN. Let the storage frame do all the striping and disk separation for you.
Test that configuration out and let us all know if your numbers improve. Generally your hardware is sufficient for much more than that. Also watch the CPU usage on the entire host, as that could impact your test VM's performance to some degree.
Hope this helps!
The 10 x 90GB LUNs are joined using extents on the hosts.
That's the answer I got from the storage team: "Everything is RAID-5, 14+2, so a single LDEV spans 16 physical disks. Tier-2 uses 15K RPM drives. We try not to assign 2 LDEVs from the same RAID group, but sometimes that is impossible. The 10 x 90GB LUNs should be able to sit on separate RAID groups; therefore, you would have 160 spindles servicing your I/Os."
So according to them, 160 spindles should be sufficient?
Alex
Consultant - VMware Specialist
160 spindles is excellent, but the problem is that if you take the 10 devices and make one volume out of them at the host level, they are not striped across all 160. You will only get the I/O of 16 spindles as you read/write your way across those 90GB volumes in your test. It's entirely possible that your VM lives entirely on one RAID array with those 16 spindles. If you have the SAN team stripe the 10 devices for you (if possible) within the array and present one 900GB volume, your I/O will be spread across all 160 spindles. Make sense? The ESX host will only concatenate, which just adds each volume to the end of the previous one to make a larger volume.
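The concatenation-vs-striping difference described above can be sketched with a toy block-mapping model (device sizes and stripe width here are illustrative only, not ESX internals):

```python
# Sketch of why host-level extents concatenate rather than stripe.

def concat_device(lba, dev_size, n_dev):
    """Concatenation: fill device 0 completely, then device 1, ..."""
    return lba // dev_size

def stripe_device(lba, stripe_blocks, n_dev):
    """Striping: round-robin chunks of stripe_blocks over all devices."""
    return (lba // stripe_blocks) % n_dev

# A VM working set confined to the first tenth of a 10-device volume:
blocks = range(0, 1000)
concat_hits = {concat_device(b, dev_size=1000, n_dev=10) for b in blocks}
stripe_hits = {stripe_device(b, stripe_blocks=8, n_dev=10) for b in blocks}
print(sorted(concat_hits))  # -> [0]: all I/O lands on one extent
print(sorted(stripe_hits))  # -> devices 0..9: I/O fans out over all ten
```

With concatenated extents, a working set that fits in the first extent never touches the other nine devices (so only that extent's 16 spindles serve the I/O); with array-side striping, even a small working set spreads across all the spindles.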
First attempt at this, so I hope I've got it right. I ran this against a 600GB LUN. Does it look OK?
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: HP DL380 G6, 2 x E5540 2.53GHz Quad. 24GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP P4500 G2 x 4 / 48 x 420GB / RAID 5 + nRAID 10
SAN TYPE / HBAs : 2 paths via 2 x ProCurve 6600 @ 1GB / HP NC382i with HP DSM
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........16.6..........3599.86.........112.49
RealLife-60%Rand-65%Read......12.93..........3183.34.........24.86
Max Throughput-50%Read..........9.81..........5637.98.........176.18
Random-8k-70%Read.................7.68..........3172.32.........35.39
EXCEPTIONS: CPU 10, 6, 11, 11
###########################################################################
For comparison, I've also tested its DAS:
SERVER TYPE: Physical
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: HP DL380 G6, 2 x E5540 2.53GHz Quad. 24GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP P410i / 2 x 146GB 15K / RAID 1
SAN TYPE / HBAs : N/A
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........1.32..........44525.46.........1391.42
RealLife-60%Rand-65%Read......65.59..........751.36.........3.8
Max Throughput-50%Read..........69.3..........822.18.........25.69
Random-8k-70%Read.................61.97..........837.51.........6.54
EXCEPTIONS: CPU 4, 9, 3, 4
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Windows 2003 std R2 SP2
CPU TYPE / NUMBER: CPU / 4
HOST TYPE: HP Proliant DL380 G3, 4GB RAM; 4 x Xeon 3.06 GHz Quad-Core
SAN Type: EMC Clariion CX4-120 / Disks: 300 GB 15k SAS / RAID LEVEL: Raid5 / 5 Disks / Emulex LP 9802 4Gbit FC HBA
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........9..........6097.........185
RealLife-60%Rand-65%Read......29..........1532.........12
Max Throughput-50%Read.........7..........8681.........271
Random-8k-70%Read..............28..........1666.........13
CPU 9-10-12-10
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Windows 2003 std R2 SP2
CPU TYPE / NUMBER: CPU / 4
HOST TYPE: HP Proliant DL380 G3, 4GB RAM; 4 x Xeon 3.06 GHz Quad-Core
SAN Type: EMC Clariion CX4-120 / Disks: 300 GB 15k SAS / RAID LEVEL: Raid0 / 6 Disks / Emulex LP 9802 4Gbit FC HBA
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........10..........5924.........185
RealLife-60%Rand-65%Read......19..........2664.........21
Max Throughput-50%Read.........7..........8537.........266
Random-8k-70%Read..............17..........2862.........22
CPU 9-9-11-9
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Windows 2003 std R2 SP2
CPU TYPE / NUMBER: CPU / 4
HOST TYPE: HP Proliant DL380 G3, 4GB RAM; 4 x Xeon 3.06 GHz Quad-Core
SAN Type: EMC Clariion CX4-120 / Disks: 300 GB 15k SAS / RAID LEVEL: Raid1/0 / 8 Disks / Emulex LP 9802 4Gbit FC HBA
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........10..........6095.........190
RealLife-60%Rand-65%Read......19..........2717.........21
Max Throughput-50%Read.........7..........8660.........270
Random-8k-70%Read..............17..........2913.........23
CPU 9-9-12-10
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Windows 2003 std R2 SP2
CPU TYPE / NUMBER: CPU / 4
HOST TYPE: HP Proliant DL380 G3, 4GB RAM; 4 x Xeon 3.06 GHz Quad-Core
SAN Type: EMC Clariion CX4-120 / Disks: 300 GB 15k SAS / RAID LEVEL: Raid6 / 8 Disks / Emulex LP 9802 4Gbit FC HBA
##################################################################################
TEST NAME----------------------Av. Resp. Time ms--Av. IOs/sek--Av. MB/sek
##################################################################################
Max Throughput-100%Read........10..........5933.........185
RealLife-60%Rand-65%Read......26..........1631.........13
Max Throughput-50%Read.........7..........8497.........265
Random-8k-70%Read..............26..........1774.........13
CPU 7-11-12-10
This seems a bit easier to read and compare:
Max Throughput-100%Read
Av. IOs/sek Av. MB/sek Av. Resp. Time CPU
vmware_raid0 5924.357251 185.136164 10.143605 8.827404
vmware_raid10 6095.489571 190.484049 9.847174 8.744632
vmware_raid5 6097.029427 190.532170 9.839976 8.993979
vmware_raid6 5932.975720 185.405491 10.133632 7.313606
Max Throughput-50%Read
Av. IOs/sek Av. MB/sek Av. Resp. Time CPU
vmware_raid0 8537.344950 266.792030 6.939949 11.317932
vmware_raid10 8660.177426 270.630545 6.840767 11.711477
vmware_raid5 8680.992267 271.281008 6.821322 11.967368
vmware_raid6 8497.595827 265.549870 6.979722 11.532469
Random-8k-70%Read
Av. IOs/sek Av. MB/sek Av. Resp. Time CPU
vmware_raid0 2862.687167 22.364743 17.769811 9.322945
vmware_raid10 2913.020388 22.757972 17.112788 9.639570
vmware_raid5 1666.858224 13.022330 28.530374 9.701085
vmware_raid6 1773.546681 13.855833 25.752040 10.418077
RealLife-60%Rand-65%Read
Av. IOs/sek Av. MB/sek Av. Resp. Time CPU
vmware_raid0 2664.346540 20.815207 19.339617 8.512301
vmware_raid10 2717.904381 21.233628 18.842542 8.529612
vmware_raid5 1532.989034 11.976477 28.874925 10.464225
vmware_raid6 1631.977805 12.749827 26.287937 11.386308