Attention!
Since this thread is getting longer and longer, not to mention the load times, Christian and I decided to close this thread and start a new one.
The new thread is available here:
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
My idea is to create an open thread with uniform tests; all results are unofficial and come without any warranty.
If anybody disagrees with some results, they are welcome to run their own tests and present their results here too.
This way I hope to classify the different systems and give a "neutral" performance comparison.
I should also mention that performance is only one of many aspects in choosing the right system.
Others could be, e.g.:
- support quality
- system management integration
- distribution
- first-hand experience
- additional features
- costs for the storage system and infrastructure, etc.
These are the IOMETER tests:
=====================================
######## TEST NAME: Max Throughput-100%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
######## TEST NAME: RealLife-60%Rand-65%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
######## TEST NAME: Max Throughput-50%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
######## TEST NAME: Random-8k-70%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
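For reference, the reported Av. MB/s follows directly from Av. IOs/sec and the request size in the specs above (32 KB for the two Max Throughput tests, 8 KB for the two others). A minimal Python sketch of the conversion (the IOPS figures in the examples are made up):

```python
# Relation between the reported metrics: MB/s = IOPS x block size.
def mb_per_sec(iops, block_size_bytes):
    """Convert an IOPS figure to MB/s for a given request size."""
    return iops * block_size_bytes / (1024 * 1024)

# The Max Throughput tests use 32 KB requests:
print(round(mb_per_sec(10000, 32768), 1))  # 312.5 MB/s at 10,000 IOPS
# The RealLife / Random tests use 8 KB requests:
print(round(mb_per_sec(1500, 8192), 1))    # 11.7 MB/s at 1,500 IOPS
```

This is handy as a plausibility check on posted results: the IOPS and MB/s columns of any row should agree to within rounding.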
The global options are:
=====================================
Worker
Worker 1
Worker type
DISK
Default target settings for worker
Number of outstanding IOs,test connection rate,transactions per connection
64,ENABLED,500
Disk maximum size,starting sector
8000000,0
Run time = 5 min
For this test, disk C: is configured as the target, and the test file (8,000,000 sectors) is created on the
first run - you need that much free space on the disk.
The cache size has a direct influence on the results. On systems with more than 2 GB of cache, the test
file should be enlarged accordingly.
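A quick sanity check of those sizes (a sketch; it assumes the usual 512-byte sectors, and the `sectors_for` helper with its 2x safety factor is my own illustration, not part of the original test spec):

```python
# Test-file size for 8,000,000 sectors, assuming 512-byte sectors.
SECTOR_BYTES = 512
sectors = 8_000_000
test_file_gb = sectors * SECTOR_BYTES / 1024**3
print(f"{test_file_gb:.2f} GB")  # ~3.81 GB of free space needed

def sectors_for(cache_gb, factor=2):
    """Suggest a sector count at least `factor` x the array cache size,
    so the test file cannot fit entirely in the controller cache."""
    return int(cache_gb * factor * 1024**3 / SECTOR_BYTES)

print(sectors_for(2))  # a 2 GB cache suggests >= 8,388,608 sectors
```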
LINK TO IOMETER:
The significant results are: Av. Response Time, Av. IOs/sec, Av. MB/s.
Please also mention: what server (VM or physical); processor number/type; what storage system; how many disks.
Here is the config file (*.icf):
####################################### BEGIN of *.icf
Version 2004.07.30
'TEST SETUP ====================================================================
'Test Description
IO-Test
'Run Time
' hours minutes seconds
0 5 0
'Ramp Up Time (s)
0
'Default Disk Workers to Spawn
NUMBER_OF_CPUS
'Default Network Workers to Spawn
0
'Record Results
ALL
'Worker Cycling
' start step step type
1 5 LINEAR
'Disk Cycling
' start step step type
1 1 LINEAR
'Queue Depth Cycling
' start end step step type
8 128 2 EXPONENTIAL
'Test Type
NORMAL
'END test setup
'RESULTS DISPLAY ===============================================================
'Update Frequency,Update Type
4,WHOLE_TEST
'Bar chart 1 statistic
Total I/Os per Second
'Bar chart 2 statistic
Total MBs per Second
'Bar chart 3 statistic
Average I/O Response Time (ms)
'Bar chart 4 statistic
Maximum I/O Response Time (ms)
'Bar chart 5 statistic
% CPU Utilization (total)
'Bar chart 6 statistic
Total Error Count
'END results display
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
Max Throughput-100%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
'Access specification name,default assignment
RealLife-60%Rand-65%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
'Access specification name,default assignment
Max Throughput-50%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
'Access specification name,default assignment
Random-8k-70%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
'END access specifications
'MANAGER LIST ==================================================================
'Manager ID, manager name
1,PB-W2K3-04
'Manager network address
193.27.20.145
'Worker
Worker 1
'Worker type
DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
64,ENABLED,500
'Disk maximum size,starting sector
8000000,0
'End default target settings for worker
'Assigned access specs
'End assigned access specs
'Target assignments
'Target
C:
'Target type
DISK
'End target
'End target assignments
'End worker
'End manager
'END manager list
Version 2004.07.30
####################################### END of *.icf
TABLE SAMPLE
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM or PHYS.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 51xx, 2,66 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R50
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........__________..........__________.........__________
RealLife-60%Rand-65%Read......__________..........__________.........__________
Max Throughput-50%Read..........__________..........__________.........__________
Random-8k-70%Read.................__________..........__________.........__________
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
I hope YOU JOIN IN !
Regards
Christian
A Google Spreadsheet version is here:
Message was edited by: ken.cline@hp.com to remove ALL CAPS from thread title
Message was edited by: RDPetruska - added link to Atamido's Google Spreadsheet
Definitely true - one OS for all would be fine, but I'm not sure about Windows licensing (it should be OK to use a trial version).
I tested Iometer on Linux, but didn't get realistic results, and in addition you need the Iometer client on Windows anyway.
@christianZ: Looks like I had write-back caching enabled, but no read-ahead. The options for write policy are write back, write through, and force write back; for read policy, read ahead or adaptive read ahead. I'll try enabling read-ahead caching. Should I maybe use force write back, in case SANmelody is bypassing the cache somehow?
I've tried yet another virtualization product on my DL360 with local storage. It seems we have finally found a product with storage performance fairly similar to the VMware products I tested on this server earlier.
I also tested a paravirtualized Linux VM (as provided by a template in XenCenter), but didn't get the expected results. While I had expected better results than the Windows test, the results were not very good at all (except for CPU load). As Workstation 6 also supports paravirtual Linux guests, I wonder what results it would have shown, but this server is now close to being taken into production use.
SERVER TYPE: Virtual Windows 2003R2sp2 on XenServer release 4.0.1-4249p (xenenterprise)
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec |
Max Throughput-100%Read. | 5 | 10445 | 326 |
RealLife-60%Rand-65%Read | 44 | 810 | 6.3 |
Max Throughput-50%Read | 6.46 | 8896 | 278 |
Random-8k-70%Read. | 55.9 | 811 | 6.3 |
EXCEPTIONS: CPU Util. 92% 52% 83% 37%
SERVER TYPE: Virtual Debian 4.0, kernel 2.6.18.xs4.0.1.900.5799 on XenServer release 4.0.1-4249p (xenenterprise)
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec |
Max Throughput-100%Read. | 0.36 | 2773 | 86.6 |
RealLife-60%Rand-65%Read | 3.04 | 328 | 2.6 |
Max Throughput-50%Read | 1.38 | 724 | 22.6 |
Random-8k-70%Read. | 3.3 | 302 | 2.36 |
EXCEPTIONS: CPU Util. 0% 0% 0% 0%
Part 3: Running 8 tests at one time on 2 ESX hosts (each with 4 VMs, all clones of each other).
The back-end for this test is split - I used 3 LeftHand DL320s, 36 x 15K 300GB SAS drives at
RAID 50 (at the host level), and then both one-way (no replication) and two-way replicated volumes
for EACH host (i.e., each host has its own volume to run from). All tests were run with a 3Com 4500G
as the switch.
All tests were run on a non-formatted volume using the Microsoft iSCSI initiator.
Sessions 1 -> 4 were on a 1-Way volume.
Session 1:
Frontend: IBM x3500, 2 Xeon 5140's, 5GB RAM, 2xIntel PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 32.990719 | 1,799.65 | 56.239195 | 52.94 |
RealLife-60%Rand-65%Read | 44.732703 | 914.94 | 7.147985 | 58.46 |
Max Throughput-50%Read | 29.608262 | 1,961.87 | 61.308413 | 54.70 |
Random-8k-70%Read. | 44.877465 | 1,026.99 | 8.023367 | 50.79 |
Session 2:
Frontend: IBM x3500, 2 Xeon 5140's, 5GB RAM, 2xIntel PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 32.855391 | 1,806.96 | 56.467508 | 51.66 |
RealLife-60%Rand-65%Read | 45.451255 | 909.93 | 7.108827 | 57.28 |
Max Throughput-50%Read | 31.814684 | 1,865.16 | 58.286388 | 49.85 |
Random-8k-70%Read. | 45.038477 | 1,015.53 | 7.933811 | 51.18 |
Session 3:
Frontend: HP DL385, 12GB RAM, 2xOpteron 2.6 DC, 2xIntel e1000 PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 38.523787 | 1,502.81 | 46.962704 | 50.29 |
RealLife-60%Rand-65%Read | 46.062629 | 881.17 | 6.884125 | 57.69 |
Max Throughput-50%Read | 62.866689 | 968.49 | 30.265301 | 31.65 |
Random-8k-70%Read. | 45.212364 | 1,002.04 | 7.828432 | 51.38 |
Session 4:
Frontend: HP DL385, 12GB RAM, 2xOpteron 2.6 DC, 2xIntel e1000 PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 55.957563 | 1,070.46 | 33.451978 | 41.64 |
RealLife-60%Rand-65%Read | 56.890474 | 790.74 | 6.177638 | 50.33 |
Max Throughput-50%Read | 107.05434 | 554.62 | 17.331824 | 26.53 |
Random-8k-70%Read. | 57.601321 | 830.74 | 6.490151 | 46.29 |
Sessions 5 -> 8 were on a 2-way volume:
Session 5:
Frontend: IBM x3500, 2 Xeon 5140's, 5GB RAM, 2xIntel PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 47.009389 | 1,268.83 | 39.650974 | 48.48 |
RealLife-60%Rand-65%Read | 49.853864 | 878.28 | 6.861588 | 53.10 |
Max Throughput-50%Read | 87.856369 | 691.62 | 21.613142 | 28.12 |
Random-8k-70%Read. | 56.399108 | 834.95 | 6.523053 | 48.27 |
Session 6:
Frontend: IBM x3500, 2 Xeon 5140's, 5GB RAM, 2xIntel PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 47.319051 | 1,261.90 | 39.43428 | 49.67 |
RealLife-60%Rand-65%Read | 104.975759 | 446.25 | 3.48632 | 42.61 |
Max Throughput-50%Read | 54.936891 | 1,069.91 | 33.434719 | 43.62 |
Random-8k-70%Read. | 109.253424 | 419.66 | 3.278631 | 44.73 |
Session 7:
Frontend: IBM x3500, 2 Xeon 5140's, 5GB RAM, 2xIntel PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 27.050811 | 2,209.13 | 69.03526 | 53.81 |
RealLife-60%Rand-65%Read | 87.214384 | 508.69 | 3.974111 | 46.15 |
Max Throughput-50%Read | 37.742076 | 1,578.58 | 49.330604 | 42.86 |
Random-8k-70%Read. | 80.979612 | 535.10 | 4.180488 | 48.50 |
Session 8:
Frontend: IBM x3500, 2 Xeon 5140's, 5GB RAM, 2xIntel PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 38.51754 | 1,497.39 | 46.793515 | 49.15 |
RealLife-60%Rand-65%Read | 113.306869 | 416.15 | 3.25118 | 41.35 |
Max Throughput-50%Read | 43.552626 | 1,301.15 | 40.660856 | 46.37 |
Random-8k-70%Read. | 119.105131 | 396.74 | 3.099504 | 40.83 |
Result summary:
Session Totals From 1-Way Sessions (1 -> 4) - Response Time & CPU Use are AVERAGES; I/Os and MB/sec are SUMS
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 40.081865 | 6,179.88 | 193.12139 | 49.13 |
RealLife-60%Rand-65%Read | 48.28426525 | 3,496.78 | 27.318575 | 55.94 |
Max Throughput-50%Read | 57.83599375 | 5,350.14 | 167.19193 | 40.68 |
Random-8k-70%Read. | 48.18240675 | 3,875.30 | 30.275761 | 49.91 |
Session Totals From 2-Way Sessions (5 -> 8) - Response Time & CPU Use are AVERAGES; I/Os and MB/sec are SUMS
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 39.97419775 | 6,237.25 | 194.91403 | 50.44 |
RealLife-60%Rand-65%Read | 88.44531931 | 4,867.86 | 38.030186 | 46.51 |
Max Throughput-50%Read | 48.51689669 | 9,299.78 | 290.61811 | 43.38 |
Random-8k-70%Read. | 89.38014344 | 5,226.80 | 40.834384 | 45.99 |
Session Totals From All Sessions (1 -> 8) - Response Time & CPU Use are AVERAGES; I/Os and MB/sec are SUMS
TEST NAME | Av. Resp. Time ms | Av. IOs/sec | Av. MB/sec | CPU Use |
Max Throughput-100%Read. | 40.02803138 | 12,417.13 | 388.03541 | 49.79 |
RealLife-60%Rand-65%Read | 68.36479228 | 8,364.64 | 65.348761 | 51.22 |
Max Throughput-50%Read | 53.17644522 | 14,649.92 | 457.81003 | 42.03 |
Random-8k-70%Read. | 68.78127509 | 9,102.10 | 71.110145 | 47.95 |
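For anyone reproducing the summary rows above: response time and CPU use are averaged across sessions, while IOPS and MB/sec are summed. A minimal sketch (the two-session figures in the usage example are hypothetical):

```python
# Combine per-session Iometer results into a summary row:
# response time and CPU are averaged, IOPS and MB/s are summed.
def combine_sessions(sessions):
    """sessions: list of (resp_ms, iops, mbps, cpu_pct) tuples."""
    n = len(sessions)
    resp = sum(s[0] for s in sessions) / n   # average response time
    iops = sum(s[1] for s in sessions)       # total I/Os per second
    mbps = sum(s[2] for s in sessions)       # total MB per second
    cpu = sum(s[3] for s in sessions) / n    # average CPU utilisation
    return resp, iops, mbps, cpu

# hypothetical two-session example:
print(combine_sessions([(40.0, 1800, 56.2, 52.0),
                        (46.0, 1260, 39.4, 48.0)]))
```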
This was the best 'speed' test of the bunch. At 388+ MB/sec on the 100% read test, the MS iSCSI initiator bested the ESX initiator by about 60 MB/sec on a similar test run. And 12,400+ I/Os per second is not too shabby - the best the ESX iSCSI initiator could pull with an 8-session test was just under 10,000. The random test was almost 20 MB/sec faster as well; using the ESX initiator, the best result from an 8-session test was 45 MB/sec (with almost 6,000 I/Os per second).
How do I put a formatted table in this thread?
SERVER TYPE: Virtual Windows 2003R2sp1 on ESX 3.0.2 Build 57941
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP DL585G2, 64 GB RAM; 4x Opteron, 2,8 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL:Netapp 6070C 64 GB Cache Metrocluster
TEST NAME | IOps | Read IOps | Write IOps | MBps | Read MBps | Write MBps
Max Throughput-100%Read | 5296.48 | 5296.48 | 0.00 | 165.52 | 165.52 | 0.00
RealLife-60%Rand-65%Read | 10391.37 | 6753.17 | 3638.20 | 81.18 | 52.76 | 28.42
Max Throughput-50%Read | 8184.79 | 4089.72 | 4095.07 | 255.77 | 127.80 | 127.97
Random-8k-70%Read | 13772.11 | 9640.81 | 4131.30 | 107.59 | 75.32 | 32.28
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS 1X VM WIN2003 / ESX 3.02 SP1 ON INFORTREND S16F-R1430
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE1950, 8GB RAM, 2x XEON 5130, 2,0 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IFT S16F-R1430 (2 GB CACHE/SP) / 8x SATA / R10
SAN TYPE / HBAs : FC 4Gb; QLA2640 x 1
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........____4.8____..........___11425___.........___357____
RealLife-60%Rand-65%Read......___31.8___..........___1595___.........____12____
Max Throughput-50%Read..........____3.4___..........___14900__.........____465___
Random-8k-70%Read.................____38.4___..........___1351___.........____10.5___
EXCEPTIONS: 82%, 35%, 94%, 32% VCPU Util.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS 2 X VM WIN2003 CONCURRENT/ ESX 3.02 SP1 ON INFORTREND S16F-R1430
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE1950, 8GB RAM, 2x XEON 5130, 2,0 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IFT S16F-R1430 (2 GB CACHE/SP)
6x SATA / R5 (SP1)-> 1st VM; 8x SATA / R10 (SP2)-> 2nd VM
SAN TYPE / HBAs : FC 4Gb; QLA2640 x 1
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........__9.5 / 9.5__.........._6156 / 6165__........._192 / 192
RealLife-60%Rand-65%Read......__46 / 35__.........._1165 / 1499__.........__9.1 / 11.7__
Max Throughput-50%Read..........__7.3 / 7.6__.........._7870 / 7617__.........__246 / 238__
Random-8k-70%Read.................__57 / 44___.........._956 / 1213___.........__7.5 / 9.5___
EXCEPTIONS: 54/50%, 28/33%, 61/59%, 26/30% VCPU Util. (VM1/VM2)
##################################################################################
RAID 10 was configured as a logical volume (3x R1) - results on R50 (3x R5 = 9 disks) were only 3-5% slower.
The cache on all hard disks was enabled (~40% more throughput, especially for random IOs).
Great numbers - thanks for that. Maybe you could run the tests on 2 or 3 VMs concurrently (you have a very large cache there)?
You can e.g. copy and paste my table and put in your numbers.
Regards
Christian
I did this test with 9 VMs in parallel.
NetApp gave me the following response:
Performance measurements with tools like IOmeter inside virtual machines are not meaningful, because timing measurements deliver incorrect values due to virtualized CPU cycles. This is a general problem in VMware environments, which the "VI Performance Tuning Best Practice" points out:
"Timing numbers reported within the virtual machine can be inaccurate, especially when the processor is overcommitted".
The phenomenon becomes more pronounced when several VM instances run on one VM server. The technical problem is described in detail in the VMware technical report "Timekeeping in VMware Virtual Machines". For measurements in virtual machines, VMware recommends the tool VMmark.
On the storage controller side, DFM/OM shows roughly 130 MB/s of read and 120 MB/s of write IOs arriving (see FCPPerf.jpg, timestamp: 6 Nov, around 16:30). The IOs arriving over FCP pass through to the disk level at nearly the same rate (see VolPerf.jpg). This means that no caching effects occur on the NetApp side, and the observed throughput represents end-to-end performance. The DFM/OM measurement data show that the predicted IO throughput was achieved, but was reported incorrectly by IOmeter in the VM due to the problems mentioned above.
So, out of all this and from everyone's experience:
if you had to choose between an EqualLogic PS300 and two LeftHand 2060s, which path would you take, and why?
And what are the notable differences between the two (barring substantial virtual infrastructure differences)?
(We will be using DL380 G5s.)
Actually, I just made this decision. I went with LeftHand.
The 'why' is really more of a 'because they fit us' thing than a performance thing. I think EQ has the superior product when it comes to performance, and LH has the superior product when it comes to availability and replication options.
I didn't buy at the price point you're looking at - we actually bought 6 LeftHand DL320s - but the biggest thing that got me to go that route was being able to unplug one of the units while I was running tests on them, and nothing happened. Given our (horrible) experience with the MSA1500, this was a must for us. We're also replicating data across a 250M WAN link.
Sounds reasonable to me.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS 1X VM WIN2003 R2[SP2] / ESX 3.5 ON HDS AMS200
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE1950, 32GB RAM, 2x XEON 5355, 2.66 GHz, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: AMS200 (2 GB CACHE/SP) / 1+ 1 (FC) / R1
SAN TYPE / HBAs : FC 4Gb; QLA2642 x 1
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........_____8.3_____.........._____7021_____.........____219.4______
RealLife-60%Rand-65%Read......_____53.1_____.........._____1008_____.........______7.8____
Max Throughput-50%Read.........._____21.3_____..........____1708______........._____53.38_____
Random-8k-70%Read.................____38.3______..........____1253______.........____9.8______
EXCEPTIONS: CPU Util.- never went over 75%
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS 1X VM WIN2003 R2[SP2] / ESX 3.5 ON HDS AMS200
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE1950, 32GB RAM, 2x XEON 5355, 2.66 GHz, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: AMS200 (2 GB CACHE/SP) / 6+1 (FC) / R5
SAN TYPE / HBAs : FC 4Gb; QLA2642 x 1
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........____8.3______..........____7034______.........___219_______
RealLife-60%Rand-65%Read......_____29.9_____.........._____1508_____.........____11.7______
Max Throughput-50%Read.........._____30_____.........._____1588_____........._____49.6_____
Random-8k-70%Read.................____30______.........._____1503_____.........____11.75______
EXCEPTIONS: CPU Util.- never went over 75%
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS 1X VM WIN2003 R2[SP2] / ESX 3.5 ON HDS AMS200
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE1950, 32GB RAM, 2x XEON 5355, 2.66 GHz, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: AMS200 (2 GB CACHE/SP) / 13+1 (FC) / R5
SAN TYPE / HBAs : FC 4Gb; QLA2642 x 1
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........_____8.3_____.........._____7023_____.........____219______
RealLife-60%Rand-65%Read......_____25.2_____.........._____1804_____.........____14.1______
Max Throughput-50%Read..........____24.8______..........____1991______.........____62.2______
Random-8k-70%Read.................____24.8______..........___1853_______.........____14.4______
EXCEPTIONS: CPU Util.- never went over 75%
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS 1X VM WIN2003 R2[SP2] / ESX 3.5 ON HDS AMS200
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE1950, 32GB RAM, 2x XEON 5355, 2.66 GHz, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: AMS200 (2 GB CACHE/SP) / 7+7 (FC) / R10
SAN TYPE / HBAs : FC 4Gb; QLA2642 x 1
##################################################################################
TEST NAME......................Av. Resp. Time ms....Av. IOs/sec....Av. MB/sec
##################################################################################
Max Throughput-100%Read........___8.4_______..........___6944_______.........____217______
RealLife-60%Rand-65%Read......_____9.6_____..........____3811______.........______29.7____
Max Throughput-50%Read..........____18.1______..........____2354______.........____75______
Random-8k-70%Read.................____4.1______..........____8243______.........____64______
EXCEPTIONS: CPU Util.- never went over 75%
##################################################################################
Thanks for testing the first HDS system!
Great.
So where are you getting this table of results? All I get is a CSV file.
--Matt
Well, you can copy and paste one of the tables (well formatted) and then put in your results - you can see the results in the Iometer window.
You should run it from a VM. Running it from the service console is of very limited value.
Lars