Attention!
Since this thread is getting longer and longer, not to mention the load times, Christian and I decided to close this thread and start a new one.
The new thread is available here:
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
My idea is to create an open thread with uniform tests, where all results are unofficial and come without any warranty.
If anybody disagrees with some results, he or she can run their own tests and present the
results here too.
This way I hope to classify the different systems and give a "neutral" performance comparison.
Additionally, I want to mention that performance is only one of many aspects when choosing the right system.
Others could be, e.g.:
- support quality
- system management integration
- distribution
- own hands-on experience
- additional features
- costs for the storage system and infrastructure, etc.
Here are the IOMETER tests:
=====================================
######## TEST NAME: Max Throughput-100%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
######## TEST NAME: RealLife-60%Rand-65%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
######## TEST NAME: Max Throughput-50%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
######## TEST NAME: Random-8k-70%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
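For scripting or sanity-checking submissions, the four access-spec lines above can be parsed mechanically. A minimal Python sketch (the field names are my own shorthand for the header row, not Iometer identifiers):

```python
# Field names correspond to the Iometer access-spec header above:
# size, % of size, % reads, % random, delay, burst, align, reply
FIELDS = ["size", "pct_of_size", "pct_reads", "pct_random",
          "delay", "burst", "align", "reply"]

def parse_spec(line):
    """Parse one comma-separated access-spec line into a dict of ints."""
    return dict(zip(FIELDS, (int(v) for v in line.split(","))))

spec = parse_spec("8192,100,65,60,0,1,0,0")  # RealLife-60%Rand-65%Read
print(spec["size"], spec["pct_reads"], spec["pct_random"])  # 8192 65 60
```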
The global options are:
=====================================
Worker
Worker 1
Worker type
DISK
Default target settings for worker
Number of outstanding IOs,test connection rate,transactions per connection
64,ENABLED,500
Disk maximum size,starting sector
8000000,0
Run time = 5 min
For testing, disk C: is used, and the test file (8,000,000 sectors) will be created on the
first run - you need enough free space on the disk.
The cache size has a direct influence on the results. On systems with more than 2 GB of cache, the
test file size should be increased.
LINK TO IOMETER:
Significant results are: Av. Response Time, Av. IOs/sec, Av. MB/s
Please mention: which server (VM or physical), processor number/type, which storage system, and how many disks.
Here is the config file (*.icf):
####################################### BEGIN of *.icf
Version 2004.07.30
'TEST SETUP ====================================================================
'Test Description
IO-Test
'Run Time
' hours minutes seconds
0 5 0
'Ramp Up Time (s)
0
'Default Disk Workers to Spawn
NUMBER_OF_CPUS
'Default Network Workers to Spawn
0
'Record Results
ALL
'Worker Cycling
' start step step type
1 5 LINEAR
'Disk Cycling
' start step step type
1 1 LINEAR
'Queue Depth Cycling
' start end step step type
8 128 2 EXPONENTIAL
'Test Type
NORMAL
'END test setup
'RESULTS DISPLAY ===============================================================
'Update Frequency,Update Type
4,WHOLE_TEST
'Bar chart 1 statistic
Total I/Os per Second
'Bar chart 2 statistic
Total MBs per Second
'Bar chart 3 statistic
Average I/O Response Time (ms)
'Bar chart 4 statistic
Maximum I/O Response Time (ms)
'Bar chart 5 statistic
% CPU Utilization (total)
'Bar chart 6 statistic
Total Error Count
'END results display
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
Max Throughput-100%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
'Access specification name,default assignment
RealLife-60%Rand-65%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
'Access specification name,default assignment
Max Throughput-50%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
'Access specification name,default assignment
Random-8k-70%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
'END access specifications
'MANAGER LIST ==================================================================
'Manager ID, manager name
1,PB-W2K3-04
'Manager network address
193.27.20.145
'Worker
Worker 1
'Worker type
DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
64,ENABLED,500
'Disk maximum size,starting sector
8000000,0
'End default target settings for worker
'Assigned access specs
'End assigned access specs
'Target assignments
'Target
C:
'Target type
DISK
'End target
'End target assignments
'End worker
'End manager
'END manager list
Version 2004.07.30
####################################### END of *.icf
TABLE SAMPLE
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM or PHYS.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 51xx, 2,66 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R50
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read........__________..........__________.........__________
RealLife-60%Rand-65%Read......__________..........__________.........__________
Max Throughput-50%Read..........__________..........__________.........__________
Random-8k-70%Read.................__________..........__________.........__________
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
I hope YOU JOIN IN!
Regards
Christian
A Google Spreadsheet version is here:
Message was edited by:
ken.cline@hp.com to remove ALL CAPS from thread title
Message was edited by:
RDPetruska
Added link to Atamido's Google Spreadsheet
Christian,
The physical server tests were done around the same time periods. So the 1xPS100E was done while about 30 VMs were hitting the storage, and the 2xPS100E was done with about 60 VMs.
The volumes are spanned over both members. I have to agree with you; the additional member does not seem to improve the IOs from ESX, but it's hard for me to confirm because my load has changed. It does appear that performance with another member increases for a physical server. I haven't tried not spanning the volumes.
Rich
You should definitely see better performance from the two PS100s, especially in the last three tests.
Do me a favor: log into the group with SSH and run
'vol select Your_Volume_Name show'
The third line will tell you 'ActualMembers'; you should see 2.
Ben,
Here's the output-
ActualMembers: 2
It looks like performance increased when I added the second member and ran the test from a physical server, even with an increased load. It's just that the test from a VM doesn't bear that out.
Thanks,
Rich
I saw this behavior too (now running ca. 30 VMs), although at the beginning (no VMs running) I definitely saw more IOs when the volume was spanned over 2 members.
Very strange.
Upgraded the BBWC cache on the P600 from 256 to 512 MB (waited until the battery was fully charged and the cache was enabled).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS VM ON ESX / DAS (p600) on HP DL 380g5
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Win2k3 VM (1,5GB RAM, 20GB vmdk) on ESX 3.0.1
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 512MB BBWC (50/50% read/write) / 10x 146GB 10k 2,5" SAS / Raid 1+0
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read........_7.45_.........._7729.95_........._241.56_
RealLife-60%Rand-65%Read......_16.98_.........._2775.84_.........__21.69_
Max Throughput-50%Read.........._8.47_.........._6814.75_........._212.96_
Random-8k-70%Read................._15.69_.........._2992.00_.........__22.83_
EXCEPTIONS: CPU Util.-45-41-46-44%
##################################################################################
The RealLife test results are a little slower than with 256 MB; I don't know why. It's not the same host, but one with the same hardware and software configuration.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS VM on ESX / Celerra NS40
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: ESX 3.0.1
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell 2950, 16GB, 2x XEON 5140 2.327Ghz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: R1 2x15k FC Drives
SAN TYPE / HBAs: iSCSI; ESX software initiator (only uses 1 Gbit NIC), 2x Gbit NIC for iSCSI
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read........____44.2____..........___1356___.........__41.45____
RealLife-60%Rand-65%Read......_____99.81____..........___554___.........____4.33____
Max Throughput-50%Read........____44.18____..........___1305___.........____41.52___
Random-8k-70%Read............._____96.45____..........___553___.........____4.29____
Notes: Windows XP guest, Cisco 6509 VLAN, host connected via link aggregation, Celerra
connected via link aggregation. No jumbo frames because of the software initiator.
I apologize for cross-posting, but does anyone have any insights into my post here: http://www.vmware.com/community/thread.jspa?threadID=88684&tstart=0 ?
It seems like the performance I am getting within ESX with a MSA1510i is below what I should be getting, but I am out of things to troubleshoot...
Thanks in advance.
It is a bit late, but I finally got round to benchmarking our SAN. Initially I got really bad throughput; then, after an HBA firmware update, it started flying along.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM on ESX 3.01
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: BL685c, 32GB RAM; 4 x opteron 2.6, QLogic Dual Port 4GB HBA
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Netapp FAS3020c / 24+2 Disks / Raid DP (r6) FC Single Loop
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read........___10_____..........__5642___.........___176_____
RealLife-60%Rand-65%Read......____26____..........___2075___.........___16______
Max Throughput-50%Read..........___6_____..........__8074___.........____252___
Random-8k-70%Read...........___29___..........__1882_______.........___14______
EXCEPTIONS: CPU Util.-XX%;
Thanks for that.
A system similar to that of rb2006 (page 9) - a bit worse throughput, I guess caused by your RAID-DP (rb2006 has RAID 4).
That P600 is quite good - it tops the midrange systems NetApp FAS3020, EQL PS3600 (VM), HP EVA 4000... I wonder.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: ESX3.01 VM W2K3 SP1 (only 1vm on the server)
CPU TYPE / NUMBER: VCPU / 2
HOST TYPE: FSC RX300 -S2 , 10 GB RAM; 2x XEON 51xx, 2,66 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: DMX3 / 80 Disks 10k / Raid 5 (5+1) / shared storage... hmm, I don't believe the system was idle during my test
VMFS: 350GB
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read........___12.47_..........____4655__.........__145.47____
RealLife-60%Rand-65%Read......_7.63___.........._7170___.........__56__
Max Throughput-50%Read..........__13.65_..........__4242____.........__132.64________
Random-8k-70%Read.................___6.64___..........___7934__.........__61.99______
EXCEPTIONS: CPU Util.-31%;
We probably have a new champion.
Thanks for that.
My pleasure! Great thread... Chris
...tomorrow I will start Iometer on the CLARiiONs...
When possible, could you test your DMX3 with 2 or 3 VM loads (the same test on each VM) running simultaneously?
OK, will give it a try... on Monday.
OK, OK, it is Wednesday.
SERVER TYPE: ESX3.01 VM W2K3 SP1
CPU TYPE / NUMBER: VCPU / 2
HOST TYPE: FSC RX300-S3 , 10 GB RAM; 2x XEON5130 2 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: DMX3 / 80 Disks 10k / Raid 5 (5+1)
+++++++++++++++++++++++++++++++++++
+ VM1 + VM2 on the same ESX Host +
+++++++++++++++++++++++++++++++++++
######################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
######################################################################################
VM1 Max Throughput-100%Read........_22.00_..........___2670.14__.........__83.44____
VM2 Max Throughput-100%Read........_21.97_..........___2688.67__.........__84.02____
VM1 RealLife-60%Rand-65%Read......_16.39___........_3448.22___.........__27______
VM2 RealLife-60%Rand-65%Read......_16.50___........_3293.05___.........__26______
VM1 Max Throughput-50%Read........_27.20_..........__2161.49__.......__67.55__
VM2 Max Throughput-50%Read........_27.12_..........__2172.99___......__67.91__
VM1 Random-8k-70%Read............._18.04___.........___3027__.........__23.65___
VM2 Random-8k-70%Read............._17.15___.........___3176__.........__24.81___
#####################################################################################
+++++++++++++++++++++++++++++++++++
+ VM1 + VM2 on different ESX Hosts +
+++++++++++++++++++++++++++++++++++
######################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
######################################################################################
VM1 Max Throughput-100%Read........_25.13_..........___2346__.........__73.34____
VM2 Max Throughput-100%Read........_25.07_..........___2356__.........__73.63____
VM1 RealLife-60%Rand-65%Read......_15.59___........_3360.69___.........__26.26__
VM2 RealLife-60%Rand-65%Read......_15.81___........_3294.82___.........__25.74__
VM1 Max Throughput-50%Read........_25.43_..........__2261.35__.......__70.67__
VM2 Max Throughput-50%Read........_22.61_..........__2579.99___......__80.62__
VM1 Random-8k-70%Read............._12.69___.........___4218__.........__32.95___
VM2 Random-8k-70%Read............._12.85___.........___4163__.........__32.53___
Thanks for that; interesting to see how DMX3 can serve more than 1 server.
Hi all,
Here are the results I got on an EqualLogic PS3600.
I tested the unit from two different VMs, one running Windows 2003 32-bit, the other 64-bit.
I also tested Windows Longhorn 32-bit, but it seems that MPIO is not working as it should.
For comparison, I also ran the test on a bare-metal server with MPIO.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS bare metal with PS3600
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: PHYS. PE860
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE860, 1GB RAM; 1x XEON 5100, 2,80 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R10
MS iSCSI initiator with MPIO (2 NICs) enabled
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read.......__8.6_____..........__6625____.........__207_____
RealLife-60%Rand-65%Read......__14.2____..........__3910____.........__30______
Max Throughput-50%Read........__11.0____..........__5263____.........__164_____
Random-8k-70%Read.............__14.4____..........__3838____.........__30______
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS VM on ESX 3.0.1 with PS3600
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM Windows 2003 R2 32-bit
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE2950; 2GB RAM; 1x XEON 5110, 1,60 GHz, DC; boot ESX from SAN with QLA4050c
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3800 x 1 / 14+2 Disks / R10
MS iSCSI initiator from the VM with MPIO (2 NICs) enabled
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read.......__9.9_____..........__4778____.........__149_____
RealLife-60%Rand-65%Read......__14.0____..........__3957____.........__31______
Max Throughput-50%Read........__6.5_____..........__4536____.........__142_____
Random-8k-70%Read.............__14.4____..........__3860____.........__30______
EXCEPTIONS: CPU Util.-99-57-98-55%;
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS VM on ESX 3.0.1 with PS3600
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM Windows 2003 R2 64-bit
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE2950; 2GB RAM; 1x XEON 5110, 1,60 GHz, DC; boot ESX from SAN with QLA4050c
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3800 x 1 / 14+2 Disks / R10
MS iSCSI initiator from the VM with MPIO (2 NICs) enabled
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read.......__18.1____..........__3016____.........__94______
RealLife-60%Rand-65%Read......__13.8____..........__3979____.........__31______
Max Throughput-50%Read........__13.1____..........__3815____.........__119_____
Random-8k-70%Read.............__14.3____..........__3875____.........__30______
EXCEPTIONS: CPU Util.-92-60-96-57%;
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS VM on ESX 3.0.1 with PS3600
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM Windows Longhorn 32-bit
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE2950; 2GB RAM; 1x XEON 5110, 1,60 GHz, DC; boot ESX from SAN with QLA4050c
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3800 x 1 / 14+2 Disks / R10
MS iSCSI initiator from the VM with MPIO (2 NICs) enabled
##################################################################################
TEST NAME......................Av. Resp. Time ms......Av. IOs/sec.......Av. MB/sec
##################################################################################
Max Throughput-100%Read.......__15.9____..........__3308____.........__103_____
RealLife-60%Rand-65%Read......__15.1____..........__3757____.........__29______
Max Throughput-50%Read........__24.8____..........__1393____.........__43______
Random-8k-70%Read.............__15.3____..........__3704____.........__29______
EXCEPTIONS: CPU Util.-98-68-76-65%;
Thanks for that.
You are using the PS3600 for all tests, aren't you (the tables say PS3800, though)?
The MS iSCSI initiator always gives the best results (so far).