Attention!
Since this thread is getting longer and longer, not to mention the load times, Christian and I decided to close this thread and start a new one.
The new thread is available here:
[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]
My idea is to create an open thread with uniform tests; the results will all be unofficial and without any
warranty.
If anybody disagrees with some results, he/she can run their own tests and present
the results here too.
I hope this way we can classify the different systems and give a "neutral" performance comparison.
Additionally, I want to mention that performance is only one of many aspects of choosing the right system.
Others could be, e.g.:
- support quality
- system management integration
- distribution
- your own hands-on experience
- additional features
- costs for the storage system and infrastructure, etc.
Here are the IOMETER test definitions:
=====================================
######## TEST NAME: Max Throughput-100%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
######## TEST NAME: RealLife-60%Rand-65%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
######## TEST NAME: Max Throughput-50%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
######## TEST NAME: Random-8k-70%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
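The four spec lines above follow Iometer's CSV field order. As an illustration (my own sketch, not part of Iometer itself), they can be decoded into readable form like this:

```python
# Decode the Iometer access-spec CSV lines above. Field names follow the
# header line "size,% of size,% reads,% random,delay,burst,align,reply".

SPECS = {
    "Max Throughput-100%Read":  "32768,100,100,0,0,1,0,0",
    "RealLife-60%Rand-65%Read": "8192,100,65,60,0,1,0,0",
    "Max Throughput-50%Read":   "32768,100,50,0,0,1,0,0",
    "Random-8k-70%Read":        "8192,100,70,100,0,1,0,0",
}

def decode(spec_line):
    """Return block size, read mix and randomness from one spec line."""
    size, of_size, reads, random, delay, burst, align, reply = (
        int(x) for x in spec_line.split(","))
    return {"block_kb": size // 1024, "read_pct": reads, "random_pct": random}

for name, line in SPECS.items():
    d = decode(line)
    print(f"{name}: {d['block_kb']}KB blocks, "
          f"{d['read_pct']}% read, {d['random_pct']}% random")
```

So the two "Max Throughput" tests are large sequential blocks, while "RealLife" and "Random-8k" are small, mostly random blocks - the ones that actually hit the spindles.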
The global options are:
=====================================
Worker
Worker 1
Worker type
DISK
Default target settings for worker
Number of outstanding IOs,test connection rate,transactions per connection
64,ENABLED,500
Disk maximum size,starting sector
8000000,0
Run time = 5 min
For the test, disk C: is configured, and the test file (8,000,000 sectors) is created on the
first run - you need enough free space on the disk.
The cache size has a direct influence on the results. On systems with more than 2GB of cache, the test
file size should be increased.
LINK TO IOMETER:
The significant results are: Avg. Response Time, Avg. IOs/sec, Avg. MB/s.
Please mention: whether the server is a VM or physical; processor number/type; which storage system; how many disks.
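The three reported columns are not independent: MB/s is just IOs/sec times the block size. A posted result can be cross-checked like this (my own helper; I am assuming binary-MB reporting, which matches the tables in this thread):

```python
# Cross-check a reported Av. MB/s figure against the Av. IOs/sec figure
# for a test with a fixed block size.

def mbps(iops, block_bytes):
    """MB/s implied by an IOPS figure at a fixed block size (binary MB)."""
    return iops * block_bytes / 1024**2

# e.g. a 32KB test reporting ~3189 IOs/sec should show ~99.7 MB/s
print(f"{mbps(3189, 32768):.1f} MB/s")
```

If the two columns of a posted result disagree with this arithmetic, something in the test setup is off.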
Here is the config file (*.icf):
####################################### BEGIN of *.icf
Version 2004.07.30
'TEST SETUP ====================================================================
'Test Description
IO-Test
'Run Time
' hours minutes seconds
0 5 0
'Ramp Up Time (s)
0
'Default Disk Workers to Spawn
NUMBER_OF_CPUS
'Default Network Workers to Spawn
0
'Record Results
ALL
'Worker Cycling
' start step step type
1 5 LINEAR
'Disk Cycling
' start step step type
1 1 LINEAR
'Queue Depth Cycling
' start end step step type
8 128 2 EXPONENTIAL
'Test Type
NORMAL
'END test setup
'RESULTS DISPLAY ===============================================================
'Update Frequency,Update Type
4,WHOLE_TEST
'Bar chart 1 statistic
Total I/Os per Second
'Bar chart 2 statistic
Total MBs per Second
'Bar chart 3 statistic
Average I/O Response Time (ms)
'Bar chart 4 statistic
Maximum I/O Response Time (ms)
'Bar chart 5 statistic
% CPU Utilization (total)
'Bar chart 6 statistic
Total Error Count
'END results display
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
Max Throughput-100%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
'Access specification name,default assignment
RealLife-60%Rand-65%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
'Access specification name,default assignment
Max Throughput-50%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
'Access specification name,default assignment
Random-8k-70%Read,ALL
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
'END access specifications
'MANAGER LIST ==================================================================
'Manager ID, manager name
1,PB-W2K3-04
'Manager network address
193.27.20.145
'Worker
Worker 1
'Worker type
DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
64,ENABLED,500
'Disk maximum size,starting sector
8000000,0
'End default target settings for worker
'Assigned access specs
'End assigned access specs
'Target assignments
'Target
C:
'Target type
DISK
'End target
'End target assignments
'End worker
'End manager
'END manager list
Version 2004.07.30
####################################### END of *.icf
TABLE SAMPLE
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE oF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM or PHYS.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 51xx, 2,66 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R50
##################################################################################
TEST NAME...................Av. Resp. Time ms...Av. IOs/sec....Av. MB/s
##################################################################################
Max Throughput-100%Read........__________..........__________.........__________
RealLife-60%Rand-65%Read......__________..........__________.........__________
Max Throughput-50%Read..........__________..........__________.........__________
Random-8k-70%Read.................__________..........__________.........__________
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
I hope YOU JOIN IN !
Regards
Christian
A Google Spreadsheet version is here:
Yeah, you are right, I was wrong.
They aren't SCSI, they are SAS:
Serial Attached SCSI
First, I am very glad you posted your results here.
This way the results can be verified and discussed.
In the postings before, I tested a system from Infortrend (6x SAS 15k) and I can tell you it rocks (6-8 times faster than the internal disks); you see similar results in sequential r/w tests like yours, but the random tests show the real numbers (for 6 disks).
I would bet your Dell (we are a Dell shop here too) can't reach much more than that system - it is physically not possible to reach such a number of IOPS when the test file is bigger than the cache size (min. 2x).
Maybe your test file is not big enough - in addition, you can run perfmon simultaneously with your Iometer run.
What we want here are verified, real results - no magic.
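The "physically not possible" claim can be put in numbers. A back-of-envelope sketch, using per-spindle IOPS figures I am assuming as rules of thumb (roughly 170-180 random IOPS for a 15k spindle, ~75 for 7.2k SATA; not vendor data):

```python
# Cache-free ceiling for small random IO across a spindle set.
# The per-disk figures below are rough rules of thumb, not vendor specs.

PER_DISK_IOPS = {"15k SAS": 175, "10k SAS": 130, "7.2k SATA": 75}

def max_random_iops(disks, disk_type):
    """Approximate random-IOPS ceiling for `disks` spindles of a given type."""
    return disks * PER_DISK_IOPS[disk_type]

# a 14-spindle 15k array tops out around 2450 random IOPS before cache;
# sustained random results far above that suggest the test file fits in cache
print(max_random_iops(14, "15k SAS"))
```

That is exactly the check being made here: a random-IO result far above the spindle ceiling means the cache, not the disks, served the test.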
Thanks again for your participation.
Regards
Christian
I am going to run this on my new DataCore SANmelody setup and the new HP ProLiant DL servers. Should be interesting to compare SANmelody to the big boys. Don't think it will be as good as the CLARiiONs or EVAs...
That will be interesting indeed - please join in.
ok here we go...
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE oF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM (ms 2000 server) 2gb vram.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: hp dl385, 6GB RAM for host esx server; 2x amd opteron (dual core), 2.4 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: sanmelody server on win2003 r2 raid 10 (1tb lun) 2gb ram (used as cache by sanmelody)
iscsi, sata disks 6 spindles
##################################################################################
TEST NAME...................Av. Resp. Time ms...Av. IOs/sec....Av. MB/s
##################################################################################
Max Throughput-100%Read........___18.5_______..........___3189_______.........____99.6______
RealLife-60%Rand-65%Read......_____122.9_____.........._____469_____.........____3.66______
Max Throughput-50%Read.........._____28.5_____..........______1913____.........____59.8______
Random-8k-70%Read.................___106.6_______..........____554______.........___4.33_______
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
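These numbers can also be sanity-checked against each other via Little's law: with the fixed queue depth of 64 outstanding IOs from the global options above, response time and IOPS are tied together. A sketch (my own helper, illustrative only):

```python
# Little's law for a fixed queue depth:
#   avg response time (s) ~= outstanding IOs / IOPS

def expected_resp_ms(outstanding_ios, iops):
    """Response time (ms) implied by a queue depth and an IOPS figure."""
    return outstanding_ios / iops * 1000

# the 100%-read result above ran at ~3189 IOs/sec with QD 64:
print(f"{expected_resp_ms(64, 3189):.1f} ms")  # ~20 ms, same ballpark as the reported 18.5
```

A reported response time wildly inconsistent with queue depth divided by IOPS usually means the queue depth was changed from the standard config.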
How many spindles in your RAID? SATA / SCSI / SAS ?
iSCSI or FC?
Thanks
hi there,
it is iscsi, 6 spindles on sata
I'll update my post
Thanks... I'm just considering a similar configuration, maybe with more cache (4GB) and 12-15 disks (500GB SATA).
How much cache do you have on your sanmelody server?
2gb is for the vm, right?
no problem,
I edited the host as I mistyped it; it is an HP DL385, not a DL585.
I must admit I am not exactly sure what my figures mean in the overall scheme of things. But considering the SAN came to a total cost of about 5k, hopefully it is not too bad.
looking at the specs on the hp website it is
Processor cache
1MB L2 cache
yes, 2gig for vm
sorry being stupid here, you mean cache on the san right?
doh!
it has 2gb me thinks, just need to double check that...
I meant how much ram you have on your sanmelody server... but I see you have edited your post, so it's 6GB.
One more question, sanmelody (a,b,c,d) or sanmelody lite? just want to understand if your server is using all 6GB for data caching. Sanmelody lite is limited to 128MB
right ok...
my sanmelody server has 2gb ram, not sure what version (I think it is either version 2 or 3 as it does have a limit but I haven't reached that yet; it says it has a 6 spindle limit).
my esx hosts (hp dl385s) have 6 gb of ram in them (2gb allocated to the vm 2000 server I ran iometer on)
sorry for the confusion. I am still getting my head around it all.
+++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS / PE2950 LOCAL STORAGE
+++++++++++++++++++++++++++++++++++++
SERVER TYPE: Windows 2003 Std SP1 - 512MB RAM
CPU TYPE / NUMBER: 1 vCPU
HOST TYPE: Dell PE2950, 16GB RAM, 2 x quadcore Xeon X5355
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc 5/i - 6 x 300GB SAS 15Krpm / 1MB VMFS3
TEST NAME...................Av. Resp. Time ms...Av. IOs/sec....Av. MB/s
Max Throughput-100%Read........____3.06____..........___13959___.........___438____
RealLife-60%Rand-65%Read......____27.8____..........___1711___.........____8.31____
Max Throughput-50%Read..........____5.71____..........___10343___.........____323____
Random-8k-70%Read.................____25.4____..........___1631___.........____6.70____
is that on fibre channel?
no, that's internal storage, no san... just testing one of my customers' servers
sorry for mixing up things
pauliew1978 and black33,
thanks for your input - this way we can verify the results from RParker (against black33's results).
I have seen many tests (and made many myself) and can say SATA adds a little more response time, and combined with iSCSI it adds even more.
pauliew1978 - your numbers confirm this; the response time in the "RealLife" test is very high.
I have tested low-entry SATA systems too and saw similar numbers.
My testing here shows that ESX with iSCSI loses much performance compared to a physical server - the ratio is much worse than with FC.
SAS with dedicated channels for each disk demonstrates its power here.
The numbers are a little better than in my tests with the Infortrend system -
I used an older server there, which may be the reason.
Regarding SANmelody - it would be interesting to see this software with much stronger hardware (e.g. a bunch of SAS disks).
Regards
Christian
yes, I am a little concerned the numbers are so high. I haven't tweaked anything at all yet. I am going to play around with my setup and see if I can get some better figures... I am supposed to go live with this setup
in July for 1 SQL Server, 1 Win 2003 running Pervasive SQL (very light IO though) and 1 application server running 2003 Server. Hopefully with just 3 VMs it should not be a problem. We'll have to wait and see!
I am on ESX 3.0 at the moment. I wonder if upgrading to 3.0.1 will have any effect?
