VMware Cloud Community
christianZ
Champion

Open unofficial storage performance thread

Attention!

Since this thread is getting longer and longer, not to mention the load times, Christian and I decided to close this thread and start a new one.

The new thread is available here:

Oliver Reeh

[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]

My idea is to create an open thread with uniform tests; all results will be unofficial and come without any warranty.

If anybody disagrees with some of the results, he/she can run the same tests and present his/her own results here too.

I hope this way we can classify the different systems and get a "neutral" performance comparison.

Additionally I will mention that performance is only one of many aspects in choosing the right system.

The others could be e.g.:

- support quality

- system management integration

- distribution

- first-hand experience

- additional features

- costs for the storage system and infrastructure, etc.

Here are the IOMETER test definitions:

=====================================

######## TEST NAME: Max Throughput-100%Read

size,% of size,% reads,% random,delay,burst,align,reply

32768,100,100,0,0,1,0,0

######## TEST NAME: RealLife-60%Rand-65%Read

size,% of size,% reads,% random,delay,burst,align,reply

8192,100,65,60,0,1,0,0

######## TEST NAME: Max Throughput-50%Read

size,% of size,% reads,% random,delay,burst,align,reply

32768,100,50,0,0,1,0,0

######## TEST NAME: Random-8k-70%Read

size,% of size,% reads,% random,delay,burst,align,reply

8192,100,70,100,0,1,0,0

The global options are:

=====================================

Worker

Worker 1

Worker type

DISK

Default target settings for worker

Number of outstanding IOs,test connection rate,transactions per connection

64,ENABLED,500

Disk maximum size,starting sector

8000000,0

Run time = 5 min

For testing, drive C: is configured as the target, and the test file (8,000,000 sectors) is created on the first run - you need enough free space on the disk.

The cache size has a direct influence on the results. On systems with more than 2 GB of cache the test file should be enlarged.
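As a rough illustration of that sizing (an editorial sketch, not part of the original instructions; the 512-byte sector size and the "at least twice the controller cache" factor are assumptions, not thread measurements):

```python
SECTOR_BYTES = 512  # assumption: classic 512-byte sectors

def test_file_gib(sectors: int) -> float:
    """Size of the Iometer test file implied by the 'Disk maximum size' setting."""
    return sectors * SECTOR_BYTES / 1024**3

def sectors_to_exceed_cache(cache_gib: float, factor: float = 2.0) -> int:
    """Sectors needed so the test file is 'factor' times the array cache.
    The factor of 2 is an assumed rule of thumb, not from the original post."""
    return int(cache_gib * factor * 1024**3 / SECTOR_BYTES)

print(f"8,000,000 sectors = {test_file_gib(8_000_000):.1f} GiB")            # ~3.8 GiB
print(f"Sectors to double a 4 GiB cache: {sectors_to_exceed_cache(4):,}")   # 16,777,216
```

In other words, the default 8,000,000-sector file is roughly 4 GB, which is why arrays with larger caches need a bigger test file to avoid measuring cache instead of disks.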

LINK TO IOMETER:

The significant results are: Av. Response Time, Av. IOs/sec, Av. MB/s.

Please mention: which server (VM or physical), processor number/type, which storage system, and how many disks.

Here is the config file (*.icf):

####################################### BEGIN of *.icf

Version 2004.07.30

'TEST SETUP ====================================================================

'Test Description

IO-Test

'Run Time

' hours minutes seconds

0 5 0

'Ramp Up Time (s)

0

'Default Disk Workers to Spawn

NUMBER_OF_CPUS

'Default Network Workers to Spawn

0

'Record Results

ALL

'Worker Cycling

' start step step type

1 5 LINEAR

'Disk Cycling

' start step step type

1 1 LINEAR

'Queue Depth Cycling

' start end step step type

8 128 2 EXPONENTIAL

'Test Type

NORMAL

'END test setup

'RESULTS DISPLAY ===============================================================

'Update Frequency,Update Type

4,WHOLE_TEST

'Bar chart 1 statistic

Total I/Os per Second

'Bar chart 2 statistic

Total MBs per Second

'Bar chart 3 statistic

Average I/O Response Time (ms)

'Bar chart 4 statistic

Maximum I/O Response Time (ms)

'Bar chart 5 statistic

% CPU Utilization (total)

'Bar chart 6 statistic

Total Error Count

'END results display

'ACCESS SPECIFICATIONS =========================================================

'Access specification name,default assignment

Max Throughput-100%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

32768,100,100,0,0,1,0,0

'Access specification name,default assignment

RealLife-60%Rand-65%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

8192,100,65,60,0,1,0,0

'Access specification name,default assignment

Max Throughput-50%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

32768,100,50,0,0,1,0,0

'Access specification name,default assignment

Random-8k-70%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

8192,100,70,100,0,1,0,0

'END access specifications

'MANAGER LIST ==================================================================

'Manager ID, manager name

1,PB-W2K3-04

'Manager network address

193.27.20.145

'Worker

Worker 1

'Worker type

DISK

'Default target settings for worker

'Number of outstanding IOs,test connection rate,transactions per connection

64,ENABLED,500

'Disk maximum size,starting sector

8000000,0

'End default target settings for worker

'Assigned access specs

'End assigned access specs

'Target assignments

'Target

C:

'Target type

DISK

'End target

'End target assignments

'End worker

'End manager

'END manager list

Version 2004.07.30

####################################### END of *.icf

TABLE SAMPLE

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM or PHYS.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 51xx, 2,66 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R50

##################################################################################

TEST NAME--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........__________..........__________.........__________

RealLife-60%Rand-65%Read......__________..........__________.........__________

Max Throughput-50%Read..........__________..........__________.........__________

Random-8k-70%Read.................__________..........__________.........__________

EXCEPTIONS: CPU Util.-XX%;

##################################################################################

I hope YOU JOIN IN !

Regards

Christian

A Google Spreadsheet version is here:

CWedge
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM, Win2k3 SP1 Std, 1GB RAM

CPU TYPE / NUMBER: VCPU / 1, 3 GHz

HOST TYPE: HP DL580G4, 16GB RAM; 4x XEON 3.0 GHz, DC HT Enabled

STORAGE TYPE / DISK NUMBER / RAID LEVEL / Cache: DMX3000 / Lots of Disks / Raid 7+1 / 32GB

VMFS: 512GB LUN, 8MB Block Size

SAN TYPE / HBAs : 4GB FC, Emulex lp11002e Dual Dual HBAs, Dual Brocade 2G Switches

##################################################################################

TEST NAME--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........___4______..........____1500__.........___47_____

RealLife-60%Rand-65%Read......___1.8_____..........___2170___.........___17_____

Max Throughput-50%Read..........__6_______..........___1280___.........__40______

Random-8k-70%Read.................__1.9____..........__2240____........._18_____

EXCEPTIONS: CPU Util.-30%; Seems to be a limitation somewhere on the MB/s; I'll play around a bit.

##################################################################################

CWedge
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS EVA5000 Unit #1

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM, Win2k3 SP1 Std, 1GB RAM

CPU TYPE / NUMBER: VCPU / 1, 3 GHz

HOST TYPE: HP DL580G4, 16GB RAM; 4x XEON 3.0 GHz, DC HT Enabled

STORAGE TYPE / DISK NUMBER / RAID LEVEL / Cache: EVA5000 / 66 Disks / Raid V5 / 2GB

VMFS: 1TB LUN, 8MB Block Size

SAN TYPE / HBAs : 4GB FC, Emulex lp11002e Dual Dual HBAs, Dual Brocade 2G Switches

##################################################################################

TEST NAME--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........__1.9___..........____1900__.........___59_____

RealLife-60%Rand-65%Read......___17_____..........___476___.........___4_____

Max Throughput-50%Read..........__25_______..........___479___.........__15______

Random-8k-70%Read.................__30____..........__339____........._3_____

EXCEPTIONS: CPU Util.-30%;

CWedge
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS EVA5000 Unit #2, 96 Drives

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM, Win2k3 SP1 Std, 1GB RAM

CPU TYPE / NUMBER: VCPU / 1, 3 GHz

HOST TYPE: HP DL580G4, 16GB RAM; 4x XEON 3.0 GHz, DC HT Enabled

STORAGE TYPE / DISK NUMBER / RAID LEVEL / Cache: EVA5000 / 96 Disks / Raid V5 / 2GB

VMFS: 1TB LUN, 8MB Block Size

SAN TYPE / HBAs : 4GB FC, Emulex lp11002e Dual Dual HBAs, Dual Brocade 2G Switches

##################################################################################

TEST NAME--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........__1.7___..........____2188__.........___68_____

RealLife-60%Rand-65%Read......___12_____..........___498___.........___4_____

Max Throughput-50%Read..........__9_______..........___598___.........__19______

Random-8k-70%Read.................__15____..........__486____........._4_____

EXCEPTIONS: CPU Util.-30%;

CWedge
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS: DMX3000, EVA5000 66 Drives, EVA5000 96 Drives

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM, Win2k3 SP1 Std, 1GB RAM

CPU TYPE / NUMBER: VCPU / 1, 3 GHz

HOST TYPE: HP DL580G4, 16GB RAM; 4x XEON 3.0 GHz, DC HT Enabled

VMFS: 1TB LUN, 8MB Block Size

SAN TYPE / HBAs : 4GB FC, Emulex lp11002e Dual Dual HBAs, Dual Brocade 2G Switches

#######################################################################################

TEST NAME                     Av. Resp. Time ms          Av. IOs/sek                Av. MB/sek
                              DMX3k | EVA#1 | EVA#2      DMX3k | EVA#1 | EVA#2      DMX3k | EVA#1 | EVA#2

#######################################################################################

Max Throughput-100%Read         4   |  1.9  |  1.7       1500  |  1900 |  2188       47   |  59   |  68

RealLife-60%Rand-65%Read        1.8 |  17   |  12        2170  |   476 |   498       17   |   4   |   4

Max Throughput-50%Read          6   |  25   |   9        1280  |   479 |   598       40   |  15   |  19

Random-8k-70%Read               1.9 |  30   |  15        2240  |   339 |   486       18   |   3   |   4

EXCEPTIONS: CPU Util.-30%;

#######################################################################################

Notes:

The DMX seems to work faster the harder you make it work, most likely due to its massive 32 GB of cache.

acr
Champion

Nice thread Christian, has some interesting perf stats..

There always seem to be two interesting performance figures to measure:

1. Throughput in MB/s

2. IO/s

But whichever metric we use, we really need to understand our needs.

MB/s and IO/s are almost inversely related, as the per-IO overhead of small-block IO detracts from total throughput.

Determining which metric best reflects real-world performance depends critically on the nature of the data flowing through the SAN.

In a transaction-oriented environment IO/s are likely to be more important, as transactions typically involve small blocks of data,

whereas a video editing system will stress raw throughput and high MB/s values.

A real test of a SAN is its ability to support the relevant applications.

So you need to measure the characteristic that best characterizes your infrastructure, application mix and network.

IMHO..!
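To put rough numbers on that trade-off, here is a small illustrative sketch (Python, an editorial addition rather than part of acr's post) converting an IO rate into throughput for the two block sizes used by this thread's access specs; the IOPS values in it are example figures only:

```python
def mb_per_sec(iops: float, block_bytes: int) -> float:
    """Throughput implied by a given IO rate and block size.
    Uses binary megabytes (1024*1024 bytes), which lines up with the
    MB/sek columns reported in this thread's tables."""
    return iops * block_bytes / (1024 * 1024)

# 8 KB and 32 KB are the two block sizes used by the access specs in this thread.
for block_bytes in (8192, 32768):
    for iops in (500, 2000):  # example IO rates, not measurements
        print(f"{iops:>5} IOPS @ {block_bytes // 1024:>2} KB = "
              f"{mb_per_sec(iops, block_bytes):5.1f} MB/s")
```

That is why, for example, CWedge's DMX figures show 2170 IOs/sek at 8 KB as only about 17 MB/sek, while 1280 IOs/sek at 32 KB already works out to 40 MB/sek.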

christianZ
Champion

Great.

Thanks for your input.

Regards

Christian

christianZ
Champion

acr,

thanks for your feedback.

Of course with such small tests you can't say definitively that one system is better than another, but I hate the declarations of some sales guys who claim "Our system with 10 disks can outperform the competitor's with 100."

You can't test it all (and they know it), so it is an advantage to have a place where test results are collected - there you can at least estimate what throughput a system can reach.

As I mentioned before, the throughput from one machine doesn't show the entire performance of a system - many have 2 or more storage processors - but you can at least see the general direction, IMHO.

Regards

Christian

acr
Champion

Agreed Christian, it's nice to see such a wide range of performance results. It's a question I get asked many times when helping users move from physical to virtual, and without really knowing their environment you are guessing at times, so this serves as a good reference.

CWedge
Enthusiast

I'm in the process of re-running the tests with 2 workers, and it seems as though setting shares to HIGH makes a HUGE difference.

Astro
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS - VM on 1MB Block Size VMFS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP BL480c Blade, 16GB RAM; 2x XEON 5150 (Dualcore), 2,66 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA4000 x 2 / 12 x 10k FC HDD on vRAID5

VMFS: 100GB LUN, 1MB Block Size

SAN TYPE / HBAs : 4GB FC, HP/QLogic QMH2462 Dual HBAs, Dual Brocade 4100 Switches

##################################################################################

TEST NAME--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........___5.25_____..........__10476.40__.........__327.39___

RealLife-60%Rand-65%Read......___17.73_____..........__2642.48___.........__20.50____

Max Throughput-50%Read..........__39.51_____..........__1311.69___.........__40.99____

Random-8k-70%Read.................__18.61____..........__2464.30___.........__19.28____

EXCEPTIONS:

Hi David

I have the same configuration (only difference: 4GB RAM, 14 HDs), but the performance is not as good. Have you tuned any parameters in the HBA Fast!UTIL and/or in ESX?

Many thanks

Armando

RParker
Immortal

We just got our new servers. These are our results. Keep in mind this is LOCAL storage.

It competes with some SAN figures.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2k Advanced

CPU TYPE / NUMBER: CPU / 2

HOST TYPE: DELL PE 2950, 16GB RAM; 2x XEON 5345 (QUAD core), 2,33 GHz,

STORAGE TYPE / DISK NUMBER / RAID LEVEL: PERC 6 Di/ 6 x 10k SCSI HDD on vRAID5

VMFS: 1.35TB , 1MB Block Size

IOPS (total / read / write) -- MBps (total / read / write) -- Avg Resp (ms)

Max Throughput-100%Read

11207.24467 / 11207.24467 / 0 -- 350.226396 / 350.226396 / 0 -- 4.0581

RealLife-60%Rand-65%Read

14328.50613 / 9314.15392 / 5014.352208 -- 111.941454 / 72.766828 / 39.174627 -- 0.703825

Max Throughput-50%Read

8837.394088 / 4414.868495 / 4422.525592 -- 276.168565 / 137.96464 / 138.203925 -- 6.177222

Random-8k-70%Read

13778.1851 / 9641.448158 / 4136.736941 -- 107.642071 / 75.323814 / 32.318257 -- 0.714005

christianZ
Champion

>RealLife-60%Rand-65%Read

>14328.50613 9314.15392 5014.352208 111.941454 72.766828 39.174627 0.703825

I'm not sure whether Iometer shows the right values on a 64-bit OS?

Is your test file big enough?

RParker
Immortal

It's not a 64-bit OS.

christianZ
Champion

I mean it is quite impossible to reach 14,000 IOPS from 6 disks - one can't always trust the Iometer values.

I have seen situations where Iometer didn't show the correct numbers - I think that is the case here too.

CWedge
Enthusiast

> I mean it is quite impossible to reach 14,000 IOPS from 6 disks - one can't always trust the Iometer values. I have seen situations where Iometer didn't show the correct numbers - I think that is the case here too.

Also, if the disks are U320, 320 MB/s is the theoretical max for the SCSI bus; there is no way you could reach 350 MB/s...

christianZ
Champion

I have a Dell server here with 6 SCSI HDs attached to 2 SCSI channels and could reach 540 MB/s with sequential reads - but you are right, with only one channel you can reach a maximum of 320 MB/s (with U320).

In RParker's case it seems to me impossible to reach 14,000 IOPS in such a test with only 6 disks (regardless of which ones).
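A quick sanity check of those bus figures (an editorial sketch; 320 MB/s per channel is the Ultra320 spec, and the measured values are the ones quoted in this exchange):

```python
U320_MBPS = 320  # theoretical maximum per Ultra320 SCSI channel

def scsi_bus_ceiling(channels: int) -> int:
    """Aggregate theoretical SCSI bandwidth across U320 channels, in MB/s."""
    return channels * U320_MBPS

print(scsi_bus_ceiling(2))  # 640 MB/s ceiling - the 540 MB/s seq. read over 2 channels fits
print(scsi_bus_ceiling(1))  # 320 MB/s ceiling - a single channel cannot deliver 350 MB/s
```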

RParker
Immortal

Well it is what it is. I ran the same tests you did. The performance also seems to check out.

We had builds taking 29 minutes reduced to 4 minutes, and 16-19 hour builds reduced to just under 3 hours, so read into it whatever you want; numbers don't lie, and neither do real-world tests.

This machine rocks, even if *YOU* believe it's impossible.

I have an idea, go buy one! See for yourself, you want the part number?

CWedge
Enthusiast

Here is the confusing part...

A single disk can't do over 200 IOPS, so you'd be saying that each disk is doing 2,333 IOPS?

There is a controller and cache masking the numbers.

They did testing on the HP XP12000 and got 2 million IOPS; of course the real numbers were only about 10% of that, like 200,000-300,000 IOPS with 1024 drives, which if you do the math works out to the 200 IOPS per drive I mentioned.
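As a back-of-envelope version of that arithmetic (an editorial sketch; the ~200 IOPS per drive is the rule of thumb used above, not a measurement from this thread):

```python
IOPS_PER_DRIVE = 200  # rough rule of thumb for a single spindle, as used above

def spindle_iops_ceiling(drives: int) -> int:
    """Approximate random IOPS the spindles alone can sustain, ignoring any cache."""
    return drives * IOPS_PER_DRIVE

print(spindle_iops_ceiling(1024))  # 204800 - in line with the XP12000's "real" numbers
print(spindle_iops_ceiling(6))     # 1200 - so 14,000 IOPS from 6 drives points to controller cache
```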

Rparker,

I don't doubt the test did what it did, and for those tests those drives performed astoundingly.

Would you be able to re-run the test with a larger test size, like 10 GB?

Thanks

RParker
Immortal

DUDE! what do you want from me?

I took a VM - the *SAME* tests that *EVERYONE* else in here used - and ran the tests. The *ONLY* thing I did was move that *SAME* VM to the new host and run the same damn IOMETER the *SAME* way.

I then posted the numbers. What do you expect?

You think I am making this up? Maybe your calculations about how hard drives work are skewed, maybe you have wrong information, I don't know.

ALL I know is the SAME VM moved to the new host, these are the numbers. The VM was not modified, only copied.

So impossible or not, it is what it is.

I don't even know why I even bother to participate if you are going to question how I arrived at these figures. I guess I won't bother to give any performance figures or have anything to do with configuring an ESX server from now on, since all I was trying to do was offer a comparison.

RParker
Immortal

Yeah, you are right. I am wrong..

They aren't SCSI, they are SAS :)
