christianZ
Champion

Open unofficial storage performance thread

Attention!

Since this thread is getting longer and longer, not to mention the load times, Christian and I decided to close this thread and start a new one.

The new thread is available here:

Oliver Reeh

[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]

My idea is to create an open thread with uniform tests, whereby the results are all unofficial and without any warranty.

If anybody disagrees with some results, they can run their own tests and present their results here too.

I hope this way we can classify the different systems and give a "neutral" performance comparison.

Additionally, I will mention that performance is only one of many aspects in choosing the right system.

The others could be e.g.

- support quality

- system management integration

- distribution

- own experience

- additional features

- costs for storage system and infrastructure, etc.

Here are the Iometer test definitions:

=====================================

######## TEST NAME: Max Throughput-100%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0

######## TEST NAME: RealLife-60%Rand-65%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0

######## TEST NAME: Max Throughput-50%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0

######## TEST NAME: Random-8k-70%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0

The global options are:

=====================================

Worker: Worker 1
Worker type: DISK
Default target settings for worker:
Number of outstanding IOs, test connection rate, transactions per connection: 64, ENABLED, 500
Disk maximum size, starting sector: 8000000, 0
Run time = 5 min

For testing, disk C: is used; the test file (8,000,000 sectors) is created on the first run - you need enough free space on the disk.

The cache size has a direct influence on the results. On systems with more than 2 GB of cache, the test file should be made larger.
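A quick sanity check on the sizes (a rough sketch, not part of the official instructions): with 512-byte sectors the default 8,000,000-sector test file is only about 3.8 GiB, so a controller with a few GB of cache can absorb much of the working set. The helper below assumes 512-byte sectors and an arbitrary "4x the cache" rule of thumb for picking a larger file.

```python
# Sketch: rough sizing of the Iometer test file relative to array cache.
# Assumes 512-byte sectors (consistent with the 20,000,000 sectors ~ 10 GB
# mentioned later in this thread); the 4x factor is only a rule of thumb.
SECTOR_BYTES = 512

def sectors_to_gib(sectors: int) -> float:
    """Convert a sector count into GiB."""
    return sectors * SECTOR_BYTES / 2**30

def suggested_sectors(cache_gib: float, factor: float = 4.0) -> int:
    """Pick a test-file size 'factor' times the cache size so that cache
    hits do not dominate the measurement."""
    return int(cache_gib * factor * 2**30 / SECTOR_BYTES)

print(f"Default 8,000,000 sectors = {sectors_to_gib(8_000_000):.1f} GiB")
print(f"Suggested size for a 4 GiB cache: {suggested_sectors(4.0):,} sectors")
```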

LINK TO IOMETER:

The significant results are: Av. Response Time (ms), Av. IOs/sec, Av. MB/sec.

Please also mention: which server (VM or physical), processor number/type, which storage system, and how many disks.
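If you run Iometer unattended and want to pull exactly those three numbers out of the result CSV, something along these lines can work. This is only a sketch: the column names differ between Iometer versions, so the script looks them up loosely by substring instead of assuming fixed positions - adjust the substrings (and the "ALL" aggregate-row marker) to whatever your result file actually contains.

```python
import csv
import sys

def find_col(header, needle):
    """Return the index of the first column whose name contains 'needle'."""
    for i, name in enumerate(header):
        if needle.lower() in name.lower():
            return i
    return None

def summarize(path):
    with open(path, newline="") as fh:
        rows = list(csv.reader(fh))
    # Locate the header row of the results table.
    for idx, row in enumerate(rows):
        if any("Access Specification Name" in cell for cell in row):
            header = row
            break
    else:
        sys.exit("No result header row found - is this an Iometer result CSV?")
    spec = find_col(header, "Access Specification Name")
    resp = find_col(header, "Response Time")   # usually "Average Response Time"
    iops = find_col(header, "IOps")
    mbps = find_col(header, "MBps")
    if mbps is None:
        mbps = find_col(header, "MiBps")
    if None in (spec, resp, iops, mbps):
        sys.exit("Expected columns not found - adjust the substrings above.")
    for row in rows[idx + 1:]:
        if row and row[0] == "ALL":            # aggregate line for one test run
            print(f"{row[spec]}: {row[resp]} ms, {row[iops]} IOs/sec, {row[mbps]} MB/sec")

if __name__ == "__main__":
    summarize(sys.argv[1])
```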

Here is the config file (*.icf):

####################################### BEGIN of *.icf

Version 2004.07.30

'TEST SETUP ====================================================================

'Test Description

IO-Test

'Run Time

' hours minutes seconds

0 5 0

'Ramp Up Time (s)

0

'Default Disk Workers to Spawn

NUMBER_OF_CPUS

'Default Network Workers to Spawn

0

'Record Results

ALL

'Worker Cycling

' start step step type

1 5 LINEAR

'Disk Cycling

' start step step type

1 1 LINEAR

'Queue Depth Cycling

' start end step step type

8 128 2 EXPONENTIAL

'Test Type

NORMAL

'END test setup

'RESULTS DISPLAY ===============================================================

'Update Frequency,Update Type

4,WHOLE_TEST

'Bar chart 1 statistic

Total I/Os per Second

'Bar chart 2 statistic

Total MBs per Second

'Bar chart 3 statistic

Average I/O Response Time (ms)

'Bar chart 4 statistic

Maximum I/O Response Time (ms)

'Bar chart 5 statistic

% CPU Utilization (total)

'Bar chart 6 statistic

Total Error Count

'END results display

'ACCESS SPECIFICATIONS =========================================================

'Access specification name,default assignment

Max Throughput-100%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

32768,100,100,0,0,1,0,0

'Access specification name,default assignment

RealLife-60%Rand-65%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

8192,100,65,60,0,1,0,0

'Access specification name,default assignment

Max Throughput-50%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

32768,100,50,0,0,1,0,0

'Access specification name,default assignment

Random-8k-70%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

8192,100,70,100,0,1,0,0

'END access specifications

'MANAGER LIST ==================================================================

'Manager ID, manager name

1,PB-W2K3-04

'Manager network address

193.27.20.145

'Worker

Worker 1

'Worker type

DISK

'Default target settings for worker

'Number of outstanding IOs,test connection rate,transactions per connection

64,ENABLED,500

'Disk maximum size,starting sector

8000000,0

'End default target settings for worker

'Assigned access specs

'End assigned access specs

'Target assignments

'Target

C:

'Target type

DISK

'End target

'End target assignments

'End worker

'End manager

'END manager list

Version 2004.07.30

####################################### END of *.icf
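A side note (my own sketch, not from the original post): the same ICF can also be run unattended via Iometer's batch mode instead of clicking through the GUI. The /c (config file) and /r (result file) switches are the commonly documented ones, but verify them, and the install path, against your Iometer build.

```python
import subprocess

# Sketch: run the ICF above unattended via Iometer's batch mode.
# /c = config file, /r = result file -- check the help output of your build.
subprocess.run(
    [r"C:\Program Files\Iometer\Iometer.exe",  # install path is an assumption
     "/c", "iotest.icf",
     "/r", "results.csv"],
    check=True,
)
```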

TABLE SAMPLE

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM or PHYS.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 51xx, 2,66 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R50

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........__________..........__________.........__________

RealLife-60%Rand-65%Read......__________..........__________.........__________

Max Throughput-50%Read..........__________..........__________.........__________

Random-8k-70%Read.................__________..........__________.........__________

EXCEPTIONS: CPU Util.-XX%;

##################################################################################
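Purely as a posting convenience (my own sketch, nothing official): a few lines that pad your four results into the dotted table layout above so the columns line up roughly the same way for everyone. The zeros are placeholders - substitute your own measurements.

```python
# Sketch: format measured results into the table layout used in this thread.
TESTS = [
    "Max Throughput-100%Read",
    "RealLife-60%Rand-65%Read",
    "Max Throughput-50%Read",
    "Random-8k-70%Read",
]

def table_line(name, resp_ms, iops, mbps):
    dots = "." * max(1, 34 - len(name))        # dotted filler after the test name
    return (f"{name}{dots}___{resp_ms}___"
            f"..........___{iops}___"
            f".........___{mbps}___")

# Placeholder values: (Av. Resp. Time ms, Av. IOs/sec, Av. MB/sec) per test.
results = {t: (0.0, 0, 0.0) for t in TESTS}
for t in TESTS:
    print(table_line(t, *results[t]))
```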

I hope YOU JOIN IN !

Regards

Christian

A Google Spreadsheet version is here:

Message was edited by:

ken.cline@hp.com to remove ALL CAPS from thread title

Message was edited by:

RDPetruska

Added link to Atamido's Google Spreadsheet

chucks0
Enthusiast

I have tested with both 3.01 and 3.02 and get the same results.


christianZ
Champion

Wow!! So many new results!

I would like to thank

multirotor

chucks0 (Yes it was the first midrange EMC test!)

pops106

for your tests.

@ chucks0

Yes, that's true, the ESX iSCSI initiator is very poor; even with an iSCSI HBA you can't reach numbers as high as with the MS iSCSI initiator (it is an ESX problem, not EQL). When you check my results (e.g.) and the results from the guys with FC, you will see that iSCSI in ESX doesn't run as efficiently as FC (differences between physical and virtual).

@ pops106

On your NetApp, have you configured one 14-disk aggregate or 2x 7?

sbelisle
Contributor

4-5 VMs were assigned to the physical hardware, but sat idle during the test. The disk was presented as a single R5 LUN (200GB) to the ESX server. The VM instance the IO ran on had 512MB of assigned memory. VMDK: 20GB C:, 180GB D:

SERVER TYPE: VM or PHYS. - VM ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: DL380G4 2x3.2GHZ DC XEON, 24GB RAM

STORAGE TYPE / DISK NUMBER / RAID LEVEL: CX700 / 4+1 Disks R5 / 500gb ATA 7200RPM

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........___10.79____..........____4920__.........____153.7___

RealLife-60%Rand-65%Read......___79.43____.........._____692__........._____5.4___

Max Throughput-50%Read..........___7.78____..........____5361__.........___167__

Random-8k-70%Read.................__60.51_____..........__467____........._____3.6___

EXCEPTIONS: CPU Util.82,30,90,27%;


sbelisle
Contributor

Same test as above, but to a 4x50GB = 200GB metaLUN.

SERVER TYPE: VM or PHYS. - VM ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: DL380G4 2x3.2GHZ DC XEON, 24GB RAM

STORAGE TYPE / DISK NUMBER / RAID LEVEL: CX700 / 4x4+1 Disks R5 / 500GB ATA

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........___10.98____..........____5043__.........____157.5___

RealLife-60%Rand-65%Read......___40.59____.........._____1203__........._____9.4___

Max Throughput-50%Read..........___10.63____..........____5131__.........___160.3__

Random-8k-70%Read.................__44.70_____..........__1128____........._____8.8___

EXCEPTIONS: CPU Util. 73,43,79,40

christianZ
Champion

Thanks for joining in.

D_duke
Contributor

The test was the only thing running on the system (not yet in production).

Test file size was set to 20,000,000 sectors (a 10 GB file).

Tests were run on VRaid0, VRaid1 and VRaid5 Vdisks (the Vdisk size was 100 GB in all cases).

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.0.2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP ProLiant BL460c G1, 16GB RAM; 2x Intel XEON 5345, 2,33 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA 6100 / 40 Fiber Channel 15k / Vraid1

SAN TYPE / HBAs : Fiber, Emulex LPe 1105-HP HBA

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........__0,61____..........___6210___.........___194____

RealLife-60%Rand-65%Read......___6,1____..........___3667___.........____29____

Max Throughput-50%Read..........____6,2___..........___4336___.........___136____

Random-8k-70%Read.................____5,7___..........___3579___.........____28____

##################################################################################

SERVER TYPE: VM ON ESX 3.0.2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP ProLiant BL460c G1, 16GB RAM; 2x Intel XEON 5345, 2,33 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA 6100 / 40 Fiber Channel 15k / Vraid5

SAN TYPE / HBAs : Fiber, Emulex LPe 1105-HP HBA

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........__0,56_____..........___8605___.........___269____

RealLife-60%Rand-65%Read......___9,3____..........___5113___.........____40____

Max Throughput-50%Read..........____38,3__..........___1374___.........___43____

Random-8k-70%Read.................____9,1___..........___5242___.........____41____

##################################################################################

SERVER TYPE: VM ON ESX 3.0.2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP ProLiant BL460c G1, 16GB RAM; 2x Intel XEON 5345, 2,33 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA 6100 / 40 Fiber Channel 15k / Vraid0

SAN TYPE / HBAs : Fiber, Emulex LPe 1105-HP HBA

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........__0,54____..........__10068___.........___314____

RealLife-60%Rand-65%Read......___7,3____..........___6928___.........____54____

Max Throughput-50%Read..........____6,8___..........___5430___.........___170____

Random-8k-70%Read.................____6,8___..........___7476___.........____58____

christianZ
Champion

Wow. That's really not bad. Thanks for testing.

kenrobertson
Contributor

Has anyone done any tests with IET? I'm curious how IET's performance compares to SanMelody. I am currently looking at both... IET is attractive for being free, but SanMelody would likely have more support.

thorwitt
Contributor

Hello,

I am out of the office and cannot be reached from 22.09.2007 until 07.10.2007.

Your mail will be forwarded to Mr. Sachs and handled there.

Kind regards

TargoSoft IT-Systemhaus GmbH

Thorsten Witt

Technical department


cmanucy
Hot Shot

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2K3 VM on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL385 G1, 16GB RAM, 2x 2.6GHz AMD Opteron Dual Core

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP MSA 1500 / MSA30 with 5 x 147G 15K SCSI / RAID5

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........____12____..........___4772___.........___149____

RealLife-60%Rand-65%Read.......____59____..........____815___.........____6_____

Max Throughput-50%Read.........____22____..........____2625__.........____82____

Random-8k-70%Read..............____40____..........___1111___.........____9_____

EXCEPTIONS: CPU Util.-49-38-35-45;

##################################################################################

Note: about 5 VMs on this same LUN; tests run during a fairly low-use time.

---- Carter Manucy
cmanucy
Hot Shot

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2K3 VM on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL385 G1, 16GB RAM, 2x 2.6GHz AMD Opteron Dual Core

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP MSA 1500 / MSA30 with 13+1 x 147G 15K SCSI / RAID6

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........____15____..........___3961___.........___124____

RealLife-60%Rand-65%Read.......____47____..........___1111___.........____9_____

Max Throughput-50%Read.........____38____..........___1591__.........____50____

Random-8k-70%Read..............____29____..........___1531___.........___12_____

EXCEPTIONS: CPU Util.-36-32-23-47;

##################################################################################

Note: about 20 VMs on this same LUN; tests run during a low-to-moderate use time.

---- Carter Manucy
cmanucy
Hot Shot

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2K3 VM on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL385 G1, 16GB RAM, 2x 2.6GHz AMD Opteron Dual Core

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP MSA 1500 / MSA20 with 12 x 250 7.5K SATA / RAID6

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

Max Throughput-100%Read........____11____..........___5327___........____166___

RealLife-60%Rand-65%Read.......____186___..........___292___.........____2_____

Max Throughput-50%Read.........____48____..........___1248__.........____39____

Random-8k-70%Read..............____195___..........___283___.........____2_____

EXCEPTIONS: CPU Util.-43-23-22-22;

##################################################################################

Note: one other medium-traffic VM on this same LUN.

---- Carter Manucy
GandhiII
Contributor

to aschaef:

I have a LeftHand SAN similar to yours. Instead of 3 IBM boxes I have 3 HP DL380s with 18 x 146GB 15k U320 SCSI disks and get much better performance. I think you are running your system with 2-way replication and are not using MPIO, and that is the reason for your poor performance! Use MPIO and you will get about 1/3 more performance, or switch to 3-way replication.

As soon as I have time to run your Iometer test, I will post my LeftHand results.

Regards

Andreas

meistermn
Expert

Hello Christian,

I like this thread, but in most of the tests only 1 VM is measured. Would we get a different picture if we used 10, 20, or 30 VMs for a parallel test?

cmanucy
Hot Shot

Do you mean 10, 20, 30 VM's running this same performance test at the same time?

You should be able to extrapolate that information out of the actual tests themselves - although that would put one heck of an I/O load on whatever backend storage system you've got... but if you didn't start the test on all the machines at exactly the same time, it would be impossible to really know what the results mean.

---- Carter Manucy
larstr
Champion

I received a few new servers here, so I decided to test a bit before they hit production. I still haven't tested all of the products I wanted, but I guess this is enough for a pretty long posting, so expect another similar one in a day or three. ;)

I have only tested local storage and 32-bit Windows VMs. The goal was to get an overview of the storage virtualization overhead between different products. VMs were installed from scratch and vendor native drivers (VMware Tools, VS Tools, Virtual Machine Additions) were installed before running Iometer.

HP tools and drivers were also installed on the Windows hosts (non-HP native cciss disk drivers were used by the Debian install and Virtual Iron).

SERVER TYPE: Physical Windows 2003R2sp2

CPU TYPE / NUMBER: 8 cpu cores, 2 sockets

HOST TYPE: HP DL360G5, 4GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs block size (4096)

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      3.18                 18530          579
RealLife-60%Rand-65%Read     78.6                 739            5.7
Max Throughput-50%Read       3.74                 15579          486
Random-8k-70%Read            72.7                 787            6.1

SERVER TYPE: Virtual Windows 2003R2sp2 on VMware Server 1.0.4 on Windows Server 2003R2sp2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      0.5                  10900          340
RealLife-60%Rand-65%Read     156                  368            2.8
Max Throughput-50%Read       1.22                 7472           233
Random-8k-70%Read            88.1                 630            4.9

EXCEPTIONS: CPU Util. 99% 17% 98% 22%

SERVER TYPE: Virtual Windows 2003R2sp2 on VMware Server on Debian Linux 4.0 2.6.18 x64

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default jfs (4096)

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      0.5                  8550           267
RealLife-60%Rand-65%Read     79                   747            5.8
Max Throughput-50%Read       0.63                 3804           237
Random-8k-70%Read            97                   609            4.7

EXCEPTIONS: CPU Util. 100% 17% 98% 16%

SERVER TYPE: Virtual Windows 2003R2sp2 on VMware Player 2.0.1 on Windows Server 2003R2sp2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      0.5                  9920           310
RealLife-60%Rand-65%Read     139                  411            3.2
Max Throughput-50%Read       3.1                  2656           83
Random-8k-70%Read            93.3                 632            4.9

EXCEPTIONS: CPU Util. 99% 17.5% 98% 23%

SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual Iron 3.7

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      16.2                 3732           116
RealLife-60%Rand-65%Read     169                  353            2.75
Max Throughput-50%Read       15.2                 3940           123
Random-8k-70%Read            177                  337            2.6

EXCEPTIONS: CPU Util. 39% 17% xx% 17%

SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual Server 2005r2sp1 (1.1.603.0 EE R2 SP1) on Windows Server 2003R2sp2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      15.5                 3860           120
RealLife-60%Rand-65%Read     159                  374            2.9
Max Throughput-50%Read       17.3                 3444           107
Random-8k-70%Read            198                  300            2.3

EXCEPTIONS: CPU Util. 58% 17% 57% 16%

SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual Server 2005r2sp1 (1.1.603.0 EE R2 SP1) (VT enabled) on Windows Server 2003R2sp2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      15.9                 3773           117
RealLife-60%Rand-65%Read     159                  375            2.9
Max Throughput-50%Read       17.5                 3420           106
Random-8k-70%Read            199                  299            2.3

EXCEPTIONS: CPU Util. 58% 17% 55% 16%

SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual PC 2007 (6.0.156.0) on Windows Server 2003R2sp2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      16.7                 3571           111
RealLife-60%Rand-65%Read     161                  371            2.9
Max Throughput-50%Read       18.6                 3219           100
Random-8k-70%Read            200.5                298            2.3

EXCEPTIONS: CPU Util. 53% 16% 54% 15%

SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual PC 2007 (6.0.156.0) (VT enabled) on Windows Server 2003R2sp2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                    Av. Resp. Time ms    Av. IOs/sec    Av. MB/sec
Max Throughput-100%Read      15.2                 3948           123
RealLife-60%Rand-65%Read     148                  403            3.2
Max Throughput-50%Read       16.8                 3561           111
Random-8k-70%Read            184                  324            2.5

EXCEPTIONS: CPU Util. 56% 16% 51% 15%

Message was edited by: larstr

Added note about HP drivers.

sstelter
Enthusiast

Hi Meistermn,

Great question - it seems to me that performance must ultimately be limited by the performance of the physical disks in the SAN. I don't think single-VM performance can be extrapolated to larger numbers of VMs. As the number of VMs increases, the effects of cache should theoretically be negated. That's why it is so important that the size of the test file/volume/VM be larger than the cache on the SAN - otherwise you're just testing the speed/latency of memory and the speed/latency of the fabric (and the speed/latency of the host and SAN interfaces) and not much else. As the number of VMs increases, the randomness of the I/O pattern should also increase. Coalescing and other techniques to make the data more sequential would seem to use SAN CPU cycles, so a single-VM test might be even less useful as the SAN controller CPU gets bogged down with this type of work as the number of VMs increases.

I like this thread because I think it is better for potential SAN customers to have some data than to just use the marketing specs that SAN vendors publish (which usually represent I/O to cache on the SAN controller or the disk cache). Sifting through the data is the challenge, as is interpreting what it will actually mean for you in the real world.

Iometer is a great tool for testing multiple workloads on multiple servers simultaneously - the GUI can choreograph simultaneous test execution and can deliver the results of several different runs in one spreadsheet (after running the suite of tests overnight, for example). Maybe someone (Christian?) could cook up a set of VMs that could be deployed for such a test with the appropriate ICF file... happy to help if I can. This could remove some of the variability in the data due to (mis-)configuration. In theory the test VMs could be any OS (meaning a free, redistributable one might be a better choice) as long as it was the same OS, right?

Disclaimer: I work for LeftHand Networks, a SAN vendor, so it might not be appropriate for me to directly help with creating the VMs and ICF file. I sure am curious if this would be a viable means to test SAN performance with multiple VMs though...
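For what it's worth, the usual way to drive such a multi-VM run is to keep the Iometer GUI on one machine and start the dynamo agent inside each test VM pointed back at it, so all workers show up in one GUI and can be started at the same moment. A rough sketch only - the -i/-m switches are the ones documented for dynamo, but the host name and install path below are placeholders:

```python
import socket
import subprocess

# Sketch: register this test VM's dynamo agent with an Iometer GUI running
# elsewhere, so several VMs can be driven (and started) from one console.
IOMETER_GUI_HOST = "iometer-console"           # machine running Iometer.exe (placeholder)

subprocess.run(
    [r"C:\Program Files\Iometer\dynamo.exe",   # install path is an assumption
     "-i", IOMETER_GUI_HOST,                   # where the Iometer GUI listens
     "-m", socket.gethostname()],              # name this manager after the VM
    check=True,
)
```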

christianZ
Champion

When you check the results from "urbanb" or "mitchellm3" with concurrent VMs, you will see that the IO numbers are not as high as with a single VM and the response time is noticeably higher - that phenomenon should always appear on systems with a large cache.

We can observe here (urbanb's tests, EMC DMX3), for example, that with one single VM you can reach ~7000 IOPS (RealLife test) - with 2 concurrent VMs you reach only half of that and the response time is higher (I guess all disks were involved here). I observed a similar phenomenon when testing on EQL - all disks are involved there too.

There was a different situation in mitchellm3's tests (IBM DS4800) - each VM worked on its own disks, i.e. one single VM couldn't saturate the whole system, but 2 VMs and one physical machine running concurrently could.

Therefore the concurrent tests are really recommended whenever you see very high IOPS numbers and very low response times. Such numbers can come only from cache and are not a real indicator of storage performance in reality.

I saw this myself when testing SanMelody (high system IOs observed, but very low disk IOs).

Regards

Christian

christianZ
Champion

...and there are new tests here!

Thanks to:

cmanucy

larstr

for joining in.

ericdaly
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS - VM on 1MB Block Size VMFS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Windows 2003 STD VM ON ESX 3.0.2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380 G5, 32GB RAM; Dual Intel Quad Core 2GHz E5335,

STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA6000 / 30 x 300gb FC HDD on vRAID1

VMFS: 500GB LUN, 1MB Block Size

SAN TYPE / HBAs : 4GB FC, HP StorageWorks FC1142SR 4Gb HBA's

##################################################################################
TEST NAME--                Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################

RealLife-60%Rand-65%Read......__11.08______......._4391.31_______...._34.31_______
