christianZ
Champion

Open unofficial storage performance thread

Attention!

Since this thread is getting longer and longer, not to mention the load times, Christian and I decided to close this thread and start a new one.

The new thread is available here:

Oliver Reeh

[VMware Communities User Moderator|http://communities.vmware.com/docs/DOC-2444]

My idea is to create an open thread with uniform tests, where all results are unofficial and come without any warranty.

If anybody disagrees with some results, he or she is welcome to run the same tests and post their own results too.

I hope this way we can classify the different systems and give a "neutral" performance comparison.

Additionally, I want to mention that performance is only one of many aspects of choosing the right system.

The others could be, e.g.:

- support quality

- system management integration

- distribution

- own hands-on experience

- additional features

- costs for the storage system and infrastructure, etc.

Here are the IOMETER test definitions:

=====================================

######## TEST NAME: Max Throughput-100%Read

size,% of size,% reads,% random,delay,burst,align,reply

32768,100,100,0,0,1,0,0

######## TEST NAME: RealLife-60%Rand-65%Read

size,% of size,% reads,% random,delay,burst,align,reply

8192,100,65,60,0,1,0,0

######## TEST NAME: Max Throughput-50%Read

size,% of size,% reads,% random,delay,burst,align,reply

32768,100,50,0,0,1,0,0

######## TEST NAME: Random-8k-70%Read

size,% of size,% reads,% random,delay,burst,align,reply

8192,100,70,100,0,1,0,0
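
To make those comma-separated lines easier to read, here is a minimal Python sketch (the field names are mine, taken from the header row above, not anything official from Iometer) that decodes each test definition:

from collections import namedtuple

# Field order as given in the header row:
# size,% of size,% reads,% random,delay,burst,align,reply
AccessSpec = namedtuple(
    "AccessSpec",
    "size pct_of_size pct_reads pct_random delay burst align reply",
)

def parse_spec(line):
    """Split one comma-separated access-spec line into named fields."""
    return AccessSpec(*(int(v) for v in line.split(",")))

tests = {
    "Max Throughput-100%Read":  "32768,100,100,0,0,1,0,0",
    "RealLife-60%Rand-65%Read": "8192,100,65,60,0,1,0,0",
    "Max Throughput-50%Read":   "32768,100,50,0,0,1,0,0",
    "Random-8k-70%Read":        "8192,100,70,100,0,1,0,0",
}

for name, line in tests.items():
    spec = parse_spec(line)
    print(f"{name}: {spec.size} B blocks, {spec.pct_reads}% reads, {spec.pct_random}% random")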

The global options are:

=====================================

Worker

Worker 1

Worker type

DISK

Default target settings for worker

Number of outstanding IOs,test connection rate,transactions per connection

64,ENABLED,500

Disk maximum size,starting sector

8000000,0

Run time = 5 min

For testing, drive C: is configured and the test file (8,000,000 sectors) is created on the first run - you need enough free space on the disk.

The cache size has a direct influence on the results. On systems with more than 2 GB of cache, the test file should be made larger.
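
As a rough guide (assuming the usual 512-byte sectors), the default 8,000,000-sector test file works out to a bit under 4 GB; a small sketch for sizing the file against your controller cache:

SECTOR_BYTES = 512  # assumption: standard 512-byte sectors

def test_file_gib(sectors):
    """Size of the Iometer test file in GiB for a given sector count."""
    return sectors * SECTOR_BYTES / 2**30

def sectors_for_gib(gib):
    """Sector count needed for a target test-file size in GiB."""
    return int(gib * 2**30 / SECTOR_BYTES)

print(f"Default 8,000,000 sectors = {test_file_gib(8_000_000):.2f} GiB")
# On an array with e.g. 8 GB of cache, make the file comfortably larger
# than the cache, say at least twice its size:
print(f"16 GiB test file = {sectors_for_gib(16):,} sectors")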

LINK TO IOMETER:

The significant results are: Av. Response Time, Av. IOs/sec, Av. MB/sec.
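
Those three figures hang together: the average MB/sec is simply the average IOs/sec multiplied by the block size of the access spec, with Iometer counting 1 MB as 2^20 bytes - which matches the tables posted further down in this thread. A quick sanity-check sketch:

def mbps_from_iops(iops, block_bytes):
    """Throughput in MB/sec (1 MB = 2**20 bytes, as Iometer reports it)."""
    return iops * block_bytes / 2**20

# Checked against results posted later in this thread:
print(f"{mbps_from_iops(3014, 32768):.1f}")  # 32 KB blocks at ~3,014 IOPS -> ~94.2 MB/sec
print(f"{mbps_from_iops(2030, 8192):.1f}")   # 8 KB blocks at ~2,030 IOPS -> ~15.9 MB/sec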

Please mention: what server (VM or physical), processor number/type, what storage system, and how many disks.

Here is the config file (*.icf):

####################################### BEGIN of *.icf

Version 2004.07.30

'TEST SETUP ====================================================================

'Test Description

IO-Test

'Run Time

' hours minutes seconds

0 5 0

'Ramp Up Time (s)

0

'Default Disk Workers to Spawn

NUMBER_OF_CPUS

'Default Network Workers to Spawn

0

'Record Results

ALL

'Worker Cycling

' start step step type

1 5 LINEAR

'Disk Cycling

' start step step type

1 1 LINEAR

'Queue Depth Cycling

' start end step step type

8 128 2 EXPONENTIAL

'Test Type

NORMAL

'END test setup

'RESULTS DISPLAY ===============================================================

'Update Frequency,Update Type

4,WHOLE_TEST

'Bar chart 1 statistic

Total I/Os per Second

'Bar chart 2 statistic

Total MBs per Second

'Bar chart 3 statistic

Average I/O Response Time (ms)

'Bar chart 4 statistic

Maximum I/O Response Time (ms)

'Bar chart 5 statistic

% CPU Utilization (total)

'Bar chart 6 statistic

Total Error Count

'END results display

'ACCESS SPECIFICATIONS =========================================================

'Access specification name,default assignment

Max Throughput-100%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

32768,100,100,0,0,1,0,0

'Access specification name,default assignment

RealLife-60%Rand-65%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

8192,100,65,60,0,1,0,0

'Access specification name,default assignment

Max Throughput-50%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

32768,100,50,0,0,1,0,0

'Access specification name,default assignment

Random-8k-70%Read,ALL

'size,% of size,% reads,% random,delay,burst,align,reply

8192,100,70,100,0,1,0,0

'END access specifications

'MANAGER LIST ==================================================================

'Manager ID, manager name

1,PB-W2K3-04

'Manager network address

193.27.20.145

'Worker

Worker 1

'Worker type

DISK

'Default target settings for worker

'Number of outstanding IOs,test connection rate,transactions per connection

64,ENABLED,500

'Disk maximum size,starting sector

8000000,0

'End default target settings for worker

'Assigned access specs

'End assigned access specs

'Target assignments

'Target

C:

'Target type

DISK

'End target

'End target assignments

'End worker

'End manager

'END manager list

Version 2004.07.30

####################################### END of *.icf
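
If you want to double-check which access specifications an .icf actually contains before running it, a small Python sketch (my own helper, not something that ships with Iometer; the filename is just an example) can pull them out of a file laid out like the one above:

def read_access_specs(icf_path):
    """List (name, parameter line) pairs from an Iometer .icf file."""
    with open(icf_path) as fh:
        lines = [ln.strip() for ln in fh if ln.strip()]
    specs = []
    for i, line in enumerate(lines):
        if line.startswith("'Access specification name"):
            name = lines[i + 1].split(",")[0]   # e.g. Max Throughput-100%Read
            params = lines[i + 3]               # line after the "'size,..." comment
            specs.append((name, params))
    return specs

for name, params in read_access_specs("io-test.icf"):
    print(f"{name}: {params}")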

TABLE SAMPLE

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM or PHYS.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 51xx, 2.66 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 Disks / R50

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__________..........__________.........__________

RealLife-60%Rand-65%Read......__________..........__________.........__________

Max Throughput-50%Read..........__________..........__________.........__________

Random-8k-70%Read.................__________..........__________.........__________

EXCEPTIONS: CPU Util.-XX%;

##################################################################################
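
If you would rather not hand-align the dots and underscores, here is an optional little Python helper (entirely my own, not part of the template) that prints rows in roughly the same layout:

def result_row(test, resp_ms, iops, mbps):
    """Format one results row in the dotted style of the table sample above."""
    return (f"{test.ljust(34, '.')}"
            f"__{resp_ms:g}__.........."
            f"__{iops:,.0f}__........."
            f"__{mbps:g}__")

rows = [
    ("Max Throughput-100%Read", 19.07, 3014, 94.0),   # sample values from a post below
    ("RealLife-60%Rand-65%Read", 22.0, 2030, 15.86),
]
for r in rows:
    print(result_row(*r))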

I hope YOU JOIN IN !

Regards

Christian

A Google Spreadsheet version is here:

Message was edited by: ken.cline@hp.com - to remove ALL CAPS from thread title

Message was edited by: RDPetruska - Added link to Atamido's Google Spreadsheet

cperdereau
Enthusiast

Thank you guys

I will post my results later on.

I was asking this because when I clone a VM, it seems slow to me. I wanted to test the I/O from the console for this purpose.

rock0n
VMware Employee

I've installed a fresh Demo Lab.

2 x 1U certified S5000PAL/SR1550 TERRA servers and QLE2462 HBAs

1 x 20 Port QLogic SanBox

1 x F5402E Xyratex ( 6 x 74 SAS RAID10 & 6 x 250 SATA RAID10 )

I'll present some IOMeter results tomorrow.

kind regards

Raiko Mesterheide

AnthonyM
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360G5, Intel Xeon Dual Core 5120 (4 cores @ 1.866GHz), 4GB RAM (512MB allocated to VM)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS100e x 1 / 14+2 SATA / R50

SAN TYPE / HBAs : Microsoft iSCSI initiator; no jumbo frames and no flow control

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__19.07__..........__3,014__.........__94__

RealLife-60%Rand-65%Read......__22__..........__2,030__.........__15.86__

Max Throughput-50%Read..........__8.49__..........__3,978__.........__124.30__

Random-8k-70%Read.................__23__..........__1,956__.........__15.28__

This VM is connected to the network with a single 1 Gb NIC, which was shared with around 7 other VMs' network traffic at the time, one of which is an Exchange server serving ~125 staff.

rb2006
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS NetApp 2xFAS3020c Metro-Cluster configuration

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: DELL PowerEdge 2900, 2x Intel Xeon Dual Core 5160, 16GB RAM (2 GB allocated to VM)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: 2x FAS3020c metro cluster / 2x26 FC 144GB 10K HDDs / RAID 4

SAN TYPE / HBAs : Brocade 3250 FC 2Gb / QLA2432 HBAs

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__10.05__..........__5814__.........__181.72__

RealLife-60%Rand-65%Read......__20.45__..........__2586__.........__20.21__

Max Throughput-50%Read..........__7.05__..........__6818__.........__213.07__

Random-8k-70%Read.................__25.88__..........__2073__.........__16.2__

williambishop
Expert

Let me know how the dmx3 goes....I'm loving the dmx3000, but I've got a serious hunger to try it on the new series.

--"Non Temetis Messor."
christianZ
Champion

@all - thanks guys for your test results; RockOn - we are waiting for your results too!

The results from rb2006 (FAS3020c) are very different from Joachim's - I think the results from rb2006 are more realistic.

RB2006 - could you run one test with 2 VMs simultaneously, so that one VM has a volume served over SP1 and the second VM has a volume served over SP2? It would be interesting to see the overall throughput of your system (active/active).

RB2006 - are you using sync mirroring too? How many spindles were involved in your tests (flex vols here?)?

BenConrad wanted to run a test with an EQL volume striped over 3 or 4 members - maybe forgotten??

This could give us the scalability potential of EQL (I have already tested it with 2 members).

I have heard many positive things about Compellent systems - maybe there is somebody who could run the tests on one too??

So far only 2 systems could outperform the throughput of EQL with 2 members (DS8000 and DMX3000) - can anybody offer more?? (not meant entirely seriously).

rb2006
Contributor

Hi,

So we have two cluster nodes. One node serves CIFS and the other is for VMs with LUNs. Yes, we are using sync mirroring. This has an additional negative impact on performance. We have one aggregate with two RAID 4 disk groups and 26 disks on each node. The volumes are flex volumes.

It's difficult for me to run another test, because I have 24 live VMs running and one Exchange server with 400 users which is connected over iSCSI, and it's very difficult for me to find a time frame without load.

williambishop
Expert

I would imagine that a 4800 loaded with 810 cabinets and the latest drives, appropriately connected, would also probably beat it hands down. Also keep in mind, it's not the bandwidth afforded to one system for a test that's important, it's how far it will scale without degradation. The dmx3000 only has 2Gb connectors into the bloody box, but with a huge cache (in our case over 100 gigs) and tons of front end, as well as tons of back end, you rarely have to hit it at disk speed. It's all memory. Which is why it can rock longer and harder than most anything else. I drool over the dmx3....why oh why can't I have one?

--"Non Temetis Messor."
BenConrad
Expert

BenConrad wanted to make a test with EQL volume striped over 3 or 4 members - maybe forgotten?? This could give us the scalability potential of EQL (I have already tested it with 2 members).

I still need to purchase (2) WS-X6748-GE-TX modules for our 6509's before I can post anything interesting.

Ben

christianZ
Champion

Copied dctaylorit's results:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS VM ON ESX / LeftHand Storage on HP DL 320s

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL350G5 - 6GB - 2x Xeon5150 2.66 DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: LeftHand DL320s / 10+2 15k SAS / R5

SAN TYPE / HBAs : iSCSI, QLA4050 HBA

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__17.56__..........__3355__.........__104.9__

RealLife-60%Rand-65%Read......__24.19__..........__2103__.........__16.43__

Max Throughput-50%Read..........__16.35__..........__3466.2__.........__108.32__

Random-8k-70%Read.................__34.75__..........__1582.83__.........__12.37__

EXCEPTIONS: CPU Util.-27-35-34-26%;

s_buerger
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS VM ON ESX / DAS (p600 and MSA50) on HP DL 380g5

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2k3 VM (1.5GB RAM, 20GB vmdk) on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (50/50% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__7.50__..........__7738.74__.........__241.83__

RealLife-60%Rand-65%Read......__16.16__..........__2950.18__.........__23.06__

Max Throughput-50%Read..........__8.39__..........__6956.14__.........__217.26__

Random-8k-70%Read.................__14.88__..........__3147.66__.........__24.42__

EXCEPTIONS: CPU Util.-54-45-48-46%

s_buerger
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS VM ON ESX / DAS (p400) on HP DL 380g5

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2k3 VM (1.5GB RAM, 20GB vmdk) on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (25/75% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__5.05__..........__10930.53__.........__341.99__

RealLife-60%Rand-65%Read......__28.25__..........__1381.72__.........__10.60__

Max Throughput-50%Read..........__5.45__..........__10328.26__.........__322.76__

Random-8k-70%Read.................__25.71__..........__1449.84__.........__11.33__

EXCEPTIONS: CPU Util.-74-45-70-54%

s_buerger
Contributor

Correction to the last 2 benchmarks: 200GB vmdk, not 20GB.

s_buerger
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS VM ON ESX / DAS (p400) on HP DL 380g5

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Win2k3 VM (1.5GB RAM, 20GB vmdk) on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (25/75% read/write) / 2x 72GB 10k 2.5" SAS / RAID 1

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__0.71__..........__26027.65__.........__813.36__

(The VI Client shows a Disk Usage Average/Rate for this VM of 53 MB/s, and the same for the vmhba Disk Read Rate)

RealLife-60%Rand-65%Read......__83.59__..........__557.11__.........__4.35__

Max Throughput-50%Read..........__5.85__..........__9678.30__.........__302.45__

Random-8k-70%Read.................__77.10__..........__681.36__.........__5.32__

EXCEPTIONS: CPU Util.-100-42-68-26%

##################################################################################

Don't understand why on the first test the CPU utilization is so high and the max throughput is so much better compared to the RAID 5 test on the same controller. Any explanation?

larstr
Champion

Don't understand why on the first test the CPU utilization is so high and the max throughput is so much better compared to the RAID 5 test on the same controller. Any explanation?

I don't know *why* the CPU load is so much higher, but when the CPU load inside a guest VM is high, its timing (clock) becomes highly unreliable, and the numbers you get when running IOmeter will also not be very reliable because of this.

Lars

davidbarclay
Virtuoso

Here's a thought. Have we all violated the EULA by publishing benchmarks?

I saw a blogger cop a take down notice last week...surely VMware are aware of this thread?

Dave

s_buerger
Contributor

Tested the cache and RAID levels.

SERVER TYPE: Win2k3 VM (1.5GB RAM, 200GB vmdk) on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (0/100% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 4.61 12905.24 403.29

RealLife-60%Rand-65%Read 16.79 2807.52 21.93

Max Throughput-50%Read 7.54 7715.67 241.11

Random-8k-70%Read 15.52 2980.51 23.29

EXCEPTIONS: CPU Util.-66-46-55-48%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (25/75% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 6.84 8469.97 264.69

RealLife-60%Rand-65%Read 16.81 2811.40 21.96

Max Throughput-50%Read 8.22 7099.51 221.86

Random-8k-70%Read 15.43 2977.07 23.26

EXCEPTIONS: CPU Util.-56-45-50-47%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (50/50% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 7.50 7738.74 241.83

RealLife-60%Rand-65%Read 16.16 2950.18 23.06

Max Throughput-50%Read 8.39 6956.14 217.26

Random-8k-70%Read 14.88 3147.66 24.42

EXCEPTIONS: CPU Util.-54-45-48-46%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (75/25% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 7.35 7753.15 242.29

RealLife-60%Rand-65%Read 22.09 2827.18 22.09

Max Throughput-50%Read 8.37 6859.84 214.37

Random-8k-70%Read 15.38 2999.85 23.44

EXCEPTIONS: CPU Util.-61-47-58-46%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (100/0% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 7.14 8098.47 253.08

RealLife-60%Rand-65%Read 29.24 1925.02 15.04

Max Throughput-50%Read 25.42 2226.64 69.58

Random-8k-70%Read 24.22 2304.24 18.00

EXCEPTIONS: CPU Util.-56-27-29-30%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 Enclosure on HP P600-Controller w. 256MB BBWC (50/50% read/write) / 9x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 7.53 7722.17 241.21

RealLife-60%Rand-65%Read 38.75 968.49 7.57

Max Throughput-50%Read 8.72 6747.46 210.86

Random-8k-70%Read 28.79 1313.49 10.26

EXCEPTIONS: CPU Util.-55-58-47-52%

##################################################################################

Conclusion: in production we go with RAID 1+0 on 10 disks on the MSA50 and a 50/50% cache split.
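
For what it's worth, the 50/50 choice follows directly from the RealLife numbers in the tables above; a tiny sketch (data copied from those tables) that ranks the cache splits:

# RealLife-60%Rand-65%Read IOPS from the MSA50 RAID 1+0 runs above,
# keyed by the P600's read/write cache split.
reallife_iops = {
    "0/100": 2807.52,
    "25/75": 2811.40,
    "50/50": 2950.18,
    "75/25": 2827.18,
    "100/0": 1925.02,
}
best = max(reallife_iops, key=reallife_iops.get)
for split, iops in reallife_iops.items():
    delta = 100 * (iops / reallife_iops[best] - 1)
    print(f"{split:>6}: {iops:8.2f} IOPS ({delta:+.1f}% vs best {best})")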

s_buerger
Contributor

Cache level tests.

SERVER TYPE: Win2k3 VM (1.5GB RAM, 200GB vmdk) on ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (0/100% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 5.10 11695.10 365.47

RealLife-60%Rand-65%Read 29.87 1254.94 9.80

Max Throughput-50%Read 4.67 12033.78 376.06

Random-8k-70%Read 27.07 1322.83 10.33

EXCEPTIONS: CPU Util.-74-54-78-56%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (25/75% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 5.05 10930.53 341.99

RealLife-60%Rand-65%Read 28.25 1381.72 10.60

Max Throughput-50%Read 5.45 10328.26 322.76

Random-8k-70%Read 25.71 1449.84 11.33

EXCEPTIONS: CPU Util.-74-45-70-54%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (50/50% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 4.74 11723.34 366.35

RealLife-60%Rand-65%Read 28.45 1256.33 9.82

Max Throughput-50%Read 5.21 10819.85 328.12

Random-8k-70%Read 25.40 1345.89 10.51

EXCEPTIONS: CPU Util.-80-56-73-57%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (75/25% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 4.37 10999.96 343.75

RealLife-60%Rand-65%Read 27.83 1266.14 9.89

Max Throughput-50%Read 5.38 9991.31 312.23

Random-8k-70%Read 25.20 1341.20 10.48

EXCEPTIONS: CPU Util.-91-58-82-59%

##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400-Controller w. 512MB BBWC (100/0% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read 4.90 11356.24 354.88

RealLife-60%Rand-65%Read 67.15 841.22 6.57

Max Throughput-50%Read 120.19 483.10 15.10

Random-8k-70%Read 51.14 1094.94 8.55

EXCEPTIONS: CPU Util.-79-23-30-25%

taylorb
Hot Shot

SERVER TYPE: VM ON ESX 3.0.1, 12GB VMDK, 512MB RAM.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP ML570 G4 16GB RAM; 4x XEON 7140, 3.4 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS8100 / 14+2 FC 10k / RAID5

SAN TYPE / HBAs / Fabric : FC / Emulex LPe1150 /4 Gb Brocade 200E

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__0.66______..........___9487___.........___296____

RealLife-60%Rand-65%Read......___12_____..........___2039___.........____15.9____

Max Throughput-50%Read..........____2.93____..........___7947___.........___248____

Random-8k-70%Read.................____3.3____..........___2292___.........____17.9____

EXCEPTIONS: VCPU Util. 100-79-98-99 %;

Looks like I had respectable results, but it seems like my test was CPU constrained. All the tests were hammering the vCPU in my test VM. Seems strange since I have the fastest CPUs out of any of the results I've seen, but the highest utilization.....

taylorb
Hot Shot

Retested with 2 vCPUs and more RAM. This didn't really make a difference with my CPU utilization issue, because now I am running at 50%, which was 1 CPU pegged and 1 idle. Interestingly, all the tests did slightly worse except for the 100% read.

SERVER TYPE: VM ON ESX 3.0.1, 20GB VMDK, 2GB RAM.

CPU TYPE / NUMBER: VCPU / 2

HOST TYPE: HP ML570 G4 16GB RAM; 4x XEON 7140, 3.4 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS8100 / 14+2 FC 10k / RAID5

SAN TYPE / HBAs / Fabric : FC / Emulex LPe1150 /4 Gb Brocade 200E

##################################################################################

TEST NAME--                      Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----

##################################################################################

Max Throughput-100%Read........__0.76______..........___9913___.........___309____

RealLife-60%Rand-65%Read......___12.5_____..........___2010___.........____15.7____

Max Throughput-50%Read..........____7.1____..........___6823___.........___213____

Random-8k-70%Read.................____3.38____..........___2308___.........____18____

EXCEPTIONS: VCPU Util. 50-40-39-50 %;
