VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

the old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks Oliver -) to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Reg

Christian

574 Replies
jasonboche
Immortal

I've updated my numbers above for the EMC Celerra NS-120 swISCSI and NFS tests. EMC best practices were applied to improve NFS performance. EMC also made the recommendation that the IOMETER test script should include 120 seconds of Ramp Up time. The current Ramp Up time of 0 seconds can skew test results. My new numbers include the Ramp Up time of 120 seconds for the swISCSI and NFS tests. I haven't retested fibre channel.
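A quick way to see why the ramp-up recommendation matters: if the first couple of minutes of a run are served largely from cache, those inflated samples drag the whole-run average upward. A minimal sketch with synthetic numbers (the trace values below are made up purely for illustration):

```python
# Illustration (synthetic numbers): why a 0-second ramp-up skews averages.
# Early samples often hit controller/array cache and report inflated IOPS.

def average_iops(samples, ramp_up_seconds=0):
    """Average per-second IOPS samples, discarding the ramp-up period."""
    measured = samples[ramp_up_seconds:]
    return sum(measured) / len(measured)

# Hypothetical trace: 120 s of cache-warmed bursts, then steady state.
trace = [8000] * 120 + [750] * 480   # 600 s run, steady state ~750 IOPS

with_ramp    = average_iops(trace, ramp_up_seconds=0)    # 2200.0 IOPS (skewed)
without_ramp = average_iops(trace, ramp_up_seconds=120)  # 750.0 IOPS (realistic)
print(with_ramp, without_ramp)
```

With a workload like this, skipping the first 120 seconds roughly triples the accuracy of the reported average.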






[i]Jason Boche, vExpert[/i]

[boche.net - VMware Virtualization Evangelist|http://www.boche.net/blog/]

[VMware Communities User Moderator|http://www.vmware.com/communities/content/community_terms/]

[Minneapolis Area VMware User Group Leader|http://communities.vmware.com/community/vmug/us-central/minneapolis]

[vCalendar|http://www.boche.net/blog/index.php/vcalendar/] Author

VCDX3 #34, VCDX4, VCDX5, VCAP4-DCA #14, VCAP4-DCD #35, VCAP5-DCD, VCPx4, vEXPERTx4, MCSEx3, MCSAx2, MCP, CCAx2, A+
UGrant
Contributor

Here is what I got. We were looking for a low-cost storage SAN to dump some Exchange databases on. This unit from Promise seemed to fit the bill.

We've had it for 3 months with no issues. Might get another for storage at our DR site.

Any suggestions on how I might improve things are welcome.

SERVER TYPE: VMware ESXi 4

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL360 G5, 12GB RAM, 1 x Intel E5440, 2.83GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Promise VessRaid 1830i (512MB CACHE/SP) / 4 SATA WD 300GB Velociraptor 10k/ R10 -

SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (HP NC360T)

##################################################################################

BASE Test--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........___17.1298_____.......___3497.61_____.....____109.30______

RealLife-60%Rand-65%Read......_____71.1607____.....____750.30____.....____5.86______

Max Throughput-50%Read.........____15.9227____.....____3812.36____.....____119.14______

Random-8k-70%Read..............____74.2_____.....____722.45______.....____5.64______

Jumbo Frames and Flow Control on Storage VLAN - HP 1810G 24-port Switch

_VR_
Contributor

SERVER TYPE: VM

CPU TYPE / NUMBER: VCPU / 4

HOST TYPE: DL380 G5, 32GB RAM; 2 x Intel E5440, 2.83GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS4000E / 14+2 DISK (7.2K SATA) / R50)

NOTES: 2 NIC, MS iSCSI, no-jumbo, flowcontrol on

##################################################################################

TEST NAME--


Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........___17.92____..........___3156____.........___98.6____

RealLife-60%Rand-65%Read.......___19.33____..........___2176____.........___17.00__

Max Throughput-50%Read.........___13.86____..........___3828____.........___119____

Random-8k-70%Read..............___22.15____..........___1943____.........___15.19__

EXCEPTIONS: CPU Util.-XX%;

##################################################################################

I believe the throughput is limited by the NIC configuration, otherwise the numbers look as they should for this configuration.
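The ~119 MB/s sequential results above sit right at the practical ceiling of a single 1Gb path, which supports the NIC-limit theory. A rough sketch (the ~5% protocol overhead figure is a ballpark assumption, not a measured value):

```python
# Back-of-envelope check (assumptions noted): a single 1 Gb/s iSCSI path
# tops out near the sequential throughput reported above.

GIGABIT_BPS = 1_000_000_000                           # 1 Gb/s link
wire_limit_mb_s = GIGABIT_BPS / 8 / 1_000_000         # 125 MB/s raw
protocol_overhead = 0.05                              # ~5% TCP/IP + iSCSI headers (rough guess)
usable_mb_s = wire_limit_mb_s * (1 - protocol_overhead)

print(f"theoretical wire limit: {wire_limit_mb_s:.0f} MB/s")
print(f"usable after overhead:  {usable_mb_s:.0f} MB/s")   # ~119 MB/s
```

Hitting ~119 MB/s on the Max Throughput tests therefore usually means the link, not the array, is the bottleneck.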

tiaccadi
Contributor

SERVER TYPE: VM XP SP3 on ESX3.5 U4

CPU TYPE / NUMBER / RAM: vCPU / 1 / 1GB

HOST TYPE: Dell PowerEdge 2950, 24GB RAM, 2x Intel X5450 3.0Ghz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS3020c / 14x 300GB 10k 2Gb FC (per loop) / RAID DP

SAN TYPE / HBAs: iSCSI / sw iSCSI

OTHER: Disk.SchedNumReqOutstanding and HBA queue depth set to 64 on ESX host

Note that the LUN provisioned to ESX spans over one aggregate on one loop, i.e. 14 disks are used

Moreover, NO jumbo frames were used

Test Name                         Avg. Response Time   Avg. I/O per Second   Avg. MB per Second
Max Throughput - 100% Read        17                   3385                  106
Real Life - 60% Rand / 65% Read   30                   1609                  13
Max Throughput - 50% Read         13                   4020                  126
Random 8K - 70% Read              32                   1434                  11

Are those good values?

s_buerger
Contributor

I am out of the office from 8 to 10 February 2010 and therefore cannot read e-mail. E-mails will not be forwarded.

In urgent cases, please contact my colleagues at 0351 / 49701-150 or by e-mail at saxonia.hotline@saxsys.de.

PaulSvirin
Expert

You can have a look at StarWind performance details at

Sorry, never mind... this is the official one and I'm trying to delete the message. Smiley Sad

--- iSCSI SAN software http://www.starwindsoftware.com
jakopino
Contributor

SERVER TYPE: VMWare ESX 4u1

GUEST OS / CPU / RAM Win2K3 SP2, 1 VCPU, 2GB

HOST TYPE: IBM HS21 XM, 32GB RAM, 2 x Intel E5420, 2.50GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM Storage Module, 6 drive 300GB 10K, RAID 10

SAN TYPE / HBAs : SAS Shared

NOTES: Bladecenter S

Access Specification Name   Avg. Response Time   Avg. I/O per Second   Avg. MB per Second   CPU Utilization
Max Throughput-100%Read     9.02                 5,613.22              175.41               25.17
RealLife-60%Rand-65%Read    49.70                1,073.48              8.39                 15.06
Max Throughput-50%Read      7.84                 5,855.29              182.98               31.41
Random-8k-70%Read           29.28                1,788.42              13.97                17.33

Any comments?

Reg

Dario

chaddy
Contributor

SERVER TYPE: VM 2003 Server ESX4

CPU TYPE / NUMBER / RAM: vCPU / 2 / 1GB

HOST TYPE: HP DL370 G6, 24GB RAM, 2x Intel X5430 2.4Ghz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS 15x 146GB SCSI 10K in RAID6

During this test I had 4 VMs running these tests at the same time, including the test below of Server 2008 R2, which had very poor RealLife and Random results compared to this 2003 VM. The Max Throughput tests were very comparable.

Test Name                         Avg. Response Time   Avg. I/O per Second   Avg. MB per Second
Max Throughput - 100% Read        10.1                 5812                  181.6
Real Life - 60% Rand / 65% Read   2.6                  20452                 159.7
Max Throughput - 50% Read         8.8                  6631                  207.2
Random 8K - 70% Read              2.6                  21030                 164.3

SERVER TYPE: VM 2008 Server R2 ESX4

CPU TYPE / NUMBER / RAM: vCPU / 2 / 1GB

HOST TYPE: HP DL370 G6, 24GB RAM, 2x Intel X5430 2.4Ghz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS 15x 146GB SCSI 10K in RAID6

I thought it was very odd that the results were so poor for this VM in the RealLife and Random tests. Any idea why these tests are so poor in 2008?

Also, during this test I had 4 VMs running these tests at the same time.

Test Name                         Avg. Response Time   Avg. I/O per Second   Avg. MB per Second
Max Throughput - 100% Read        2.3                  12450                 393
Real Life - 60% Rand / 65% Read   32.9                 243                   1.9
Max Throughput - 50% Read         8.5                  6546                  204.5
Random 8K - 70% Read              24.9                 157                   1.2

radimf
Contributor

Hi,

Your numbers look too good to be true.

If I read your results correctly - DAS Random 8K test results at 21000 IOPS for only 15 spindles in RAID6? Simply impossible for "rotating rust".

If those drives were SSDs - my apology Smiley Happy

What was the size of your test I/O file?

You are probably testing against cache only, or your results are affected by high CPU usage or some other issue.

Regards,

Radim
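Radim's "impossible for rotating rust" point can be put in numbers. A back-of-envelope sketch, using a common planning figure of roughly 140 random IOPS per 10K-RPM spindle (an assumption, not a vendor spec):

```python
# Rough sanity check (rule-of-thumb figures, not vendor specs): what can
# 15 rotating disks actually deliver on a random 8K workload?

IOPS_PER_10K_DISK = 140     # common planning figure for a 10K RPM drive
spindles = 15

best_case_random_iops = spindles * IOPS_PER_10K_DISK
print(best_case_random_iops)   # 2100, an order of magnitude below 21,000
```

Anything far above ~2,100 random IOPS from this spindle count has to be coming from cache.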

chaddy
Contributor

They are 10k SCSI disks. And yes, I was using the 8k Random test from the config file. It could be controller or hard-drive cache, not sure; also the CPU while running the test was around 50% for 2 cores. I too thought the scores were very high compared to other results posted...

-Chad

radimf
Contributor

Try expanding the test file size to at least 2x the size of the controller cache, and check its size prior to the tests.

Delete the original test file before launching IOMeter and change the sector count to the appropriate number.
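For anyone unsure where that sector count comes from: IOMeter's "Maximum Disk Size" field is expressed in 512-byte sectors, so a target test-file size has to be converted. A small sketch (the 256MB cache figure matches the controller discussed above):

```python
# Helper sketch: convert a desired IOMeter test-file size into the sector
# count expected by the "Maximum Disk Size" field (512-byte sectors).
# Goal per the advice above: test file >= 2x the controller cache.

SECTOR_BYTES = 512

def sectors_for(size_mb):
    """Sector count for a test file of size_mb megabytes."""
    return size_mb * 1024 * 1024 // SECTOR_BYTES

cache_mb = 256
test_file_mb = 2 * cache_mb
print(sectors_for(test_file_mb))   # 1048576 sectors for a 512 MB file
```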

chaddy
Contributor

The controller cache is 256MB. Where do I set the test file size? Is it the transfer request size? I also found it is 36MB read cache and 108MB write cache.

Also, I'm now quite certain the high IOPS is due to the controller cache: I migrated my test VM to another array of 4x 500GB 7.2k SATA RAID5 and it still had 20,000+ IOPS, albeit with a lower MB/s.

-Chad

oparcollet
Enthusiast

SERVER TYPE: VMWare ESX 4u1

GUEST OS / CPU / RAM Win2K3 SP2, 2 VCPU, 2GB

HOST TYPE: DELL R610, 32GB RAM, 2 x Intel E5520, 2.27GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: PILLAR DATA AX500 180 drives 525GB SATA, RAID5

SAN TYPE / HBAs : FCOE CNA EMULEX LP21002C on NEXUS 5010

##################################################################
TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU
Max Throughput-100%Read     5.1609              11275         362.86       22.84%
RealLife-60%Rand-65%Read    3.2424              17037         131.68       32.6%
Max Throughput-50%Read      4.2503              12742         403.35       26.45%
Random-8k-70%Read           3.2759              16824         128.19       30.39%
##################################################################

The SAN is also running 214 other VMs ...

fitzie22
Contributor

I have been using IOMeter for a while now to get some data for my SANs. I created a test VM and have been migrating it to each SAN to get results. The interesting thing is that the first time I ran IOMeter, the CPU in the program spiked and stayed at 100% and I was getting unbelievable results. I have since moved that VM to another LUN and the CPU is around 40% with good, not great, results. Now when I move it back to the LUN that was giving me 100%, it no longer happens. Any ideas or thoughts?

chaddy
Contributor

I'm not 100% sure, but it sounds similar to the results I posted just above. When I had the disk controller cache on I was getting amazing results (20,000+ IOPS) and was running ~50% CPU. When I turned the disk controller cache off I got very pedestrian results and CPU went down to 5-10%. Not sure if that relates, but it sounds similar to what I encountered.

-Chad

JRink
Enthusiast

Sorry to ask this again, but can someone please help me with these questions so I can make sure I'm running IOMeter properly?

Thanks in advance.

J

Sorry for the ignorance. I am very new to IOMeter and I'm really trying to understand how to run these tests. A few questions...

1. When running these tests from inside a VM, should I be running them on a yellow icon (the VM's C: drive itself), or should I be creating a new unformatted drive that will show up in blue? Are people standardizing here?

2. Should I be using the .ICF file from the "original" storage performance thread?

3. When running these tests, are people running them on LUNs in production with VMs on them? Off-hours, so disk activity is minimal? For example, my ESX box is connected to an iSCSI SAN with a single LUN/Datastore on it that has 10 VMs. Should I just run the tests during normal hours, or are people shutting down all VMs on the Datastore before running these?

4. I am really confused about the results people are posting. Everyone lists an Av. IOs/sek column and an Av. MB/sek column, but I don't see those in my CSV results file. Are those the same as Total I/Os per Second and Total MBs per Second in my spreadsheet? If not, where am I supposed to look?

5. Which of the 4 tests is the best indicator of overall VM performance?

Sorry for the ignorance.

cmanucy
Hot Shot

Greetings again...

We have a Promise M610i in our data center that's mainly used for D2D or D2T stuff. Past attempts to have it interact with VMware have gone pretty horribly. However, Promise recently released a new firmware that's actually certified under VMware... and when we did an upgrade on the array to 2TB disks, I decided to re-run the battery of tests on it.

Following are the test results for various RAID levels. Each array was created fresh and had only one VM running. I'm using ESX 4's iSCSI initiator in each test, with 2 NICs set up with MPIO. The Promise box has 2x 1G ports on it, each of which is running LACP back to an HP 2848.

Here are the results from a 1MB stripe on the array: 15 2TB WD RE3 7.2K SATA disks (the 16th is a hot spare). The arrays are created by the management software, so I can only assume each array spreads across all 15 disks.

The numbers are actually quite impressive... this setup will cost you about $10K for 32TB of raw storage. If I have time, I might try running 8 tests at the same time against the device and see if I can break it Smiley Happy

Also, I should have the same set of results soon from a 64K stripe... I was curious myself as to which would be best for VMware. I might even do some MS iSCSI Initiator tests as well; we'll see.

I find it very interesting how much better RAID6 is vs. a RAID 5 or 50 config on this box - not what I expected to see.
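For context, the textbook write-penalty rule of thumb predicts the opposite (RAID6 trailing RAID5 on mixed workloads), which is what makes these results surprising and points at the controller's firmware rather than the RAID math. A sketch of that rule, with assumed per-spindle IOPS:

```python
# Classic RAID write-penalty rule of thumb (real controllers, especially
# ones doing full-stripe writes, can deviate substantially from this).
WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_iops(raw_iops, read_fraction, level):
    """Host-visible IOPS once the backend pays the write penalty."""
    penalty = WRITE_PENALTY[level]
    return raw_iops / (read_fraction + (1 - read_fraction) * penalty)

# Assumed backend: 15 spindles x ~140 IOPS each, RealLife mix (65% read).
raw = 15 * 140
for level in ("RAID5", "RAID6"):
    print(level, round(effective_iops(raw, 0.65, level)))
# By this rule RAID6 should trail RAID5 -- the reverse of what was measured.
```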

Hopefully this formats ok:

Frontend (all tests): IBM x3650, 24GB, 2 Xeon 5450's, 2 Teamed NetExtreme BCM5704 on HP 2824 w/MPLS, ESX 4.0, WinXP VM, VMWare iSCSI Initiator

RAID0 - 1M Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.698648           3,552.02      111.000492   42.71
RealLife-60%Rand-65%Read    17.197776           2,999.36      23.432511    47.47
Max Throughput-50%Read      9.772605            6,075.89      189.871513   52.00
Random-8k-70%Read           20.884025           2,480.22      19.376709    44.59

RAID1E - 1M Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.703526           3,547.56      110.86137    42.70
RealLife-60%Rand-65%Read    24.452243           2,006.67      15.677145    45.88
Max Throughput-50%Read      11.085448           5,347.37      167.105412   47.62
Random-8k-70%Read           25.384678           2,027.33      15.83852     41.71

RAID5 - 1M Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.7175             3,547.87      110.87082    43.07
RealLife-60%Rand-65%Read    25.47476            1,894.31      14.79928     45.16
Max Throughput-50%Read      23.764235           2,101.26      65.664415    43.51
Random-8k-70%Read           31.122049           1,580.53      12.347879    42.12

RAID50 - 1M Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.670398           3,553.60      111.050151   43.41
RealLife-60%Rand-65%Read    25.641059           1,890.47      14.769261    44.43
Max Throughput-50%Read      27.965077           1,999.05      62.470316    33.63
Random-8k-70%Read           31.866895           1,560.48      12.191244    41.90

RAID6 - 1M Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.712874           3,548.61      110.894147   42.83
RealLife-60%Rand-65%Read    24.341849           2,010.24      15.704977    45.89
Max Throughput-50%Read      11.128691           5,336.44      166.76372    47.52
Random-8k-70%Read           25.380012           2,014.46      15.738006    43.14

---- Carter Manucy
s_buerger
Contributor

I am out of the office from 22 to 26 March 2010 and can only deal with e-mail to a limited extent. E-mails will not be forwarded.

In urgent cases, please contact my colleagues at 0351 / 49701-150 or by e-mail at saxonia.hotline@saxsys.de.

cmanucy
Hot Shot

Here are the test results from a 64K stripe (vs. 1MB above). I note that 1M seems the way to go with this box; however, RAID5/50 apparently has a real issue with 1M stripes. I re-ran the tests three times and got similar results: for some reason the Promise box gives less than 50% of the I/O on the 50% Read/Write test.

I also note that after running these tests there are only two logical choices for a RAID set on this box: you either go with RAID0 or RAID6.

Frontend: IBM x3650, 24GB, 2 Xeon 5450's, 2 Teamed NetExtreme BCM5704 on HP 2824 w/MPLS, ESX 4.0, WinXP VM, VMWare iSCSI Initiator

RAID 0 - 64K Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.694244           3,548.69      110.896489   43.33
RealLife-60%Rand-65%Read    19.801217           2,636.31      20.596148    44.16
Max Throughput-50%Read      9.854979            5,998.85      187.464111   52.06
Random-8k-70%Read           22.863659           2,290.94      17.897995    42.30

RAID1E - 64K Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.703972           3,545.64      110.801323   45.42
RealLife-60%Rand-65%Read    27.012952           1,810.03      14.140862    45.44
Max Throughput-50%Read      11.1241             5,332.29      166.634217   47.40
Random-8k-70%Read           27.122818           1,876.15      14.657388    42.16

Frontend: HP DL385, 12GB RAM, 2xOpteron 2.6 DC, 2xIntel e1000 PCI-X teamed NICs, VMWare ESX 3.0.2, Win2K3 SP2 VM, 1MB BS, MS iSCSI Initiator

RAID5 - 64K Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.695206           3,548.65      110.895285   43.04
RealLife-60%Rand-65%Read    29.524795           1,664.85      13.006644    42.43
Max Throughput-50%Read      11.629027           4,336.17      135.505391   55.51
Random-8k-70%Read           34.963324           1,441.83      11.264297    39.43

RAID50 - 64K Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.69207            3,549.42      110.919421   43.27
RealLife-60%Rand-65%Read    31.85733            1,482.93      11.585355    44.82
Max Throughput-50%Read      12.067723           4,540.16      141.880088   49.17
Random-8k-70%Read           36.836303           1,321.80      10.326583    41.57

RAID6 - 64K Stripe

TEST NAME                   Av. Resp. Time ms   Av. IOs/sek   Av. MB/sek   CPU Use
Max Throughput-100%Read     16.697074           3,545.16      110.786351   43.39
RealLife-60%Rand-65%Read    35.59834            1,292.98      10.101445    45.97
Max Throughput-50%Read      13.34382            3,775.64      117.988732   52.08
Random-8k-70%Read           41.282322           1,155.31      9.025888     42.89

---- Carter Manucy