VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

The old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Regards,

Christian

574 Replies
pinkerton
Enthusiast

I'm indeed using Vraid 5. I can test a Vraid1 LUN later this week. Have you already tested a Vraid1 LUN?

Reply
0 Kudos
MKguy
Virtuoso

Unfortunately not, and we are not going to get a vRAID1 LUN from the storage guys. I'd be quite interested in your results, please post them here once you test it.

-- http://alpacapowered.wordpress.com
Reply
0 Kudos
larstr
Champion

I tested both RAID1 and RAID5 on EVA6400 here: http://communities.vmware.com/message/1334139#1334139

Reply
0 Kudos
pinkerton
Enthusiast

Hm, strange that the 50%/50% is so low on the EVAs. Seems to be the case on all EVAs...

Reply
0 Kudos
larstr
Champion

Hm, strange that the 50%/50% is so low on the EVAs. Seems to be the case on all EVAs...

The RAID system on EVA is different from most other SANs as EVA stripe many smaller RAID sets into a larger one. I guess that could be the reason why we're seeing this.

Lars

Reply
0 Kudos
larstr
Champion

Iometer in Linux with a 2.6 kernel is limited to 1 I/O queue, so it will not give very good results. 2.4 kernels do, however, seem fine.

Lars

Reply
0 Kudos
JRink
Enthusiast

Anyone?

Sorry for the ignorance. I am very new to Iometer and I'm really trying to understand how to run these tests. A few questions...

1. When running these tests from inside a VM, should I be running them against a yellow icon (the VM's C: drive itself), or should I be creating a new unformatted drive that will show up in blue? Are people standardizing here?

2. Should I be using the .ICF file from the "original" storage performance thread?

3. When running these tests, are people running them on LUNs in production with VMs on them? Off-hours, so disk activity is minimal? For example, my ESX box is connected to an iSCSI SAN with a single LUN/Datastore on it that has 10 VMs. Should I just run the tests during normal hours? Or are people shutting down all VMs on the Datastore before running these, etc.?

4. I am really confused about the results people are posting. Everyone lists an Av. IOs/sek column and an Av. MB/sek column, but I don't even see those in my CSV results file (??). Are those the same as Total I/Os per Second and Total MBs per Second in my spreadsheet? If not, where am I supposed to look? (See the parsing sketch below.)

5. Which of the 4 tests is the best indicator of overall VM performance?

Sorry for the ignorance.
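
On question 4: the Av. IOs/sek and Av. MB/sek figures people post here are normally read from the summary row that Iometer writes for each access specification in its results CSV, in columns named roughly "IOps", "MBps" and "Average Response Time". Below is a small, unofficial sketch along those lines; the column labels and the "ALL" summary row are assumptions that can vary between Iometer versions, so adjust to whatever your own CSV actually contains.

# Rough helper (not part of Iometer): print the headline numbers from an
# Iometer results CSV.  Assumes a header row containing columns named
# "IOps", "MBps" and "Average Response Time", and one "ALL" summary row per
# access specification -- adjust the names to match your own file.
import csv
import sys

def summarize(path):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))

    header = None
    for row in rows:
        if header is None and "IOps" in row:
            # Remember the column positions from the first header row we see.
            header = {name: i for i, name in enumerate(row)}
            continue
        if header and row and row[0].strip().upper() == "ALL":
            def col(name):
                return row[header[name]] if name in header else "n/a"
            print(col("Access Specification Name"), "-",
                  col("IOps"), "IOps,",
                  col("MBps"), "MB/s,",
                  col("Average Response Time"), "ms avg. response time")

if __name__ == "__main__":
    summarize(sys.argv[1])   # e.g. python iometer_summary.py results.csv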

Reply
0 Kudos
JaapL
Contributor

Hi all,

I've put up a small lab test below.

My aim is to provide SMBs with an affordable VMware/swSAN solution.

My (arguable) SMB = 20/150 concurrent users using Exchange/SQL/File/Web server(s).

I will be using single Xeon procs in the ESXi and SAN host(s).

LAB setup:

ESXi host: white box i7 920, 12 GB, ESXi v4, 1x Intel Pro GT NIC (to storage network)

Server VM: W2K8 Std, VMFS, 1 CPU, 8 GB memory

SAN host: white box i7 920, 12 GB, W2K8 Std, system disk 1x WD RE3 TB, RAID0 4x WD RE3 500GB SATA

Network: VM host 1Gb E1000 NIC, Cisco SLM2008 switch 8x 1Gb, swSAN StarWind 1Gb Marvell onboard NIC

Jumbo frames, flow control, Rcv/Tr buffers 512

-


TEST NAME---------------------------Av. RT ms--Av. IOs/sek--Av. MB/sek--CPU %
Max Throughput-100%Read.............19.6..........3074..........96.4..........16.67
RealLife-60%Rand-65%Read............38.8..........1342..........10.5..........29.9
Max Throughput-50%Read..............19.6..........2898..........90.4..........18.56
Random-8k-70%Read...................37.6..........1336..........10.4..........33.5

Is this any good?

With regards,

Jaap

Netherlands

Reply
0 Kudos
seanosully
Contributor

Hi all, this is my first post on this thread. I am trying to get to the bottom of some performance issues we are having with a SQL 2005 VM and came across this thread. I have run Iometer using the supplied ICF file and the results are below. At this stage I don't want to mention the SAN vendor, but I will try to supply as much other info as I can. Please let me know your comments on the results.

We are running 4 x ESX 3.5 servers on 4 x HP BL460c blades with 26GB RAM, 2 x Intel Xeon 5430 2.66GHz quad core procs and an iSCSI SAN with 12 x 15k 300GB SAS disks. This is a production ESX cluster with 38 VMs running across the cluster. Would you consider this an invalid test if there are 38 other VMs running while the tests are run? Thanks in advance for your help.

The results are as follows.

SERVER TYPE: Windows 2003 VM

CPU TYPE / NUMBER: VCPU / 2

HOST TYPE: HP BL460c, 26GB RAM; 2 x QUAD CORE XEON 5430, 2,66 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: iSCSI SAN/36 Disks/RAID 5. We are using the software iSCSI initiator.

##################################################################################

TEST NAME------------------------------Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........20..........2681.........84

RealLife-60%Rand-65%Read......21..........2240.........18

Max Throughput-50%Read..........16..........3042.........95

Random-8k-70%Read.................21..........2093.........16
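
As a rough cross-check on how those two columns relate: Av. MB/sek is essentially Av. IOs/sek multiplied by the transfer size of the access specification. Assuming the block sizes of the commonly used OpenPerformanceTest ICF (32KB for the two Max Throughput patterns, 8KB for the RealLife and Random patterns - an assumption, check your own ICF), the numbers above are self-consistent:

# Sanity check: MB/s ~= IOps * transfer size.  The block sizes are an
# assumption based on the commonly used OpenPerformanceTest ICF (32KB for the
# "Max Throughput" specs, 8KB for the random ones) -- verify against your own file.
results = {
    # test name: (Av. IOs/sek from the post above, assumed block size in KB)
    "Max Throughput-100%Read":  (2681, 32),
    "RealLife-60%Rand-65%Read": (2240, 8),
    "Max Throughput-50%Read":   (3042, 32),
    "Random-8k-70%Read":        (2093, 8),
}

for name, (iops, block_kb) in results.items():
    mb_per_s = iops * block_kb / 1024.0   # KB/s -> MB/s
    print(f"{name}: {iops} IOps x {block_kb}KB = {mb_per_s:.0f} MB/s")

# The computed values (~84, ~18, ~95, ~16 MB/s) line up with the Av. MB/sek
# column above, so the two columns are consistent with 32KB/8KB transfers.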

Reply
0 Kudos
radimf
Contributor

Hi,

I would not consider it invalid, but rather a "real-life test" for your environment. The question is how heavy the storage traffic from those other 38 VMs is...

I am a bit puzzled by one detail in your post - you mention an "iSCSI SAN with 12 X 15k 300GB SAS", but in the results you list iSCSI SAN/36 Disks/RAID 5. Which is right?

Your setup is iSCSI based - do you use the Microsoft initiator from inside the VM, or the ESX-level initiator?

Your results may be OK for a single 1Gbit connection, though you could get a bit higher numbers in the "Max ..." tests. What about MPIO?

Radim

Reply
0 Kudos
Stuarty1874
Contributor

Guys, can someone help me get started here? At a glance do I look like I have my figures right?

I've almost read every thread here but would like someone to tell me if I'm on the right track.

I'm working with our storage guy to see if he can tell me how our disks are configured. I'll post the details when he gets back to me.

Any thoughts on the figures below??

ESX VERSION: ESX 3.5 U4

SERVER TYPE: Windows 2003 VM

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL585 G1, 32GB RAM; 4 x DUAL CORE AMD OPTERON 875, 2.2 GHz (LP10000 2GB PCiX)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: CX 700 FC SAN

##################################################################################

TEST NAME------------------------------Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........2.545505..........1773.021069.........55.406908

RealLife-60%Rand-65%Read......25.126563..........478.345326.........3.737073

Max Throughput-50%Read..........3.474452..........1185.207428.........37.037732

Random-8k-70%Read.................20.079412..........499.191035.........3.89993

Reply
0 Kudos
dennes
Enthusiast

I am out of the office until Monday, January 4th. For urgent matters, please contact the office directly.

Phone: 013-5115088, or by e-mail at sales@feju.nl.

This e-mail will not be forwarded.

Regards,

Dennes

Reply
0 Kudos
pinkerton
Enthusiast

I'm back on January 4th. In urgent cases please contact support@mdm.de. Thank you.

Michael Groß, MDM IT dept.

Reply
0 Kudos
JaapL
Contributor

Hi Stuart,

Performance looks poor to me if this is a lab test with no other workload running.

I'm running a small lab test myself (a few posts above yours) with better results.

Reply
0 Kudos
NTShad0w
Enthusiast

Mnemonic and Ericmba,

Quite an old post, but I think I can add something to your discussion about SATA/FATA/PATA/SAS and FC drives.

I have a lot of experience (10+ years) implementing, configuring and tuning storage, arrays, SAN, NAS... for hardware and VM environments.

In my opinion, comparing SATA/FATA/PATA disks in general terms (of course it depends on the disk vendor, cache, etc.), relative speed looks roughly like this (where 1 is poor and 10 is excellent):

10x SATA/FATA/PATA 7k in RAID5 - 3/4

10x SAS/FC 10k in RAID5 - 7

10x SAS/FC 15k in RAID5 - 8

10x SATA/FATA/PATA in RAID10 - 7

10x SAS/FC 10k in RAID10 - 9

10x SAS/FC 15k in RAID10 - 10

So in answer to Mnemonic and you, Ericmba: SATA disks perform much better in RAID10 than in RAID5, but of course it depends on the array vendor, the array software, cache size and configuration, cache optimization, array configuration and optimization, etc. It is really array-specific, so different arrays may give quite different results.

kind regards

Dawid Fusek

IT Security Consultant and

Virtual Infrastructure Architect

COMP SA

Reply
0 Kudos
Mnemonic
Enthusiast

I interpret this as you agreeing with me.

Reply
0 Kudos
jasonboche
Immortal


SERVER TYPE: 2003 R2 VM ON ESXi 4.0 U1

CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)

HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / RAID 5

SAN TYPE / HBAs: Emulex dual port 4Gb Fiber Channel, HP StorageWorks 2Gb SAN switch

OTHER: Disk.SchedNumReqOutstanding and HBA queue depth set to 64
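
For anyone wanting to reproduce the queue-depth tuning above: on classic ESX 3.x/4.x, Disk.SchedNumReqOutstanding is commonly set with esxcfg-advcfg, while the HBA LUN queue depth itself is a driver module option (esxcfg-module) whose exact name depends on the HBA vendor and driver version. The following is only a minimal sketch wrapping the advanced-setting part, assuming it runs somewhere the esxcfg tools are available - not necessarily how Jason applied it.

# Minimal sketch: set Disk.SchedNumReqOutstanding via esxcfg-advcfg.
# Assumes the esxcfg-* tools are on the PATH (ESX service console or an
# equivalent management host); the HBA LUN queue depth is a separate,
# driver-specific module option (esxcfg-module) and is not automated here.
import subprocess

def set_sched_num_req_outstanding(value=64):
    subprocess.run(
        ["esxcfg-advcfg", "-s", str(value), "/Disk/SchedNumReqOutstanding"],
        check=True,
    )
    # Read the value back to confirm it took effect.
    subprocess.run(
        ["esxcfg-advcfg", "-g", "/Disk/SchedNumReqOutstanding"],
        check=True,
    )

if __name__ == "__main__":
    set_sched_num_req_outstanding(64)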

Fibre Channel SAN Fabric Test

TEST NAME---------------------------Avg. Response Time--Avg. I/O per Second--Avg. MB per Second
Max Throughput - 100% Read..........1.62..........35,261.29..........1,101.92
Real Life - 60% Rand / 65% Read.....16.71..........2,805.43..........21.92
Max Throughput - 50% Read...........5.93..........10,028.25..........313.38
Random 8K - 70% Read................11.08..........3,700.69..........28.91

-


SERVER TYPE: 2003 R2 VM ON ESXi 4.0 U1

CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)

HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5x146

SAN TYPE / HBAs: swISCSI

OTHER: Shared NetGear 1Gb Ethernet switch

swISCSI Test

TEST NAME---------------------------Avg. Response Time--Avg. I/O per Second--Avg. MB per Second
Max Throughput - 100% Read..........17.79..........3,351.07..........104.72
Real Life - 60% Rand / 65% Read.....14.74..........3,481.25..........27.20
Max Throughput - 50% Read...........12.17..........4,707.39..........147.11
Random 8K - 70% Read................15.02..........3,403.39..........26.59

-


SERVER TYPE: 2003 R2 VM ON ESXi 4.0 U1

CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)

HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5x146

SAN TYPE / HBAs: NFS

OTHER: Shared NetGear 1Gb Ethernet switch

NFS Test

TEST NAME---------------------------Avg. Response Time--Avg. I/O per Second--Avg. MB per Second
Max Throughput - 100% Read..........17.28..........3,472.43..........108.51
Real Life - 60% Rand / 65% Read.....21.05..........2,726.38..........21.30
Max Throughput - 50% Read...........17.73..........3,338.72..........104.34
Random 8K - 70% Read................17.70..........3,091.17..........24.15

-







Jason Boche, vExpert

boche.net - VMware Virtualization Evangelist | http://www.boche.net/blog/

VMware Communities User Moderator | http://www.vmware.com/communities/content/community_terms/

Minneapolis Area VMware User Group Leader | http://communities.vmware.com/community/vmug/us-central/minneapolis

vCalendar Author | http://www.boche.net/blog/index.php/vcalendar/

Message was edited by: jasonboche

Updated NFS and swISCSI numbers on 1/31/10

VCDX3 #34, VCDX4, VCDX5, VCAP4-DCA #14, VCAP4-DCD #35, VCAP5-DCD, VCPx4, vEXPERTx4, MCSEx3, MCSAx2, MCP, CCAx2, A+
Reply
0 Kudos
larstr
Champion

Nice results, Jason, but I wonder why your NFS results were that much worse than iSCSI. I haven't done any testing with the Celerra myself, so it could be normal...

Lars

Reply
0 Kudos
fbonez
Expert

Hi all,

these are the results from my setup.

SERVER TYPE: 2003 R2 VM ON ESX 4.0 U1

CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram

HOST TYPE: HP DL360 G6, 24GB RAM; XEON X5540

STORAGE TYPE / DISK NUMBER / RAID LEVEL: LeftHand P4300 x 1 / 7+1 RAID 5 10K SAS drives

SAN TYPE / HBAs: iSCSI, swISCSI, 2x 82571EB 1Gb Ethernet NICs (one connection on each, MPIO enabled) - jumbo frames enabled - 4 iSCSI connections to the volume - 1x HP ProCurve switch

TEST NAME---------------------------Avg. Response Time--Avg. I/O per Second--Avg. MB per Second--CPU Utilization
Max Throughput - 100% Read..........13.94..........4289.95..........134.06..........22.17
Real Life - 60% Rand / 65% Read.....18.95..........1952.18..........15.25..........54.70
Max Throughput - 50% Read...........41.95..........1284.81..........40.13..........27.41
Random 8K - 70% Read................15.56..........2132.71..........16.66..........60.32

Best regards,

Francesco

-- If you find this information useful, please award points for "correct" or "helpful". | @fbonez | www.thevirtualway.it
Reply
0 Kudos
s_buerger
Contributor

I am out of the office on 27.01.2010 and therefore cannot read e-mails. E-mails will not be forwarded.

In urgent cases, please contact my colleagues at 0351 / 49701-150 or by e-mail at saxonia.hotline@saxsys.de.

Reply
0 Kudos