VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

the old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks Oliver -) to start a new thread here.

Oliver will add a few links between the old thread and the new one, and then he will close the old thread.

Thanks for joining in.

Reg

Christian

574 Replies
oreeh
Immortal

For reference: the old thread

Mnemonic
Enthusiast

Maybe it would be a good idea to create a new template for the results that does not take up so much space in the thread.

And maybe a way to export the results to a file, for ease of downloading, importing and comparing results.

Oh yeah... someone should take all the results from the old thread, put them into the new template and post them to this thread. :)
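A compact machine-readable template would also make the compare step scriptable. As a sketch (a hypothetical helper, assuming the dotted/underscored row format used in this thread), a few lines of Python can pull the three numbers out of a posted result row:

```python
import re

# Hypothetical parser for one result row of this thread's plain-text
# template, e.g.
#   "Max Throughput-100%Read......___26.85____......._2204.42__...._68.89___"
# Some posts use "," as the decimal separator, so accept both.
NUM = r"(\d+(?:[.,]\d+)?)"
ROW = re.compile(r"^(.+?)[._]{3,}[\s._]*" + NUM + r"[\s._]+" + NUM + r"[\s._]+" + NUM)

def parse_row(line):
    """Return {'test', 'resp_ms', 'iops', 'mb_s'} for one row, or None."""
    m = ROW.search(line)
    if not m:
        return None
    name, resp, iops, mbs = m.groups()
    to_f = lambda s: float(s.replace(",", "."))
    return {"test": name.strip(),
            "resp_ms": to_f(resp), "iops": to_f(iops), "mb_s": to_f(mbs)}
```

With rows parsed like that, dumping every post into one CSV for download/import/compare becomes a loop.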

oreeh
Immortal

> Oh yeah... someone should take all the results from the old thread, put them into the new template and post them to this thread. :)

Go ahead... :smileygrin:

Mnemonic
Enthusiast

I will leave it up to christianZ to make the new template first. Maybe if I run into unemployment, I will consider taking on the task.

meistermn
Expert

I would like to categorize the results:

Windows 2003 OS benchmarks in a VM

- Single-threaded application = 1 outstanding IO (page 20 of the old thread)

- Multithreaded application = 25 outstanding IOs (page 21)

Synthetic IOMeter benchmarks, hard disks

- Category: SAN storage

- Category: NFS storage

- Category: iSCSI storage

- Category: software-based storage (DataCore, FalconStor, LeftHand, Sanrad)

Synthetic IOMeter benchmarks, solid state disks (SSD)

- SSD vendors: Intel, STEC, Samsung and so on

Synthetic IOMeter benchmarks, PCI Express NAND

- Fusion-io card, 100,000 IOPS performance (pages 4-7)

Real file copy benchmark (xcopy)

- Copy a large 10 GB file from partition C: to D: inside a VM

- Copy a large 10 GB file between two VMs (VM1 to VM2) on the same ESX host

- Copy a large 10 GB file between two VMs (VM1 to VM2) on different ESX hosts (ESX1 and ESX2)

- Create many small random files and run the same tests as for the large files

Cold migration benchmark

- Cold-migrate 4 VMs from LUN1 to LUN2 at the same time

Database benchmark

- MS DB Hammer tool

- IOMeter benchmark with DB-specific IOMeter parameters
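For the synthetic IOMeter categories, it helps to pin down the four access specifications this thread's template reports. As a sketch: the read/random percentages come straight from the test names, while the 32 KB / 8 KB block sizes are my assumption from the usual template, so check the thread's .icf file before relying on them.

```python
# The four access specifications used in this thread's result tables.
# Percentages are decoded from the test names; block sizes (32 KB for the
# sequential throughput tests, 8 KB for the random ones) are assumptions.
ACCESS_SPECS = [
    {"name": "Max Throughput-100%Read",  "block_kb": 32, "read_pct": 100, "random_pct": 0},
    {"name": "RealLife-60%Rand-65%Read", "block_kb": 8,  "read_pct": 65,  "random_pct": 60},
    {"name": "Max Throughput-50%Read",   "block_kb": 32, "read_pct": 50,  "random_pct": 0},
    {"name": "Random-8k-70%Read",        "block_kb": 8,  "read_pct": 70,  "random_pct": 100},
]
```

The single-threaded vs. multithreaded split above would then just be a second axis (1 vs. 25 outstanding IOs) over the same four specs.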

ekos
Contributor

Hi guys,

I did some testing on our ESX hosts and I'm getting the feeling that there's room for improvement.

Although I'm finding it hard to compare our tests to the tests posted earlier, because there's always something different in each configuration.

Does anyone have an opinion on our test results?

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL385, 16GB RAM; 2x AMD Opteron 285 (2.6 GHz), Dualcore, QLA4050C

STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp 3140 / 41 Disks x 274 GB / Double Parity

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read             26.85            2204.42       68.89
RealLife-60%Rand-65%Read            21.82             504.14        3.94
Max Throughput-50%Read              14.58             577.82       18.06
Random-8k-70%Read                   37.06             489.40        3.82
##################################################################################
EXCEPTIONS: CPU Util. 32% - 15% - 18% - 15%
##################################################################################

Mnemonic
Enthusiast

That is indeed very poor performance.

What does your NetApp web interface tell you about the load on the NetApp boxes? Are you sure nothing else is running?

I don't know if it is possible on fibre, but could it be a link negotiation problem?

Jakobwill
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM - Win2k3 R2

CPU TYPE / NUMBER: CPU / 1

HOST TYPE: VM, 1GB RAM; 1x vCPU

STORAGE TYPE / DISK NUMBER / RAID LEVEL: VMDK/VMFS via FC to SANmelody mirror

2x SANmelody 2.04 update 1, each with a LUN from the same array: HDS AMS2100 with 15x 10k SAS 400 GB disks and 2 GB cache.

The SANmelody servers have 8 GB RAM - the LUN is spread (long description made short) across 15 spindles.

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec  CPU
##################################################################################
Max Throughput-100%Read            1.528815          27032        844.767   100%   VI3 Client: 108 MB/sec r, 1 MB/sec w (100% cpu)
RealLife-60%Rand-65%Read          17.870013           2253         17.602    62%   VI3 Client: 13 MB/sec r, 7 MB/sec w (50% cpu)
Max Throughput-50%Read             3.970814          12957        404.909    67%   VI3 Client: 217 MB/sec r, 217 MB/sec w (82% cpu)
Random-8k-70%Read                 15.559686           2802         21.897    57%   VI3 Client: 15 MB/sec r, 7 MB/sec w (57% cpu)
##################################################################################
EXCEPTIONS: CPU Util. is listed in the CPU column; throughput observed through the VI3 Client follows each row.

I know the first test (100% read) is off because the vCPU was at 100%, so the timing is skewed. :)

But the other results are pretty impressive - or what's your opinion?

Forgot to mention... these tests were done while in production, so there were 30 VMs working against the same SANmelody servers (on different ESX servers, of course :) ).

RAID description, the long version:

2 RAID5 groups with 7+1 10k SAS 400 GB disks.

In each RAID group we create 4x 640 GB disks - so in total 8 disks of 640 GB.

We take one 640 GB disk from each group and create a LUN of 1280 GB, which is presented to the DataCore server - one for each - and put it in a pool, from which I create a virtual volume that is presented as a VMFS to ESX. On the VMFS I create a VMDK for the VM I am testing on. :)

Sorry, it's a bit detailed. :)

christianZ
Champion

Well, your numbers are not bad - but one wants to know how much cache/RAM your SANmelody servers have; you have 15 disks there, but how many spindles was your test LUN configured on?

Jakobwill
Enthusiast

Description added. Smiley Happy

In short, the test LUN is spread across every disk - almost like EVA storage systems do it.

christianZ
Champion

Well, to me it looks very good - but one shouldn't forget that you have ca. 8 GB of cache in your SANmelody servers and the test file is only 4 GB in size.

You could try making the test file bigger, e.g. 20 GB, and then test again.

Anyway thanks for that.

Reg

Christian
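For reference, IOMeter controls the size of its iobw.tst test file through the per-worker "Maximum Disk Size" setting, which is given in 512-byte sectors. Growing the file from 4 GB to 20 GB, as suggested above, works out like this (a quick sketch):

```python
# IOMeter's "Maximum Disk Size" worker setting is expressed in
# 512-byte sectors, so convert the desired file size accordingly.
SECTOR_BYTES = 512

def gb_to_sectors(gb: int) -> int:
    """Convert a desired test-file size in GB to IOMeter sectors."""
    return gb * 1024**3 // SECTOR_BYTES

print(gb_to_sectors(4))   # 8388608  -> the 4 GB file used above
print(gb_to_sectors(20))  # 41943040 -> a 20 GB file that dwarfs the 8 GB cache
```

Delete the old iobw.tst first, or IOMeter will keep reusing the smaller file.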

iancampbell
Contributor

The first test is on a 5-disk RAID 5 array and the second is on a 6-disk RAID 5 array. It's interesting to compare these results to jmacdaddy's MD3000i RAID 5 results (page 22 of the original unofficial test results thread), as the MD3000i is a Dell-badged DS3300. I'll be receiving the cache module upgrades sometime this week and will upload test results showing any difference they make when I get the time.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS 1X VM WIN2003 R2 SP2 / ESX 3.5 ON IBM DS3300

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380 G5, 10GB RAM, 2 x Intel E5440, 2.83GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 (512MB CACHE/SP) / 5 SAS 15k/ R5

SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (Intel 82571EB NIC)

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read             16.99            3486.8       108.9
RealLife-60%Rand-65%Read            48.89            1062.7         8.3
Max Throughput-50%Read              22.9             2579.9        80.6
Random-8k-70%Read                   44.72            1204           9.41
##################################################################################
EXCEPTIONS: No jumbo frames, no flow control; the Ethernet switch has a storage VLAN but is shared with the LAN.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS 1X VM WIN2003 R2 SP2 / ESX 3.5 ON IBM DS3300

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380 G5, 10GB RAM, 2 x Intel E5440, 2.83GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 (512MB CACHE/SP) / 6 SAS 15k/ R5

SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (Intel 82571EB NIC)

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read             16.7             3552         111
RealLife-60%Rand-65%Read            40.6             1293.2        10.1
Max Throughput-50%Read              20.33            2955.16       92.3
Random-8k-70%Read                   36.8             1449.2        11.3
##################################################################################
EXCEPTIONS: No jumbo frames, no flow control; the Ethernet switch has a storage VLAN but is shared with the LAN.

christianZ
Champion

@iancampbell

Thanks for that.

Yes, basically the MD3000i and DS3300 are the same OEM boxes (LSI Engenio), so the results are very similar.

When I see the numbers from the MD3000i and DS3300 with SAS disks, I wonder about the Infortrend results on SATA disks (page 21) - very impressive, IMHO.

It's a pity that the administration and support are not on the same level.

christianZ
Champion

Last weekend I listened to the "Talk Shoe" recorded episode no. 40 ()

And I can't agree with the statement that all the benchmarking tests (especially storage) don't matter.

Well, until now I haven't seen any storage gear that benchmarked poorly (I'm speaking about rational tests) and then worked fast in production (or vice versa).

Of course one should use one's own judgment when analyzing benchmark results - remember, storage is crucial for your VI infrastructure, and it can be the most expensive component in it.

Benchmarks are a kind of workload too - and if they are mixed, they can give some interesting viewpoints.

It would be nice to know the max. throughput of series "A" from vendor "X" when deciding which gear to choose - this way one can avoid buying insufficient boxes.

It would be nice to know that one can get a specific throughput from vendor "X", or from vendor "Y" at half the price.

As a customer, one can get better proposals by comparing competitors' products.

For an SMB it would be interesting to know that the needed performance can also be bought from third-party storage vendors, at a smaller price.

But I agree (as I wrote in my first posting in the original thread) that benchmark results shouldn't be the only decision factor.

The quality of service, reliability, vendor support, management/simplicity of use and configuration, vendor relationships, distribution, ... as well as a healthy dose of common sense, shouldn't be forgotten.

Anyway, I have listened to all the episodes so far - good work, keep it going.

Just my opinion, but maybe I'm not alone here.

Reg

Christian

iancampbell
Contributor

I installed the 1GB controller cache memory upgrades today and the results are below. Strangely, the performance is actually slightly worse with the cache upgrade. Would I be right in thinking that the cache memory only improves performance when the system is heavily loaded, or should I be seeing better performance on an unloaded system too?

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS 1X VM WIN2003 R2 SP2 / ESX 3.5 ON IBM DS3300

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380 G5, 10GB RAM, 2 x Intel E5440, 2.83GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 (1GB CACHE/SP) / 6 SAS 15k/ R5

SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (Intel 82571EB NIC)

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read             16.74            3537.78      110.55
RealLife-60%Rand-65%Read            40.38            1278.09        9.98
Max Throughput-50%Read              21.29            2760.94       86.27
Random-8k-70%Read                   37.79            1396.66       10.91
##################################################################################
EXCEPTIONS: No jumbo frames, no flow control; the Ethernet switch has a storage VLAN but is shared with the LAN.

Here are the results prior to the upgrade with the 512MB cache:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS 1X VM WIN2003 R2 SP2 / ESX 3.5 ON IBM DS3300

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP DL380 G5, 10GB RAM, 2 x Intel E5440, 2.83GHz, QuadCore

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 (512MB CACHE/SP) / 6 SAS 15k/ R5

SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (Intel 82571EB NIC)

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read             16.7             3552         111
RealLife-60%Rand-65%Read            40.6             1293.2        10.1
Max Throughput-50%Read              20.33            2955.16       92.3
Random-8k-70%Read                   36.8             1449.2        11.3
##################################################################################
EXCEPTIONS: No jumbo frames, no flow control; the Ethernet switch has a storage VLAN but is shared with the LAN.

Mnemonic
Enthusiast

The test is designed to eliminate the cache from the equation.

The test file is supposed to be so large that the cache will not matter. Cache will always matter a little, though - and in real life the 1GB of extra cache should do you a lot of good.
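As a back-of-the-envelope sketch of that reasoning (a deliberately simplistic model I'm assuming here - for purely random reads the best a cache can do is hold a fraction of the test file, so the expected hit rate is roughly cache size divided by working-set size; real controllers also do read-ahead and write coalescing):

```python
# Naive model: expected random-read cache hit rate ~= cache / working set,
# capped at 1.0. This ignores read-ahead, write caching and LRU effects.
def approx_hit_rate(cache_gb: float, file_gb: float) -> float:
    return min(cache_gb / file_gb, 1.0)

# With the 4 GB IOMeter test file used in the results above:
print(approx_hit_rate(0.5, 4.0))  # 0.125 -> a 512 MB cache covers ~12.5%
print(approx_hit_rate(1.0, 4.0))  # 0.25  -> a 1 GB cache covers ~25%
```

Under this model, doubling the cache only moves the hit rate a little on a 4 GB file, which is consistent with the nearly unchanged numbers above.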

christianZ
Champion

As I remember, one can configure how the cache is used for read/write operations - have you done that?

HdeJongh
Contributor

Here are my test results: one in a VM with jumbo frames, one in a VM without jumbo frames (same host), and one on a physical machine with jumbo frames (all 3 use MPIO round robin).

I would love to hear what you think of these results - I worry a bit about the servers without jumbo frames.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: PHYS.

CPU TYPE / NUMBER: CPU / 1, JUMBO FRAMES, MPIO RR

HOST TYPE: Dell PE2950, 4GB RAM; 2x XEON 5335, 2,00 GHz,

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS5000 x 1 / 14+2 Disks (sata)/ R5

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read             10.9             5488         171.5
RealLife-60%Rand-65%Read            39.4             1107          13.2
Max Throughput-50%Read              10.1             5763         180.1
Random-8k-70%Read                   32.7             1429          11.1
##################################################################################
EXCEPTIONS: CPU Util.-XX%;
##################################################################################

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM .

CPU TYPE / NUMBER: VCPU / 1, NO JUMBO FRAMES, MPIO RR

HOST TYPE: Dell PE2950, 32GB RAM; 2x XEON 5440, 2,83 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS5000 x 1 / 14+2 Disks (sata) / R5

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read              3.39            1697          53
RealLife-60%Rand-65%Read            22.18             500           3.9
Max Throughput-50%Read               4.2              695          43
Random-8k-70%Read                   26.9              502           3.9
##################################################################################
EXCEPTIONS: CPU Util.-XX%;
##################################################################################

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM.

CPU TYPE / NUMBER: VCPU / 1, JUMBO FRAMES, MPIO RR

HOST TYPE: Dell PE2950, 32GB RAM; 2x XEON 5440, 2,83 GHz, DC

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS5000 x 1 / 14+2 Disks (sata) / R5

##################################################################################
TEST NAME                     Av. Resp. Time (ms)  Av. IOs/sec  Av. MB/sec
##################################################################################
Max Throughput-100%Read              9.6             5093         159.00
RealLife-60%Rand-65%Read            26.6             1678          13.11
Max Throughput-50%Read               8.5             4454         139.20
Random-8k-70%Read                   31.3             1483          11.58
##################################################################################
EXCEPTIONS: CPU Util.-XX%;
##################################################################################

 

christianZ
Champion

> RealLife-60%Rand-65%Read: 22.18 ms, 500 IOs/sec, 3.9 MB/sec

The results without jumbo frames are quite poor. How have you configured the iSCSI connection there - iSCSI from ESX, or the MS iSCSI initiator in the VM?
