VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

The old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Regards,

Christian

574 Replies
captainflannel
Contributor

SERVER TYPE: HP ProLiant DL360 G7
CPU TYPE / NUMBER: Intel 5660 x2 @ 2.8GHz / 96GB RAM
HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM / hosted by ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12x 600GB 15k / RAID10

Jumbo Frames enabled, NetFlow enabled, using dual 1Gb NICs for a total of 2Gb of available iSCSI bandwidth to this VMFS datastore. Connected via an HP ProCurve 2910al-24G switch. VMFS hosted via iSCSI.
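(For anyone duplicating this setup, the jumbo-frame MTU is applied on an ESXi 4.1 host roughly like the sketch below. The vSwitch name, port group name and addresses are placeholders for whatever your iSCSI vmkernel ports actually use, and the same MTU has to be set on the physical switch and the array as well.)

# Raise the MTU on the vSwitch carrying the iSCSI vmkernel ports
esxcfg-vswitch -m 9000 vSwitch1

# On 4.1 an existing vmkernel port can't simply be changed in place, so the usual
# approach is to remove it and re-create it with the larger MTU
esxcfg-vmknic -d "iSCSI-1"
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 "iSCSI-1"

# Confirm the MTU took effect
esxcfg-vmknic -l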

Virtual Machine System Drive (C:)
Hosted via 2TB VMFS Volume, 2 Storage Paths, RoundRobin
Test name (System 2TB)       Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      16.26     3678       114        6%
RealLife-60%Rand-65%Read     12.07     4704       36         28%
Max Throughput-50%Read       34.52     1741       54         5%
Random-8k-70%Read            11.92     4826       37         28%


 cpu load seems off perhaps...
SERVER TYPE: HP ProLiant G7
CPU TYPE / NUMBER: Intel 5660 x2 / 96GB RAM
HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM / hosted by ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12x 600GB 15k / RAID10


Jumbo Frames enabled, NetFlow enabled, using dual 1Gb NICs for a total of 2Gb of available iSCSI bandwidth to this VMFS datastore. Connected via an HP ProCurve 2910al-24G switch. VMFS hosted via iSCSI.

Virtual Machine Data Drive (D:)
Hosted via 300GB VMFS Volume, 2 Storage Paths, RoundRobin
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      19.04     3136       98         1%
RealLife-60%Rand-65%Read     16.88     3320       25         25%
Max Throughput-50%Read       14.21     4254       132        1%
Random-8k-70%Read            17.25     3200       25         1%

SERVER TYPE: Dell R310
CPU TYPE / NUMBER: Intel Xeon X3323 @ 2.5GHz / 24GB RAM
HOST TYPE: Server 2008 64bit / 24GB RAM / direct-attached iSCSI volume / 1 path
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell MD3000i / 9x 1TB 7200 / RAID5

Jumbo Frames enabled. Connected via a Dell PowerConnect 6248 switch.
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      28.27     2116       66         1%
RealLife-60%Rand-65%Read     129.60    338        2          16%
Max Throughput-50%Read       21.17     2751       85         1%
Random-8k-70%Read            148.83    305        2          9%
qwerty22
Contributor

CaptainFlannel,

Here is what I am getting. I am using a VNXe3100 with 10 300GB SAS disks in two RAID5 groups. I am using iSCSI with a 512GB VMFS. I have not turned on jumbo frames yet, nor have I set up multipath I/O - just a single 1Gb Ethernet port. The VM is a Windows 2003 R2 server with 4 GB of memory and 2 vCPUs.

I am troubled by the real world performance numbers, where your VNXe clearly outperforms mine. The random numbers seem low to me also.  What I find interesting is that the Max throughput on my box is the only number that is much better than yours.  I wonder why?

Best Regards.

SERVER TYPE: Dell R710
CPU TYPE / NUMBER: Xeon X5660 / 2 processors
HOST TYPE: Windows 2003 R2 32bit
STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe3100 / 10x 300GB 15K SAS / RAID 5, no jumbo frames
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      18.04     3348       104        13%
RealLife-60%Rand-65%Read     31.82     1681       13         0%
Max Throughput-50%Read       18.78     3254       101        12%
Random-8k-70%Read            30.30     1730       13         0%
captainflannel
Contributor

Interesting to compare. At the moment I am just looking at my tests performed on our D: drive. With that, our numbers are similar; however, with our 12 disks in a single RAID10 it looks like we are getting increased I/O on the 50% read tests. Is your RAID5 two separate volumes, or a single one? I would have thought the RAID10 would show much different numbers than the RAID5 with a similar number of disks.

Actually, looking further at your numbers, the read I/O seems very similar, but when writes are involved I do see the increased I/O available in the RAID10.

Interesting that your 100% read is a little faster. What kind of networking equipment are you using?

captainflannel
Contributor

SERVER TYPE: HP ProLiant DL360 G7
CPU TYPE / NUMBER: Intel 5660 x2 @ 2.8GHz / 96GB RAM
HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM / hosted by ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12x 600GB 15k / RAID10
Multipath set for two 1Gb Ethernet links via Round Robin. Jumbo Frames enabled.
IOMeter tests run on an unformatted virtual disk added to this host.
Test name (physical disk)    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      1.01      56453      1764       2%
RealLife-60%Rand-65%Read     25.42     2221       17         1%
Max Throughput-50%Read       13.22     4539       141        19%
Random-8k-70%Read            17.65     3131       24         1%
PinkishPanther
Contributor

My company has spent a significant amount of time on iSCSI / VMWare benchmarks over the last few months.

Using a Dell R710 connected to an MD3220i with 500GB 7.2K drives in the first shelf and 600GB 15K drives in the second shelf.

We originally only had the 7.2K drives and configured them with RAID6 and RAID10 (equal number of disks).

The Max Throughput-100%Read and Random-8k-70%Read results were essentially the same on both configurations - approximately 128 MB/s and 135 MB/s respectively, even with round robin configured.

RAID6 on these drives for RealLife-60%Rand-65%Read and Random-8k-70%Read throughput was about 8.7 and 8.8 MB/sec

RAID10 was 17 and 15 MB/sec.

RAID10 on the 15K drives was basically double the 7.2K at 31 and 33 MB/sec (we did briefly see 37 and 42 but are unable to repeat it).

The most interesting thing we discovered was that the iops need to be optimized for this array when using round robin.

The command is esxcli nmp roundrobin setconfig --type "iops" --iops=3 --device (your lun ID).

Once this command was run against our LUN, the Max Throughput-100%Read and Random-8k-70%Read tests hit the limit of the NICs; with 3 x 1Gbit NICs we get over 300 and 315 MB/sec.
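(For anyone following along, here is roughly the full sequence on an ESXi 4.x host - a sketch only: naa.xxxxxxxx stands in for your own LUN ID, and the values are simply the ones discussed above, not a recommendation for every array.)

# List the storage devices to find the LUN's naa identifier
esxcli nmp device list

# Make sure the LUN is using the Round Robin path selection policy
esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR

# Rotate to the next path after 3 I/Os instead of the default 1000
esxcli nmp roundrobin setconfig --type "iops" --iops=3 --device naa.xxxxxxxx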

Gabriel_Chapman
Enthusiast

I've been using a beta tool for testing storage I/O on our ESXi boxes and our storage array and getting some pretty good results that are a little more real-world oriented than the four tests run here. I've run mixed workloads of Exchange 2003/2007, SQL, OLTP, Oracle, and various other tests from VMs in tandem to emulate real-world heavy transactional loads - simulating the impact of prolonged storage I/O instead of single tests that try to achieve a "best result". Speak with your VMware rep about the Storage IO Analyzer beta, which has a very good set of workloads you can run from a VM to simulate multiple workloads. One caveat: I've found that Windows VMs are just not efficient enough to really tax a real Tier 1 storage system. Running one workload from each attached host tends to work better, at least in my case, when trying to truly hammer our boxes. I've managed to garner 80k IOPS from several VMs running in parallel, with max throughput rates of around 1.3 GBps.

Ex Gladio Equitas
captainflannel
Contributor

Interesting - when we switched our IOPS setting from the default 1,000 to 3 we definitely see an increase in the max throughput tests. Significant MBps and IOPS increases.
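(The change is easy to confirm per LUN before re-running IOMeter - again just a sketch, with naa.xxxxxxxx as a placeholder for the device ID; the output should show the I/O operation limit you just set.)

# Show the current round robin configuration for the LUN
esxcli nmp roundrobin getconfig --device naa.xxxxxxxx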

SERVER TYPE: HP ProLiant G7
CPU TYPE / NUMBER: Intel 5660 x2 / 96GB RAM
HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM / hosted by ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12x 600GB 15k / RAID10
Jumbo Frames enabled, NetFlow enabled, using dual 1Gb NICs for a total of 2Gb of available iSCSI bandwidth to this VMFS datastore. Connected via an HP ProCurve 2910al-24G switch. Round Robin IOPS = 3.
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      11.02     5283       165        1%
RealLife-60%Rand-65%Read     17.19     3241       25         25%
Max Throughput-50%Read       11.37     5339       166        1%
Random-8k-70%Read            17.57     3139       24         1%
qwerty22
Contributor

captainflannel,

I let the VNXe autoconfigure the pools and it created two RAID 5 (4+1) groups under the performance pool, one hot spare and one unused drive. From what I understand, the datastore uses all 10 of the drives. As far as the switch is concerned, I am using a Cisco SGE2000, which is a small-business product with 24 gigabit ports.

I tried using an NFS datastore, but the performance was very poor - the numbers were about four times lower on the real-world test. I've been working with EMC Tech Support for almost two months but have not really made any progress. Did you try NFS, and if so, how were your numbers?

Best Regards

JaFF
Contributor

Hi,

I am currently on paternity leave until the 13/08/2011.

If you require assistance, please call our helpdesk on 1300 10 11 12.

Alternatively, email service@anittel.com.au

Regards,

James Ackerly

gokart
Contributor

First off, I just wanted to thank everyone for their contributions to this thread. It helped me immensely when planning my virtualization project, and I really appreciate all the time people spend benchmarking their storage. It really pushed me in the EqualLogic direction, and based on the performance of the new setup, I'm very glad indeed.

So my setup consists of a PS4000XV-600, two stacked PowerConnect 6248s, and three R610 ESXi hosts (the Dell show, basically). My SAN is configured with jumbo frames end-to-end, flow control on, and STP and unicast storm control disabled on the switches. Each host has four active links to the SAN; sadly the PS4000 is limited to two active links, but I'm less worried about that now after looking at the numbers.

Firstly, I benched the array using an RDM from my B2D box with Dell's MPIO initiator:

SERVER TYPE: Dell NX3100

CPU TYPE / NUMBER: Intel 5620 x2 24GB RAM

HOST TYPE: Server 2008 64bit

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS4000XV-600 14 * 600GB 15K SAS @ R50

Access Specification Name    IOPS   MBps (binary)   Avg. Response Time (ms)
Max Throughput-100%Read      6706   209             17
RealLife-60%Rand-65%Read     4298   33.5            22.5
Max Throughput-50%Read       7956   248             14
Random-8k-70%Read            4232   33              23.5

Then I got brave and configured one of my ESXi hosts with the Dell MEM plug-in for iSCSI, using the software VMware iSCSI initiator and threw a bare-bones Win2k8R2 Guest on there:

Access Specification Name    IOPS   MBps            Avg. Response Time (ms)
Max Throughput-100%Read      7163   223             8.3
RealLife-60%Rand-65%Read     4516   35              11.4
Max Throughput-50%Read       6901   215             8.4
Random-8k-70%Read            4415   34              11.9

So I was quite pleased with that, but then I noticed I only had two active links to the storage while I've got four links on my host - so I upped the member sessions to 4 as outlined here: http://modelcar.hk/?p=2771

Access Specification Name    IOPS   MBps (binary)   Avg. Response Time (ms)
Max Throughput-100%Read      7195   224             8.3
RealLife-60%Rand-65%Read     4375   34              11.9
Max Throughput-50%Read       7713   241             7.6
Random-8k-70%Read            4217   32.9            12.3

That gave me a pretty good boost on my sequential numbers but lessened my random-ish workloads and increased my latency a little bit; not sure which I'll go with... But overall very pleased so far!

1538moss
Contributor

On vacation, back 25/7.

needmorstuff007
Contributor

EMC VNX5500, 200GB FAST Cache (4x 100GB EFD, RAID 1)

Pool of 25x300gb 15k disks

Cisco UCS blades

Access Specification Name    IOPS    MBps (binary)   Avg. Response Time (ms)
Max Throughput-100%Read      16068   502             1.71
RealLife-60%Rand-65%Read     3498    27              10.95
Max Throughput-50%Read       12697   198             0.885
Random-8k-70%Read            4145    32.38           8.635

VCP3 / VCP4 / VCP5 VCAPDCD-111 VCAPDCA-219
andy0809
Contributor

qwerty22, here's an NFS datastore on a VNXe3300, is this comparable to what you saw on your VNXe3100?

SERVER TYPE: HP DL360 G5
CPU TYPE / NUMBER: Intel X5450 x2 32GB RAM
Host Type: Windows 2008 R2 64bit / 1vCPU / 4GB RAM / ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe3300 / 21x600GB SAS 15k / RAID 5

Access Specification         IOPS   MB/s   Avg IO response time (ms)
Max Throughput 100% read     3428   107    18
RealLife-60% Rand/60% Read   596    5      101
Max Throughput 50% read      3183   99     19
Random-8k 70% Read           562    4      107
andy0809
Contributor

iSCSI results

SERVER TYPE: HP DL360 G5
CPU TYPE / NUMBER: Intel X5450 x2 32GB RAM
Host Type: Windows 2008 R2 64bit / 1vCPU / 4GB RAM / ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe3300 / 21x600GB SAS 15k / RAID 5

Access Specification         IOPS   MB/s   Avg IO response time (ms)
Max Throughput 100% read     3502   109    17
RealLife-60% Rand/60% Read   3738   29     14
Max Throughput 50% read      5783   181    10
Random-8k 70% Read           3602   28     15
qwerty22
Contributor

Andy0809, yes, that is close to what I was seeing with NFS - very poor IOPs and high latency. I understand that EMC has found and corrected the NFS issue, and a new software release has been posted to fix the problem. I am out of the office currently, so I haven't had the opportunity to install and test it, but at least one other person has and has reported back numbers similar to iSCSI. Best Regards.

captainflannel
Contributor

Performed the upgrade to the latest VNXe software release and ran our tests again to see if there was any improvement for iSCSI. Was not expecting to see much, but we do see a big improvement in 100% read performance over the previous tests.

SERVER TYPE: HP DL360
CPU TYPE / NUMBER: 2x Xeon 5660 @ 2.8GHz / 96GB RAM
HOST TYPE: ESXi VM running Server 2008 64bit, 24GB RAM, 4vCPU
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 / 12x 600GB 15k / RAID10
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      8.84      6768       211        12%
RealLife-60%Rand-65%Read     17.89     3116       24         27%
Max Throughput-50%Read       10.90     5580       174        1%
Random-8k-70%Read            17.47     3177       24         2%
myzyr
Contributor

CaptainF, did you get those numbers for iSCSI after the VNXe OS 2.1.x update, or the 2.0.3 update?
It seems my numbers are somewhat comparable to others who have the 3100 series.


My results below:

SERVER TYPE: IBM x3550
CPU TYPE / NUMBER: 2x Intel Quad Core 3.x GHz
HOST TYPE: ESX 4.1 / W2K8 x64
STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe 3100 / 12x 300GB / RAID 5
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      15.40     3885       121        22%
RealLife-60%Rand-65%Read     14.73     3584       28         0%
Max Throughput-50%Read       12.86     4741       148        22%
Random-8k-70%Read            13.53     3811       29         0%
captainflannel
Contributor

Here are results for a recent setup of an HP P4500; the hosts are all using iSCSI with vSphere.

SERVER TYPE: HP ProLiant DL360 G7
CPU TYPE / NUMBER: Intel Xeon 5660 @ 2.8GHz (2 processors)
HOST TYPE: Server 2008 R2, 4vCPU, 12GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP P4500 SAN, 24x 600GB 15K in Network RAID 10
4 paths to virtual iSCSI IP, Round Robin host IOPS policy set to 1000 (default), Jumbo Frames enabled, NetFlow enabled
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      15.02     4001       125        18%
RealLife-60%Rand-65%Read     18.33     2132       16         53%
Max Throughput-50%Read       10.71     5622       175        20%
Random-8k-70%Read            11.76     2907       22         59%


SERVER TYPE: HP ProLiant DL360 G7
CPU TYPE / NUMBER: Intel Xeon 5660 @ 2.8GHz (2 processors)
HOST TYPE: Server 2008 R2, 4vCPU, 12GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP P4500 SAN, 24x 600GB 15K in Network RAID 10
4 paths to virtual iSCSI IP, Round Robin host IOPS policy set to 1, Jumbo Frames enabled, NetFlow enabled
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      8.45      7119       222        22%
RealLife-60%Rand-65%Read     15.68     2423       18         55%
Max Throughput-50%Read       9.75      6000       187        25%
Random-8k-70%Read            11.71     2918       22         61%


SERVER TYPE: HP ProLiant DL360 G7
CPU TYPE / NUMBER: Intel Xeon 5660 @ 2.8GHz (2 processors)
HOST TYPE: Server 2008 R2, 4vCPU, 12GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP P4500 SAN, 24x 600GB 15K in Network RAID 10
4 paths to virtual iSCSI IP, Round Robin host IOPS policy set to 3, Jumbo Frames enabled, NetFlow enabled
Test name                    Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read      8.58      7013       219        22%
RealLife-60%Rand-65%Read     17.93     2200       17         53%
Max Throughput-50%Read       10.07     5806       181        24%
Random-8k-70%Read            11.67     2914       22         61%
captainflannel
Contributor

Have only tested with iSCSI - no NFS in our environment at the moment.

Henriwithani
Contributor

I made several tests with different configurations and here are the results. Check also my post about the tests on my blog.

-Henri

http://henriwithani.files.wordpress.com/2011/08/mbps1.png  

http://henriwithani.files.wordpress.com/2011/08/mbps21.png

http://henriwithani.files.wordpress.com/2011/08/iops11.png

http://henriwithani.files.wordpress.com/2011/08/iops22.png

http://henriwithani.files.wordpress.com/2011/08/lat11.png

http://henriwithani.files.wordpress.com/2011/08/lat21.png 

-Henri Twitter: http://twitter.com/henriwithani Blog: http://henriwithani.wordpress.com/