VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

The old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Reg

Christian

574 Replies
dwinslow
Contributor

I used Iometer and the OpenPerformanceTest32.icf from http://vmktree.org/iometer/ to do my testing.  I am fairly new to testing with Iometer.  However, I did notice that I got the most consistent results when testing against an eager zeroed thick disk on a thick provisioned volume.  When I tested against a thin disk on a thin volume, I kept getting results that were wildly different.

In addition, I did this test using vSphere 5 native MPIO as specified in the Dell document "Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage".  My host was connected with 4 dedicated iSCSI NICs to a stack of two PowerConnect 6248 series switches.

SERVER TYPE: Dell PE R710
CPU TYPE / NUMBER: 2x E5606
HOST TYPE: Windows 2008 R2 VM with 4GB RAM running inside vSphere 5
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell EqualLogic PS6500E / 48 SATA / RAID 10
Test name                    Latency    Avg iops    Avg MBps    cpu load
Max Throughput-100%Read      0.00       4338        135         0%
RealLife-60%Rand-65%Read     13.83      4823        37          2%
Max Throughput-50%Read       87.51      5343        166         0%
Random-8k-70%Read            11.44      4658        36          0%
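
A quick way to sanity-check posted results like these (handy when the forum mangles a table) is that Avg MBps should roughly equal Avg IOPS times the access spec's block size. A minimal sketch, assuming the usual block sizes in the vmktree OpenPerformanceTest spec (32 KB for the two Max Throughput tests, 8 KB for the RealLife and Random-8k tests):

```
# Rough consistency check for Iometer results posted in this thread:
# Avg MBps should be close to Avg IOPS x block size.
# Block sizes assumed from the vmktree OpenPerformanceTest spec:
# 32 KB for the Max Throughput tests, 8 KB for the 8k random tests.

BLOCK_SIZE_KB = {
    "Max Throughput-100%Read": 32,
    "RealLife-60%Rand-65%Read": 8,
    "Max Throughput-50%Read": 32,
    "Random-8k-70%Read": 8,
}

def implied_mbps(test_name: str, avg_iops: float) -> float:
    """MB/s implied by the reported IOPS for a given access spec."""
    return avg_iops * BLOCK_SIZE_KB[test_name] * 1024 / 1_000_000

# The PS6500E numbers above:
for name, iops, reported in [
    ("Max Throughput-100%Read", 4338, 135),
    ("RealLife-60%Rand-65%Read", 4823, 37),
    ("Max Throughput-50%Read", 5343, 166),
    ("Random-8k-70%Read", 4658, 36),
]:
    print(f"{name}: reported {reported} MBps, implied {implied_mbps(name, iops):.0f} MBps")
```

If the two columns disagree badly, either the run or the transcription is suspect.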
s1xth
VMware Employee

Looks good to me. You won't get much more out of SATA drives - maybe a little with Jumbo Frames if you aren't using them already. Are you using Intel or Broadcom NICs for the iSCSI connections?

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
dwinslow
Contributor

Hi s1xth - yes, I am using jumbo frames end to end.  Also, yes - Broadcom NICs using the VMware iSCSI initiator.

TristramCheer
Contributor

We just got some demo gear in: an HP P4300 G2 7.2TB iSCSI SAN in the 2 x 1Gbit network link setup and a couple of HP DL360 G7s with dual Xeon 5650 CPUs and 12GB of RAM. It's pretty damn quick:

SERVER TYPE: VM
CPU TYPE / NUMBER: VCPU / 24
HOST TYPE: HP DL360 G7, 12GB RAM; 2x Xeon 5650, 2.66 GHz, 2 x 1Gbit NIC RR iSCSI, VMware software iSCSI
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP P4300 G2 x 1 / 8 x 450GB 15k RPM SAS / RAID 5 / bonded 1Gbit NICs
##################################################################################
TEST NAME-- Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec----
##################################################################################
Max Throughput-100%Read........_____29.9____..........___7041____.........___220____
RealLife-60%Rand-65%Read......____77.7____..........___2272____.........___17.75_____
Max Throughput-50%Read..........___37.9_____..........___3928____.........___122.76____
Random-8k-70%Read.................__64.36_____..........____2601___.........___20.32____
EXCEPTIONS: CPU Util. Never above 6%;

I know it's the 2 x 1Gbit NICs on the P4300 slowing things down; I do wonder how fast it would be with the 10Gbit NIC kit.
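
For what it's worth, the 220 MB/s on the 100% read test is already at roughly the wire limit of two gigabit links, so the sequential rows say more about the network than about the P4300's disks; the random rows are the better indicator. Rough ceilings below, assuming ~93% usable efficiency after TCP/iSCSI overhead (real numbers vary with switches and settings):

```
# Back-of-the-envelope iSCSI throughput ceilings. The 0.93 efficiency
# factor is an assumption; real overhead varies with TCP/iSCSI settings.

def iscsi_ceiling_mbps(links: int, link_gbps: float, efficiency: float = 0.93) -> float:
    """Approximate usable MB/s for N iSCSI links of a given speed."""
    return links * link_gbps * 1000 / 8 * efficiency

print(f"2 x 1GbE : ~{iscsi_ceiling_mbps(2, 1):.0f} MB/s")   # ~232 MB/s - where the P4300 sits now
print(f"1 x 10GbE: ~{iscsi_ceiling_mbps(1, 10):.0f} MB/s")  # ~1162 MB/s - the 10GbE kit's ceiling
```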

johnsojm
Contributor

Hello again.

I have bounced back and forth between various configs. I have modified my RAID to use all 6 disks in a RAID10 config. My 100% read speeds have increased quite drastically (I know, pointless), but the latency on the Max Throughput-50%Read test hasn't changed. I don't understand where the problem lies... I looked through these posts and found another user, SteveEsx, with nearly identical hardware to mine, and his numbers are far superior (a 1.3ms latency compared to my ~200ms). I will be contacting him later.

Here are the old numbers with 4 15k 600GB sas disks:

SERVER TYPE: Dell R710
CPU TYPE / NUMBER: Xeon x5650 @ 2.67GHz x 2
HOST TYPE: ESXi 4.1 - Windows Server 2008 R2 Ent 80GB HD (thin provisioned) 10GB ram VM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Raid10 4 disks 600GB 15k SAS, PERC H700
Test name               Latency          Avg iops     Avg MBps     cpu load
Max Throughput-100%Read          0.00          7509          234          0%
RealLife-60%Rand-65%Read     5.22          1820          14          8%
Max Throughput-50%Read          196.77          12013          375          33%
Random-8k-70%Read          4.21          1713          13          2%
And here are the numbers with 6 disks:
SERVER TYPE: Dell R710
CPU TYPE / NUMBER: Xeon x5650 @ 2.67GHz x 2
HOST TYPE: ESXi 4.1 - Windows Server 2008 R2 Ent 80GB HD (thin provisioned) 10GB ram VM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local Raid10 6 disks 600GB 15k SAS, PERC H700
Test name                    Latency    Avg iops    Avg MBps    cpu load
Max Throughput-100%Read      0.00       18591       580         0%
RealLife-60%Rand-65%Read     6.76       2362        18          4%
Max Throughput-50%Read       196.39     11982       374         33%
Random-8k-70%Read            5.52       2245        17          0%
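
One way to look at that stubborn ~196 ms figure: latency, IOPS and queue depth are tied together by Little's law (average outstanding IOs ≈ IOPS × average latency), so you can check whether the two reported columns are even consistent with the queue depth the .icf actually configures. A minimal sketch, using the 6-disk numbers above and the ~1.3 ms SteveEsx figure mentioned earlier for comparison:

```
# Little's law sanity check: average outstanding IOs = IOPS x average latency.
# If the implied queue depth is nowhere near what the .icf configures,
# one of the reported averages (latency or IOPS) is suspect.

def implied_outstanding_ios(avg_iops: float, avg_latency_ms: float) -> float:
    """Average number of IOs in flight implied by the reported averages."""
    return avg_iops * (avg_latency_ms / 1000.0)

print(implied_outstanding_ios(11982, 196.39))  # ~2353 IOs in flight - an implausibly deep queue
print(implied_outstanding_ios(11982, 1.3))     # ~16 IOs in flight - what a small queue would report
```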
christianZ
Champion

Have you configured your RAID controller with read/write cache enabled ("write back")?

Reg

Christian

jsabbott25
Contributor

I'm using the OpenPerformanceTest32.icf from http://vmktree.org/iometer/ to do my testing as well.  I'm using a partitioned and formatted 40GB drive as my Iometer target.

This test was run with vSphere 4.1 ESXi using Dell EqualLogic's MEM multipathing.  Also, Jumbo Frames are configured end to end.  The switches are a pair of HP 2910al-24G switches with a 4Gbps LACP trunk group.  The host was connected with 4 dedicated iSCSI NICs (HP NC364T, Intel-based) divided over the two switches.  The storage environment is pre-production, so this VM is the only VM running on it right now.

I think it looks pretty good at this point, but I'm not exactly sure what I should be expecting from it since I'm new to both Iometer and EqualLogic.  Any thoughts?

Jake

SERVER TYPE: HP DL380 G5
CPU TYPE / NUMBER: 2X 5160
HOST TYPE: Windows XP SP3 VM with 1GB RAM running inside vSphere ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell EqualLogic Group with 1 - PS4100XV / 24 x 146GB 15k SAS / RAID 50  and 1 - PS4100X / 24 x 600GB 10k SAS/ RAID 50

Test name                    Latency    Avg iops    Avg MBps    cpu load
Max Throughput-100%Read      6.88       8330        260         66%
RealLife-60%Rand-65%Read     8.56       4475        34          70%
Max Throughput-50%Read       8.10       7161        223         53%
Random-8k-70%Read            7.93       4616        36          72%

You are right, s.buerger - I'm using EqualLogic firmware version 5.1.2, which uses the automatic load balancing.  At the time of the Iometer test results above, 18% of the volume was residing on the 15k drives and the other 82% was on the 10k drives, according to the EqualLogic Group Manager software.  I have also tested by creating two separate storage pools and separating out the 10k and 15k arrays.  Here are the results from those Iometer tests:

SERVER TYPE: HP DL380 G5
CPU TYPE / NUMBER: 2X 5160
HOST TYPE: Windows XP SP3 VM with 1GB RAM running inside vSphere ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS4100XV / 24 x 146GB 15k SAS/ RAID 50


Test name                    Latency    Avg iops    Avg MBps    cpu load
Max Throughput-100%Read      7.84       7427        232         56%
RealLife-60%Rand-65%Read     12.39      3612        28          53%
Max Throughput-50%Read       9.12       6449        201         48%
Random-8k-70%Read            11.83      3802        29          53%

SERVER TYPE: HP DL380 G5
CPU TYPE / NUMBER: 2X 5160
HOST TYPE: Windows XP SP3 VM with 1GB RAM running inside vSphere ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL:  PS4100X / 24 x 600GB 10k SAS / RAID 50

Test name                    Latency    Avg iops    Avg MBps    cpu load
Max Throughput-100%Read      7.86       7397        231         59%
RealLife-60%Rand-65%Read     12.76      3263        25          59%
Max Throughput-50%Read       9.96       5899        184         46%
Random-8k-70%Read            12.70      3287        25          59%
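
For context on whether those RealLife numbers are reasonable for 24 spindles in RAID 50, a rough model is to translate host IOPS into back-end disk IOPS using the RAID 5/50 small-write penalty (each random write costs roughly four disk operations) and divide by spindle count. This ignores controller caching entirely, so a healthy array will usually look better than the model suggests; the ~180 IOPS per 15k drive below is only a rule-of-thumb figure:

```
# Rough back-end load estimate for a RAID 50 volume. Ignores controller
# cache, so real arrays often beat the per-spindle rule of thumb.

RAID5_WRITE_PENALTY = 4  # one random small write ~= 4 disk IOs (read data, read parity, write both)

def backend_iops(host_iops: float, read_fraction: float, penalty: int = RAID5_WRITE_PENALTY) -> float:
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * penalty

# RealLife-60%Rand-65%Read on the 24 x 15k PS4100XV pool above:
total = backend_iops(3612, 0.65)
print(f"~{total:.0f} back-end IOPS, ~{total / 24:.0f} per spindle")
# ~7405 back-end IOPS, ~309 per spindle - above the ~180 IOPS a 15k drive
# sustains on its own, which points to the array cache absorbing a good
# share of the load.
```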

Message was edited by: jsabbott25 - added individual array results

christianZ
Champion

The numbers look good to me.

You are using 10k and 15k disks - it would not be optimal if your EQL volumes are spanned over both members - check it.

Reg

Christian

_VR_
Contributor

Agreed with Christian. Performance looks good. Put each EQL unit in its own pool.

There is one added benefit:

If you have both units in one pool and you lose one of them you'll lose all the volumes.

If they're in separate pools you'd only lose the volumes on the unit that's down.

After splitting them up you can put one test volume on the XV and one on X and test them individually.

s_buerger
Contributor

In recent firmware versions this setup is supported, and there is a feature called "Automatic Performance Load Balancer" (APLB).

I guess it would not have any influence on a single run of the benchmark.

It would be nice to know whether it does when you run the same access pattern profile again and again for some time.

http://en.community.dell.com/support-forums/storage/f/3775/t/19370736.aspx
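
One way to answer that would be to re-run the same access spec on a schedule and keep the result files, so any drift from APLB moving hot pages between the 15k and 10k members shows up over a day or so. A minimal sketch, assuming Iometer is driven from its command line with the /c (config) and /r (result file) switches; the paths, run count and hourly interval are placeholders:

```
# Repeat the same Iometer run on a schedule and keep each result file, so
# drift caused by EqualLogic page movement shows up over time.
# Assumes IOmeter.exe is on the PATH and accepts /c <config> /r <results>;
# paths, run count and interval are placeholder values.

import subprocess
import time
from datetime import datetime

CONFIG = r"C:\iometer\OpenPerformanceTest32.icf"   # same access specs every run
RUNS = 24                                          # e.g. one run per hour for a day
INTERVAL_SECONDS = 3600

for i in range(RUNS):
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    result_file = rf"C:\iometer\results-{stamp}.csv"
    subprocess.run(["IOmeter.exe", "/c", CONFIG, "/r", result_file], check=True)
    print(f"run {i + 1}/{RUNS} done, results in {result_file}")
    time.sleep(INTERVAL_SECONDS)
```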

johnsojm
Contributor

Write back is enabled, as well as adaptive read ahead.

makruger
Contributor

Whitebox ESXi 5.0 Server - Single Intel E5700 (3.0GHz dual core), Gigabyte GA-EP45UD3R,
2 Realtek R8168B NICs (MTU 1500), RR MPIO, NMP IOPS value set to 1.
Guest VM (WIN2008R2 1vCPU, 2GB vRAM, 30GB eager zeroed disk)

Whitebox NAS - Openfiler 2.99 (File I/O WB iSCSI Target), Single Intel E7300 (2.66GHz Core2Duo),
Intel Desktop MB, 2GB RAM, LSI 3041E-R HBA, 2 disk RAID 0 (500GB 7200RPM SATA WD Blue),
2 Realtek R8168B NICs (MTU 1500), Netgear GS108T switch.
(Iometer results attached as Untitled.png)
hbato
Enthusiast

Does anyone here have data on a Fujitsu Eternus storage appliance?

Regards, Harold
alexxdavid
Contributor

SERVER TYPE: Dell PE R710
CPU TYPE / NUMBER: 2X E5649
HOST TYPE: Windows 2008 R2 VM with 20GB RAM running inside vSphere 5
STORAGE TYPE / DISK NUMBER / RAID LEVEL / CONNECTIVITY: Dell MD3220i / 8 x 146GB 15k RPM SAS / RAID 10 / 4 x iSCSI

Test name                      Latency    Avg IOPS     Avg MBps    CPU load
Max Throughput 100% Read       4.50       13270.35     430.91      17.67
Real Life 60% Rand - 65% Read  11.5       5012.81      38.43       14.8
Max Throughput 50% Read        7.91       7333.31      240.30      13.25
Random 8k 70% Read             12.52      3567.23      29.22       15.85

Are those results any good for iSCSI storage?

s1xth
VMware Employee

Those numbers look great - awesome throughput. 240MBps is excellent, although you are using FOUR iSCSI connections, so numbers like that should be expected. Based on the hardware and design of the MD 3220 class, these numbers look to me to be right on the dot, maybe even a little higher than expected from only 8 disks. The random MBps is a little lower, but this is because of the controller specs themselves - I'm not sure how much memory cache the 3220 has, I think 1GB?

Expand that array to more drives and you will see even more performance.


http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
alexxdavid
Contributor

Both controllers have 2GB cache.

I have another LUN with 8 x 300GB 10k RPM in RAID 10 and I get the same numbers, just a bit more latency. I am thinking of putting another 8 x 146GB 15k in a RAID 10 to split the load on the drives, although I only have around 10 servers running on them.

One thing I noticed is that when using Intel Gigabit ET cards, I was only achieving around 70MBps per card, but when switching all 4 to Broadcom, the throughput jumped to 100MBps+ per card, so now the Intels are only used for the VM network. I don't know why, but just a hint for those who might have the same problem.

christianZ
Champion

Hi,

Sorry, but that can't be true (iops 12870). What size is your test file?

Reg

Christian
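
The reason the test-file size matters: with 2 GB of cache per controller (per the reply above), a small Iometer test file can sit almost entirely in cache, so the random-IO rows measure the controllers and the four gigabit links rather than the eight spindles. A rough check, where the test-file size and per-disk IOPS figures are only illustrative assumptions:

```
# Why the test-file size matters on the MD3220i: a file that fits in the
# combined controller cache turns the "random" tests into cache tests.
# The test-file size and per-disk IOPS below are illustrative assumptions.

combined_cache_gib = 2 * 2          # two controllers x 2 GB (from the reply above)
example_test_file_gib = 4           # hypothetical test-file size - check your .icf

spindles = 8
iops_per_15k_disk = 180             # rule-of-thumb random-read IOPS for a 15k drive
print(f"raw spindle random-read capability: ~{spindles * iops_per_15k_disk} IOPS")  # ~1440

if example_test_file_gib <= combined_cache_gib:
    print("test file fits in cache - random IOPS will be heavily inflated")
```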

christianZ
Champion

Hi,

I have tested other software storage appliances and never saw good performance - so I think it would be similar with Fujitsu -

don't expect too much.

Reg

Christian
