Hello everybody,
the old thread has grown very long, so after a discussion with our moderator oreeh (thanks, Oliver) I decided to start a new thread here.
Oliver will add a few links between the old thread and this one and then close the old thread.
Thanks for joining in.
Reg
Christian
I used IOMETER with the OpenPerformanceTest32.icf from http://vmktree.org/iometer/ for my testing. I am fairly new to testing with IOMETER, but I did notice that I got the most consistent results when testing against an eager-zeroed thick disk on a thick-provisioned volume. When I tested against a thin disk on a thin volume, I kept getting results that varied wildly.
In addition, I ran this test using vSphere 5 native MPIO as specified in the Dell document Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage. My host was connected with 4 dedicated iSCSI NICs to a stack of two PowerConnect 6248 switches.
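For anyone following along with that Dell document, the port-binding side of the native MPIO setup boils down to a few esxcli commands on ESXi 5. This is only a sketch; the adapter name (vmhba33) and the vmk interface names are placeholders for your own software iSCSI adapter and iSCSI VMkernel ports:

```shell
# Bind each dedicated iSCSI VMkernel port to the software iSCSI adapter.
# Adapter and vmk names are placeholders; list yours first with:
#   esxcli iscsi adapter list
#   esxcli network ip interface list
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk4

# Verify the bindings took effect.
esxcli iscsi networkportal list --adapter vmhba33
```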
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 4338 | 135 | 0% |
RealLife-60%Rand-65%Read | 13.83 | 4823 | 37 | 2% |
Max Throughput-50%Read | 87.51 | 5343 | 166 | 0% |
Random-8k-70%Read | 11.44 | 4658 | 36 | 0% |
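As a quick sanity check on results like these, the Avg MBps and Avg iops columns should agree with the transfer size of each access spec. Assuming the vmktree ICF uses 32 KB transfers for the Max Throughput tests and 8 KB for the RealLife/Random tests (an assumption worth verifying against your own .icf), a small sketch:

```python
# Rough consistency check: MBps reported by IOMETER should be roughly
# IOPS * transfer size, expressed in MiB/s.
# Transfer sizes are assumptions based on the typical access specs in the
# vmktree OpenPerformanceTest ICF; check your own .icf file to confirm.

def expected_mbps(iops: float, transfer_kib: int) -> float:
    """IOPS times transfer size, converted from KiB/s to MiB/s."""
    return iops * transfer_kib / 1024

# Values from the table above.
print(expected_mbps(4338, 32))  # Max Throughput-100%Read, reported 135 MBps
print(expected_mbps(4823, 8))   # RealLife-60%Rand-65%Read, reported 37 MBps
```

If the reported MBps lands far from this product, something (often the transfer size in the spec, or a unit mix-up) is off in the run.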
Looks good to me. You won't get much more out of SATA drives, maybe a little with jumbo frames if you aren't using them now. Are you using Intel or Broadcom NICs for the iSCSI connections?
Hi s1xth - yes, I am using jumbo frames end to end. Also, yes - Broadcom NICs using the VMware iSCSI initiator...
We just got some demo gear in: an HP P4300 G2 7.2TB iSCSI SAN in the 2 x 1Gbit network link setup, and a couple of HP DL360 G7s with dual E5650 CPUs and 12GB of RAM. It's pretty damn quick:
I know it's the 2 x 1Gbit NICs on the P4300 slowing things down; I do wonder how fast it would be with the 10Gbit NIC kit.
Hello again.
I have bounced back and forth between various configs. I have modified my RAID to use all 6 disks in a RAID10 config. My 100% read speeds have increased quite drastically (I know, pointless), but the latency on the Max Throughput-50%Read test hasn't changed. I don't understand where the problem lies. Looking through these posts, I found another user, SteveEsx, with near-identical hardware to mine, and his numbers are far superior (1.3 ms latency compared to my ~200 ms). I will contact him later.
Here are the old numbers with 4 x 600GB 15k SAS disks:
SERVER TYPE: Dell R710
CPU TYPE / NUMBER: Xeon X5650 @ 2.67GHz x 2
HOST TYPE: ESXi 4.1 - Windows Server 2008 R2 Ent VM, 80GB HD (thin provisioned), 10GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local RAID10, 4 disks 600GB 15k SAS, PERC H700
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 7509 | 234 | 0% |
RealLife-60%Rand-65%Read | 5.22 | 1820 | 14 | 8% |
Max Throughput-50%Read | 196.77 | 12013 | 375 | 33% |
Random-8k-70%Read | 4.21 | 1713 | 13 | 2% |
And here are the numbers with 6 disks:
SERVER TYPE: Dell R710
CPU TYPE / NUMBER: Xeon X5650 @ 2.67GHz x 2
HOST TYPE: ESXi 4.1 - Windows Server 2008 R2 Ent VM, 80GB HD (thin provisioned), 10GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local RAID10, 6 disks 600GB 15k SAS, PERC H700
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 18591 | 580 | 0% |
RealLife-60%Rand-65%Read | 6.76 | 2362 | 18 | 4% |
Max Throughput-50%Read | 196.39 | 11982 | 374 | 33% |
Random-8k-70%Read | 5.52 | 2245 | 17 | 0% |
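For context on the 4-disk vs 6-disk random results, a classic back-of-the-envelope spindle estimate for RAID 10 puts a floor under the random IOPS you should see. The per-disk figure below (~180 IOPS for a 15k SAS drive) is a rule-of-thumb assumption, not a measurement, and the numbers in the tables above land well over this floor thanks to the H700's write-back cache and deep queuing:

```python
# Rule-of-thumb RAID 10 random IOPS estimate. Controller cache is ignored,
# so this is a floor, not a prediction. The 180 IOPS per 15k SAS disk is an
# assumed rule-of-thumb value, not measured on this hardware.

def raid10_iops(disks: int, per_disk_iops: float, read_fraction: float) -> float:
    """Front-end IOPS: a read hits one spindle, a write hits both mirrors."""
    write_fraction = 1.0 - read_fraction
    return disks * per_disk_iops / (read_fraction + 2 * write_fraction)

for n in (4, 6):
    est = raid10_iops(n, 180, 0.65)  # 65% read, as in the RealLife spec
    print(f"{n} disks: ~{est:.0f} IOPS floor")
```

Going from 4 to 6 spindles should raise the floor by 50%, which roughly matches the ~30% gain seen in the RealLife numbers once cache effects are factored in.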
Have you configured your RAID controller with read/write cache enabled ("write back")?
Reg
Christian
I'm using the OpenPerformanceTest32.icf from http://vmktree.org/iometer/ for my testing as well. My IOMETER target is a partitioned and formatted 40GB drive.
This test was run on vSphere 4.1 ESXi using Dell EqualLogic's MEM multipathing. Jumbo frames are also configured end to end. The switches are a pair of HP 2910al-24g switches with a 4Gbps LACP trunk group. The host was connected with 4 dedicated iSCSI NICs (HP NC364T Intel) divided over the two switches. The storage environment is pre-production, so this VM is the only VM running on it right now.
I think it looks pretty good at this point, but I'm not exactly sure what I should be expecting from it, since I'm new to both IOMETER and EqualLogic. Any thoughts?
Jake
SERVER TYPE: HP DL380 G5
CPU TYPE / NUMBER: 2X 5160
HOST TYPE: Windows XP SP3 VM with 1GB RAM running inside vSphere ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell EqualLogic Group with 1 - PS4100XV / 24 x 146GB 15k SAS / RAID 50 and 1 - PS4100X / 24 x 600GB 10k SAS/ RAID 50
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 6.88 | 8330 | 260 | 66% |
RealLife-60%Rand-65%Read | 8.56 | 4475 | 34 | 70% |
Max Throughput-50%Read | 8.10 | 7161 | 223 | 53% |
Random-8k-70%Read | 7.93 | 4616 | 36 | 72% |
You are right, s.buerger. I'm using EqualLogic firmware version 5.1.2, which uses automatic load balancing. At the time of the IOMETER test results above, 18% of the volume was residing on the 15k drives and the other 82% on the 10k drives, according to the EqualLogic Group Manager software. I have also tested by creating two separate storage pools and separating out the 10k and 15k arrays. Here are the results from those IOMETER tests:
SERVER TYPE: HP DL380 G5
CPU TYPE / NUMBER: 2X 5160
HOST TYPE: Windows XP SP3 VM with 1GB RAM running inside vSphere ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS4100XV / 24 x 146GB 15k SAS/ RAID 50
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 7.84 | 7427 | 232 | 56% |
RealLife-60%Rand-65%Read | 12.39 | 3612 | 28 | 53% |
Max Throughput-50%Read | 9.12 | 6449 | 201 | 48% |
Random-8k-70%Read | 11.83 | 3802 | 29 | 53% |
SERVER TYPE: HP DL380 G5
CPU TYPE / NUMBER: 2X 5160
HOST TYPE: Windows XP SP3 VM with 1GB RAM running inside vSphere ESXi 4.1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS4100X / 24 x 600GB 10k SAS / RAID 50
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 7.86 | 7397 | 231 | 59% |
RealLife-60%Rand-65%Read | 12.76 | 3263 | 25 | 59% |
Max Throughput-50%Read | 9.96 | 5899 | 184 | 46% |
Random-8k-70%Read | 12.70 | 3287 | 25 | 59% |
Message was edited by: jsabbott25 - added individual array results
The numbers look good to me.
You are using 10k and 15k disks - it would not be optimal if your EqualLogic volumes were spanned over both members - check that.
Reg
Christian
Agreed with Christian. Performance looks good. Put each EQL unit in its own pool.
There is one added benefit:
If you have both units in one pool and you lose one of them, you'll lose all the volumes.
If they're in separate pools, you'd only lose the volumes on the unit that's down.
After splitting them up you can put one test volume on the XV and one on the X and test them individually.
In recent firmware versions this setup is supported, and there is a feature called Automatic Performance Load Balancer (APLB).
I guess it would not have any influence on a single run of the benchmark.
It would be nice to know whether it does when you run the same access-pattern profile again and again for some time.
http://en.community.dell.com/support-forums/storage/f/3775/t/19370736.aspx
Write-back is enabled, as well as adaptive read-ahead.
Whitebox ESXi 5.0 server - single Intel E5700 (3.0GHz dual core), Gigabyte GA-EP45-UD3R, 2 Realtek R8168B NICs (MTU 1500), RR MPIO, NMP IOPS value set to 1. Guest VM: Win2008R2, 1 vCPU, 2GB vRAM, 30GB eager-zeroed disk.
Whitebox NAS - Openfiler 2.99 (file I/O, write-back iSCSI target), single Intel E7300 (2.66GHz Core2Duo),
Intel desktop motherboard, 2GB RAM, LSI 3041E-R HBA, 2-disk RAID 0 (500GB 7200RPM SATA WD Blue),
2 Realtek R8168B NICs (MTU 1500), Netgear GS108T switch.
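The "NMP IOPS value set to 1" tweak mentioned above means the round-robin path selection policy rotates to the next path after every I/O instead of the default 1000. A sketch of the relevant ESXi 5 commands; the naa device identifier is a placeholder for your own LUN:

```shell
# Set the path selection policy to round robin for the device, then make it
# rotate paths after every single I/O (the default is every 1000 I/Os).
# The naa.* identifier below is a placeholder; find yours with:
#   esxcli storage nmp device list
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1
```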
Does anyone here have data on a Fujitsu ETERNUS storage appliance?
SERVER TYPE: Dell PE R710
CPU TYPE / NUMBER: 2X E5649
HOST TYPE: Windows 2008 R2 VM with 20GB RAM running inside vSphere 5
STORAGE TYPE / DISK NUMBER / RAID LEVEL / CONNECTIVITY: Dell MD3220i / 8 x 146GB 15k RPM SAS / RAID 10 / 4 iSCSI connections
TEST NAME | LATENCY | AVG IOPS | AVG MBPS | CPU LOAD |
---|---|---|---|---|
Max Throughput 100% Read | 4.50 | 13270.35 | 430.91 | 17.67 |
Real Life 60% Rand - 65% Read | 11.5 | 5012.81 | 38.43 | 14.8 |
Max Throughput 50% Read | 7.91 | 7333.31 | 240.30 | 13.25 |
Random 8K 70% Read | 12.52 | 3567.23 | 29.22 | 15.85 |
Are those results any good for iSCSI storage?
Those numbers look great - awesome throughput; 240 MBps is excellent, although you are using FOUR iSCSI connections, so those numbers should be expected. Based on the hardware and design of the MD3220 class hardware, these numbers look right on the dot to me, maybe even a little higher than expected from only 8 disks. The random MBps is a little lower, but that's down to the controller specs themselves; not sure how much memory cache the 3220 has - 1GB, I think?
Expand that array to more drives and you will see even more performance.
Both controllers have 2GB cache.
I have another LUN with 8 x 300GB 10k RPM disks in RAID 10 and I get the same numbers, just a bit more latency. I am thinking of putting in another 8 x 146GB 15k in RAID 10 to split the load across the drives, although I only have around 10 servers running on them.
One thing I noticed: with Intel Gigabit ET cards I was only achieving around 70 MBps per card, but after switching all 4 to Broadcom, the throughput jumped to 100+ MBps per card, so now the Intels are only used for the VM network. I don't know why, but it's a hint for anyone who hits the same problem.
Hi,
sorry, but that can't be true (12,870 IOPS). What size is your test file?
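Christian's question about the test-file size matters because of the controller cache: if the IOMETER working set fits inside the array's cache, the benchmark measures cache hits, not spindles. A minimal sketch of the comparison, where the file sizes are illustrative assumptions rather than the poster's actual settings:

```python
# If the IOMETER test file fits in the controller cache, the benchmark
# mostly measures cache hits rather than real disk performance.
# File sizes below are illustrative assumptions, not the poster's config.

CACHE_GIB = 2.0  # per-controller cache on the MD3220i, per the thread

def fits_in_cache(testfile_gib: float, cache_gib: float = CACHE_GIB) -> bool:
    """True when the whole working set can sit in controller cache."""
    return testfile_gib <= cache_gib

print(fits_in_cache(0.1))  # small 100 MiB test file: cache-inflated numbers
print(fits_in_cache(8.0))  # 8 GiB test file: cache can't hold it all
```

A test file several times larger than the total cache is the usual way to keep sequential and random numbers honest.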
Reg
Christian
Hi,
I have tested other software storage appliances and never saw good performance, so I think it would be similar with Fujitsu -
don't expect too much.
Reg
Christian