Hello everybody,
the old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks Oliver -) to start a new thread here.
Oliver will make a few links between the old and the new one and then he will close the old thread.
Thanks for joining in.
Regards,
Christian
Here are the results with the 8800 bytes policy set; no significant difference.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Windows 2008 R2, 1 vCPU, 4GB RAM, 40GB hard disk
CPU TYPE / NUMBER: Intel E5-2660, single vCPU
HOST TYPE: Dell PowerEdge R720, 256GB RAM; 2x E5-2660, 2.2 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Compellent SC8000, 11 data disks in Tier 1 (RAID10), 600GB 15k (bytes policy set to 8800)
Test name | Latency | Avg iops | Avg MBps |
---|---|---|---|
Max Throughput-100%Read | 10.93 | 5550.65 | 173.46 |
RealLife-60%Rand-65%Read | 11.25 | 4067.83 | 31.78 |
Max Throughput-50%Read | 12.29 | 2927.64 | 91.49 |
Random-8k-70%Read | 10.66 | 4347.17 | 33.96 |
My little contribution:
SERVER TYPE: Windows 2008 R2, 2 vCPU, 4GB RAM, 60GB hard disk
CPU TYPE / NUMBER: Intel X6550, 2 CPU
HOST TYPE: Dell PowerEdge R710, 64GB RAM, 2x E5-2660 (2.66 GHz), 4x1GB/s ISCSI ports
ISCSI LAN: 2x PowerConnect 6224 (MTU 9000)
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EqualLogic PS6100X Firmware 6.0.5 (4x1GB/s ISCSI ports) - 22 SAS 10K 600GB - RAID10 + 2 spares - NO MEM
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 4.55 | 11910 | 372 | 1% |
RealLife-60%Rand-65%Read | 9.31 | 4938 | 38 | 4% |
Max Throughput-50%Read | 5.84 | 9430 | 294 | 0% |
Random-8k-70%Read | 9.11 | 5036 | 39 | 4% |
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 6 SATA 10K 500GB (WD Velociraptor) - RAID5
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 9.83 | 5994 | 187 | 1% |
RealLife-60%Rand-65%Read | 54.91 | 911 | 7 | 0% |
Max Throughput-50%Read | 12.91 | 4377 | 136 | 1% |
Random-8k-70%Read | 63.12 | 783 | 6 | 0% |
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 2 SSD Crucial M4 256GB - RAID1 (block LUN)
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 9.43 | 6246 | 195 | 0% |
RealLife-60%Rand-65%Read | 17.53 | 3255 | 25 | 1% |
Max Throughput-50%Read | 10.97 | 5100 | 159 | 0% |
Random-8k-70%Read | 20.43 | 2760 | 21 | 0% |
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 2 SSD Crucial M4 256GB - RAID0 (block LUN)
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 9.71 | 6066 | 189 | 2% |
RealLife-60%Rand-65%Read | 10.88 | 5215 | 40 | 0% |
Max Throughput-50%Read | 9.52 | 6040 | 188 | 2% |
Random-8k-70%Read | 11.77 | 4794 | 37 | 0% |
Just migrated from an EqualLogic PS4000 to a NetApp FAS2240. These numbers seem low and the latency seems quite high. Any thoughts on these numbers?
ESXi 5.1 U1a. 4 physical NICs set up as 1:1 vmkernel ports, all bound to the VMware software iSCSI initiator. Round robin path policy in use. MTU 9000 set on all.
SERVER TYPE: Windows 7 64bit, 1 vCPU, 4GB RAM
CPU TYPE / NUMBER: quad-core AMD opteron 2389
HOST TYPE: HP DL385 G5p VMware ESXi 5.1u1a 1065491
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Netapp FAS2240-2 12x900GB 10K SAS. 1 Spare - 2 Parity RAID DP (Raid 6)
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 16.23 | 3149 | 98 | 3% |
RealLife-60%Rand-65%Read | 12.68 | 3827 | 29 | 3% |
Max Throughput-50%Read | 17.61 | 2616 | 81 | 2% |
Random-8k-70%Read | 13.27 | 3728 | 29 | 3% |
It's not hopeless by any means, but it looks like you took the 24 drives and split them between the two controllers. That will have an impact on your performance.
I also have a FAS2240 (the -4 model) in my test lab, with 24 1TB 7.2k NL-SAS drives. You can see my read numbers are higher because I read from slightly more spindles. The more write-intensive benchmarks are higher on yours, with slightly lower latency, thanks to your faster drives, but you would have been able to stretch them further with a larger aggregate.
My layout has:
This means that I essentially have an active/passive configuration, controller 2 serves no data. Controller 1 benefits from a larger data aggregate, and both controllers have a same-sized hot spare available so no matter which controller owns the data aggregate, I always have a spare for rebuilds. Disks get added in groups of 16, either to the existing aggregate on controller 1 (depending on CPU utilization, cache hit %, and disk utilization), or you can start again on controller 2 if the first controller is heavily utilized.
NetApp FAS2240-4
Access Specification Name | IOps | MBps (Binary) | Avg Response Time |
---|---|---|---|
Max Throughput-100%Read | 3,506.35 | 109.57 | 17.17 |
RealLife-60%Rand-65%Read | 2,862.38 | 22.36 | 17.27 |
Max Throughput-50%Read | 6,393.61 | 199.80 | 9.18 |
Random-8k-70%Read | 2,651.04 | 20.71 | 17.92 |
Looking at your random and real-life numbers on an 11-disk RAID-DP set, you are getting, at worst, roughly 330 IOPS per spindle. Spread across 16 disks that would probably be closer to 5,200 IOPS on those benchmarks.
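That back-of-envelope scaling can be written out in a few lines (a sketch only; it just redoes the arithmetic from the preceding paragraph using the worst-case row of the FAS2240-2 table above):

```python
# Worst-case benchmark above is Random-8k-70%Read at 3728 IOPS,
# served by an 11-disk RAID-DP RAID set (12 drives minus 1 spare).
worst_iops = 3728
raid_set_disks = 11
per_spindle = worst_iops / raid_set_disks
print(round(per_spindle))        # ~339 IOPS/spindle, i.e. "roughly 330"

# Projecting a conservative 330 IOPS/spindle onto a 16-disk set:
projected = 330 * 16
print(projected)                 # 5280, i.e. "closer to 5,200 IOPS"
```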
As for the latency, I would focus on the numbers coming out of OnCommand System Manager under your actual workload to ensure they are reasonable, rather than what you see in a synthetic benchmark. You will find the caching algorithms in ONTAP are quite good, but most benchmarks try to defeat the cache as much as possible.
Thanks for the info, Mikey. I had the same debate about whether or not to split the drives between both controllers; I ended up splitting, for better or worse.
I guess my main concern was that the numbers were not drastically better than our 6-year-old EqualLogic. Although, now that I look again, the numbers are substantially better. These are the numbers from it:
SERVER TYPE: Windows 7 64bit, 1 vCPU, 4GB RAM
CPU TYPE / NUMBER: quad-core AMD opteron 2389
HOST TYPE: HP DL385 G5p VMware ESXi 5.1u1a 1065491
STORAGE TYPE / DISK NUMBER / RAID LEVEL:
Equallogic PS5000 14x7200 SATA Raid 5 w/1 spare
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 22.83 | 2582 | 80 | 3% |
RealLife-60%Rand-65%Read | 27.40 | 1693 | 13 | 2% |
Max Throughput-50%Read | 20.94 | 2793 | 87 | 4% |
Random-8k-70%Read | 26.12 | 1756 | 13 | 2% |
Netapp:
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 16.23 | 3149 | 98 | 3% |
RealLife-60%Rand-65%Read | 12.68 | 3827 | 29 | 3% |
Max Throughput-50%Read | 17.61 | 2616 | 81 | 2% |
Random-8k-70%Read | 13.27 | 3728 | 29 | 3% |
What is throwing me off, though, is the first test I ran on the NetApp using IOmeter (I ran 3 in total for each SAN); it ended with this result.
Notice the Max Throughput-100%Read latency and avg MBps:
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 2.65 | 11439 | 357 | 8% |
RealLife-60%Rand-65%Read | 10.27 | 4577 | 35 | 3% |
Max Throughput-50%Read | 20.66 | 2394 | 74 | 2% |
Random-8k-70%Read | 13.36 | 3725 | 29 | 3% |
Yes, that's odd, but I normally run the benchmarks a few times over for consistency, so I would disregard any anomalous results as long as the later iterations are reasonably consistent.
I think the real take-away from your results is the huge improvement in the RealLife and Random tests: greater than 2x gains in both IOPS and MBps, as well as roughly half the latency. That should make for a much improved user experience.
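For what it's worth, the 2x claim can be sanity-checked directly from the two RealLife rows quoted above (a quick sketch; the numbers are copied from the tables in this thread):

```python
# RealLife-60%Rand-65%Read: EqualLogic PS5000 vs NetApp FAS2240-2
eql  = {"latency_ms": 27.40, "iops": 1693, "mbps": 13}
ntap = {"latency_ms": 12.68, "iops": 3827, "mbps": 29}

iops_gain = ntap["iops"] / eql["iops"]                   # ~2.26x
mbps_gain = ntap["mbps"] / eql["mbps"]                   # ~2.23x
latency_ratio = ntap["latency_ms"] / eql["latency_ms"]   # ~0.46, about half

print(f"{iops_gain:.2f}x IOPS, {mbps_gain:.2f}x MBps, "
      f"{latency_ratio:.0%} of the latency")
```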
Does anyone have an opinion on my test results?
SERVER TYPE: Windows 2008 R2 64bit, 1 vCPU, 4GB RAM
CPU TYPE / NUMBER: Intel Xeon E5530 Processor
HOST TYPE: PowerEdge R710 / RAM 147443,0 MB - QLogic QLE2460
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Supermicro Server / 16 HDD RAID10 / 2 hot spares / 7,200 rpm
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 5.22 | 10785 | 337 | 46% |
RealLife-60%Rand-65%Read | 37.14 | 1595 | 12 | 16% |
Max Throughput-50%Read | 3.35 | 15232 | 476 | 60% |
Random-8k-70%Read | 42.28 | 1349 | 10 | 18% |
Thanks
If that's 14 drives in RAID10 plus two hot spares, then I'd say the results are pretty predictable/normal for 7.2k drives.
What software is being used to serve up the storage?
As software I'm using Open-E. The disks are Seagate 2TB SAS ST2000NM0001 Enterprise Capacity 3.5" HDDs, and the controller is a Dell PERC H700 with 512MB RAM.
Hello, I have made some tweaks to the SAN OS:
SERVER TYPE: Windows 2008 R2 64bit, 1 vCPU, 4GB RAM
CPU TYPE / NUMBER: Intel Xeon E5530 Processor
HOST TYPE: PowerEdge R710 / RAM 147443,0 MB - QLogic QLE2460
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Supermicro Server / 16 HDD RAID10 / 2 hot spares / 7,200 rpm
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 5.31 | 10683 | 333 | 37% |
RealLife-60%Rand-65%Read | 2.31 | 18767 | 146 | 68% |
Max Throughput-50%Read | 3.45 | 16613 | 519 | 64% |
Random-8k-70%Read | 2.45 | 14163 | 110 | 74% |
Does anyone have an opinion on my new test results?
Thanks
You need to decrease the amount of RAM dedicated to the VM.
4 GB is too high: it is the same size as the test file used by IOmeter.
The risk is that all the I/O operations are served from RAM cache.
Try with 1.5 GB.
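For reference, the test file in this thread's standard IOmeter config is 8,000,000 sectors (the "Maximum Size" row in the full result sheets posted elsewhere in this thread), which at the usual 512 bytes/sector is almost exactly the VM's 4 GB of RAM (a quick check; the 512-byte sector size is an assumption, but it is IOmeter's default):

```python
sectors = 8_000_000      # "Maximum Size" from the IOmeter result sheet
sector_bytes = 512       # IOmeter's default sector size
test_file_gib = sectors * sector_bytes / 2**30
print(round(test_file_gib, 2))   # ~3.81 GiB -- roughly the VM's 4 GB of RAM
```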
Francesco
Hello, I have run tests with a few different SAN OSes, so after a few days here are the results. In all 3 tests I used the same storage and the same test machine. I would like to have your opinions. Thanks.
Are there any CrystalDiskMark benchmarks for the DC S3700 SSD, especially for 4K single queue?
Intel SSD DC S3700 Series Enterprise SSD Review | StorageReview.com - Storage Reviews
Or AS SSD benchmarks?
http://www.experts-exchange.com/Software/VMWare/Q_28176530.html
Hello,
I did the test with this setup:
Dell Equallogic PS 4100 (2x ISCSI 1Gb ports)
24x 600GB 10K
RAID10 + 2 Spare
1 Volume 2TB
Stacked Cisco 3750X with Dell Recommended Scripts (MTU9000...)
VMware ESX 5.5
Dell PowerEdge R720 with 4x ISCSI 1Gb Ports
Dell MEM v1.2 PlugIn (DELL_EQL Path)
All done per the Dell EqualLogic Compatibility Matrix and the Dell recommendations.
I installed one Windows Server 2008 R2 VM with 2 vCPUs and 2GB RAM. No other VMs.
These are the results.
Are these normal/good figures?
It is hard for me to evaluate whether this is what should be expected.
2GB RAM, 2 vCPU
Access Specification Name | Idle | Max Throughput-100%Read | RealLife-60%Rand-65%Read | Max Throughput-50%Read | Random-8k-70%Read |
# Managers | |||||
# Workers | 0 | 1 | 1 | 1 | 1 |
# Disks | 0 | 1 | 1 | 1 | 1 |
IOps | 0 | 2561,809353 | 4070,736853 | 3391,763911 | 4128,44941 |
Read IOps | 0 | 2561,809353 | 2644,604667 | 1695,436977 | 2892,239794 |
Write IOps | 0 | 0 | 1426,132186 | 1696,326935 | 1236,209616 |
MBps (Binary) | 0 | 80,056542 | 31,802632 | 105,992622 | 32,253511 |
Read MBps (Binary) | 0 | 80,056542 | 20,660974 | 52,982406 | 22,595623 |
Write MBps (Binary) | 0 | 0 | 11,141658 | 53,010217 | 9,657888 |
MBps (Decimal) | 0 | 83,945369 | 33,347476 | 111,14132 | 33,820258 |
Read MBps (Decimal) | 0 | 83,945369 | 21,664601 | 55,556079 | 23,693228 |
Write MBps (Decimal) | 0 | 0 | 11,682875 | 55,585241 | 10,127029 |
Transactions per Second | 0 | 2561,809353 | 4070,736853 | 3391,763911 | 4128,44941 |
Connections per Second | 0 | 5,123485 | 8,139727 | 6,783015 | 8,256219 |
Average Response Time | 0 | 17,11121 | 13,739377 | 13,792968 | 13,422029 |
Average Read Response Time | 0 | 17,11121 | 14,234527 | 13,670445 | 13,710765 |
Average Write Response Time | 0 | 0 | 12,821176 | 13,915426 | 12,746501 |
Average Transaction Time | 0 | 17,11121 | 13,739377 | 13,792968 | 13,422029 |
Average Connection Time | 0 | 195,169291 | 122,823364 | 147,41385 | 121,105571 |
Maximum Response Time | 0 | 197,719612 | 395,593434 | 162,881386 | 378,444327 |
Maximum Read Response Time | 0 | 197,719612 | 395,593434 | 162,881386 | 378,444327 |
Maximum Write Response Time | 0 | 0 | 151,936908 | 115,824916 | 138,24697 |
Maximum Transaction Time | 0 | 197,719612 | 395,593434 | 162,881386 | 378,444327 |
Maximum Connection Time | 0 | 261,058808 | 836,18337 | 229,627369 | 858,949322 |
Errors | 0 | 0 | 0 | 0 | 0 |
Read Errors | 0 | 0 | 0 | 0 | 0 |
Write Errors | 0 | 0 | 0 | 0 | 0 |
Bytes Read | 0 | 25182863360 | 6499598336 | 16667607040 | 7108354048 |
Bytes Written | 0 | 0 | 3504979968 | 16676356096 | 3038273536 |
Read I/Os | 0 | 768520 | 793408 | 508655 | 867719 |
Write I/Os | 0 | 0 | 427854 | 508922 | 370883 |
Connections | 0 | 1537 | 2442 | 2035 | 2477 |
Transactions per Connection | -1 | 500 | 500 | 500 | 500 |
Total Raw Read Response Time | 0 | 1,88288E+11 | 1,61706E+11 | 99562042107 | 1,70345E+11 |
Total Raw Write Response Time | 0 | 0 | 78543683585 | 1,01399E+11 | 67688631231 |
Total Raw Transaction Time | 0 | 1,88288E+11 | 2,4025E+11 | 2,00961E+11 | 2,38033E+11 |
Total Raw Connection Time | 0 | 4295098914 | 4294518380 | 4295270512 | 4295146146 |
Maximum Raw Read Response Time | 0 | 2830985 | 5664178 | 2332165 | 5418634 |
Maximum Raw Write Response Time | 0 | 0 | 2175460 | 1658402 | 1979445 |
Maximum Raw Transaction Time | 0 | 2830985 | 5664178 | 2332165 | 5418634 |
Maximum Raw Connection Time | 0 | 3737887 | 11972624 | 3287846 | 12298591 |
Total Raw Run Time | 0 | 4295326535 | 4295598014 | 4295655898 | 4295686982 |
Starting Sector | 1,84467E+19 | 0 | 0 | 0 | 0 |
Maximum Size | 1,84467E+19 | 8000000 | 8000000 | 8000000 | 8000000 |
Queue Depth | -1 | 64 | 64 | 64 | 64 |
% CPU Utilization | 0,999542 | 29,201032 | 17,352905 | 27,637179 | 18,222367 |
% User Time | 0,58006 | 11,778161 | 5,236457 | 10,571545 | 5,519789 |
% Privileged Time | 0,429193 | 17,425438 | 12,118731 | 17,066311 | 12,703574 |
% DPC Time | 0,046821 | 1,422219 | 1,331214 | 1,760191 | 1,476797 |
% Interrupt Time | 0,04422 | 0,780011 | 0,691607 | 0,792996 | 0,857998 |
Processor Speed | 14318180 | 14318180 | 14318180 | 14318180 | 14318180 |
Interrupts per Second | 191,478093 | 1519,581044 | 1247,858847 | 1492,13929 | 1330,195429 |
CPU Effectiveness | 0 | 87,730098 | 234,585325 | 122,724679 | 226,559452 |
Packets/Second | 29,439698 | 29,736883 | 29,680131 | 29,252993 | 29,536417 |
Packet Errors | 0 | 0 | 0 | 0 | 0 |
Segments Retransmitted/Second | 0 | 0 | 0 | 0 | 0 |
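A note on reading the sheet above (which uses commas as decimal separators): the "Binary" and "Decimal" MBps rows are the same throughput divided by 2^20 vs 10^6, and you can reproduce them from the IOps row and the transfer sizes the access specifications use (32 KB for the Max Throughput tests, 8 KB for the random ones). A quick check:

```python
def mbps(iops, block_bytes, binary=True):
    """Throughput from IOPS and transfer size, in MiB/s or MB/s."""
    return iops * block_bytes / (2**20 if binary else 10**6)

# Max Throughput-100%Read column: 32 KB transfers
print(round(mbps(2561.809353, 32 * 1024), 6))         # -> 80.056542 (Binary)
print(round(mbps(2561.809353, 32 * 1024, False), 6))  # -> 83.945369 (Decimal)

# RealLife-60%Rand-65%Read column: 8 KB transfers
print(round(mbps(4070.736853, 8 * 1024), 6))          # -> 31.802632 (Binary)
```

Both values match the sheet, which confirms the block sizes in use and that "MBps (Binary)" is really MiB/s.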
SERVER TYPE: Windows 2008 R2, 16 vCPU, 4GB RAM, 40GB hard disk
CPU TYPE / NUMBER: Intel E5-2690, 2 CPU
HOST TYPE: HP DL380P Gen8, 256GB RAM, 2x10GB Broadcom BCM57810
ISCSI LAN: Cisco Nexus 3064T (MTU 9000)
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EqualLogic PS6110XS Firmware 6.0.6 (1x10GB/s ISCSI port) - 7x 400GB SSDs + 17x 600GB 10K HDDs - RAID6 (accelerated), 1 spare - NO MEM
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 2.10 | 26365 | 824 | 7% |
RealLife-60%Rand-65%Read | 5.81 | 9749 | 76 | 4% |
Max Throughput-50%Read | 5.16 | 11275 | 352 | 5% |
Random-8k-70%Read | 5.60 | 10175 | 79 | 4% |
Max Throughput-50%Read numbers seem a bit low; otherwise the results look decent.
Hello,
I would like to ask: based on these test results, could I run a few domain controllers (file server), a mail server, and maybe an SNMP and a VPN server for around 50 users?
Is that IOPS performance enough?
Thanks in advance
Hi guys,
We have 4 HP DL380p Gen8 servers, 2 in each DC, connected through 10Gb fibre between the DCs.
Then we have 8 HP StoreVirtual 4330 iSCSI nodes, also 4 in each DC, connected with 10Gb fibre between the DCs.
The core switches are HP ProCurve 3800.
We have ESXi 5.1.
We ran a test with IOmeter on a new VM because we have latency problems.
Can anyone help verify whether the results below are good or not?
We compared them to other installs and they seemed slow.
Does anyone have a P4000?
Kind Regards,