VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

The old thread seems to be sooooo looooong, so (after a discussion with our moderator oreeh, thanks Oliver) I decided to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Regards,

Christian

574 Replies
mikeyb79
Enthusiast

Here are the results with the bytes 8800 policy set; no significant difference.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Windows 2008 R2, 1 vCPU, 4GB RAM, 40GB hard disk

CPU TYPE / NUMBER: Intel E5-2660, single vCPU

HOST TYPE: Dell PowerEdge R720, 256GB RAM; 2x E5-2660, 2.2 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Compellent SC8000, 11 data disks in Tier 1 (RAID10), 600GB 15k (bytes policy set to 8800)

##################################################################################

TEST NAME                     Resp. Time ms    Avg IO/sec    MB/sec

##################################################################################

Max Throughput-100%Read           10.93          5550.65     173.46

RealLife-60%Rand-65%Read          11.25          4067.83      31.78

Max Throughput-50%Read            12.29          2927.64      91.49

Random-8k-70%Read                 10.66          4347.17      33.96
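The four tests here are the thread's standard Iometer access specifications: the two Max Throughput patterns use 32 KiB transfers and the RealLife/Random patterns use 8 KiB, so MB/sec follows directly from Avg IO/sec. A minimal sanity-check sketch (the helper name and block-size table are my assumptions, based on the specs commonly used in this thread):

```python
# Implied throughput for the thread's standard Iometer access specs.
# Assumed transfer sizes: 32 KiB for the Max Throughput tests,
# 8 KiB for the RealLife / Random-8k tests.
BLOCK_KIB = {
    "Max Throughput-100%Read": 32,
    "RealLife-60%Rand-65%Read": 8,
    "Max Throughput-50%Read": 32,
    "Random-8k-70%Read": 8,
}

def expected_mbps(test_name: str, avg_iops: float) -> float:
    """MiB/s implied by the average IOPS of a given access spec."""
    return avg_iops * BLOCK_KIB[test_name] / 1024.0

# Cross-check against the Compellent table above:
print(round(expected_mbps("Max Throughput-100%Read", 5550.65), 2))   # 173.46
print(round(expected_mbps("RealLife-60%Rand-65%Read", 4067.83), 2))  # 31.78
```

This is a handy way to spot transcription errors in posted results: if IOPS and MB/s don't agree for the assumed block size, something got garbled.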

dam09fr
Contributor

My little contribution:


SERVER TYPE: Windows 2008 R2, 2 vCPU, 4GB RAM, 60GB hard disk

CPU TYPE / NUMBER: Intel X6550, 2 CPU

HOST TYPE: Dell PowerEdge R710, 64GB RAM, 2x E5-2660 (2.66 GHz), 4x1GB/s ISCSI ports

ISCSI LAN: 2x PowerConnect 6224 (MTU 9000)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EqualLogic PS6100X Firmware 6.0.5 (4x1GB/s ISCSI ports) - 22 SAS 10K 600GB - RAID10 + 2 spares - NO MEM

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     4.55      11910      372        1%
RealLife-60%Rand-65%Read    9.31      4938       38         4%
Max Throughput-50%Read      5.84      9430       294        0%
Random-8k-70%Read           9.11      5036       39         4%

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 6 SATA 10K 500GB (WD Velociraptor) - RAID5

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     9.83      5994       187        1%
RealLife-60%Rand-65%Read    54.91     911        7          0%
Max Throughput-50%Read      12.91     4377       136        1%
Random-8k-70%Read           63.12     783        6          0%

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 2 SSD Crucial M4 256GB - RAID1 (block LUN)

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     9.43      6246       195        0%
RealLife-60%Rand-65%Read    17.53     3255       25         1%
Max Throughput-50%Read      10.97     5100       159        0%
Random-8k-70%Read           20.43     2760       21         0%

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 2 SSD Crucial M4 256GB - RAID0 (block LUN)

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     9.71      6066       189        2%
RealLife-60%Rand-65%Read    10.88     5215       40         0%
Max Throughput-50%Read      9.52      6040       188        2%
Random-8k-70%Read           11.77     4794       37         0%


mac1978
Enthusiast

Just migrated from an Equallogic PS4000 to a NetApp FAS2240.  These numbers seem low and the latency seems quite high.  Any thoughts on these numbers?

ESXi 5.1 u1a. 4 physical NICs set up as 1-to-1 vmkernel ports, with all vmk ports bound to the VMware software iSCSI initiator. Round robin is used, and MTU 9000 is set on all.

SERVER TYPE: Windows 7 64bit, 1 vCPU, 4GB RAM

CPU TYPE / NUMBER: quad-core AMD opteron 2389

HOST TYPE: HP DL385 G5p VMware ESXi 5.1u1a 1065491

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Netapp FAS2240-2 12x900GB 10K SAS. 1 Spare - 2 Parity RAID DP (Raid 6)

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     16.23     3149       98         3%
RealLife-60%Rand-65%Read    12.68     3827       29         3%
Max Throughput-50%Read      17.61     2616       81         2%
Random-8k-70%Read           13.27     3728       29         3%
pinkerton
Enthusiast

Dear Sir or Madam,

I will be unavailable until 04.11.2013. During this time, please contact support@mdm.de.

Kind regards,

Michael Groß

mikeyb79
Enthusiast

It's not hopeless by any means, but it appears you took the 24 drives and split them between the two controllers. This will have an impact on your performance.

I also have a FAS2240 (the -4 model) in my test lab with 24 1TB NL-SAS drives at 7k. You can see my read numbers are higher as I read from slightly more spindles. The more write-intensive benchmarks are higher on yours with slightly lower latency due to the faster drives but you would have been able to stretch them out more with a larger aggregate.

My layout has:

  • Controller 1 with 3 drives RAID-DP for vol0 and 1 hot spare;
  • Controller 2 with 3 drives RAID-DP for vol0 and 1 hot spare;
  • 1 aggregate of 16 disks (with a RAID size of 16) owned by controller 1.

This means that I essentially have an active/passive configuration, controller 2 serves no data. Controller 1 benefits from a larger data aggregate, and both controllers have a same-sized hot spare available so no matter which controller owns the data aggregate, I always have a spare for rebuilds. Disks get added in groups of 16, either to the existing aggregate on controller 1 (depending on CPU utilization, cache hit %, and disk utilization), or you can start again on controller 2 if the first controller is heavily utilized.

NetApp FAS2240-4
Access Specification Name    IOps       MBps (Binary)   Avg Response Time
Max Throughput-100%Read      3,506.35   109.57          17.17
RealLife-60%Rand-65%Read     2,862.38   22.36           17.27
Max Throughput-50%Read       6,393.61   199.80          9.18
Random-8k-70%Read            2,651.04   20.71           17.92

Taking a look at your random and real-life numbers on an 11-disk RAID-DP RAID set you are getting at worst roughly 330 IOPS/spindle. Spread across 16 disks that would probably be closer to 5,200 IOPS on those benchmarks.
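The per-spindle math above can be sketched as a naive linear estimate (the function below is hypothetical; real gains are discounted by RAID-DP parity overhead, CPU and cache, which is why "closer to 5,200" is quoted rather than the raw scale-up):

```python
def scaled_iops(measured_iops: float, current_spindles: int, target_spindles: int) -> float:
    """Naive linear spindle scaling: assumes IOPS grow with the disk count."""
    return measured_iops / current_spindles * target_spindles

# ~3,728 worst-case random IOPS from the 11-disk set, projected onto 16 disks:
print(round(3728 / 11))                  # ~339 IOPS per spindle
print(round(scaled_iops(3728, 11, 16)))  # ~5423 before any overhead
```

Treat this as an upper bound; it ignores everything except spindle count.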

As for the latency, I would focus more on the numbers coming out of OnCommand System Manager under your actual workload to make sure they are reasonable, rather than on what you see in a synthetic benchmark. You will find the caching algorithms in ONTAP are quite good, but most benchmarks try to defeat the cache as much as possible.

mac1978
Enthusiast

Thanks for the info, Mikey. I had the debate over whether or not to split the drives between both controllers; I ended up splitting them, for better or worse.

I guess my main concern was that the numbers were not drastically better than our six-year-old EqualLogic. Although, now that I look again, they are substantially better. These are the numbers from it:

SERVER TYPE: Windows 7 64bit, 1 vCPU, 4GB RAM

CPU TYPE / NUMBER: quad-core AMD opteron 2389

HOST TYPE: HP DL385 G5p VMware ESXi 5.1u1a 1065491

STORAGE TYPE / DISK NUMBER / RAID LEVEL:

Equallogic PS5000 14x7200 SATA Raid 5 w/1 spare

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     22.83     2582       80         3%
RealLife-60%Rand-65%Read    27.40     1693       13         2%
Max Throughput-50%Read      20.94     2793       87         4%
Random-8k-70%Read           26.12     1756       13         2%

Netapp:

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     16.23     3149       98         3%
RealLife-60%Rand-65%Read    12.68     3827       29         3%
Max Throughput-50%Read      17.61     2616       81         2%
Random-8k-70%Read           13.27     3728       29         3%

What is throwing me off, though, is the first test I ran on the NetApp with Iometer (I ran 3 in total against each SAN); I ended up with this result:


Notice the Max 100% read latency and avg MBps

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     2.65      11439      357        8%
RealLife-60%Rand-65%Read    10.27     4577       35         3%
Max Throughput-50%Read      20.66     2394       74         2%
Random-8k-70%Read           13.36     3725       29         3%

mikeyb79
Enthusiast

Yes, that's odd, but I normally run the benchmarks a few times over for consistency, so I would disregard anomalous results as long as the later iterations are reasonably consistent.

I think the real takeaway from your results is the huge improvement in the RealLife and Random tests: greater than 2x gains in both IOPS and MBps, and about half the latency. That should make for a much improved user experience.
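Worked out from the two tables, the gains look like this (numbers copied from the RealLife rows above):

```python
# Before/after comparison for the RealLife-60%Rand-65%Read test.
eql  = {"iops": 1693.0, "latency_ms": 27.40}   # EqualLogic PS5000
ntap = {"iops": 3827.0, "latency_ms": 12.68}   # NetApp FAS2240-2

iops_gain   = ntap["iops"] / eql["iops"]              # ~2.26x more IOPS
latency_cut = ntap["latency_ms"] / eql["latency_ms"]  # ~0.46x the latency

print(f"{iops_gain:.2f}x IOPS, {latency_cut:.2f}x latency")
```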

francescoghini
Enthusiast

Does anyone have an opinion on my test results?

SERVER TYPE: Windows 2008 R2 64bit, 1 vCPU, 4GB RAM

CPU TYPE / NUMBER: Intel Xeon E5530 Processor

HOST TYPE: PowerEdge R710 / RAM 147443,0 MB - Qlogic QLE2460

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Supermicro server / 16 HDDs in RAID10 / 2 hot spares / 7,200 rpm

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     5.22      10785      337        46%
RealLife-60%Rand-65%Read    37.14     1595       12         16%
Max Throughput-50%Read      3.35      15232      476        60%
Random-8k-70%Read           42.28     1349       10         18%

Thanks

mikeyb79
Enthusiast

If that's 14 drives in RAID10 plus two hot spares, then I'd say the results are pretty predictable/normal for 7,200 rpm drives.

What software is being used to serve up the storage?

Sent from my iPhone

francescoghini
Enthusiast

As software I'm using Open-E. The disks are Seagate 2TB SAS ST2000NM0001 Enterprise Capacity 3.5" HDDs, and the controller is a Dell PERC H700 with 512MB of RAM.

francescoghini
Enthusiast

Hello, I have made some tweaks to the SAN OS.

SERVER TYPE: Windows 2008 R2 64bit, 1 vCPU, 4GB RAM

CPU TYPE / NUMBER: Intel Xeon E5530 Processor

HOST TYPE: PowerEdge R710 / RAM 147443,0 MB - Qlogic QLE2460

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Supermicro server / 16 HDDs in RAID10 / 2 hot spares / 7,200 rpm

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     5.31      10683      333        37%
RealLife-60%Rand-65%Read    2.31      18767      146        68%
Max Throughput-50%Read      3.45      16613      519        64%
Random-8k-70%Read           2.45      14163      110        74%

Does anyone have an opinion on my new test results?

Thanks

fbonez
Expert

You need to decrease the amount of RAM allocated to the VM.

4 GB is too high: it is the same size as the test file used by Iometer, so the risk is that all the IO operations are served from the guest's RAM cache instead of the storage.

Try with 1.5 GB.
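To quantify this: the standard Iometer config used in this thread creates a test file of 8,000,000 sectors (the "Maximum Size" value visible in the full results posted further down), and at 512 bytes per sector that is just under 4 GiB, so a 4 GB guest can cache essentially the whole file. A quick sketch (sector size and sector count are assumptions from that config):

```python
# Size of the Iometer test file implied by the standard config.
# Assumption: 512-byte sectors, "Maximum Size" of 8,000,000 sectors.
SECTOR_BYTES = 512
MAX_SECTORS = 8_000_000

test_file_gib = MAX_SECTORS * SECTOR_BYTES / 2**30
print(round(test_file_gib, 2))  # 3.81 GiB -- fits in a 4 GB guest

# With only 1.5 GB of guest RAM the OS cannot hold the file in cache,
# so reads are forced back to the actual storage.
```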

Francesco

-- If you find this information useful, please award points for "correct" or "helpful". | @fbonez | www.thevirtualway.it
francescoghini
Enthusiast

Hello, I have done some tests with several SAN OSes; after a few days, here are the results. In all 3 tests I used the same storage and the same test machine. I would like to have an opinion. Thanks

STORAGE-RESULTS.png

pinkerton
Enthusiast

Dear Sir or Madam,

I will be unavailable until 14.11.2013. During this time, please contact support@mdm.de.

Kind regards,

Michael Groß

Joris85
Enthusiast

Hello,

I did the test with this setup:

Dell Equallogic PS 4100 (2x ISCSI 1Gb ports)

24x 600GB 10K

RAID10 + 2 Spare

1 Volume 2TB

Stacked Cisco 3750X with Dell recommended scripts (MTU 9000...)

VMware ESX 5.5

Dell PowerEdge R720 with 4x ISCSI 1Gb Ports

Dell MEM v1.2 PlugIn (DELL_EQL Path)

All done per the Dell EqualLogic Compatibility Matrix and the Dell recommendations.

I installed one Windows Server 2008 R2 VM with 2 vCPUs and 2GB RAM. No other VMs.

These are the results. Are these normal/good figures? It is hard for me to evaluate whether this is what should be expected.

2GB RAM, 2vCPU
Access Specification Name | Idle | Max Throughput-100%Read | RealLife-60%Rand-65%Read | Max Throughput-50%Read | Random-8k-70%Read
# Managers | | | | |
# Workers | 0 | 1 | 1 | 1 | 1
# Disks | 0 | 1 | 1 | 1 | 1
IOps | 0 | 2561,809353 | 4070,736853 | 3391,763911 | 4128,44941
Read IOps | 0 | 2561,809353 | 2644,604667 | 1695,436977 | 2892,239794
Write IOps | 0 | 0 | 1426,132186 | 1696,326935 | 1236,209616
MBps (Binary) | 0 | 80,056542 | 31,802632 | 105,992622 | 32,253511
Read MBps (Binary) | 0 | 80,056542 | 20,660974 | 52,982406 | 22,595623
Write MBps (Binary) | 0 | 0 | 11,141658 | 53,010217 | 9,657888
MBps (Decimal) | 0 | 83,945369 | 33,347476 | 111,14132 | 33,820258
Read MBps (Decimal) | 0 | 83,945369 | 21,664601 | 55,556079 | 23,693228
Write MBps (Decimal) | 0 | 0 | 11,682875 | 55,585241 | 10,127029
Transactions per Second | 0 | 2561,809353 | 4070,736853 | 3391,763911 | 4128,44941
Connections per Second | 0 | 5,123485 | 8,139727 | 6,783015 | 8,256219
Average Response Time | 0 | 17,11121 | 13,739377 | 13,792968 | 13,422029
Average Read Response Time | 0 | 17,11121 | 14,234527 | 13,670445 | 13,710765
Average Write Response Time | 0 | 0 | 12,821176 | 13,915426 | 12,746501
Average Transaction Time | 0 | 17,11121 | 13,739377 | 13,792968 | 13,422029
Average Connection Time | 0 | 195,169291 | 122,823364 | 147,41385 | 121,105571
Maximum Response Time | 0 | 197,719612 | 395,593434 | 162,881386 | 378,444327
Maximum Read Response Time | 0 | 197,719612 | 395,593434 | 162,881386 | 378,444327
Maximum Write Response Time | 0 | 0 | 151,936908 | 115,824916 | 138,24697
Maximum Transaction Time | 0 | 197,719612 | 395,593434 | 162,881386 | 378,444327
Maximum Connection Time | 0 | 261,058808 | 836,18337 | 229,627369 | 858,949322
Errors | 0 | 0 | 0 | 0 | 0
Read Errors | 0 | 0 | 0 | 0 | 0
Write Errors | 0 | 0 | 0 | 0 | 0
Bytes Read | 0 | 25182863360 | 6499598336 | 16667607040 | 7108354048
Bytes Written | 0 | 0 | 3504979968 | 16676356096 | 3038273536
Read I/Os | 0 | 768520 | 793408 | 508655 | 867719
Write I/Os | 0 | 0 | 427854 | 508922 | 370883
Connections | 0 | 1537 | 2442 | 2035 | 2477
Transactions per Connection | -1 | 500 | 500 | 500 | 500
Total Raw Read Response Time | 0 | 1,88288E+11 | 1,61706E+11 | 99562042107 | 1,70345E+11
Total Raw Write Response Time | 0 | 0 | 78543683585 | 1,01399E+11 | 67688631231
Total Raw Transaction Time | 0 | 1,88288E+11 | 2,4025E+11 | 2,00961E+11 | 2,38033E+11
Total Raw Connection Time | 0 | 4295098914 | 4294518380 | 4295270512 | 4295146146
Maximum Raw Read Response Time | 0 | 2830985 | 5664178 | 2332165 | 5418634
Maximum Raw Write Response Time | 0 | 0 | 2175460 | 1658402 | 1979445
Maximum Raw Transaction Time | 0 | 2830985 | 5664178 | 2332165 | 5418634
Maximum Raw Connection Time | 0 | 3737887 | 11972624 | 3287846 | 12298591
Total Raw Run Time | 0 | 4295326535 | 4295598014 | 4295655898 | 4295686982
Starting Sector | 1,84467E+19 | 0 | 0 | 0 | 0
Maximum Size | 1,84467E+19 | 8000000 | 8000000 | 8000000 | 8000000
Queue Depth | -1 | 64 | 64 | 64 | 64
% CPU Utilization | 0,999542 | 29,201032 | 17,352905 | 27,637179 | 18,222367
% User Time | 0,58006 | 11,778161 | 5,236457 | 10,571545 | 5,519789
% Privileged Time | 0,429193 | 17,425438 | 12,118731 | 17,066311 | 12,703574
% DPC Time | 0,046821 | 1,422219 | 1,331214 | 1,760191 | 1,476797
% Interrupt Time | 0,04422 | 0,780011 | 0,691607 | 0,792996 | 0,857998
Processor Speed | 14318180 | 14318180 | 14318180 | 14318180 | 14318180
Interrupts per Second | 191,478093 | 1519,581044 | 1247,858847 | 1492,13929 | 1330,195429
CPU Effectiveness | 0 | 87,730098 | 234,585325 | 122,724679 | 226,559452
Packets/Second | 29,439698 | 29,736883 | 29,680131 | 29,252993 | 29,536417
Packet Errors | 0 | 0 | 0 | 0 | 0
Segments Retransmitted/Second | 0 | 0 | 0 | 0 | 0
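The headline Latency/IOPS/MBps figures quoted elsewhere in the thread can be recomputed from Iometer's raw counters; a sketch for the RealLife-60%Rand-65%Read column (raw times are in timer ticks at the reported "Processor Speed"; values copied from the results above):

```python
# Recompute summary metrics from Iometer raw counters
# (RealLife-60%Rand-65%Read column of the results above).
PROC_HZ       = 14318180       # "Processor Speed" = timer tick frequency
RUN_TICKS     = 4295598014     # "Total Raw Run Time"
READ_IOS      = 793408
WRITE_IOS     = 427854
BYTES_READ    = 6499598336
BYTES_WRITTEN = 3504979968

run_s = RUN_TICKS / PROC_HZ                           # ~300 s run
iops  = (READ_IOS + WRITE_IOS) / run_s                # ~4071 IOPS
mibps = (BYTES_READ + BYTES_WRITTEN) / run_s / 2**20  # ~31.8 MiB/s

print(round(run_s), round(iops), round(mibps, 1))
```

The derived IOPS and MiB/s match the "IOps" and "MBps (Binary)" rows, which is a good check that the table was transcribed correctly.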
pinkerton
Enthusiast

Dear Sir or Madam,

I will be unavailable until 22.11.2013. During this time, please contact support@mdm.de.

Kind regards,

Michael Groß

_VR_
Contributor

SERVER TYPE: Windows 2008 R2, 16 vCPU, 4GB RAM, 40GB hard disk

CPU TYPE / NUMBER: Intel E5-2690, 2 CPU

HOST TYPE: HP DL380p Gen8, 256GB RAM, 2x 10Gb Broadcom BCM57810

ISCSI LAN: Cisco Nexus 3064T (MTU 9000)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EqualLogic PS6110XS Firmware 6.0.6 (1x10GB/s ISCSI port) - 7x 400GB SSDs + 17x 600GB 10K HDDs  - RAID6 (accelerated), 1 spare - NO MEM

Test name                   Latency   Avg iops   Avg MBps   cpu load
Max Throughput-100%Read     2.10      26365      824        7%
RealLife-60%Rand-65%Read    5.81      9749       76         4%
Max Throughput-50%Read      5.16      11275      352        5%
Random-8k-70%Read           5.60      10175      79         4%

The Max Throughput-50%Read numbers seem a bit low; otherwise the results look decent.
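As a rough yardstick for the sequential numbers (back-of-the-envelope only; real iSCSI carries TCP/IP and iSCSI header overhead, and these figures are my own arithmetic, not vendor data):

```python
# Raw ceiling of a single 10 Gb/s iSCSI port, ignoring protocol overhead.
line_rate_mib_s = 10e9 / 8 / 2**20
print(round(line_rate_mib_s))           # ~1192 MiB/s

# The 824 MBps observed on 100% read is roughly 70% of raw line rate.
print(round(824 / line_rate_mib_s, 2))
```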

RomanB1005
Enthusiast

Hello,

I would like to ask: given these test results, can I run some domain controllers (file server), a mail server, and maybe an SNMP and a VPN server for around 50 users?

Is that IOPS performance enough?

RESULTS-ADVANCE+TEST.jpg

Thanks in advance

SDYBELGIUM
Contributor

Hi guys,

We have 4 HP DL380p Gen8 servers, 2 in each DC, connected through 10Gb fibre between the DCs.

Then we have 8 HP StoreVirtual 4330 iSCSI nodes, 4 in each DC, also connected with 10Gb fibre between the DCs.

The core switches are HP ProCurve 3800s.

We are running ESXi 5.1.

We ran a test with Iometer on a new VM because we have latency problems.

Can anyone help verify whether the results below are good or not? We compared them to other installs and ours seemed slow.

Does anyone have a P4000?

storageiometer.png

Kind Regards,
