      • 510. Re: New  !! Open unofficial storage performance thread
        cmekki Lurker

        Hi,

         

        The test was done on a single ESXi 5.1u1 host with 2 physical 10GB NICs, installed in a Dell M1000 chassis with two 10GB Dell PowerConnect M8024 switches. The PowerConnect M8024 switches are connected to two Dell Force10 10GB switches, which in turn connect to the EqualLogic PS6010 storage array. This is an isolated storage network.
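
        To double-check that the Dell MEM module is actually handling path selection (and the volumes are not falling back to the default round robin), something like this from the ESXi shell should show the PSP per device - from memory, the MEM policy reports as DELL_PSP_EQL_ROUTED:

        # List each device's Path Selection Policy and working paths; with MEM installed the
        # EqualLogic volumes should show DELL_PSP_EQL_ROUTED rather than VMW_PSP_RR
        esxcli storage nmp device list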

         

        SERVER TYPE: Dell M710

        CPU TYPE / NUMBER: Intel Xeon X5680 Processor (3.33Ghz, 6C, 12M Cache, 6.40GT/s QPI, 130W TDP, Turbo, HT) / 1

        HOST TYPE: ESXi 5.1u1 (Dell MEM Installed) - VMware I/O Analyzer 1.5.1 (Disk2=8GB)

        STORAGE TYPE / DISK NUMBER / RAID LEVEL / CONNECTIVITY: Dell Equallogic PS6010 (FW 6.0.2) / 16 SAS 600GB 15KRpm / RAID 10 / 2 iSCSI path

         

        TEST NAME                              LATENCY/rd (ms)   IOPS/cmds   MBPS/reads   VM CPU LOAD
        Max Throughput 512k 100% Read          20.11             1583.53     782.46       ~58
        SQL Server 64k 100% Rand - 66% Read    10.05             2224.87     90.92        ~23
        OLTP 8K 70% Read - 100% Rand           5.72              3685.85     20.6         ~26
        MAX IOPS 0.5k 100% Read                0.75              36949.00    18.15        ~100

         

        I had a hard time understanding the numbers (see screenshot below). Are they good, or do we need to look this over again?

         

        [screenshot: io.PNG]

        • 511. Re: New  !! Open unofficial storage performance thread
          _VR_ Novice

          Lab setup to evaluate what the Intel S3700 SSDs are capable of. Throughput was bottle-necked by the P410 controller. Tests performed in steady state.

           

          2x 800GB Intel S3700 SSDs / Raid 0 / 2x X5560 @ 2.80 Ghz / HP P410 Controller w/ 512MB Cache

           

           Test                          Latency (ms)   Avg IOPS   Avg MBps   CPU Load
           Max Throughput-100% Read      4.48           13667      427        4.04%
           RealLife-60%Rand-65% Read     3.21           17614      138        4.95%
           Max Throughput-50% Read       1.33           41920      1310       6.33%
           Random-8k-70% Read            3.34           17102      134        10.56%

           

          Retested without the bottleneck

           

          2x 800GB Intel S3700 SSDs / Raid 0 / 2x E5-2690 @ 2.90 Ghz / HP P420 Controller w/ 2GB Cache / 76GB iobw.tst

           

           Test                          Latency (ms)   Avg IOPS   Avg MBps   CPU Load
           Max Throughput-100% Read      0.10           112799     3525       6.39%
           RealLife-60%Rand-65% Read     0.96           53584      418        5.71%
           Max Throughput-50% Read       0.33           118880     3715       7.18%
           Random-8k-70% Read            1.02           50190      392        4.88%
          • 512. Re: New  !! Open unofficial storage performance thread
            MhaynesVCI Lurker

            Hey guys, I know this thread isn't the most active place on the internets but I'm hoping maybe a storage networking guru can help me out. I'm troubleshooting poor performance on our iSCSI SAN and seeing some interesting IOmeter results:


            Array: FreeNAS 27 SATA disk array 2x 1Gb links

             Access Specification Name     IOps      MBps     Latency (ms)
             Max Throughput-100%Read       3563.99   111.37   16.80
             RealLife-60%Rand-65%Read      984.89    7.69     60.35
             Max Throughput-50%Read        5800.14   181.25   10.11
             Random-8k-70%Read             1692.67   13.22    35.09

             

             Now, looking past these fairly mediocre results, one thing that's come up consistently is that the 100% random 8K 70% Read test is consistently faster than the RealLife test, sometimes up to 2x as "fast" and with considerably less latency (although both are bad, I know). In my mind, 100% randomness should result in slower performance, so I'm wondering if this is a symptom of some sort of misconfiguration on our networking gear. I've isolated it to our iSCSI "core", which is a stack of four PowerConnect 8024s. If I isolate this array on a single switch in the stack and run the tests directly from my laptop, these are the results (I think I'm hitting some bottlenecks related to the laptop NIC):

             

             Access Specification Name     IOps      MBps    Latency (ms)
             Max Throughput-100%Read       1798.61   56.21   25.48
             RealLife-60%Rand-65%Read      5447.46   42.56   6.20
             Max Throughput-50%Read        1757.25   54.91   21.81
             Random-8k-70%Read             5245.12   40.98   6.10

             

             Anyway, I've reviewed the 8024s' config and they're set to Dell's recommended best practices (jumbo frames, flow control, unicast storm control disabled). Nothing looks obviously wrong in the stack: CPU/memory utilization is fine, no stack-port errors, etc.
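
             One more thing I can try is verifying that jumbo frames actually make it end-to-end from a host vmkernel port to the array, with something like this from the ESXi shell (the target IP is just an example):

             # 8972-byte payload + 28 bytes of IP/ICMP header = a full 9000-byte frame;
             # -d sets don't-fragment, so the ping fails if any hop drops jumbo frames
             vmkping -d -s 8972 10.0.0.10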

             

            Someone want to throw an idea my way? I'm out of ideas. Thanks!

            • 513. Re: New  !! Open unofficial storage performance thread
              fbonez Expert
              vExpert

               I am out of the office.

               For urgent matters, please contact technical support on 045 8738738 or send an email to supporto.tecnico@rtc-spa.it.

               

              Francesco Bonetti

              RTC SpA

              --
              If you find this information useful, please award points for "correct" or "helpful". | @fbonez | www.thevirtualway.it
              • 514. Re: New  !! Open unofficial storage performance thread
                mikeyb79 Novice

                If removing one of the switches from the configuration results in that significant a performance increase, then I suspect there may be something up with the stacking. How are the switches stacked? Can you take a look at the status on the ports for collisions or dropped packets? It may be worth going back through an EqualLogic on PowerConnect configuration guide and checking the configuration details, as they may be highly relevant to your situation. Also, what does the path selection policy look like in VMware?
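
                If it helps, something like the following from the ESXi shell should show the policy per device and the state of each path (from memory, so adjust as needed):

                # Which Path Selection Policy each device is using (e.g. VMW_PSP_RR for Round Robin)
                esxcli storage nmp device list
                # State of every individual iSCSI path (active/standby/dead) behind those devices
                esxcli storage core path list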

                • 515. Re: New  !! Open unofficial storage performance thread
                  MhaynesVCI Lurker

                  Thanks for the reply!

                   

                   I didn't actually remove the switch from the stack for my 2nd test - I carved out a new interface & iSCSI target on the array and connected it via a single link to one of the member switches. Then I plugged my laptop into another port on the same switch and ran IOmeter from there. The switches are stacked using their 10Gb interfaces via DAC cables.

                   Path selection is round robin, with iSCSI-bound vmkernel ports and so on. I also ran a test against this same array from a host that, again, lives on the same switch as the array. The results were similar (better, actually, since my ESXi guests are using multipathing). It seems to be only traffic that traverses the core stack that gets mangled, yet the stack ports themselves show no errors or drops.

                   

                   I wonder if maybe we've hit a firmware bug of some sort. We're running a fairly old firmware revision - in fact, the first one that supported Ethernet port stacking, 4.2.2.3. The problem is that the upgrade process would mean site-wide downtime, so it'd be nice to know it's not a config issue of some sort.

                  • 516. Re: New  !! Open unofficial storage performance thread
                    mikeyb79 Novice

                    Compellent SC8000 - The SMALLEST config around!

                     

                     It's been a good week so far. We decided to buy some new, dedicated storage for VMware Replication, replicating our production environment to our DR location. We spoke with a large number of vendors to find something we were really happy with, and Dell brought Compellent to the table, as there was a promo on for 2x SC8000 controllers and 1x SC200 disk enclosure with 12x 600GB 15k drives at extremely reasonable pricing. We loved how it optimized data placement and how the controllers were isolated from the drives themselves, and my boss was quite fond of the fact that adding drives in small increments and re-striping entire tiers is quite easy. Another local company we've dealt with before has 400TB on Compellent and gave it a glowing review. So we went ahead and ordered the bundle, along with an additional shelf with 8x 4TB 7k drives for "cold" data. This was mostly to get a feel for the product and see whether it would be a good fit on the production side, where we would require much more performance.

                     

                    I basically followed the guide and got it up and running no problem. Our config is 1GbE iSCSI, with a single dual-port HBA per controller for the time being. Our controllers are also the 16GB variety, but Dell spiffed us a pair of 64GB cache upgrade kits. Still waiting for those to come in. That means this is basically the lowest performance you could expect out of a Compellent Storage Center. The storage profile for this test was the default profile, and the data was spread across 11 drives (the twelfth drive is a hot spare). It was connected through an HP ProCurve 5412XL switch with two dedicated VLANs for iSCSI, flow control enabled but no jumbo frames.

                     

                    Needless to say, I am quite impressed with how many IOPS it can squeeze from 11 measly spindles (389 IOPS/spindle at best), and how consistent the results are. It really wants to deliver 4000 IOPS no matter what kind of workload you throw at it - good enough for me.

                     

                    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                    TABLE OF RESULTS

                    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                     

                    SERVER TYPE: Windows 2008 R2, 1 vCPU, 4GB RAM, 40GB hard disk

                    CPU TYPE / NUMBER: Intel E5-2660, single vCPU

                    HOST TYPE: Dell PowerEdge R720, 256GB RAM; 2x E5-2660, 2.2 GHz

                    STORAGE TYPE / DISK NUMBER / RAID LEVEL: Compellent SC8000, 11 data disks in Tier 1 (RAID10), 600GB 15k

                     

                     ##################################################################################
                     TEST NAME                      Resp. Time ms    Avg IO/sec    MB/sec
                     ##################################################################################
                     Max Throughput-100%Read        15.58            3844.90       120.15
                     RealLife-60%Rand-65%Read       11.34            3993.28       31.20
                     Max Throughput-50%Read         12.73            3784.41       118.26
                     Random-8k-70%Read              10.72            4283.04       33.50

                    • 517. Re: New  !! Open unofficial storage performance thread
                      mikeyb79 Novice

                      Further testing continues today as I have time in between a number of other projects. One simple tweak:

                       

                      esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops 1 --device naa.xxxxxx

                       

                       where the device is the 500GB Compellent volume, has resulted in a reasonably dramatic performance improvement on the 100% Read test, but nothing worth noting on the others. I'm going to keep playing with this to find the best-performing combination for this storage array: next I'll test an IOPS policy of 3, then possibly enable jumbo frames end-to-end and try a "bytes" policy of 8800.
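
                       To confirm the change took effect, the current Round Robin settings can be read back with something like this (same placeholder device ID):

                       # Show the Round Robin policy currently applied to the device (type and IOPS/bytes limit)
                       esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxx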

                       

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                      TABLE OF RESULTS

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                       

                      SERVER TYPE: Windows 2008 R2, 1 vCPU, 4GB RAM, 40GB hard disk

                      CPU TYPE / NUMBER: Intel E5-2660, single vCPU

                      HOST TYPE: Dell PowerEdge R720, 256GB RAM; 2x E5-2660, 2.2 GHz

                      STORAGE TYPE / DISK NUMBER / RAID LEVEL: Compellent SC8000, 11 data disks in Tier 1 (RAID10), 600GB 15k (IOPS policy set to 1)

                       

                       ##################################################################################
                       TEST NAME                      Resp. Time ms    Avg IO/sec    MB/sec
                       ##################################################################################
                       Max Throughput-100%Read        10.07            5944.75       185.77
                       RealLife-60%Rand-65%Read       11.38            3944.52       30.82
                       Max Throughput-50%Read         13.01            2893.33       90.42
                       Random-8k-70%Read              10.61            4327.21       33.81

                      • 518. Re: New  !! Open unofficial storage performance thread
                        JonT Hot Shot

                         If the storage is iSCSI, the recommendation I have seen is not to use IOPS as the path-switching control, but to use jumbo frames if possible and set the BYTES limit to the maximum jumbo-frame payload size minus the header overhead. I am not sure of the exact BYTES value, but I am sure it is somewhere else in this very long thread.

                        • 519. Re: New  !! Open unofficial storage performance thread
                          mikeyb79 Novice

                          Yes, it should be set to 8800 bytes. I have it set now and the benchmark is cooking away.
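
                           For anyone following along, the command should be roughly the same as the IOPS variant earlier in the thread, just with the bytes type (placeholder device ID again):

                           # Switch the Round Robin path-change trigger from an IOPS count to a byte count
                           esxcli storage nmp psp roundrobin deviceconfig set --type=bytes --bytes=8800 --device=naa.xxxxxx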

                          • 520. Re: New  !! Open unofficial storage performance thread
                            mikeyb79 Novice

                             Here are the results with the bytes=8800 policy set; no significant difference.

                             

                            ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                            TABLE OF RESULTS

                            ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                             

                            SERVER TYPE: Windows 2008 R2, 1 vCPU, 4GB RAM, 40GB hard disk

                            CPU TYPE / NUMBER: Intel E5-2660, single vCPU

                            HOST TYPE: Dell PowerEdge R720, 256GB RAM; 2x E5-2660, 2.2 GHz

                            STORAGE TYPE / DISK NUMBER / RAID LEVEL: Compellent SC8000, 11 data disks in Tier 1 (RAID10), 600GB 15k (bytes policy set to 8800)

                             

                             ##################################################################################
                             TEST NAME                      Resp. Time ms    Avg IO/sec    MB/sec
                             ##################################################################################
                             Max Throughput-100%Read        10.93            5550.65       173.46
                             RealLife-60%Rand-65%Read       11.25            4067.83       31.78
                             Max Throughput-50%Read         12.29            2927.64       91.49
                             Random-8k-70%Read              10.66            4347.17       33.96

                            • 521. Re: New  !! Open unofficial storage performance thread
                              dam09fr Lurker

                               My little contribution:


                              SERVER TYPE: Windows 2008 R2, 2 vCPU, 4GB RAM, 60GB hard disk

                              CPU TYPE / NUMBER: Intel X6550, 2 CPU

                              HOST TYPE: Dell PowerEdge R710, 64GB RAM, 2x E5-2660 (2.66 GHz), 4x1GB/s ISCSI ports

                              ISCSI LAN: 2x PowerConnect 6224 (MTU 9000)

                               

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: EqualLogic PS6100X Firmware 6.0.5 (4x1GB/s ISCSI ports) - 22 SAS 10K 600GB - RAID10 + 2 spares - NO MEM

                               Test name                      Latency (ms)   Avg IOPS   Avg MBps   CPU load
                               Max Throughput-100%Read        4.55           11910      372        1%
                               RealLife-60%Rand-65%Read       9.31           4938       38         4%
                               Max Throughput-50%Read         5.84           9430       294        0%
                               Random-8k-70%Read              9.11           5036       39         4%

                               

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 6 SATA 10K 500GB (WD Velociraptor) - RAID5

                               Test name                      Latency (ms)   Avg IOPS   Avg MBps   CPU load
                               Max Throughput-100%Read        9.83           5994       187        1%
                               RealLife-60%Rand-65%Read       54.91          911        7          0%
                               Max Throughput-50%Read         12.91          4377       136        1%
                               Random-8k-70%Read              63.12          783        6          0%

                               

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 2 SSD Crucial M4 256GB - RAID1 (block LUN)

                               Test name                      Latency (ms)   Avg IOPS   Avg MBps   CPU load
                               Max Throughput-100%Read        9.43           6246       195        0%
                               RealLife-60%Rand-65%Read       17.53          3255       25         1%
                               Max Throughput-50%Read         10.97          5100       159        0%
                               Random-8k-70%Read              20.43          2760       21         0%

                               

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: Synology DS1813+ DSM 4.3 (3x1GB/s ISCSI ports) - 2 SSD Crucial M4 256GB - RAID0 (block LUN)

                               Test name                      Latency (ms)   Avg IOPS   Avg MBps   CPU load
                               Max Throughput-100%Read        9.71           6066       189        2%
                               RealLife-60%Rand-65%Read       10.88          5215       40         0%
                               Max Throughput-50%Read         9.52           6040       188        2%
                               Random-8k-70%Read              11.77          4794       37         0%

                               


                              • 522. Re: New  !! Open unofficial storage performance thread
                                mac1978 Enthusiast

                                Just migrated from an Equallogic PS4000 to a NetApp FAS2240.  These numbers seem low and the latency seems quite high.  Any thoughts on these numbers?

                                 

                                 ESXi 5.1 U1a. 4 physical NICs set up as 1:1 vmkernel ports, and all vmk ports are bound to the VMware software iSCSI initiator. Round robin is being used. MTU of 9000 is set on all.
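
                                 For reference, the port binding was done with commands along these lines (the adapter and vmkernel names here are just examples, not necessarily my exact ones):

                                 # Bind each iSCSI vmkernel port to the software iSCSI adapter, then list the bindings
                                 esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
                                 esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
                                 esxcli iscsi networkportal list --adapter=vmhba33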

                                 

                                 SERVER TYPE: Windows 7 64bit, 1 vCPU, 4GB RAM

                                CPU TYPE / NUMBER: quad-core AMD opteron 2389

                                HOST TYPE: HP DL385 G5p VMware ESXi 5.1u1a 1065491

                                STORAGE TYPE / DISK NUMBER / RAID LEVEL: Netapp FAS2240-2 12x900GB 10K SAS. 1 Spare - 2 Parity RAID DP (Raid 6)

                                 Test name                      Latency (ms)   Avg IOPS   Avg MBps   CPU load
                                 Max Throughput-100%Read        16.23          3149       98         3%
                                 RealLife-60%Rand-65%Read       12.68          3827       29         3%
                                 Max Throughput-50%Read         17.61          2616       81         2%
                                 Random-8k-70%Read              13.27          3728       29         3%
                                • 523. Re: New  !! Open unofficial storage performance thread
                                  pinkerton Enthusiast

                                   Dear Sir or Madam,

                                    

                                   I will not be reachable again until 04.11.2013. During this time, please contact support@mdm.de.

                                    

                                   Kind regards

                                   Michael Groß

                                  • 524. Re: New  !! Open unofficial storage performance thread
                                    mikeyb79 Novice

                                     It's not hopeless by any means, but it appears that you took the 24 drives and split them between the controllers. This will have an impact on your performance.

                                     

                                    I also have a FAS2240 (the -4 model) in my test lab with 24 1TB NL-SAS drives at 7k. You can see my read numbers are higher as I read from slightly more spindles. The more write-intensive benchmarks are higher on yours with slightly lower latency due to the faster drives but you would have been able to stretch them out more with a larger aggregate.

                                     

                                    My layout has:

                                    • Controller 1 with 3 drives RAID-DP for vol0 and 1 hot spare;
                                    • Controller 2 with 3 drives RAID-DP for vol0 and 1 hot spare;
                                    • 1 aggregate of 16 disks (with a RAID size of 16) owned by controller 1.

                                     

                                     This means that I essentially have an active/passive configuration; controller 2 serves no data. Controller 1 benefits from a larger data aggregate, and both controllers have a same-sized hot spare available, so no matter which controller owns the data aggregate, I always have a spare for rebuilds. Disks get added in groups of 16, either to the existing aggregate on controller 1 (depending on CPU utilization, cache hit %, and disk utilization), or you can start again on controller 2 if the first controller is heavily utilized.
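
                                     If you want to reproduce the layout, the data aggregate on controller 1 can be created with something like this (7-Mode syntax from memory, with an example aggregate name, so double-check against your ONTAP version):

                                     # One 16-disk RAID-DP aggregate with a RAID group size of 16, owned by controller 1;
                                     # the 3-disk root aggregates and the hot spares are left untouched
                                     aggr create aggr1_data -t raid_dp -r 16 16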

                                     

                                    NetApp FAS2240-4
                                     Access Specification Name     IOps       MBps (Binary)   Avg Response Time (ms)
                                     Max Throughput-100%Read       3,506.35   109.57          17.17
                                     RealLife-60%Rand-65%Read      2,862.38   22.36           17.27
                                     Max Throughput-50%Read        6,393.61   199.80          9.18
                                     Random-8k-70%Read             2,651.04   20.71           17.92

                                     

                                     Taking a look at your random and RealLife numbers on an 11-disk RAID-DP RAID set, you are getting, at worst, roughly 330 IOPS/spindle. Spread across 16 disks, that would probably be closer to 5,200 IOPS on those benchmarks.

                                     

                                     As for the latency, I would focus more on the numbers coming out of OnCommand System Manager for your actual workload to make sure they are reasonable, rather than on what you are seeing in a synthetic benchmark. You will find the caching algorithms in ONTAP are quite good, but most benchmarks try to take the cache out of the picture as much as possible.
