
    New  !! Open unofficial storage performance thread

    christianZ Virtuoso

       

      Hello everybody,

       

       

      The old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.

       

       

      Oliver will make a few links between the old and the new one and then he will close the old thread.

       

       

      Thanks for joining in.

      Reg

       

       

      Christian

       

       

        • 2. Re: New  !! Open unofficial storage performance thread
          Brian Knutsson Enthusiast

          Maybe it would be a good idea to create a new template for the results that does not take up so much space in the thread.

           

          And maybe a template for uploading the results to a file, for ease of downloading, importing and comparing results.

           
          Oh yeah... someone should take all the results from the old thread, transfer them into the new template and post them to this thread.
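          Something like the sketch below could do that collecting - just an illustration of the idea, not an agreed format: it assumes the old posts are saved to a plain-text file (the file names are placeholders) and that result lines keep the current "test name ... resp ... IOPS ... MB/s" shape.

# Minimal sketch (not an official template): scrape result lines in the
# thread's dotted "TEST NAME ... resp ... IOPS ... MB/s" style into a CSV
# that is easy to download, import and compare. File names and the exact
# line format are assumptions, not part of the thread's template.
import csv
import re

# e.g. "Max Throughput-100%Read......___26.85____......._2204.42__........._68.89___"
LINE = re.compile(
    r"^(?P<test>[A-Za-z0-9%\- ]+?)[._ ]+"
    r"(?P<resp_ms>\d+(?:\.\d+)?)[._ ]+"
    r"(?P<iops>\d+(?:\.\d+)?)[._ ]+"
    r"(?P<mbps>\d+(?:\.\d+)?)[._ ]*$"
)

def parse_posts(text):
    """Yield (test, response_ms, iops, mbps) tuples found in the saved posts."""
    for raw in text.splitlines():
        m = LINE.match(raw.strip())
        if m:
            yield (m["test"], float(m["resp_ms"]), float(m["iops"]), float(m["mbps"]))

if __name__ == "__main__":
    with open("old_thread_posts.txt") as src, open("results.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["test", "avg_resp_ms", "avg_iops", "avg_mbps"])
        writer.writerows(parse_posts(src.read()))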

          • 3. Re: New  !! Open unofficial storage performance thread
            oreeh Guru

            Oh yeah... someone should take all the results from the old thread, transfer them into the new template and post them to this thread.

             

            Go ahead...

            • 4. Re: New  !! Open unofficial storage performance thread
              Brian Knutsson Enthusiast

               I will leave it up to christianZ to make the new template first. Maybe if I run into unemployment, I will consider taking on the task.

              • 5. Re: New  !! Open unofficial storage performance thread
                meistermn Master

                 I would like to categorize the benchmarks:

                 Windows 2003 OS benchmarks in a VM
                 Single-threaded application = 1 outstanding IO
                 http://www.snia.org/education/tutorials/2007/spring/storage/Storage_Performance_Testing.pdf (page 20)
                 Multithreaded application = 25 outstanding IOs
                 http://www.snia.org/education/tutorials/2007/spring/storage/Storage_Performance_Testing.pdf (page 21)

                 Synthetic IOmeter benchmarks - hard disks
                 Category: SAN storage
                 Category: NFS storage
                 Category: iSCSI storage
                 Category: software-based storage (DataCore, FalconStor, LeftHand, Sanrad)

                 Synthetic IOmeter benchmarks - solid state disks (SSD)
                 SSD vendors: Intel, STEC, Samsung and so on

                 Synthetic IOmeter benchmarks - PCI Express NAND
                 Fusion-io card, 100,000 IOPS
                 Performance: http://www.fusionio.com/PDFs/Medusa%20report.pdf (pages 4-7)

                 Real file copy benchmark (xcopy) - a rough timing sketch follows after this list
                 Copy a large 10 GB file from partition C: to D: within a VM
                 Copy a large 10 GB file between two VMs (VM1 to VM2) on the same ESX host
                 Copy a large 10 GB file between two VMs (VM1 to VM2) on different ESX hosts (ESX1 and ESX2)
                 Create many small random files and run the same tests as for the large files

                 Cold migration benchmark
                 Cold-migrate 4 VMs from LUN1 to LUN2 at the same time

                 Database benchmark
                 MS DB Hammer tool
                 IOmeter benchmark with specific DB IOmeter parameters
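                 For the file-copy category, a rough timing sketch (Python instead of xcopy, purely as an illustration; the paths and the pre-created 10 GB file are placeholders):

# Rough sketch of the "real file copy" benchmark idea above. Run it
# inside the VM; the source file is assumed to exist already.
import os
import shutil
import time

SRC = r"C:\testdata\large_10gb.bin"   # placeholder: pre-created ~10 GB file
DST = r"D:\testdata\large_10gb.bin"   # placeholder: target on the other disk

def timed_copy(src: str, dst: str) -> None:
    """Copy src to dst and report the achieved throughput in MB/s."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    print(f"{size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")

if __name__ == "__main__":
    timed_copy(SRC, DST)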

                 

                • 6. Re: New  !! Open unofficial storage performance thread
                  ekos Novice

                   

                  Hi guys,

                   

                   

                   I did some testing on our ESX hosts and I'm getting the feeling that there's room for improvement.

                   I'm finding it hard, though, to compare our test to other tests posted earlier, because there's always something different in each configuration.

                   Does anyone have an opinion on our test results?

                   

                   

                  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                   TABLE OF RESULTS

                  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                   

                   

                  SERVER TYPE: VM

                  CPU TYPE / NUMBER: VCPU / 1

                  HOST TYPE: HP DL385, 16GB RAM; 2x AMD Opteron 285 (2.6 GHz), Dualcore, QLA4050C

                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp 3140 / 41 Disks x 274 GB / Double Parity

                   

                   

                  ##################################################################################

                   TEST NAME--------------------Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec

                  ##################################################################################

                   

                   

                  Max Throughput-100%Read......___26.85____......._2204.42__........._68.89___

                   

                   

                  RealLife-60%Rand-65%Read..___21.82____.......__504.14__.........__3.94___

                   

                   

                  Max Throughput-50%Read........___14.58____.......__577.82__........._18.06___

                   

                   

                  Random-8k-70%Read...............___37.06____.......__489.40__.........__3.82___

                   

                   

                  EXCEPTIONS: CPU Util. 32% - 15% - 18% - 15%;

                   

                   

                  ##################################################################################
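                   (A quick cross-check of the table above: Av. MB/s should equal Av. IOPS times the block size. The block sizes are my assumption based on the Iometer config usually used in this thread - 32 KB for the Max Throughput patterns, 8 KB for the RealLife/Random ones.)

# Sanity check of the numbers above: MB/s = IOPS x block size.
# Block sizes assumed: 32 KB for the two Max Throughput runs,
# 8 KB for the RealLife and Random-8k runs.
KB = 1024
MB = 1024 * KB

results = {  # test name: (avg IOPS, block size in bytes)
    "Max Throughput-100%Read": (2204.42, 32 * KB),
    "RealLife-60%Rand-65%Read": (504.14, 8 * KB),
    "Max Throughput-50%Read":   (577.82, 32 * KB),
    "Random-8k-70%Read":        (489.40, 8 * KB),
}

for name, (iops, block) in results.items():
    print(f"{name}: {iops * block / MB:.2f} MB/s")
# Matches the reported 68.89 / 3.94 / 18.06 / 3.82 MB/s, so the IOPS
# and MB/s columns are at least internally consistent.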

                   

                   

                  • 7. Re: New  !! Open unofficial storage performance thread
                    Brian Knutsson Enthusiast

                     

                    That is indeed very poor performance.

                     

                     

                     What does your NetApp web interface tell you about the load on the NetApp boxes? Are you sure nothing else is running?

                     

                     

                     I don't know if it is possible on fibre, but could it be a link negotiation problem?

                     

                     

                    • 8. Re: New  !! Open unofficial storage performance thread
                      Jakobwill Hot Shot

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                       

                      SERVER TYPE: VM - Win2k3 R2

                      CPU TYPE / NUMBER: CPU / 1

                      HOST TYPE: VM, 1GB RAM; 1x vCPU

                      STORAGE TYPE / DISK NUMBER / RAID LEVEL: VMDK/VMFS via FC to SANmelody mirror

                       

                       2x SANmelody 2.04 update 1 with a LUN each from the same array: HDS AMS2100 with 15x 10k SAS 400 GB disks, 2 GB cache.

                       

                       

                       SANmelody has 8 GB RAM - the LUN is spread across (long description made short) all 15 spindles.

                       

                       

                       

                      ##################################################################################

                       TEST NAME--------------------Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec

                      ##################################################################################

                       (Seen via the VI3 Client)

                      Max Throughput-100%Read........1.528815..........27032.........844.767__100%        Seen via the VI3 Client -> (108mb/sec r - 1mb/sec w)     (100% cpu)  

                       

                      RealLife-60%Rand-65%Read......17.870013..........2253.........17.602__62%           Seen via the VI3 Client -> ( 13mb/sec r - 7mb/sec w)     (50% cpu)   

                       

                      Max Throughput-50%Read..........3.970814..........12957.........404.909_ 67%            Seen via the VI3 Client -> (217mb/sec r - 217mb/sec w)     (82% cpu)

                       

                      Random-8k-70%Read.................15.559686..........2802.........21.897___57%           Seen via the VI3 Client -> ( 15mb/sec r - 7mb/sec w)     (57% cpu)

                       

                       EXCEPTIONS: CPU utilization is listed after the Av. MB/sec.

                       

                      ##################################################################################

                       I know the first test (100% read) is off because the vCPU hit 100%, so the timing is skewed.

                       
                       But the other results are pretty impressive - or what's your opinion?

                       
                       Forgot to mention... these tests were done while in production, so there were 30 VMs working against the same SANmelody servers (on different ESX servers, of course).

                       
                       RAID description, the long version:

                       

                       

                       2 RAID5 groups with 7+1 10k SAS 400 GB disks

                       

                       

                       In each RAID group we create 4x 640 GB logical disks - so in total 8 disks of 640 GB.

                       

                       

                       Take one 640 GB disk from each group and create a 1280 GB LUN, which is presented to the DataCore server - one for each server - and put it into a pool, from which I create a virtual volume presented as a VMFS datastore to ESX. On the VMFS I create a VMDK for the VM I'm testing on.

                       

                       

                       Sorry, it's a bit detailed.
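                       To make the layout easier to follow, here is my own back-of-the-envelope capacity check of the description above (raw numbers only, before any formatting overhead):

# Back-of-the-envelope check of the layout described above.
DISK_GB = 400
DATA_DISKS_PER_GROUP = 7            # 7+1 RAID5: one disk's worth of parity
LOGICAL_DISKS_PER_GROUP = 4
LOGICAL_DISK_GB = 640

usable_per_group = DATA_DISKS_PER_GROUP * DISK_GB               # 2800 GB
carved_per_group = LOGICAL_DISKS_PER_GROUP * LOGICAL_DISK_GB    # 2560 GB
lun_gb = 2 * LOGICAL_DISK_GB        # one 640 GB slice taken from each group

print(f"usable per RAID group:  {usable_per_group} GB")
print(f"carved into 4 x 640 GB: {carved_per_group} GB "
      f"(leaves {usable_per_group - carved_per_group} GB)")
print(f"LUN presented per node: {lun_gb} GB")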

                      • 9. Re: New  !! Open unofficial storage performance thread
                        christianZ Virtuoso

                         

                         Well, your numbers are not bad - but one wants to know how much cache/RAM your SANmelody servers have; you have 15 disks there, but how many spindles was your test LUN configured on?

                         
                        • 10. Re: New  !! Open unofficial storage performance thread
                          Jakobwill Hot Shot

                           

                          Description added.

                           

                           

                           In short, the test LUN is spread across every disk - almost like EVA storage systems do it.

                           
                          • 11. Re: New  !! Open unofficial storage performance thread
                            christianZ Virtuoso

                             

                             Well, to me it looks very good - but one shouldn't forget that you have ca. 8 GB of cache in your SANmelody servers and the test file is only 4 GB in size.

                             

                             

                             You could try to make the test file bigger, e.g. 20 GB, and then test again.
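                             (In case it helps: Iometer takes the test-file size as "Maximum Disk Size" in 512-byte sectors, so the desired size has to be converted. The sector counts below are just that arithmetic, not a prescribed config.)

# Convert a desired Iometer test-file size to a "Maximum Disk Size"
# value in 512-byte sectors.
SECTOR_BYTES = 512

def sectors_for_gib(gib: float) -> int:
    """Number of 512-byte sectors needed for a file of `gib` GiB."""
    return int(gib * 1024**3 // SECTOR_BYTES)

print(sectors_for_gib(4))    # 8388608  (~4 GiB, roughly the current file)
print(sectors_for_gib(20))   # 41943040 (~20 GiB)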

                             

                             

                            Anyway thanks for that.

                             

                             

                            Reg

                             

                             

                            Christian

                             

                             

                            • 12. Re: New  !! Open unofficial storage performance thread
                              iancampbell Novice

                               

                               The first test is on a 5-disk RAID 5 array and the second is on a 6-disk RAID 5 array. It's interesting to compare these results to jmacdaddy's MD3000i RAID 5 results (page 22 of the original unofficial thread), as the MD3000i is a Dell-badged DS3300. I'll be receiving the cache module upgrades sometime this week and will upload test results showing any difference they make when I get the time.

                               

                               

                              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                              TABLE OF RESULTS 1X VM WIN2003 R2 SP2 / ESX 3.5 ON IBM DS3300

                              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                               

                               

                              SERVER TYPE: VM.

                              CPU TYPE / NUMBER: VCPU / 1

                              HOST TYPE: HP DL380 G5, 10GB RAM, 2 x Intel E5440, 2.83GHz, QuadCore

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 (512MB CACHE/SP) / 5 SAS 15k/ R5

                              SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (Intel 82571EB NIC)

                               

                               

                              ##################################################################################

                               TEST NAME--------------------Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec

                              ##################################################################################

                               

                               

                               

                              Max Throughput-100%Read........___16.99_____.......___3486.8_____.....____108.9______

                               

                               

                               

                              RealLife-60%Rand-65%Read......_____48.89____.....____1062.7____.....____8.3______

                               

                               

                               

                              Max Throughput-50%Read.........____22.9____.....______2579.9______.....____80.6______

                               

                               

                               

                              Random-8k-70%Read..............____44.72_____.....____1204______.....____9.41______

                               

                               

                               

                               EXCEPTIONS: no jumbo frames, no flow control; Ethernet switch has a storage VLAN but is shared with the LAN.

                               
                              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                              TABLE OF RESULTS 1X VM WIN2003 R2 SP2 / ESX 3.5 ON IBM DS3300

                              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                               

                               

                              SERVER TYPE: VM.

                              CPU TYPE / NUMBER: VCPU / 1

                              HOST TYPE: HP DL380 G5, 10GB RAM, 2 x Intel E5440, 2.83GHz, QuadCore

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 (512MB CACHE/SP) / 6 SAS 15k/ R5

                              SAN TYPE / HBAs : Ethernet 1Gb; VMWare iSCSI software initiator (Intel 82571EB NIC)

                               

                               

                              ##################################################################################

                               TEST NAME--------------------Av. Resp. Time ms--Av. IOs/sec---Av. MB/sec

                              ##################################################################################

                               

                               

                               

                              Max Throughput-100%Read........___16.7_____.......___3552_____.....____111______

                               

                               

                               

                              RealLife-60%Rand-65%Read......_____40.6____.....____1293.2____.....____10.1______

                               

                               

                               

                              Max Throughput-50%Read.........____20.33____.....______2955.16______.....____92.3______

                               

                               

                               

                              Random-8k-70%Read..............____36.8_____.....____1449.2______.....____11.3______

                               

                               

                               

                               EXCEPTIONS: no jumbo frames, no flow control; Ethernet switch has a storage VLAN but is shared with the LAN.
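                               For what it's worth, a small comparison of the two runs above, computed directly from the posted Av. IOs/sec figures (the interpretation in the closing comment is just my reading):

# Quick comparison of the two runs above (5-disk vs 6-disk RAID5 on the
# DS3300), using the Av. IOs/sec figures exactly as posted.
five_disk = {
    "Max Throughput-100%Read": 3486.8,
    "RealLife-60%Rand-65%Read": 1062.7,
    "Max Throughput-50%Read": 2579.9,
    "Random-8k-70%Read": 1204.0,
}
six_disk = {
    "Max Throughput-100%Read": 3552.0,
    "RealLife-60%Rand-65%Read": 1293.2,
    "Max Throughput-50%Read": 2955.16,
    "Random-8k-70%Read": 1449.2,
}

for test, iops5 in five_disk.items():
    iops6 = six_disk[test]
    print(f"{test}: {iops5:.0f} -> {iops6:.0f} IOPS ({(iops6 / iops5 - 1) * 100:+.1f}%)")
# The RealLife and Random-8k patterns gain roughly 20%, about what you'd
# expect from adding a sixth spindle, while the 100% read run stays near
# the limit of the 1 Gbit iSCSI link.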

                               

                              • 13. Re: New  !! Open unofficial storage performance thread
                                christianZ Virtuoso

                                 

                                 @iancampbell

                                 

                                 

                                Thanks for that.

                                 

                                 

                                 Yeah, basically the MD3000i and DS3300 are the same OEM boxes (LSI Engenio), so the results are very similar.

                                 

                                 

                                 When I see the numbers from the MD3000i and DS3300 with SAS disks, I wonder about the Infortrend results on SATA disks (page 21) - very impressive IMHO.

                                 

                                 

                                 It's a pity that the administration and support are not on the same level.

                                 

                                 

                                • 14. Re: New  !! Open unofficial storage performance thread
                                  christianZ Virtuoso

                                   Last weekend I listened to the "Talk Shoe" recorded episode no. 40 (http://www.talkshoe.com/talkshoe/web/talkCast.jsp?masterId=19367).

                                   

                                   And I can't agree with the statement that benchmark tests (especially storage ones) don't matter.

                                   

                                   Well, until now I haven't seen any storage gear that benchmarked poorly (I'm speaking about sensible tests) and then worked fast in production, or vice versa.

                                   

                                   Of course one should use one's own judgment when analyzing benchmark results - remember that storage is crucial for your VI infrastructure and can be the most expensive component in it.

                                   

                                   Benchmarks are a kind of workload too - and if they are mixed, they can give some interesting viewpoints.

                                   

                                   It would be nice to know the max throughput of series "A" from vendor "X" when deciding which gear to choose - this way one can avoid buying insufficient boxes.

                                   

                                   It would be nice to know that one can get a specific throughput from vendor "X", or from vendor "Y" at half the price.

                                   

                                   As a customer, one can get better proposals by comparing competitors' products.

                                   

                                   For SMBs it would be interesting to know that the needed performance can also be bought from third-tier storage vendors at a lower price.

                                   

                                   But I agree (as I wrote in my first post in the original thread) that benchmark results shouldn't be the only deciding factor.

                                   The quality of service, reliability, vendor support, manageability/simplicity of use and configuration, vendor relationships, distribution, ... as well as common sense shouldn't be forgotten.

                                   

                                   Anyway, I have listened to all the episodes so far - good work, keep it going.

                                   

                                  Just my opinion, but maybe I'm not alone here.

                                   

                                  Reg

                                  Christian
