457 Replies - Latest reply on Mar 5, 2009 6:53 AM by oreeh
      • 120. Re: Open inofficial storage performance thread
        cperdereau Enthusiast

        Thank you guys

        I will post my results later on.

         

        I was asking because when I clone a VM it seems slow to me. I wanted to test the I/O from the Console for this purpose.

        • 121. Re: Open inofficial storage performance thread
          rock0n Expert
          VMware Employees

          I've installed a fresh Demo Lab.

           

          2 x 1U (1HE) certified S5000PAL/SR1550 TERRA servers with QLE2462 HBAs

           

          1 x 20 Port QLogic SanBox

           

          1 x F5402E Xyratex ( 6 x 74GB SAS RAID 10 & 6 x 250GB SATA RAID 10 )

           

          I'll present some IOMeter results tomorrow.

           

           

          kind regards

          Raiko Mesterheide

          • 122. Re: Open inofficial storage performance thread
            AnthonyM Enthusiast

            ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

            TABLE OF RESULTS

            ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

             

            SERVER TYPE: VM ON ESX 3.0.1

            CPU TYPE / NUMBER: VCPU / 1

            HOST TYPE: HP DL360G5, Intel Xeon Dual Core 5120 (4 cores @ 1.866GHz), 4GB RAM (512MB allocated to VM)

            STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS100e x 1 / 14+2 SATA / R50

            SAN TYPE / HBAs : Microsoft iSCSI initiator, no jumbo frames and no flow control

             

            ##################################################################################

            TEST NAME--


            Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

            ##################################################################################

             

            Max Throughput-100%Read........___19.07___..........___3,014___.........___94____

             

            RealLife-60%Rand-65%Read......___22__..........___2,030___.........____15.86__

             

            Max Throughput-50%Read..........____8.49____..........___3,978___.........___124.30____

             

            Random-8k-70%Read.................____23____..........___1,956___.........__15.28____

             

            This VM is connected to the network with a single 1Gb NIC, which at the time was shared with the network traffic of around 7 other VMs, one of which is an Exchange server serving ~125 staff.
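            For anyone sanity-checking rows in these tables: Av. MB/sek should follow directly from Av. IOs/sek and the access spec's transfer size. A minimal sketch, assuming the block sizes this thread's IOMeter config appears to use (32KB for the two throughput tests, 8KB for the random/real-life tests) - verify against your own .icf before relying on it:

```python
# Sanity-check a result row: MB/s ~= IOPS * block size.
# The block sizes below are assumptions inferred from the reported
# numbers in this thread (32KB throughput specs, 8KB random specs).
BLOCK_KB = {
    "Max Throughput-100%Read": 32,
    "RealLife-60%Rand-65%Read": 8,
    "Max Throughput-50%Read": 32,
    "Random-8k-70%Read": 8,
}

def expected_mb_per_sec(test_name: str, avg_iops: float) -> float:
    """Derive Av. MB/s from Av. IOPS and the test's block size (1 MB = 1024 KB)."""
    return avg_iops * BLOCK_KB[test_name] / 1024.0

# Example: the RealLife row above (2,030 IOPS) -> ~15.86 MB/s,
# matching the reported 15.86; the 100% read row (3,014 IOPS) -> ~94 MB/s.
print(round(expected_mb_per_sec("RealLife-60%Rand-65%Read", 2030), 2))  # 15.86
```

            If a row is far off from this relation, the block size in the access spec probably differs from the standard config.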

            • 123. Re: Open inofficial storage performance thread
              rb2006 Lurker

              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

              TABLE OF RESULTS NetApp 2xFAS3020c Metro-Cluster configuration

              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

               

              SERVER TYPE: VM ON ESX 3.0.1

              CPU TYPE / NUMBER: VCPU / 1

              HOST TYPE: DELL PowerEdge 2900, 2x Intel Xeon Dual Core 5160, 16GB RAM (2 GB allocated to VM)

              STORAGE TYPE / DISK NUMBER / RAID LEVEL: 2x FAS3020c metro cluster / 2x26 FC 144GB 10K HDD’s / RAID 4

              SAN TYPE / HBAs : Brocade 3250 FC 2GB / QLA2432 HBAs

               

              ##################################################################################

              TEST NAME--


              Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

              ##################################################################################

               

              Max Throughput-100%Read....___10.05___......___5814___.....___181.72___

               

              RealLife-60%Rand-65%Read......___20.45___..........___2586___.........___20.21___

               

              Max Throughput-50%Read..........___7.05___..........___6818___.........___213.07___

               

              Random-8k-70%Read.................___25.88___..........___2073___.........___16.2___

              • 124. Re: Open inofficial storage performance thread
                williambishop Master

                Let me know how the DMX3 goes... I'm loving the DMX3000, but I've got a serious hunger to try it on the new series.

                • 125. Re: Open inofficial storage performance thread
                  christianZ Virtuoso

                  @all - thanks for your test results, guys; RockOn - we are waiting for your results too!

                   

                  The results from rb2006 (FAS3020c) are very different from Joachim's - I think the results from rb2006 are more realistic.

                   

                  RB2006 - could you run one test with 2 VMs simultaneously, so that one VM has a volume served over SP1 and the second VM has a volume served over SP2? It would be interesting to see the overall throughput of your system (active/active).

                   

                  RB2006 - are you using sync mirroring too? How many spindles were involved in your tests (flex vols here?)?

                   

                   

                  BenConrad wanted to run a test with an EQL volume striped over 3 or 4 members - maybe forgotten??

                   

                  This could give us the scalability potential of EQL (I have already tested it with 2 members).

                   

                  I have heard many positives about Compellent systems - maybe there is somebody who could run the tests on one too??

                   

                  So far only 2 systems could outperform the throughput of EQL with 2 members (DS8000 and DMX3000) - can anybody offer more?? (not meant entirely seriously).

                  • 126. Re: Open inofficial storage performance thread
                    rb2006 Lurker

                    Hi,

                     

                    So we have two cluster nodes. One node serves CIFS and the other is for VMs with LUNs. Yes, we are using sync mirroring; this has an additional negative impact on performance. We have one aggregate with two RAID 4 disk groups and 26 disks on each node. Volumes are flex volumes.

                     

                    It's difficult for me to run another test, because I have 24 live VMs running plus one Exchange server with 400 users which is connected over iSCSI, and it's very difficult for me to find a time frame without load.

                    • 127. Re: Open inofficial storage performance thread
                      williambishop Master

                      I would imagine that a 4800 loaded with 810 cabinets and the latest drives, appropriately connected, would also probably beat it hands down. Also keep in mind, it's not the bandwidth afforded to one system in a test that's important, it's how far it will scale without degradation. The DMX3000 only has 2Gb connectors into the bloody box, but with a huge cache (in our case over 100GB) and tons of front end, as well as tons of back end, you rarely have to hit it at disk speed. It's all memory. Which is why it can rock longer and harder than most anything else. I drool over the DMX3... why oh why can't I have one?

                      • 128. Re: Open inofficial storage performance thread
                        BenConrad Master

                         

                        BenConrad wanted to run a test with an EQL volume

                        striped over 3 or 4 members - maybe forgotten??

                         

                        This could give us the scalability potential of EQL

                        (I have already tested it with 2 members).

                         

                        I still need to purchase (2) WS-X6748-GE-TX modules for our 6509's before I can post anything interesting.

                         

                        Ben

                        • 129. Re: Open inofficial storage performance thread
                          christianZ Virtuoso

                          Copied dctaylorit's results:

                           

                           

                           

                           

                          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                          TABLE OF RESULTS VM ON ESX / LeftHand Storage on HP DL 320s

                          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                           

                           

                          SERVER TYPE: ESX 3.0.1

                          CPU TYPE / NUMBER: VCPU / 1

                          HOST TYPE: HP DL350G5 - 6GB - 2x Xeon5150 2.66 DC

                          STORAGE TYPE / DISK NUMBER / RAID LEVEL: LeftHand DL320s / 10+2 15k SAS / R5

                          SAN TYPE / HBAs : iSCSI, QLA4050 HBA

                           

                          ##################################################################################

                          TEST NAME--


                          Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                          ##################################################################################

                           

                          Max Throughput-100%Read........_17.56_.........._3355_........._104.9_

                           

                          RealLife-60%Rand-65%Read......_24.19_.........._2103_........._16.43_

                           

                          Max Throughput-50%Read.........._16.35_.........._3466.2_........._108.32_

                           

                          Random-8k-70%Read................._34.75_.........._1582.83_........._12.37_

                           

                          EXCEPTIONS: CPU Util.-27-35-34-26%;

                           

                           

                          • 130. Re: Open inofficial storage performance thread
                            s.buerger Enthusiast

                            ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                            TABLE OF RESULTS VM ON ESX / DAS (p600 and MSA50) on HP DL 380g5

                            ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                             

                             

                            SERVER TYPE: Win2k3 VM (1.5GB RAM, 20GB vmdk) on ESX 3.0.1

                            CPU TYPE / NUMBER: VCPU / 1

                            HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

                            STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS HP MSA50 enclosure on HP P600 controller w. 256MB BBWC (50/50% read/write) / 10x 146GB 10k 2.5" SAS / RAID 1+0

                             

                            ##################################################################################

                            TEST NAME--


                            Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                            ##################################################################################

                             

                            Max Throughput-100%Read........_7.50_.........._7738.74_........._241.83_

                             

                            RealLife-60%Rand-65%Read......_16.16_.........._2950.18_........._23.06_

                             

                            Max Throughput-50%Read.........._8.39_.........._6956.14_........._217.26_

                             

                            Random-8k-70%Read................._14.88_.........._3147.66_........._24.42_

                             

                            EXCEPTIONS: CPU Util.-54-45-48-46%

                             

                             

                            • 131. Re: Open inofficial storage performance thread
                              s.buerger Enthusiast

                              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                              TABLE OF RESULTS VM ON ESX / DAS (p400) on HP DL 380g5

                              ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                               

                               

                              SERVER TYPE: Win2k3 VM (1.5GB RAM, 20GB vmdk) on ESX 3.0.1

                              CPU TYPE / NUMBER: VCPU / 1

                              HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400 controller w. 512MB BBWC (25/75% read/write) / 6x 146GB 10k 2.5" SAS / RAID 5

                               

                              ##################################################################################

                              TEST NAME--


                              Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                              ##################################################################################

                               

                              Max Throughput-100%Read........_5.05_.........._10930.53_........._341.99_

                               

                              RealLife-60%Rand-65%Read......_28.25_.........._1381.72_........._10.60_

                               

                              Max Throughput-50%Read.........._5.45_.........._10328.26_........._322.76_

                               

                              Random-8k-70%Read................._25.71_.........._1449.84_........._11.33_

                               

                              EXCEPTIONS: CPU Util.-74-45-70-54%

                               

                               

                              • 132. Re: Open inofficial storage performance thread
                                s.buerger Enthusiast

                                Correction to the last 2 benchmarks: 200GB vmdk, not 20GB.

                                • 133. Re: Open inofficial storage performance thread
                                  s.buerger Enthusiast

                                  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                                  TABLE OF RESULTS VM ON ESX / DAS (p400) on HP DL 380g5

                                  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                                   

                                  SERVER TYPE: Win2k3 VM (1,5GB RAM, 20GB vmdk) on ESX 3.0.1

                                  CPU TYPE / NUMBER: VCPU / 1

                                  HOST TYPE: HP DL380G5 - 20GB - 2x Xeon5345 2.33GHz Quadcore

                                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS on HP P400 controller w. 512MB BBWC (25/75% read/write) / 2x 72GB 10k 2.5" SAS / RAID 1

                                   

                                  ##################################################################################

                                  TEST NAME--


                                  Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                                  ##################################################################################

                                   

                                  Max Throughput-100%Read........_0.71_.........._26027.65_........._813.36_

                                     (VI-Client shows a Disk Usage Average/Rate of 53MB/s for this VM, and the same for the vmhba Disk Read Rate)

                                   

                                  RealLife-60%Rand-65%Read......_83.59_.........._557.11_........._4.35_

                                   

                                  Max Throughput-50%Read.........._5.85_.........._9678.30_........._302.45_

                                   

                                  Random-8k-70%Read................._77.10_.........._681.36_........._5.32_

                                   

                                  EXCEPTIONS: CPU Util.-100-42-68-26%

                                  ##################################################################################

                                   

                                  I don't understand why, on the first test, the CPU utilization is so high and the max throughput is so much better compared to the RAID 5 test on the same controller. Any explanation?

                                  • 134. Re: Open inofficial storage performance thread
                                    larstr Virtuoso
                                    vExpert

                                    I don't understand why on the first test the CPU

                                    utilization is so high and the max throughput is so

                                    much better compared to the RAID 5 test on the same

                                    controller. Any explanation?

                                     

                                    I don't know *why* the CPU load is so much higher, but when the CPU load inside a guest VM is high, its timing (clock) becomes highly unreliable, and the numbers you get when running IOmeter will also not be very reliable because of this.

                                     

                                    Lars
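                                    Lars's point can be illustrated with a toy model: IOmeter divides completed I/Os by elapsed guest time, so any error in the guest clock translates directly into a proportional error in the reported IOPS. The `skew` factor below is hypothetical, just to show the direction of the distortion:

```python
# Toy model of guest clock skew distorting an IOPS measurement.
# 'skew' is a hypothetical factor: a heavily loaded guest whose clock
# runs 20% slow (skew = 0.8) under-measures elapsed time and therefore
# over-reports IOPS by 1/0.8 = 25%.
def reported_iops(completed_ios: int, real_seconds: float, skew: float) -> float:
    measured_seconds = real_seconds * skew  # what the guest clock saw
    return completed_ios / measured_seconds

true_rate   = reported_iops(300_000, 60.0, 1.0)  # accurate clock: 5000.0 IOPS
skewed_rate = reported_iops(300_000, 60.0, 0.8)  # slow clock:    6250.0 IOPS
print(true_rate, skewed_rate)
```

                                    So a run where guest CPU sits near 100% (like the RAID 1 100%-read test above) should be treated with extra suspicion.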
