      • 150. Re: ESXi 4, Dell R610 & MD3000i Results
        Mindflux Enthusiast

         

        Hrm. I'm trying to run this, but I'm getting some really out-there numbers.

        Like 41,000 IOPS and over 1200 MB/s on a 6x 300GB 15k array in RAID 6 (local storage) on Server 2008 64-bit.
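
        For what it's worth, a quick back-of-the-envelope check (my own assumptions, not measurements: a ~32 KiB transfer size for the 100% read test and roughly 165 MB/s of sequential throughput per 15k spindle) suggests the platters alone can't account for that:

# Sanity check on the 41,000 IOPS / ~1200 MB/s figures. Assumptions (mine,
# not from the test): ~32 KiB transfers in the 100% read test and roughly
# 165 MB/s of sequential throughput per 15k spindle.
iops = 41_000
block_kib = 32
reported_mb_s = iops * block_kib / 1024          # ~1281 MB/s, in line with the ~1200 MB/s seen

disks = 6
per_disk_seq_mb_s = 165                          # rough midpoint for a 15k drive
spindle_ceiling = disks * per_disk_seq_mb_s      # ~990 MB/s best case from the platters

print(f"reported: ~{reported_mb_s:.0f} MB/s, spindle ceiling: ~{spindle_ceiling} MB/s")

        If the reported throughput sits above what six spindles can physically stream, the controller's read-ahead/write-back cache (or the Windows cache) is probably serving a good share of the I/O, which would explain the inflated numbers.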


        • 151. Re: ESXi 4, Dell R610 & MD3000i Results
          s1xth Expert
          VMware Employees

           

          How many iSCSI links did you have from your host servers to your storage for those numbers? Just curious.


          Thanks!

           

           

          • 152. Re: New  !! Open unofficial storage performance thread
            JonKnaggs Lurker

            Hi, we have a pretty basic setup, probably far from best practice, but here are our results. Our array takes a massive hit on the 60% random / 65% read test; overall the results look pretty average. I think I should look at separating the iSCSI traffic, at least onto its own VLAN, since I won't be able to get a dedicated switch.

             

            The VM tested is on our busiest ESX host and LUN.

            Antivirus was on during the test.

             

            ESXi 4

            MD3000i Dual Controller w/146GB 15K SAS

            2 x Dell 2950 (32GB, 2 x E5450 @ 3.00GHz (8 CPUs), 2 x Broadcom embedded, 1 x Intel quad-port GbE card)

            1 x Cisco 3560G for all traffic. iSCSI is unfortunately not separated onto its own VLAN.

            2 x GbE NICs for iSCSI traffic using the software initiator and 2 x GbE NICs for VM network traffic on each host.

             

            VM: Windows 2003 Server R2 (32Bit)

            7 Disk - Raid 5

            No Jumbo Frames

            ##################################################################################
            TEST NAME -------------------------- IOPS ------- MB/s ----- Resp. Time ms --- CPU %
            ##################################################################################
            VM1: Max Throughput-100%Read ---- 3292.766 --- 102.899 -------- 18.148 ------- 7.201
            VM1: RealLife-60%Rand-65%Read ---- 845.845 ----- 6.608 -------- 56.374 ------ 10.01
            VM1: Max Throughput-50%Read ----- 2572.745 ---- 80.398 -------- 22.614 ------- 7.232
            ##################################################################################

             

            Sorry for formatting.

            • 153. Re: New  !! Open unofficial storage performance thread
              eMax04 Enthusiast

              Hello All!!

               

              I am working on benchmarking a new storage product in my environment. I have built out a VMmark environment; however, I am getting very low scores, and it seems that VMmark does not really push the SAN very hard. I really want to get SAN benchmark data from a typical ESX environment, which is why I wanted to use VMmark. I stumbled upon this thread and have some questions.

               

              Are you just running IOmeter in a Windows VM and posting the results? Am I missing something? Can anyone suggest other ways of getting SAN performance numbers out of ESX? Perhaps by leveraging my existing VMmark environment to generate a load, but then using something else to show scores/ratings?

               

              Any and all ideas would be appreciated. I have considered ramping up the VMmark load and then watching esxtop; however, I don't know how to increase the load across all of the VMs in VMmark. A rough sketch of what I had in mind on the esxtop side is below.
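
              Specifically: capture esxtop in batch mode for the length of the run, then average whichever disk counters matter. This is only a sketch under my own assumptions -- the counter-name substring and file names are placeholders, and I have not verified the exact column headers that esxtop 4.0 writes, so it would need adjusting against a real capture.

# Rough sketch: average selected counters from an esxtop batch-mode capture
# taken during the benchmark run, e.g.:  esxtop -b -d 5 -n 120 > esxtop.csv
# The "Physical Disk" substring below is a placeholder -- check the actual
# perfmon-style column headers in your own CSV and narrow it down.
import csv
import sys

CSV_FILE = sys.argv[1] if len(sys.argv) > 1 else "esxtop.csv"
MATCH = "Physical Disk"   # placeholder counter-name filter

with open(CSV_FILE, newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Pick out the columns whose header mentions the counters we care about.
    cols = [i for i, name in enumerate(header) if MATCH in name]
    totals = {i: 0.0 for i in cols}
    samples = 0
    for row in reader:
        for i in cols:
            try:
                totals[i] += float(row[i])
            except (ValueError, IndexError):
                pass          # skip non-numeric or missing samples
        samples += 1

for i in cols:
    print(f"{header[i]}: avg {totals[i] / max(samples, 1):.2f} over {samples} samples")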

               

              THANK YOU!!!

              • 154. Re: New  !! Open unofficial storage performance thread
                dennes Enthusiast

                I am out of the office until Monday, December 7. For urgent matters, please contact the office directly by phone at 013-5115088 or by e-mail at sales@feju.nl.

                This e-mail will not be forwarded.

                Regards,

                Dennes

                • 155. new storage tests - raid level
                  Sebi! Novice

                   

                  Hi,

                  I got an EXP3000 with 12x 300GB 15k HDDs for my DS3400, and I tested a few RAID levels and so on with the new free space.

                  Now I need some tips from you on how to configure the new expansion. I need fast I/O because I want to set up a DB2 and a Notes server. For the log files I thought I'd take 4 HDDs in a RAID 10 and be done with it, but after my tests I saw that the performance isn't very good. I don't want to dedicate 10 HDDs in a RAID 10 (1.5 TB) just to log files.

                  So should I build a new RAID 5 or RAID 6 with the 11 HDDs? Or maybe expand the RAID 6 on my DS3400 and end up with 22 HDDs in one RAID 6?

                  I can't test the 22-HDD RAID 6 myself, so I hope someone has some information for me.
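
                  For a rough on-paper comparison of the options, here is a back-of-the-envelope sketch only: I assume ~175 IOPS per 15k spindle, the textbook write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6), and the 65% read / 35% write mix of the RealLife test; controller cache will move the real numbers a lot.

# Back-of-the-envelope random-IOPS estimate for the candidate layouts.
# Assumed, not measured: ~175 IOPS per 15k spindle and textbook write
# penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6) at 65% read / 35% write.
IOPS_PER_DISK = 175
READ_FRACTION = 0.65

def effective_iops(disks, write_penalty):
    raw = disks * IOPS_PER_DISK
    # Host-visible IOPS once every write costs `write_penalty` backend I/Os.
    return raw / (READ_FRACTION + (1 - READ_FRACTION) * write_penalty)

layouts = {
    "RAID 10, 4 disks":  (4, 2),
    "RAID 10, 10 disks": (10, 2),
    "RAID 5, 11 disks":  (11, 4),
    "RAID 6, 11 disks":  (11, 6),
    "RAID 6, 22 disks":  (22, 6),
}

for name, (disks, penalty) in layouts.items():
    print(f"{name}: ~{effective_iops(disks, penalty):.0f} host IOPS")

                  The ordering at least lines up with the RealLife numbers in my tests below: the 10-disk RAID 10 leads on random I/O, the 4-disk RAID 10 falls off hardest, and RAID 6 sits below RAID 5 because of the extra parity write.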


                  And here are some results from my tests: 

                   

                   

                  SERVER TYPE: VM Windows 2003, 1GB RAM

                  CPU TYPE / NUMBER: 1 VCPU

                  HOST TYPE: IBM x3650 M2, 34GB RAM, 2x X5550 QC 2.66 GHz

                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (1024MB cache, dual controller), 11x SAS 15k 300GB / RAID 6, plus EXP3000 (12x SAS 15k 300GB) used for the tests

                  SAN TYPE / HBAs: FC, QLA2432 HBA


                  ##################################################################################
                  RAID10 - 10 HDDs ------------- Av. Resp. Time ms --- Av. IOs/sec --- Av. MB/sec
                  ##################################################################################
                  Max Throughput-100%Read ______________ 5.8 ____________ 9941 __________ 310
                  RealLife-60%Rand-65%Read _____________ 16.7 ___________ 3083 ___________ 24
                  Max Throughput-50%Read _______________ 12.6 ___________ 4731 __________ 147
                  Random-8k-70%Read ____________________ 15.5 ___________ 3201 ___________ 25
                  ##################################################################################

                  ##################################################################################
                  RAID10 - 4 HDDs -------------- Av. Resp. Time ms --- Av. IOs/sec --- Av. MB/sec
                  ##################################################################################
                  Max Throughput-100%Read ______________ 5.6 ___________ 10402 __________ 325
                  RealLife-60%Rand-65%Read _____________ 36.8 ___________ 1467 ___________ 11
                  Max Throughput-50%Read _______________ 12.1 ___________ 4873 __________ 152
                  Random-8k-70%Read ____________________ 37.2 ___________ 1427 ___________ 11
                  ##################################################################################

                  ##################################################################################
                  RAID5 - 10 HDDs -------------- Av. Resp. Time ms --- Av. IOs/sec --- Av. MB/sec
                  ##################################################################################
                  Max Throughput-100%Read ______________ 5.9 ____________ 9656 __________ 301
                  RealLife-60%Rand-65%Read _____________ 20.7 ___________ 2374 ___________ 18
                  Max Throughput-50%Read _______________ 7.8 ____________ 4937 __________ 154
                  Random-8k-70%Read ____________________ 20.4 ___________ 2551 ___________ 19
                  ##################################################################################

                  ##################################################################################
                  RAID6 - 10 HDDs -------------- Av. Resp. Time ms --- Av. IOs/sec --- Av. MB/sec
                  ##################################################################################
                  Max Throughput-100%Read ______________ 5.7 ____________ 9827 __________ 307
                  RealLife-60%Rand-65%Read _____________ 23.2 ___________ 1850 ___________ 14
                  Max Throughput-50%Read _______________ 12.0 ___________ 4858 __________ 151
                  Random-8k-70%Read ____________________ 21.3 ___________ 2005 ___________ 16
                  ##################################################################################

                  Thanks

                   

                   

                  Sebi

                   

                   

                  • 156. Re: New  !! Open unofficial storage performance thread
                    pinkerton Enthusiast

                    Here are my results:

                     

                    SERVER TYPE: VM Windows 2003 SP2, 1GB RAM

                    CPU TYPE / NUMBER: 2 VCPU

                    HOST TYPE: ESXi 4 U1, HP DL380 G6, 64GB RAM, 2x E5520, 2.27 GHz QC

                    STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA 4400 (2048MB Cache per Controller) 24x FC 10k 300GB

                     

                    SAN TYPE / HBAs : FC, HP FC1142SR QLogic HBA, HP StorageWorks 8/8 San Switches

                     

                    ##################################################################################
                    RAID5 - 24 HDDs -------------- Av. Resp. Time ms --- Av. IOs/sec --- Av. MB/sec
                    ##################################################################################
                    Max Throughput-100%Read ______________ 5.3 ___________ 10900 ________ 340.6
                    RealLife-60%Rand-65%Read _____________ 14.8 ___________ 2999 _________ 23.4
                    Max Throughput-50%Read _______________ 32.3 ___________ 1627 _________ 50.8
                    Random-8k-70%Read ____________________ 16.2 ___________ 2836 _________ 22.1
                    ##################################################################################

                     

                     

                    It's strange that the EVA seems to perform so poorly in the Max Throughput 50% read / 50% write test. This, however, is not the case when performing the test on a physical host with Windows Server 2008. I have seen that other users with EVAs see similar drops in the 50%/50% test. Any ideas why this might be the case?

                    • 157. Re: New  !! Open unofficial storage performance thread
                      s1xth Expert
                      VMware Employees

                      Here are my results from a brand new EqualLogic PS4000, half filled.

                       

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                      TABLE OF RESULTS

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                      SERVER TYPE: 2008 R2 VM ON ESXi 4.0 U1

                      CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram

                      HOST TYPE: Dell PE R710, 24GB RAM; XEON X5550 2.66 GHz, dual quad-core

                      STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS4000 x 1 / 7+1 RAID 5 10K SAS drives

                      SAN TYPE / HBAs: iSCSI, SW iSCSI, 2x Intel 1000PT dual-port NICs, one connection on each

                      MPIO enabled - Jumbo Frames enabled - 6 iSCSI connections to the volume - 2x Dell PC 5424 switches

                       

                      ##################################################################################
                      TEST NAME ------------------------ Av. Resp. Time ms --- Av. IOs/sec --- Av. MB/sec
                      ##################################################################################
                      Max Throughput-100%Read ________________ 15 ______________ 3776 __________ 118
                      RealLife-60%Rand-65%Read _______________ 13 ______________ 3345 ___________ 26
                      Max Throughput-50%Read _________________ 21 ______________ 2683 ___________ 83
                      Random-8k-70%Read ______________________ 18 ______________ 2477 ___________ 19
                      ##################################################################################
                      EXCEPTIONS: n/a

                       

                      • 158. Re: New  !! Open unofficial storage performance thread
                        Dr.Virt Enthusiast

                         

                        Wow, talk about a yo-yo...


                        SERVER TYPE: 2008 R2 VM ON ESX 4.0 U1

                        CPU TYPE / NUMBER: VCPU / 1 / 2GB Ram

                        HOST TYPE: HP BL460 G6, 32GB RAM; XEON X5520

                        STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC CX4-240 / 3x 300GB 15K FC / RAID 5

                        SAN TYPE / HBAs: 8Gb Fiber Channel

                         

                        Test Name ----------------------- Avg. Resp. Time ms --- Avg. IOPS --- Avg. MB/s --- CPU %
                        Max Throughput - 100% Read --------------- 5.03 -------- 12,029.33 ----- 375.92 ---- 21.87
                        Real Life - 60% Rand / 65% Read ---------- 42.81 -------- 1,074.93 ------- 8.39 ---- 19.57
                        Max Throughput - 50% Read ---------------- 3.63 -------- 16,444.30 ----- 513.88 ---- 29.67
                        Random 8K - 70% Read --------------------- 51.44 -------- 1,039.38 ------- 8.12 ---- 14.01

                        SERVER TYPE: 2008 R2 VM ON ESX 4.0 U1

                        CPU TYPE / NUMBER: VCPU / 1 / 2GB Ram

                        HOST TYPE: HP BL460 G6, 32GB RAM; XEON X5520

                        STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC CX4-240 / 5x 1TB 7.2K SATA / RAID 5

                        SAN TYPE / HBAs: 8Gb Fiber Channel / QLogic

                         

                        Test Name ----------------------- Avg. Resp. Time ms --- Avg. IOPS --- Avg. MB/s --- CPU %
                        Max Throughput - 100% Read --------------- 5.05 -------- 11,896.71 ----- 371.77 ---- 55.42
                        Real Life - 60% Rand / 65% Read ---------- 90.51 ---------- 574.87 ------- 4.49 ---- 29.05
                        Max Throughput - 50% Read ---------------- 3.99 -------- 14,371.41 ----- 449.10 ---- 70.61
                        Random 8K - 70% Read -------------------- 109.86 ---------- 482.12 ------- 3.76 ---- 27.25

                        • 159. Re: New  !! Open unofficial storage performance thread
                          MKguy Virtuoso

                          I've seen this too on an EVA 8000: (http://communities.vmware.com/message/1350705#1350705)

                           

                          Someone suggested it might be because of vRAID5 on the LUN we are using. Which vRAID are you using for that LUN?

                          • 160. Re: New  !! Open unofficial storage performance thread
                            pinkerton Enthusiast

                            I'm indeed using Vraid 5. I can test a Vraid1 LUN later this week. Have you already tested a Vraid1 LUN?

                            • 161. Re: New  !! Open unofficial storage performance thread
                              MKguy Virtuoso

                              Unfortunately not, and we are not going to get a vRAID1 LUN from the storage guys. I'd be quite interested in your results, please post them here once you test it.

                              • 163. Re: New  !! Open unofficial storage performance thread
                                pinkerton Enthusiast

                                Hm, strange that the 50%/50% is so low on the EVAs. Seems to be the case on all EVAs...

                                • 164. Re: New  !! Open unofficial storage performance thread
                                  larstr Virtuoso
                                  vExpert

                                  Hm, strange that the 50%/50% is so low on the EVAs. Seems to be the case on all EVAs...

                                   

                                  The RAID layout on the EVA is different from most other SANs, as the EVA stripes many smaller RAID sets into a larger one. I guess that could be the reason why we're seeing this.

                                   

                                  Lars
