      • 275. Re: Open inofficial storage performance thread
        larstr Virtuoso vExpert

        I received a few new servers here so I decided to test a bit before hitting production. I still haven't tested all of the products I wanted, but I guess this is enough for a pretty long posting. So expect another similar one in a day or three.

         

I have only tested local storage and 32-bit Windows VMs. The goal was to get an overview of the storage virtualization overhead of the different products. The VMs were installed from scratch, and the vendors' native drivers (VMware Tools, VS Tools, Virtual Machine Additions) were installed before running Iometer.

         

HP tools and drivers were also installed on the Windows hosts (the Debian install and Virtual Iron used the native, non-HP cciss disk drivers).

         

        SERVER TYPE: Physical Windows 2003R2sp2

        CPU TYPE / NUMBER: 8 cpu cores, 2 sockets

        HOST TYPE: HP DL360G5, 4GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs block size (4096)

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    3.18          18530           579
RealLife-60%Rand-65%Read                    78.6            739           5.7
Max Throughput-50%Read                      3.74          15579           486
Random-8k-70%Read.                          72.7            787           6.1
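
A note on reading the numbers: the Av. MB/sek column is simply Av. IOs/sek multiplied by the transfer size of the access spec. Judging from the ratios in the table above, the two Max Throughput specs appear to use 32KB transfers and the RealLife/Random specs 8KB - that is an inference from the posted numbers, not something I checked in the ICF. A minimal sanity-check sketch:

# Sanity-check MB/s against IOPS. Block sizes are inferred from the posted
# IOPS/MB/s ratios (32KB for the Max Throughput specs, 8KB for RealLife/
# Random), not taken from the ICF itself.
ASSUMED_BLOCK_KB = {
    "Max Throughput-100%Read.": 32,
    "RealLife-60%Rand-65%Read": 8,
    "Max Throughput-50%Read": 32,
    "Random-8k-70%Read.": 8,
}

def implied_mb_per_s(test_name, iops):
    """MB/s implied by an IOPS figure and the assumed transfer size."""
    return iops * ASSUMED_BLOCK_KB[test_name] / 1024.0

print(implied_mb_per_s("Max Throughput-100%Read.", 18530))  # ~579 MB/s
print(implied_mb_per_s("RealLife-60%Rand-65%Read", 739))    # ~5.8 MB/s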

         

         

SERVER TYPE: Virtual Windows 2003R2sp2 on VMware Server 1.0.4 on Windows Server 2003R2sp2

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     0.5          10900           340
RealLife-60%Rand-65%Read                     156            368           2.8
Max Throughput-50%Read                      1.22           7472           233
Random-8k-70%Read.                          88.1            630           4.9

        EXCEPTIONS: CPU Util. 99% 17% 98% 22%

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on VMware Server on Debian Linux 4.0 2.6.18 x64

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default jfs (4096)

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     0.5           8550           267
RealLife-60%Rand-65%Read                      79            747           5.8
Max Throughput-50%Read                      0.63           3804           237
Random-8k-70%Read.                            97            609           4.7

        EXCEPTIONS: CPU Util. 100% 17% 98% 16%

         

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on VMware Player 2.0.1 on Windows Server 2003R2sp2

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     0.5           9920           310
RealLife-60%Rand-65%Read                     139            411           3.2
Max Throughput-50%Read                       3.1           2656            83
Random-8k-70%Read.                          93.3            632           4.9

        EXCEPTIONS: CPU Util. 99% 17.5% 98% 23%

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual Iron 3.7

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    16.2           3732           116
RealLife-60%Rand-65%Read                     169            353          2.75
Max Throughput-50%Read                      15.2           3940           123
Random-8k-70%Read.                           177            337           2.6

        EXCEPTIONS: CPU Util. 39% 17% xx% 17%

         

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual Server 2005r2sp1 (1.1.603.0 EE R2 SP1) on Windows Server 2003R2sp2

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    15.5           3860           120
RealLife-60%Rand-65%Read                     159            374           2.9
Max Throughput-50%Read                      17.3           3444           107
Random-8k-70%Read.                           198            300           2.3

        EXCEPTIONS: CPU Util. 58% 17% 57% 16%

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual Server 2005r2sp1 (1.1.603.0 EE R2 SP1) (VT enabled) on Windows Server 2003R2sp2

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    15.9           3773           117
RealLife-60%Rand-65%Read                     159            375           2.9
Max Throughput-50%Read                      17.5           3420           106
Random-8k-70%Read.                           199            299           2.3

        EXCEPTIONS: CPU Util. 58% 17% 55% 16%

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual PC 2007 (6.0.156.0) on Windows Server 2003R2sp2

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    16.7           3571           111
RealLife-60%Rand-65%Read                     161            371           2.9
Max Throughput-50%Read                      18.6           3219           100
Random-8k-70%Read.                         200.5            298           2.3

        EXCEPTIONS: CPU Util. 53% 16% 54% 15%

         

         

        SERVER TYPE: Virtual Windows 2003R2sp2 on Virtual PC 2007 (6.0.156.0) (VT enabled) on Windows Server 2003R2sp2

        CPU TYPE / NUMBER: VCPU / 1

        HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

        STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default ntfs 4096

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    15.2           3948           123
RealLife-60%Rand-65%Read                     148            403           3.2
Max Throughput-50%Read                      16.8           3561           111
Random-8k-70%Read.                           184            324           2.5

        EXCEPTIONS: CPU Util. 56% 16% 51% 15%

         

        Message was edited by: larstr

        Added note about HP drivers.

        • 276. Re: Open inofficial storage performance thread
          sstelter Enthusiast

           

          Hi Meistermn,

           

           

Great question - it seems to me that performance must ultimately be limited by the performance of the physical disks in the SAN. I don't think single-VM performance can be extrapolated to larger numbers of VMs. As the number of VMs increases, the effects of cache should theoretically be negated. That's why it is so important that the size of the test file/volume/VM be larger than the cache on the SAN - otherwise you're just testing the speed and latency of memory, the fabric, and the host and SAN interfaces, and not much else. As the number of VMs increases, the randomness of the I/O pattern should also increase. Coalescing and other techniques that make the data more sequential consume SAN CPU cycles, so a single-VM test might be even less useful as the SAN controller CPU gets bogged down with this type of work while the number of VMs grows.
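
To make that concrete, here is a rough back-of-the-envelope sketch (my own helper with placeholder numbers - substitute your VM count, test file size and array cache size) that checks whether the combined Iometer working set actually spills out of the cache:

# Back-of-the-envelope check: does the combined Iometer working set
# spill out of the array cache? All numbers below are placeholders.
def cache_spill_ratio(vm_count, test_file_gb_per_vm, san_cache_gb):
    """How many times larger the aggregate working set is than the cache."""
    working_set_gb = vm_count * test_file_gb_per_vm
    return working_set_gb / san_cache_gb

# e.g. 8 VMs, a 4 GB test file each, 2 GB of controller cache (hypothetical)
ratio = cache_spill_ratio(vm_count=8, test_file_gb_per_vm=4, san_cache_gb=2)
if ratio < 2:
    print("Working set (nearly) fits in cache - you are mostly measuring "
          "cache and fabric latency, not the disks.")
else:
    print("Working set is %.0fx the cache - the disks are really being hit." % ratio)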

           

           

I like this thread because I think it is better for potential SAN customers to have some data than to just use the marketing specs that SAN vendors publish (which usually represent I/O to cache on the SAN controller or the disk cache). Sifting through the data is the challenge, as is interpreting what it will actually mean for you in the real world.

           

           

Iometer is a great tool for testing multiple workloads on multiple servers simultaneously - the GUI can choreograph simultaneous test execution and can deliver the results of several different runs in one spreadsheet (after running the suite of tests overnight, for example). Maybe someone (Christian?) could cook up a set of VMs that could be deployed for such a test with the appropriate ICF file... happy to help if I can. This could remove some of the variability in the data due to (mis-)configuration. In theory the test VMs could be any OS (meaning a free, redistributable one might be a better choice) as long as it was the same OS, right?
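
For the overnight part, a minimal sketch of kicking off an unattended run against a prepared ICF - I'm assuming Iometer's usual command-line switches here (/c for the config file, /r for the results file), so double-check them against your Iometer version, and the paths are obviously placeholders:

# Unattended Iometer run driven by a prepared ICF; results land in one CSV.
# The /c and /r switches are assumed from Iometer's command-line usage -
# verify them for your version. Paths are placeholders.
import subprocess
from pathlib import Path

IOMETER_EXE = Path(r"C:\Program Files\Iometer\IOmeter.exe")   # adjust
CONFIG_ICF  = Path(r"C:\bench\open_storage_thread.icf")       # your ICF
RESULTS_CSV = Path(r"C:\bench\results.csv")

subprocess.run(
    [str(IOMETER_EXE), "/c", str(CONFIG_ICF), "/r", str(RESULTS_CSV)],
    check=True,
)
print("Results written to", RESULTS_CSV)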

           

           

          Disclaimer: I work for LeftHand Networks, a SAN vendor, so it might not be appropriate for me to directly help with creating the VMs and ICF file.  I sure am curious if this would be a viable means to test SAN performance with multiple VMs though...

           

           

          • 277. Re: Open inofficial storage performance thread
            christianZ Virtuoso

             

When you check the results from "urbanb" or "mitchellm3" with concurrent VMs, you will see that the IO numbers are not as high as with a single VM and the response time is clearly higher - that phenomenon should always show up on systems with a large cache.

 

We can see it in urbanb's tests (EMC DMX3), for example: with one single VM you can reach ~7000 IOPS in the RealLife test, but with 2 concurrent VMs each reaches only about half of that and the response time goes up (I guess all disks were involved here). I observed a similar phenomenon when testing on an EQL - all disks are involved there too.

 

mitchellm3's tests (IBM DS4800) were a different configuration - each VM worked on its own disks, i.e. one single VM couldn't saturate the whole system, but 2 VMs and one physical machine running concurrently could.

 

Therefore I really recommend running the concurrent tests whenever you see very high IOPS numbers and very low response times. Such numbers can only come from cache and are not a real indicator of storage performance in practice.

 

I saw this myself when testing SANmelody (high system IOs but very low disk IOs).
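
A rough way to spot such cache-inflated numbers is to compare the measured random IOPS against a spindle budget. Below is a small sketch using the common rule of thumb of very roughly 180 random IOs/sek per 15k rpm disk (a ballpark figure only, not a vendor spec), with made-up example numbers:

# Flag random-IO results that look too good to be coming from the spindles.
# 180 random IOs/sek per 15k rpm disk is only a rough rule of thumb.
ASSUMED_RANDOM_IOPS_PER_15K_DISK = 180

def looks_cache_bound(measured_iops, disk_count):
    """True if measured random IOPS clearly exceed what the disks could deliver."""
    spindle_budget = disk_count * ASSUMED_RANDOM_IOPS_PER_15K_DISK
    return measured_iops > 2 * spindle_budget

# Hypothetical example: ~7000 IOs/sek in the RealLife test from a single VM
# backed by 16 spindles - far more than the disks alone could sustain.
print(looks_cache_bound(measured_iops=7000, disk_count=16))   # True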

             

             

             

             

             

            Regards

             

             

            Christian

             

             

            • 278. Re: Open inofficial storage performance thread
              christianZ Virtuoso

               

              ...and  there are new tests here !

               

               

              Thanks to:

               

               

              cmanucy

               

               

              larstr

               

               

              for joining in.

               

               

              • 279. Re: OPEN INOFFICIAL STORAGE PERFORMANCE THREAD
                ericdaly Enthusiast

                ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                TABLE OF RESULTS - VM on 1MB Block Size VMFS

                ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                 

                SERVER TYPE: Windows 2003 STD VM ON ESX 3.0.2

                CPU TYPE / NUMBER: VCPU / 1

                HOST TYPE: HP DL380 G5, 32GB RAM; Dual Intel Quad Core 2GHz E5335,

                STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA6000 / 30 x 300gb FC HDD on vRAID1

                VMFS: 500GB LUN, 1MB Block Size

                SAN TYPE / HBAs : 4GB FC, HP StorageWorks FC1142SR 4Gb HBA's

                 

                ##################################################################################

TEST NAME--                    Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                ##################################################################################

                 

                RealLife-60%Rand-65%Read......__11.08______......._4391.31_______...._34.31_______

                • 280. Re: Open inofficial storage performance thread
                  larstr Virtuoso vExpert

                  Here are some more results. This time I've tested ESX. I guess using the descheduled time service will give more accurate results due to the timing issues. Still, it doesn't make 100% sense that we get better performance on RealLife in a vm on ESX than on a physical install.
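
One way to think about the timing issue: if the guest loses timer ticks under load, Iometer divides the same number of completed IOs by an elapsed time that is too short, which inflates the reported IOs/sek (and shrinks the reported response times). A tiny illustration - the skew below is made up purely to show the effect:

# How guest timer skew inflates Iometer's reported IOs/sek.
# The 5-second skew below is invented purely for illustration.
completed_ios   = 60000
real_elapsed_s  = 60.0   # wall-clock length of the run
guest_elapsed_s = 55.0   # what a tick-losing guest clock measured

true_iops     = completed_ios / real_elapsed_s    # 1000 IOs/sek
reported_iops = completed_ios / guest_elapsed_s   # ~1091 IOs/sek, ~9% high
print(true_iops, reported_iops)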

                   

                  SERVER TYPE: Virtual Windows 2003R2sp2 on ESX 3.0.2. Descheduled time service disabled

                  CPU TYPE / NUMBER: VCPU / 1

                  HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default vmfs 1MB

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     5.3           9711           303
RealLife-60%Rand-65%Read                      43            786           6.1
Max Throughput-50%Read                       6.4           8796           274
Random-8k-70%Read.                            55            778             6

                  EXCEPTIONS: CPU Util. 73% 55% 56% 41%

                   

                  SERVER TYPE: Virtual Windows 2003R2sp2 on ESX 3.0.2. Descheduled time service enabled

                  CPU TYPE / NUMBER: VCPU / 1

                  HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default vmfs 1MB

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     5.4           9887           308
RealLife-60%Rand-65%Read                      42            777           6.0
Max Throughput-50%Read                         6           8987           280
Random-8k-70%Read.                            53            539             6

                  EXCEPTIONS: CPU Util. 67% 56% 67% 43%

                   

                  SERVER TYPE: Virtual Windows 2003R2sp2 on ESX 3.0.2. Descheduled time service enabled, arrayaccelerator=disable

                  CPU TYPE / NUMBER: VCPU / 1

                  HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB arrayaccelerator=disable / 2xSAS 15k rpm / raid 1 / 128KB stripe size / default vmfs 1MB

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                      24           2384          74.5
RealLife-60%Rand-65%Read                      96            607           4.7
Max Throughput-50%Read                        76            758          23.7
Random-8k-70%Read.                            87            671           5.2

                  EXCEPTIONS: CPU Util. 27% 17% 20% 17%

                  • 281. Re: Open inofficial storage performance thread
                    dalepa Hot Shot

                     

                    Sorry for the format change...  

                     

                     

Summary: it appears that most of the numbers are about the same across NetApp heads.

                     

OS: Win 2003R2sp2 / CPU x Mem x GHz: 8 x 16 x 2.6
STORAGE: NetApp FAS3070 / NFS/1G / 40 disks / RAID-DP

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek    CPU %
Max Throughput-100%Read.                    16.8           3465           108       50
RealLife-60%Rand-65%Read                    12.9            500           2.5       34
Max Throughput-50%Read                       4.7           1135          17.8       37
Random-8k-70%Read.                          12.8            506           2.7       29

OS: Win 2003R2sp2 / CPU x Mem x GHz: 8 x 16 x 2.6
STORAGE: NetApp FAS6070 / NFS/1G / 28 disks / RAID-DP

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek    CPU %
Max Throughput-100%Read.                   17.86           3304           103       66
RealLife-60%Rand-65%Read                     8.9            506           3.9       31
Max Throughput-50%Read                      6.13           1056            33       35
Random-8k-70%Read.                           8.9            501           3.9       27

OS: Win 2003R2sp2 / CPU x Mem x GHz: 8 x 16 x 2.6
STORAGE: NetApp FAS6070 / iSCSI/1G / 28 disks / RAID-DP

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek    CPU %
Max Throughput-100%Read.                   17.86           3310           103       58
RealLife-60%Rand-65%Read                   18.39            502           3.9       31
Max Throughput-50%Read                       5.6            974          30.4       33
Random-8k-70%Read.                          20.7            501           3.9       35

OS: Win 2003R2sp2 / CPU x Mem x GHz: 8 x 16 x 2.6
STORAGE: NetApp FAS3050 / NFS/1G / 32 disks / RAID-DP

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek    CPU %
Max Throughput-100%Read.                   17.80           3309           103       60
RealLife-60%Rand-65%Read                    18.6            502           3.9       30
Max Throughput-50%Read                       6.2           1189          37.1       36
Random-8k-70%Read.                          20.8            501           3.9       21

OS: Win 2003R2sp2 / CPU x Mem x GHz: 8 x 16 x 2.6
STORAGE: NetApp R200 / NFS/1G / 27 disks / RAID-DP

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek    CPU %
Max Throughput-100%Read.                   17.23           3412           106       54
RealLife-60%Rand-65%Read                      41            502           3.9       36
Max Throughput-50%Read                       5.9           1237          38.6       38
Random-8k-70%Read.                          43.2            501           3.9       40

                     

                    • 282. Re: Open inofficial storage performance thread
                      lasswellt Lurker

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS - VM on 1MB Block Size VMFS

                      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                       

                      SERVER TYPE: VM on VMware ESX 3.0.2

                      CPU TYPE / NUMBER: VCPU / 1

                      HOST TYPE: Dell PE2950, 32GB RAM; 2x XEON, 2.0 GHz, Quad-Core

                      STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS3070 / 14+2 Disks / 144GB 15k

                      SAN TYPE / HBAs : FC, QLA2432

                       

                      ##################################################################################

TEST NAME--                    Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                      ##################################################################################

                       

                      Max Throughput-100%Read........_____4.9__.........___11206___.........____350____

                       

                      RealLife-60%Rand-65%Read......_____4_____..........___642___........._____5_____

                       

                      Max Throughput-50%Read..........____1______..........__2114___.........___66____

                       

                      Random-8k-70%Read.................____2.7____..........____922___.........____7______

                       

                      EXCEPTIONS: CPU / 60%, 6%, 15%, 8%

                       

                      ##################################################################################

                       

                      Message was edited by: lasswellt

                      Added CPU Util.

                      • 283. Re: Open inofficial storage performance thread
                        ericdaly Enthusiast

                        ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                        TABLE OF RESULTS - VM on 1MB Block Size VMFS - IBM DS4800 (RAID 1)

                        ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                         

                        SERVER TYPE: VM ON ESX 3.0.2

                        CPU TYPE / NUMBER: VCPU / 1

                        HOST TYPE: IBM x3650, 36GB RAM; 2x XEON 5355 (Quadcore), 2,66 GHz,

                        STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS4800 / 30 x 146GB 15k FC HDD on RAID1

                        VMFS: 500GB LUN, 1MB Block Size

                        SAN TYPE / HBAs : 4GB FC, QLogic QLA2432 Dual HBAs, Dual Cisco MDS 9216i Switches

                         

                        ##################################################################################

TEST NAME--                    Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                        ##################################################################################

                         

                        Max Throughput-100%Read.......___4.93______.......___11339.62___....___354.36____

                         

                        RealLife-60%Rand-65%Read......___6.41_____......._____7859.46___....____61.40____

                         

                        Max Throughput-50%Read........___2.49______.......___17374.72___....___542.96____

                         

                        Random-8k-70%Read.............___6.43_____.......____ 7783.61___....____60.81____

                        • 284. Re: Open inofficial storage performance thread
                          ericdaly Enthusiast

                          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                          TABLE OF RESULTS - VM on 1MB Block Size VMFS (RAID 5) IBM DS4800

                          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                           

                          SERVER TYPE: VM ON ESX 3.0.2

                          CPU TYPE / NUMBER: VCPU / 1

                          HOST TYPE: IBM x3650, 36GB RAM; 2x XEON 5355 (Quadcore), 2,66 GHz,

                          STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS4800 / 30 x 146GB 15k FC HDD on RAID5

                          VMFS: 500GB LUN, 1MB Block Size

                          SAN TYPE / HBAs : 4GB FC, QLogic QLA2432 Dual HBAs, Dual Cisco MDS 9216i Switches

                           

                          ##################################################################################

TEST NAME--                    Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                          ##################################################################################

                           

                          Max Throughput-100%Read.......___4.97______.......___11343.74___....___354.49____

                           

                          RealLife-60%Rand-65%Read......___7.15_____......._____6450.95___....____50.40____

                           

                          Max Throughput-50%Read........___2.83______.......___17314.57___....___541.08____

                           

                          Random-8k-70%Read.............___6.93_____.......____ 6454.94___....____50.43____

                          • 285. Re: Open inofficial storage performance thread
                            christianZ Virtuoso

                            @dalepa

                             

                            @lasswellt

                             

                             

                             

                            Thanks for joining in.

                             

                             

                             

Well, I wonder why you both reach only 500-600 IOs/sek in the RealLife test. I would expect much more.

                             

                             

                             

                             

                             

                             

                             

                            @ericdaly

                             

                             

                             

Thanks for that. I'm slowly beginning to regret not getting the DS4800 (it was an alternative for us too) - there's really brute power in it!

                            • 286. Re: Open inofficial storage performance thread
                              ericdaly Enthusiast

Here are some more tests done on a new HP EVA 6000 today. These were the exact same tests I ran earlier in the week on a brand new IBM DS4800 (see previous posts). While testing, there was no other I/O hitting either SAN - just 2 x ESX hosts and 1 active VM running Iometer. The IBM comes out on top. The only major difference between the tests was the disks used.

                               

                              HP EVA 6000 disk group made up of 30 x 300GB 10k FC (500GB VMFS LUN created on this)

                              IBM DS4800 disk group made up of 30 x 146GB 15k FC (500GB VMFS LUN created on this)

                               

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS - VM on 1MB Block Size VMFS (vRAID 1) HP EVA 6000
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Windows 2003 STD VM ON ESX 3.0.2

                              CPU TYPE / NUMBER: VCPU / 1

                              HOST TYPE: HP DL380 G5, 32GB RAM; Dual Intel Quad Core 2GHz E5335,

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA6000 / 30 x 300gb 10k FC HDD on vRAID1

                              VMFS: 500GB LUN, 1MB Block Size

                              SAN TYPE / HBAs : 4GB FC, HP StorageWorks FC1142SR 4Gb HBA's

                               

                              ##################################################################################

TEST NAME--                    Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                              ##################################################################################

                               

                              Max Throughput-100%Read.......___5.84______.......___9684.25___....___302.63____

                               

                              RealLife-60%Rand-65%Read......___10.77_____.......____4488.41___....____35.07____

                               

                              Max Throughput-50%Read........___8.08______.......___5395.06___....___168.60____

                               

                              Random-8k-70%Read.............___10.64_____.......____4587.93___....___35.84____

                               

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS - VM on 1MB Block Size VMFS (vRAID 5) HP EVA 6000
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                              SERVER TYPE: Windows 2003 STD VM ON ESX 3.0.2

                              CPU TYPE / NUMBER: VCPU / 1

                              HOST TYPE: HP DL380 G5, 32GB RAM; Dual Intel Quad Core 2GHz E5335,

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA6000 / 30 x 300gb 10k FC HDD on vRAID5

                              VMFS: 500GB LUN, 1MB Block Size

                              SAN TYPE / HBAs : 4GB FC, HP StorageWorks FC1142SR 4Gb HBA's

                               

                              ##################################################################################

TEST NAME--                    Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

                              ##################################################################################

                               

                              Max Throughput-100%Read.......___5.12______.......___10790.57___....___337.21____

                               

                              RealLife-60%Rand-65%Read......___11.81_____.......____3870.55___....____30.24____

                               

                              Max Throughput-50%Read........___25.30______.......___1990.08___....___62.19____

                               

                              Random-8k-70%Read.............___11.59_____.......____3940.41___....___30.78____

                               

                               

                               

                               

I was surprised to see the difference on the Max Throughput-50%Read tests that were run on RAID1 disks:

                              HP EVA 6000 = Max Throughput-50%Read........___8.08______.......___5395.06___....___168.60____

                              IBM DS4800 = Max Throughput-50%Read........___2.49______.......__17374.72___....___542.96____

                               

I was even more surprised to see the difference on the Max Throughput-50%Read tests that were run on RAID5 disks:

                              HP EVA 6000 = Max Throughput-50%Read........___25.30______.......___1990.08___....___62.19____

                              IBM DS4800 = Max Throughput-50%Read........___2.83______.......___17314.57___....___541.08____ (WOW!)

                              • 287. Re: Open inofficial storage performance thread
                                larstr Virtuoso vExpert

I have now done some more testing. One interesting thing I found this time is that enabling the cache on the individual disks doesn't make much difference in IOs, but it does seem to give lower CPU load in the VM.

                                 

                                This time I'm testing RAID 1+0. My two previous tests were using only RAID 1.

                                 

                                SERVER TYPE: Physical Windows 2003R2sp2

                                CPU TYPE / NUMBER: 2x quad core

                                HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

                                STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 4xSAS 15k rpm / raid 1+0 / 128KB stripe size / default vmfs 1MB

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                    2.95          19932           622
RealLife-60%Rand-65%Read                      46           1209           9.4
Max Throughput-50%Read                         5          11272           352
Random-8k-70%Read.                            39           1391          10.8

                                 

                                 

                                 

                                SERVER TYPE: Virtual Windows 2003R2sp2 on ESX 3.0.2. Descheduled time service enabled

                                CPU TYPE / NUMBER: VCPU / 1

                                HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

                                STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 4xSAS 15k rpm / raid 1+0 / 128KB stripe size / default vmfs 1MB

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     4.3           9976           311
RealLife-60%Rand-65%Read                      30           1439            11
Max Throughput-50%Read                       5.5           8779           274
Random-8k-70%Read.                            30           1431            11

                                EXCEPTIONS: CPU Util. 92% 46% 89% 45%

                                 

                                 

                                SERVER TYPE: Virtual Windows 2003R2sp2 on ESX 3.0.2. Descheduled time service enabled. Cache on individual disks enabled.

                                CPU TYPE / NUMBER: VCPU / 1

                                HOST TYPE: HP DL360G5, 4 GB RAM; 2x XEON E5345, 2,33 GHz, QC

                                STORAGE TYPE / DISK NUMBER / RAID LEVEL: P400i 256MB 50% read cache / 4xSAS 15k rpm / raid 1+0 / 128KB stripe size / default vmfs 1MB

TEST NAME                      Av. Resp. Time ms    Av. IOs/sek    Av. MB/sek
Max Throughput-100%Read.                     5.4           9681           302
RealLife-60%Rand-65%Read                      34           1353          10.5
Max Throughput-50%Read                       6.1           8763           273
Random-8k-70%Read.                            35           1412            11

                                EXCEPTIONS: CPU Util. 71% 40% 72% 33%

                                • 288. Re: Open inofficial storage performance thread
                                  christianZ Virtuoso

                                   

                                  @ericdaly

                                   

                                   

                                  @larstr

                                   

                                   

I like your deeper analyses and comparisons. Thanks for that.

                                   

                                   

                                  The DS4800 seems to be one of the best (performance) systems in midrange IMHO.

                                   

                                   

                                  • 289. Re: Open inofficial storage performance thread
                                    cmanucy Hot Shot

                                     

                                    I just wanted to whet everyone's appetite... I'm in the midst of some rather in-depth testing on some iSCSI solutions, and have been able to produce some very interesting data.

                                     

                                     

                                    For example: differences between PCI-E & PCI-X cards, using dual-port vs. 2x single-port NICs, and the impacts these (and other) decisions make on CPU overhead.  A little sprinkle of AMD-vs-Intel as well.

                                     

                                     

                                    I hope to have some good stuff to post shortly.  If anyone has any other ideas/requests, I'll see if I can cram it in while I still have the ability to test as well... it's not often you can take production systems and pull cables out just to see "what will happen" to the units...

                                     

                                     

                                     

                                     

                                     
