      • 300. Re: New  !! Open unofficial storage performance thread
        captainflannel Novice
        SERVER TYPE: HP Proliant DL360 G7 
        CPU TYPE / NUMBER: Intel 5660 x2 @2.8GHz / 96GB RAM
        HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM /  hosted by ESXi 4.1
        STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12 600GB 15k / RAID10

        Jumbo Frames Enabled, NetFlow Enabled. Using dual 1Gb NICs for a total of 2Gb of available iSCSI bandwidth to
        this VMFS Datastore.  Connected via HP ProCurve 2910al-24G Switch.  VMFS hosted via iSCSI.
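        For anyone setting up the same thing: jumbo frames in ESXi 4.1 have to be enabled on the vSwitch, on each iSCSI VMkernel port, and on the physical switch. A rough sketch of the console commands, assuming vSwitch1 carries the iSCSI VMkernel port (the port group name and addresses below are placeholders):

            # raise the vSwitch MTU to 9000
            esxcfg-vswitch -m 9000 vSwitch1

            # a vmknic's MTU can't be changed in place in 4.1, so recreate the iSCSI port with MTU 9000
            esxcfg-vmknic -d iSCSI1
            esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 iSCSI1

            # verify end to end with a non-fragmenting jumbo ping to the array
            vmkping -d -s 8972 10.0.0.100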

        Virtual Machine System Drive (C:)
        Hosted via 2TB VMFS Volume, 2 Storage Paths, RoundRobin
        Test name (System 2TB) | Latency (ms) | Avg iops | Avg MBps | cpu load
        Max Throughput-100%Read | 16.26 | 3678 | 114 | 6%
        RealLife-60%Rand-65%Read | 12.07 | 4704 | 36 | 28%
        Max Throughput-50%Read | 34.52 | 1741 | 54 | 5%
        Random-8k-70%Read | 11.92 | 4826 | 37 | 28%

         

         

        SERVER TYPE: HP Proliant G7 
        CPU TYPE / NUMBER: Intel 5660 x2 96GB RAM
        HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM /  hosted by ESXi 4.1
        STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12 600GB 15k / RAID10


        Jumbo Frames Enabled, NetFlow Enabled. Using dual 1Gb NICs for a total of 2Gb of available iSCSI bandwidth to
        this VMFS Datastore.  Connected via HP ProCurve 2910al-24G Switch.  VMFS hosted via iSCSI.

        Virtual Machine Data Drive (D:)
        Hosted via 300GB VMFS Volume, 2 Storage Paths, RoundRobin
        CPU load seems off perhaps...
        Test name | Latency (ms) | Avg iops | Avg MBps | cpu load
        Max Throughput-100%Read | 19.04 | 3136 | 98 | 1%
        RealLife-60%Rand-65%Read | 16.88 | 3320 | 25 | 25%
        Max Throughput-50%Read | 14.21 | 4254 | 132 | 1%
        Random-8k-70%Read | 17.25 | 3200 | 25 | 1%

         

         

        SERVER TYPE: Dell R310
        CPU TYPE / NUMBER: Intel Xeon x3323 @ 2.5GHz / 24GB RAM
        HOST TYPE: Server 2008 64bit/ 24GB RAM /  Direct Attached iSCSI Volume / 1 Path
        STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell MD3000i / 9 1TB 7200 / RAID5

        Jumbo Frames Enabled.  Connected via a Dell PowerConnect 6248 switch.
        Test name | Latency (ms) | Avg iops | Avg MBps | cpu load
        Max Throughput-100%Read | 28.27 | 2116 | 66 | 1%
        RealLife-60%Rand-65%Read | 129.60 | 338 | 2 | 16%
        Max Throughput-50%Read | 21.17 | 2751 | 85 | 1%
        Random-8k-70%Read | 148.83 | 305 | 2 | 9%
        • 301. Re: New  !! Open unofficial storage performance thread
          qwerty22 Novice

          CaptainFlannel,

           

          Here is what I am getting.  I am using a VNXe3100 with 10 300GB SAS disks in two RAID5 arrays, over iSCSI with a 512GB VMFS volume.  I have not turned on jumbo frames yet, nor have I set up multipath I/O; I am just using a single 1Gb Ethernet port.  The VM is a Windows 2003 R2 server with 4GB memory and 2 vCPUs.

           

          I am troubled by the real-world performance numbers, where your VNXe clearly outperforms mine.  The random numbers seem low to me as well.  What I find interesting is that the max throughput on my box is the only number that is much better than yours.  I wonder why?

           

          Best Regards.

           

          SERVER TYPE:Dell R710
          CPU TYPE / NUMBER: Xeon X5660 / 2 Processors
          HOST TYPE: Windows 2003 R2 32bit
          STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe3100 / 10 300 GB 15K SAS / RAID 5
          No Jumbo frame
          Test name | Latency (ms) | Avg iops | Avg MBps | cpu load
          Max Throughput-100%Read | 18.04 | 3348 | 104 | 13%
          RealLife-60%Rand-65%Read | 31.82 | 1681 | 13 | 0%
          Max Throughput-50%Read | 18.78 | 3254 | 101 | 12%
          Random-8k-70%Read | 30.30 | 1730 | 13 | 0%
          • 302. Re: New  !! Open unofficial storage performance thread
            captainflannel Novice

            Interesting to compare.  At the moment I am just looking at the tests performed on our D: drive.  There our numbers are similar, although with our 12 disks in a single RAID10 it looks like we are getting increased I/O on the 50% read tests.  Is your RAID5 split across two different volumes, or a single one?  I would have thought RAID10 and RAID5 would post much more different numbers with a similar number of disks.

             

            Actually, looking further at your numbers, the read I/O seems very similar, but when writes are involved I do see the increased I/O available in the RAID10.

             

            Interesting that your 100% read is a little faster.  What kind of networking equipment are you using?

             

            Message was edited by: captainflannel

            • 303. Re: New  !! Open unofficial storage performance thread
              captainflannel Novice
              SERVER TYPE: HP Proliant DL360 G7
              CPU TYPE / NUMBER: Intel 5660 x2 @2.8GHz / 96GB RAM
              HOST TYPE: Server 2008 x64bit / 4vCPU / 16GB RAM / hosted by ESXi 4.1
              STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12 600GB 15k / RAID10
              MultiPath set for 2 x 1Gb Ethernet links via Round Robin (see the sketch after the table).  Jumbo Frames Enabled.
              IOMeter tests run on an unformatted virtual disk added to this host.
              Test name (Physical) | Latency (ms) | Avg iops | Avg MBps | cpu load
              Max Throughput-100%Read | 1.01 | 56453 | 1764 | 2%
              RealLife-60%Rand-65%Read | 25.42 | 2221 | 17 | 1%
              Max Throughput-50%Read | 13.22 | 4539 | 141 | 19%
              Random-8k-70%Read | 17.65 | 3131 | 24 | 1%
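              For anyone wanting to reproduce the Round Robin setting, a sketch of the ESXi 4.1 commands (the naa. device ID below is a placeholder; take yours from the device list):

                  # show devices and their current path selection policy
                  esxcli nmp device list

                  # switch a device over to round robin
                  esxcli nmp device setpolicy --device naa.60060160xxxxxxxx --psp VMW_PSP_RR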
              • 304. Re: New  !! Open unofficial storage performance thread
                PinkishPanther Novice

                My company has spent a significant amount of time on iSCSI / VMWare benchmarks over the last few months.

                 

                Using a Dell R710 connected to an MD3220i with 500GB 7.2K drives in the first shelf and 600GB 15K drives in the second shelf.

                We originally only had the 7.2K drives and configured them with RAID6 and RAID10 (equal number of disks).

                 

                Max Throughput-100%Read and Max Throughput-50%Read results were the same for both RAID levels - approximately 128MB/s and 135MB/sec throughput respectively, even with round robin configured.

                 

                RAID6 on these drives gave RealLife-60%Rand-65%Read and Random-8k-70%Read throughput of about 8.7 and 8.8 MB/sec;

                RAID10 gave 17 and 15 MB/sec.

                RAID10 on the 15K drives was basically double the 7.2K results, at 31 and 33 MB/sec (we did briefly see 37 and 42 but were unable to repeat it).

                 

                The most interesting thing we discovered was that the round-robin IOPS limit (how many I/Os are sent down one path before switching) needs to be tuned for this array.

                The command is:
                esxcli nmp roundrobin setconfig --type "iops" --iops=3 --device <your LUN ID>

                 

                Once this command was run against our LUN, the Max Throughput-100%Read and Max Throughput-50%Read tests hit the limit of the NICs; with 3 x 1Gbit NICs we get over 300 and 315 MB/sec.
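                If you have more than a couple of LUNs, a loop from the ESXi console can apply the same setting to each one. A rough sketch - it assumes all of your iSCSI devices show up as naa.* IDs in the device list:

                    # apply the 3-IOPS round robin switching to every naa.* device
                    for dev in $(esxcli nmp device list | grep '^naa.'); do
                        esxcli nmp roundrobin setconfig --type "iops" --iops=3 --device $dev
                    done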

                • 305. Re: New  !! Open unofficial storage performance thread
                  Gabriel Chapman Enthusiast

                  I've been using a beta tool for testing storage I/O on our ESXi boxes and our storage array, and getting some pretty good results that are a little more real-world oriented than the 4 tests run here. I've run mixed workloads of Exchange 2003/2007, SQL, OLTP, Oracle, and various other tests from VMs in tandem, to emulate heavy transactional loads and the impact of prolonged storage I/O, instead of single tests that chase a "best result". Speak with your VMware rep about the Storage IO Analyzer beta, which has a very good set of workloads you can run from a VM to simulate multiple workloads.

                  One caveat: I've found that Windows VMs are just not efficient enough to really tax a true Tier 1 storage system. Running one workload from each attached host works better, at least in my case, when trying to truly hammer our boxes. I've managed to garner 80k IOPS from several VMs running in parallel, with max throughput rates of around 1.3 GB/s.

                  • 306. Re: New  !! Open unofficial storage performance thread
                    captainflannel Novice

                    Interesting. When we switched our IOPS setting from the default 1,000 to 3, we definitely see an increase in the max throughput tests - significant MBps and IOPS increases.

                     

                    SERVER TYPE: HP Proliant G7 
                    CPU TYPE / NUMBER: Intel 5660 x2 96GB RAM
                    HOST TYPE: Server 2008 64bit / 4vCPU / 16GB RAM /  hosted by ESXi 4.1
                    STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe3100 (2 SP) / 12 600GB 15k / RAID10
                    
                    
                    Jumbo Frames Enabled, NetFlow Enabled. Using dual 1Gb NICs for a total of 2Gb of available iSCSI bandwidth to
                    this VMFS Datastore.  Connected via HP ProCurve 2910al-24G Switch.
                    iops = 3
                    Test name | Latency (ms) | Avg iops | Avg MBps | cpu load
                    Max Throughput-100%Read | 11.02 | 5283 | 165 | 1%
                    RealLife-60%Rand-65%Read | 17.19 | 3241 | 25 | 25%
                    Max Throughput-50%Read | 11.37 | 5339 | 166 | 1%
                    Random-8k-70%Read | 17.57 | 3139 | 24 | 1%
                    • 307. Re: New  !! Open unofficial storage performance thread
                      qwerty22 Novice

                      captainflannel,

                       

                      I let the VNXe autoconfigure the pools and it created two RAID5 (4+1) groups under the performance pool, one hot spare and one unused drive.  From what I understand, the datastore uses all 10 of those drives.  As far as the switch is concerned, I am using a Cisco SGE2000, which is a small-business product with 24 Gb ports.

                       

                      I tried using an NFS datastore, but the performance was very poor; the numbers were about 4 times lower on the real-world test.  I've been working with EMC Tech Support for almost two months, but have not really made any progress.  Did you try NFS, and if so, how were your numbers?
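                      For reference, the NFS datastore was mounted from the console in the usual way (a sketch; the array address, export path and label below are placeholders):

                          # mount the NFS export from the VNXe as a datastore
                          esxcfg-nas -a -o 10.0.0.100 -s /vnxe_nfs_share NFS-DS1

                          # list mounted NAS datastores to confirm
                          esxcfg-nas -l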

                       

                      Best Regards

                      • 308. Re: New  !! Open unofficial storage performance thread
                        JaFF Novice

                        Hi,

                         

                        I am currently on paternity leave until 13/08/2011.

                        If you require assistance, please call our helpdesk on 1300 10 11 12.

                        Alternatively, email service@anittel.com.au

                         

                        Regards,

                         

                        James Ackerly

                        • 309. Re: New  !! Open unofficial storage performance thread
                          gokart Lurker

                          First off, I just wanted to thank everyone for their contribution to this thread. It helped me immensely when planning my virtualization project, and I really appreciate all the time people spent benchmarking their storage. It really pushed me in the EqualLogic direction, and based on the performance of the new setup, I'm very glad indeed.

                           

                          So my setup consists of a PS4000XV-600, 2 stacked PowerConnect 6248s, and three R610 ESXi hosts (the Dell show, basically). My SAN is configured with jumbo frames end-to-end, flow control on, and STP and unicast storm control disabled on the switches. Each host has four active links to the SAN; sadly the PS4000 is limited to two active links, but I'm less worried about that now after looking at the numbers.

                           

                          Firstly, I benched the array using an RDM from my B2D box with Dell's MPIO initiator:

                           

                          SERVER TYPE: Dell NX3100

                          CPU TYPE / NUMBER: Intel 5620 x2 24GB RAM

                          HOST TYPE: Server 2008 64bit

                          STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS4000XV-600 14 * 600GB 15K SAS @ R50

                           

                           

                          Access Specification Name | IOPS | MBps (binary) | Avg. Response Time (ms)
                          Max Throughput-100%Read | 6706 | 209 | 17
                          RealLife-60%Rand-65%Read | 4298 | 33.5 | 22.5
                          Max Throughput-50%Read | 7956 | 248 | 14
                          Random-8k-70%Read | 4232 | 33 | 23.5

                           

                          Then I got brave and configured one of my ESXi hosts with the Dell MEM plug-in for iSCSI, using the VMware software iSCSI initiator, and threw a bare-bones Win2k8R2 guest on there:

                           

                           

                          Access Specification Name | IOPS | MBps | Avg. Response Time (ms)
                          Max Throughput-100%Read | 7163 | 223 | 8.3
                          RealLife-60%Rand-65%Read | 4516 | 35 | 11.4
                          Max Throughput-50%Read | 6901 | 215 | 8.4
                          Random-8k-70%Read | 4415 | 34 | 11.9

                           

                          So I was quite pleased with that, but then I noticed I only had two active links to the storage while my host has 4 links - so I upped the membersessions to 4 as outlined here: http://modelcar.hk/?p=2771

                           

                           

                          Access Specification Name | IOPS | MBps (binary) | Avg. Response Time (ms)
                          Max Throughput-100%Read | 7195 | 224 | 8.3
                          RealLife-60%Rand-65%Read | 4375 | 34 | 11.9
                          Max Throughput-50%Read | 7713 | 241 | 7.6
                          Random-8k-70%Read | 4217 | 32.9 | 12.3

                           

                          That gave me a pretty good boost on my sequential numbers, but lessened my random-ish workloads and increased my latency a little; not sure which I'll go with... But overall very pleased so far!
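                          For anyone wanting to double-check how many sessions are actually active after changing membersessions, the path listing from the ESXi console shows what each device is really using (a rough check):

                              # verbose listing of every path, grouped by device
                              esxcfg-mpath -l

                              # or the brief one-line-per-path view
                              esxcfg-mpath -b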

                          • 311. Re: New  !! Open unofficial storage performance thread
                            needmorstuff007 Novice

                            EMC VNX5500, 200GB FAST Cache (4x100GB EFD, RAID1)

                            Pool of 25x300GB 15k disks

                             

                            Cisco UCS blades

                             

                            Access Specification Name | IOPS | MBps (binary) | Avg. Response Time (ms)
                            Max Throughput-100%Read | 16068 | 502 | 1.71
                            RealLife-60%Rand-65%Read | 3498 | 27 | 10.95
                            Max Throughput-50%Read | 12697 | 198 | 0.885
                            Random-8k-70%Read | 4145 | 32.38 | 8.635

                            • 312. Re: New  !! Open unofficial storage performance thread
                              andy0809 Novice

                              qwerty22, here's an NFS datastore on a VNXe3300 - is this comparable to what you saw on your VNXe3100?

                               

                              SERVER TYPE: HP DL360 G5
                              CPU TYPE / NUMBER: Intel X5450 x2 32GB RAM
                              Host Type: Windows 2008 R2 64bit / 1vCPU / 4GB RAM / ESXi 4.1
                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe3300 / 21x600GB SAS 15k / RAID 5

                               

                              Access Specification | IOPs | MB/s | Avg IO response time (ms)
                              Max Throughput 100% read | 3428 | 107 | 18
                              RealLife-60% Rand/65% Read | 596 | 5 | 101
                              Max Throughput 50% read | 3183 | 99 | 19
                              Random-8k 70% Read | 562 | 4 | 107
                              • 313. Re: New  !! Open unofficial storage performance thread
                                andy0809 Novice

                                iSCSI results

                                 

                                SERVER TYPE: HP DL360 G5
                                CPU TYPE / NUMBER: Intel X5450 x2 32GB RAM
                                Host Type: Windows 2008 R2 64bit / 1vCPU / 4GB RAM / ESXi 4.1
                                STORAGE TYPE / DISK NUMBER / RAID LEVEL: VNXe3300 / 21x600GB SAS 15k / RAID 5

                                 

                                Access Specification | IOPs | MB/s | Avg IO response time (ms)
                                Max Throughput 100% read | 3502 | 109 | 17
                                RealLife-60% Rand/65% Read | 3738 | 29 | 14
                                Max Throughput 50% read | 5783 | 181 | 10
                                Random-8k 70% Read | 3602 | 28 | 15
                                • 314. Re: New  !! Open unofficial storage performance thread
                                  qwerty22 Novice

                                  Andy0809, yes, that is close to what I was seeing with NFS: very poor IOPS and high latency.  I understand that EMC has found and corrected the NFS issue, and a new software release has been posted to fix the problem.  I am out of the office at the moment so I haven't had the opportunity to install and test it, but at least one other person has, and has reported back numbers similar to iSCSI.  Best Regards.
