      • 240. Re: First iSCSI Implementation
        Dr.Virt Enthusiast

        Long-time FC shop here; we decided to deploy iSCSI for a remote office, and it is performing surprisingly well.

         

        Physical assets: 2x HP P4300G2 (Lefthand) ver. 8.5, 2x HP 2910AL, 2x HP DL360G6

         

         

        Network: All 1GbE, 2 Ports each


        ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

        TABLE OF RESULTS

        ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

         

        SERVER TYPE: VM Windows 2008 R2 Standard (Version 7), 1x vCPU, 4 GB RAM, 2x 40GB VMDKs

        HOST TYPE: HP ProLiant DL360 G6, 36GB RAM, 2x X5650

        VMWARE: ESXi 4.1 on USB drive, software iSCSI initiator on 2x VMkernel ports

        SAN TYPE: HP P4300G2 (LeftHand) / DISKS: 450GB 10k SAS / RAID LEVEL: RAID 5 / 8 disks
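
        In case anyone wants to reproduce the two-VMkernel software iSCSI setup, here's a minimal sketch of the port binding on ESXi 4.1. The vmhba33 name and vmk numbers are the usual defaults rather than anything I can promise for your box, so check yours first with esxcfg-vmknic -l:

            # bind both iSCSI VMkernel ports to the software initiator (vmhba33 assumed)
            esxcli swiscsi nic add -n vmk1 -d vmhba33
            esxcli swiscsi nic add -n vmk2 -d vmhba33
            # verify the bindings
            esxcli swiscsi nic list -d vmhba33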


        ##################################################################################

        Test Name                        | Avg. Response Time (ms) | Avg. I/O per Sec. | Avg. MB per Sec.
        Max Throughput - 100% Read       | 1.75                    | 25,916.19         | 809.88
        Real Life - 60% Random 65% Read  | 35.93                   | 1,426.11          | 11.14
        Max Throughput - 50% Read        | 17.48                   | 3,333.33          | 104.17
        Random - 8K 70% Read             | 15.59                   | 1,627.12          | 18.15

         

         

        • 241. Re: New  !! Open unofficial storage performance thread
          larstr Virtuoso
          vExpert

          Christian,

          For automatic interpretation of Iometer results.csv files, I've set up a web page here:

          http://vmktree.org/iometer/

           

          I also put the .iso file from the older thread in there.

           

          Lars

          • 242. Re: New  !! Open unofficial storage performance thread
            pinkerton Enthusiast

            I am out of the office until December 13th. During this time, please contact support@mdm.de.

             

            Best Regards

            Michael Groß

            • 243. Re: New  !! Open unofficial storage performance thread
              larstr Virtuoso
              vExpert
              SERVER TYPE: Windows Server 2003R2x32 VM running on ESXi 4.1
              CPU TYPE / NUMBER: AMD Opteron 6174
              HOST TYPE: DL385G7
              STORAGE TYPE / DISK NUMBER / RAID LEVEL: Local disks, P410i, 8x15k rpm SAS, RAID5
              
              Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
              Max Throughput-100%Read  | 1.97         | 28525    | 891      | 71%
              RealLife-60%Rand-65%Read | 24.32        | 1394     | 10       | 59%
              Max Throughput-50%Read   | 2.33         | 24523    | 766      | 55%
              Random-8k-70%Read        | 25.47        | 1392     | 10       | 56%
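
              For anyone comparing rows across posts: assuming the standard Iometer config used in this thread (32K transfers for the two Max Throughput tests, 8K for the RealLife and Random tests), Avg MBps should come out to roughly Avg iops x 32/1024 for the sequential tests and Avg iops x 8/1024 for the random ones. That makes a quick sanity check for any result line (e.g. 28525 iops x 32/1024 ≈ 891 MBps above).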
              • 244. Re: New  !! Open unofficial storage performance thread
                JTravers Lurker

                Here are my results. I can't figure out why our performance is degraded when using jumbo frames on our MD3000i. We're using two PowerConnect 5424s for our dedicated iSCSI network, and jumbo frames are enabled end-to-end.

                 

                SERVER TYPE:  Windows Server 2003R2x32 VM running on ESXi 4.1

                CPU TYPE / NUMBER: VCPU x 1, JUMBO FRAMES, MPIO RR

                HOST TYPE: PowerEdge R710, 24GB RAM, 2xE5620

                STORAGE TYPE / DISK NUMBER / RAID LEVEL: MD3000i, 300GB, 6x15k RPM SAS, RAID 10
                
                Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                Max Throughput-100%Read  | 19.61        | 3036     | 94       | 20%
                RealLife-60%Rand-65%Read | 19.24        | 2558     | 19       | 37%
                Max Throughput-50%Read   | 19.62        | 3061     | 95       | 19%
                Random-8k-70%Read        | 18.34        | 2728     | 21       | 37%

                 

                 

                SERVER TYPE:  Windows Server 2003R2x32 VM running on ESXi 4.1

                CPU TYPE / NUMBER: VCPU x 1, JUMBO FRAMES, MPIO RR

                CPU TYPE / NUMBER: VCPU x 1, JUMBO FRAMES, MPIO RR

                HOST TYPE: PowerEdge R710, 24GB RAM, 2xE5620

                STORAGE TYPE / DISK NUMBER / RAID LEVEL: MD3000i, 300GB, 7x15k RPM SAS, RAID 5
                
                Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                Max Throughput-100%Read  | 19.11        | 3122     | 97       | 20%
                RealLife-60%Rand-65%Read | 22.94        | 2027     | 15       | 40%
                Max Throughput-50%Read   | 19.30        | 3105     | 97       | 19%
                Random-8k-70%Read        | 21.28        | 2103     | 16       | 43%

                 

                 

                 

                SERVER TYPE:  Windows Server 2003R2x32 VM running on ESXi 4.1

                CPU TYPE / NUMBER: VCPU x 1, NO JUMBO FRAMES, MPIO RR

                HOST TYPE: PowerEdge R710, 24GB RAM, 2xE5520

                STORAGE TYPE / DISK NUMBER / RAID LEVEL: MD3000i, 300GB, 6x15k RPM SAS, RAID 10
                
                Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                Max Throughput-100%Read  | 15.20        | 3942     | 123      | 18%
                RealLife-60%Rand-65%Read | 19.02        | 2514     | 19       | 39%
                Max Throughput-50%Read   | 17.09        | 3585     | 112      | 15%
                Random-8k-70%Read        | 18.94        | 2558     | 19       | 38%

                 

                SERVER TYPE:  Windows Server 2003R2x32 VM running on ESXi 4.1

                CPU TYPE / NUMBER: VCPU x 1, NO JUMBO FRAMES, MPIO RR

                HOST TYPE: PowerEdge R710, 24GB RAM, 2xE5520

                STORAGE TYPE / DISK NUMBER / RAID LEVEL: MD3000i, 300GB, 7x15k RPM SAS, RAID 5
                
                Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                Max Throughput-100%Read  | 15.13        | 3957     | 123      | 18%
                RealLife-60%Rand-65%Read | 23.95        | 2090     | 16       | 32%
                Max Throughput-50%Read   | 16.60        | 3682     | 115      | 15%
                Random-8k-70%Read        | 22.81        | 2107     | 16       | 36%

                 

                ===========================

                 

                Interface  Port Group/DVPort    IP Family  IP Address      Netmask        MAC Address        MTU   Type    VMotion
                vmk0       Management Network   IPv4       192.168.10.1    255.255.255.0  xx:xx:xx:xx:xx:xx  1500  STATIC  Disabled
                vmk1       iSCSI1               IPv4       192.168.100.10  255.255.255.0  xx:xx:xx:xx:xx:xx  9000  STATIC  Disabled
                vmk2       iSCSI2               IPv4       192.168.101.10  255.255.255.0  xx:xx:xx:xx:xx:xx  9000  STATIC  Disabled

                 

                 

                ===========================

                Switch Name   Num Ports   Used Ports   Configured Ports   MTU    Uplinks
                vSwitch0      128         4            128                1500   vmnic4,vmnic0

                   PortGroup Name        VLAN ID   Used Ports   Uplinks
                   Management Network    0         1            vmnic0

                Switch Name   Num Ports   Used Ports   Configured Ports   MTU    Uplinks
                vSwitch1      128         6            128                1500   vmnic9,vmnic8,vmnic3,vmnic2

                   PortGroup Name        VLAN ID   Used Ports   Uplinks
                   VM Network            0         1            vmnic9,vmnic8,vmnic3,vmnic2

                Switch Name   Num Ports   Used Ports   Configured Ports   MTU    Uplinks
                vSwitch2      128         5            128                9000   vmnic5,vmnic1

                   PortGroup Name        VLAN ID   Used Ports   Uplinks
                   iSCSI2                0         1            vmnic5
                   iSCSI1                0         1            vmnic1

                ============================
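
                One way to confirm jumbo frames really pass end-to-end is to ping the array's iSCSI portals from the ESXi console with don't-fragment set and a payload just under the 9000-byte MTU (the portal addresses below are made up to match the vmk1/vmk2 subnets above):

                    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -d forbids fragmentation
                    vmkping -d -s 8972 192.168.100.20
                    vmkping -d -s 8972 192.168.101.20

                If these fail while vmkping -d -s 1472 works, some hop (vSwitch, physical switch port, or array interface) is not honoring the 9000 MTU.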

                • 245. Re: New  !! Open unofficial storage performance thread
                  s1xth Expert
                  VMware Employees
                  SERVER TYPE: W2K8 32bit on ESXi 4.1 Build 320137 1vCPU 2GB RAM
                  CPU TYPE / NUMBER: Intel X5670 @ 2.93Ghz
                  HOST TYPE: Dell PE R610 w/ Broadcom 5709 Dual Port w/ EQL MPIO PSP Enabled
                  NETWORK: Dell PC 6248 Stack w/ Jumbo Frames 9216
                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS4000X 16 Disk Raid 50
                  Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                  Max Throughput-100%Read  | 8.12         | 7410     | 231      | 29%
                  RealLife-60%Rand-65%Read | 10.65        | 3347     | 26       | 59%
                  Max Throughput-50%Read   | 7.19         | 7861     | 245      | 34%
                  Random-8k-70%Read        | 11.37        | 3387     | 26       | 55%
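
                  If anyone wants to confirm the EQL PSP is actually claiming the volumes (the policy name below is the one the Dell MEM installs, quoted from memory, so verify against your own setup):

                      # list devices and their active path selection policy
                      esxcli nmp device list
                      # each EQL volume should show: Path Selection Policy: DELL_PSP_EQL_ROUTED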
                  • 246. Re: New  !! Open unofficial storage performance thread
                    fbonez Expert
                    vExpert

                    I am out of the office.

                    I will be back on 10/01/2011.

                    For urgent matters, please contact technical support on 045 8738738.

                     

                    Francesco Bonetti

                    RTC SpA

                    --
                    If you find this information useful, please award points for "correct" or "helpful". | @fbonez | www.thevirtualway.it
                    • 247. Re: New  !! Open unofficial storage performance thread
                      pinkerton Enthusiast

                      I'm out of the office until January 3rd. Please contact support@mdm.de instead.

                      Best Regards

                      Michael Groß

                      MDM IT department

                      • 248. Re: New  !! Open unofficial storage performance thread
                        JonWeatherhead Enthusiast

                        My results are crammed into a spreadsheet that I sent around to some of the guys I work with, so I'm not going to take the time to format them the way everyone else has been doing... sorry for that.

                         

                        Background:

                        SERVER TYPE: Windows 2003 Standard R2 (aligned to 1024K)
                        CPU TYPE / NUMBER / Memory: VCPU / 2 / 1024M
                        HOST TYPE: Dell PowerEdge R510, 32GB RAM, 2 x Intel L5520, 2.266GHz, QuadCore, Running ESXi 4.1
                        STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS PERC H700 / (6) 300GB SAS 15k / RAID 10

                        STRIPE UNIT SIZE: Default 64k, 256k, 512k, 1024k

                        VMFS ALIGNMENT: Default 64k, 256k, 512k, 1024k

                        VMFS BLOCK SIZE: 8M for all runs except the first, which used all default settings and a misaligned guest

                         

                        I've attached my results as a spreadsheet.

                         

                        It was interesting to see that the stride used in my RAID array had a much bigger effect on performance than guest alignment did, although I suspect guest alignment will become a bigger deal as I start adding more VMs. Also, back in ESX 3 the best practice was to set Windows guests up with a FAU (File Allocation Unit) of 32k; that just doesn't seem to have much effect on disk I/O at all.

                         

                        So for our setup where we are going to use DAS instead of a SAN, I believe we will:

                        1. Set our RAID strides to 1024k

                        2. Align any new VMs but disregard alignment on existing ones (maybe... jury is still out)

                        3. Not bother with setting regular (non DB) VMs up to use an FAU of 32k

                        4. Not bother with re-aligning the VMFS volumes to match our strides (it helps but not that much)

                        5. Continue to use 8M blocks on datastores since that is what we have already been doing for a while.

                         

                        Has anyone else noticed any real benefit to setting the guest OS to use FAU of 32k?
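
                        For anyone who wants to test the FAU question themselves, here's a minimal sketch for a Windows 2003 guest (the drive letter and disk number are illustrative, not from my setup):

                            rem check the current allocation unit size - look for "Bytes Per Cluster"
                            fsutil fsinfo ntfsinfo D:

                            rem create a partition aligned to 1024K (diskpart align= needs 2003 SP1 or later)
                            diskpart
                            select disk 1
                            create partition primary align=1024
                            exit

                            rem format with a 32k allocation unit
                            format D: /FS:NTFS /A:32K /Q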

                        • 249. Re: New  !! Open unofficial storage performance thread
                          cbseydler Lurker

                          Hi everybody,

                           

                          here are my test results. We are using a NetApp FAS 3140 and the datastores sit in a volume that's made available via NFS. The ESXi 4 host has two 1Gbps connections to the filer. The maximum throughput is somewhere near the limit of a 1 Gbps connection - as one would expect. The random throughput is near 20 MB/s.
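
                          As a rough sanity check on "near the limit" (back-of-the-envelope, assuming standard 1500-byte frames): 1 Gbps is 125 MB/s raw, and after Ethernet/IP/TCP/NFS overhead roughly 110-115 MB/s is the practical ceiling for one link. So the 107.24 MB/s in the 100% read test is essentially line rate for a single connection, and the 153.81 MB/s in the 50% read test only makes sense if both 1Gb links are carrying traffic.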

                           

                          Do these numbers look normal to you?

                           

                          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

                          TABLE oF RESULTS

                          ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


                          SERVER TYPE: ESXi 4, VM Windows 2003 Standard

                          CPU TYPE / NUMBER: VCPU / 1

                          HOST TYPE: HP ProLiant BL460c G6, 48GB RAM; 2x Intel Xeon E5540, 2.53 GHz, QC

                          STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS3140 / 13 Disks (2 Parity) / RAID DP

                          SAN TYPE: Storage Link 2xEthernet 1Gb / NFS / NO JUMBOFRAMES / NO FLOWCONTROL

                           


                          ##################################################################################

                          TEST NAME                | Av. Resp. Time (ms) | Av. IOs/sec | Av. MB/sec
                          Max Throughput-100%Read  | 17.5395             | 3431.58     | 107.24
                          RealLife-60%Rand-65%Read | 19.9659             | 2588.61     | 20.22
                          Max Throughput-50%Read   | 12.2198             | 4921.79     | 153.81
                          Random-8k-70%Read        | 19.6347             | 2489.96     | 19.45

                          EXCEPTIONS: CPU Util. %

                          ##################################################################################

                           

                          Thanks in advance for your hints.

                           

                          Alex

                          • 250. Re: New  !! Open unofficial storage performance thread
                            ryanmaclean Lurker

                            A quick test of a recent Nexenta build.

                            Dedupe and compression both turned off.

                            Using direct-connect iSCSI with the software initiator on a Win7 64-bit client prior to testing the same box with vSphere.

                             

                            SERVER TYPE: Desktop Test
                            CPU TYPE / NUMBER: Core i7 920, 4 cores
                            HOST TYPE: Physical whitebox, 12GB RAM, 2x gbit NICs
                            NETWORK: Direct-attached iSCSI with 1500 MTU, 32k block size on storage pool and NTFS formatted LUN.
                            STORAGE TYPE / DISK NUMBER / RAID LEVEL: Nexenta Enterprise, 3x 1.5TB Seagate 7200.11 SATA in ZPool, 1x 55GB OCZ Revodrive ZIL, 1x 55GB Revodrive Cache
                            Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                            Max Throughput-100%Read  | 16.74        | 3571     | 111      | 0%
                            RealLife-60%Rand-65%Read | 7.13         | 6187     | 48       | 0%
                            Max Throughput-50%Read   | 12.34        | 4482     | 140      | 0%
                            Random-8k-70%Read        | 6.50         | 6198     | 48       | 0%
                            • 251. Re: New  !! Open unofficial storage performance thread
                              Gabriel Chapman Enthusiast

                              SERVER TYPE: W2K3 R2 SP2 x32, 1 vCPU, 2GB RAM

                              CPU TYPE: 2x Intel Xeon X7560 (Nehalem-EX), 8 cores @ 2.27GHz

                              HOST TYPE: IBM x3950 X5, 128GB RAM, (2) 8Gb FC - (2) 10Gb Eth

                              STORAGE TYPE / DISK NUMBER / RAID LEVEL: XIV / 180 / No RAID

                               

                               

                              Test Type                 | Latency (ms) | Avg IOPS | Avg MBps | CPU Load
                              Max Throughput: 100% Read | 12.03        | 5320     | 166      | 16%
                              Real Life 60% Rand-Read   | 3.96         | 16177    | 126      | 43%
                              Max Throughput 50% Read   | 3.65         | 17545    | 137      | 47%
                              Random 8k-70% Read        | 11.05        | 5793     | 181      | 19%
                              • 252. Re: New  !! Open unofficial storage performance thread
                                Adrian.Buchmann Lurker
                                SERVER TYPE: ESXi 4.10 / Windows Server 2008 R2 x64, 2 vCPU, 4GB RAM
                                CPU TYPE / NUMBER: Intel Xeon X5670 @ 2.93GHz
                                HOST TYPE: HP ProLiant BL460c G7
                                STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS6280 Metrocluster, FlashCache / 80 Disks / RAID DP
                                Test name                | Latency (ms) | Avg iops | Avg MBps | cpu load
                                Max Throughput-100%Read  | 4.07         | 11562    | 361      | 63%
                                RealLife-60%Rand-65%Read | 1.67         | 22901    | 178      | 1%
                                Max Throughput-50%Read   | 3.93         | 11684    | 365      | 61%
                                Random-8k-70%Read        | 1.45         | 25509    | 199      | 1%
                                • 254. Re: New  !! Open unofficial storage performance thread
                                  pinkerton Enthusiast

                                  After having some trouble with storage performance as described in http://communities.vmware.com/message/1680855#1680855 and moving the Heavy Hitter to a different set of disks, we additionally decided to buy four more drives and move the VMFS LUNs on our HP EVA 4400 from RAID5 to RAID1.

                                   

                                  Here are my results:

                                   

                                   

                                  SERVER TYPE: VM Windows 2008 R2, 4 GB RAM, LSI Logic SAS

                                  CPU TYPE / NUMBER: 2 VCPU

                                  HOST TYPE: ESXi 4.1 320137, HP DL380 G6, 64GB RAM, 2x E5520, 2.27 GHz QC

                                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA 4400 / 32/36x FC 10k 300GB / RAID5 and RAID10

                                  SAN TYPE / HBAs : FC, HP FC1142SR QLogic HBA, HP StorageWorks 8/8 SAN Switches

                                   

                                  #################################################################################################

                                   

                                   

                                  Vraid5 / 32 HDDs

                                  Test Name                | Response Time (ms) | IOPS   | MB/s
                                  Max Throughput-100%Read  | 2.67               | 10,809 | 337.78
                                  RealLife-60%Rand-65%Read | 8.29               | 3,055  | 23.87
                                  Max Throughput-50%Read   | 34.03              | 1,530  | 47.81
                                  Random-8k-70%Read        | 8.64               | 2,866  | 22.39

                                  Vraid1 / 36 HDDs

                                  Test Name                | Response Time (ms) | IOPS   | MB/s
                                  Max Throughput-100%Read  | 5.09               | 11,686 | 365.20
                                  RealLife-60%Rand-65%Read | 10.84              | 4,284  | 33.47
                                  Max Throughput-50%Read   | 6.63               | 5,113  | 159.78
                                  Random-8k-70%Read        | 10.41              | 4,783  | 37.37

                                   

                                  As you can see, tests 3 and 4 in particular run MUCH faster with Vraid1 than with Vraid5! It actually seems that the RAID level matters more for performance than the number of disks!
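
                                  A rough back-of-the-envelope that fits this pattern (assuming the textbook write penalties of 2 back-end I/Os per write for RAID1 and 4 for RAID5; the EVA's Vraid internals may differ in detail): at 65% read / 35% write, the Vraid1 RealLife result of 4,284 IOPS costs about 4,284 x 0.65 + 2 x 4,284 x 0.35 ≈ 5,800 back-end disk I/Os per second, while the same front-end load on Vraid5 would need 4,284 x 0.65 + 4 x 4,284 x 0.35 ≈ 8,800. The write penalty, not the four extra spindles, is what dominates here.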
