      • 270. Re: New  !! Open unofficial storage performance thread
        ngerasim Enthusiast

Can someone confirm my test is set up correctly?

         

        Version 2006.07.27
        'TEST SETUP ====================================================================
        'Test Description
        'Run Time
        ' hours      minutes    seconds
        0          5          0
        'Ramp Up Time (s)
        0
        'Default Disk Workers to Spawn
        NUMBER_OF_CPUS
        'Default Network Workers to Spawn
        0
        'Record Results
        ALL
        'Worker Cycling
        ' start step step type
        1 5 LINEAR
        'Disk Cycling
        ' start step step type
        1 1 LINEAR
        'Queue Depth Cycling
        ' start end step step type
        8 128 2 EXPONENTIAL
        'Test Type
        NORMAL
        'END test setup
        'RESULTS DISPLAY ===============================================================
        'Update Frequency,Update Type
        4,WHOLE_TEST
        'Bar chart 1 statistic
        Total I/Os per Second
        'Bar chart 2 statistic
        Total MBs per Second
        'Bar chart 3 statistic
        Average I/O Response Time (ms)
        'Bar chart 4 statistic
        Maximum I/O Response Time (ms)
        'Bar chart 5 statistic
        % CPU Utilization (total)
        'Bar chart 6 statistic
        Total Error Count
        'END results display
        'ACCESS SPECIFICATIONS =========================================================
        'Access specification name,default assignment
        Max Throughput-100%Read,ALL
        'size,% of size,% reads,% random,delay,burst,align,reply
        32768,100,100,0,0,1,0,0
        'Access specification name,default assignment
        RealLife-60%Rand-65%Read,ALL
        'size,% of size,% reads,% random,delay,burst,align,reply
        8192,100,65,60,0,1,0,0
        'Access specification name,default assignment
        Max Throughput-50%Read,ALL
        'size,% of size,% reads,% random,delay,burst,align,reply
        32768,100,50,0,0,1,0,0
        'Access specification name,default assignment
        Random-8k-70%Read,ALL
        'size,% of size,% reads,% random,delay,burst,align,reply
        8192,100,70,100,0,1,0,0
        'END access specifications
        'MANAGER LIST ==================================================================
        'Manager ID, manager name
        1,WPDMA392
        'Manager network address
        10.66.66.250
        'Worker
        Worker 1
        'Worker type
        DISK
        'Default target settings for worker
        'Number of outstanding IOs,test connection rate,transactions per connection
        64,ENABLED,500
        'Disk maximum size,starting sector
        8000000,0
        'End default target settings for worker
        'Assigned access specs
        'End assigned access specs
        'Target assignments
        'Target
        C:
        'Target type
        DISK
        'End target
        'End target assignments
        'End worker
        'End manager
        'END manager list
        Version 2004.07.30

        • 271. Re: New  !! Open unofficial storage performance thread
          ngerasim Enthusiast

Question: I set the queue depth on the ESX hosts, but I didn't change the queue depth in Windows Server. Is there a specific setting I need to change to get a deeper queue or better performance? This is for the Windows Server 2003 VMs running on ESXi 4.1 U1.

          • 272. Re: New  !! Open unofficial storage performance thread
            Enthusiast

            Hey All,

            I need some help with my IOmeter Test Setup.  The servers I am using are in a special corporate network, so there is no easy way for me to upload the config file.

             

Basically I am getting TOO MANY IOPS back from IOmeter with my setup: it is reporting 1650 IOPS during a 3-minute test on this hardware.

             

Thank you for any feedback. I calculate theoretical IOPS at 700-900, so 1650 seems way too high (see the sketch after the settings list below). MB/s is around 5-7.


            Drew

             

            HP P2000

            • LUN:
              • 6 Disks, RAID 10 (300GB 10,000RPM SAS 6G Dual Port)
              • Chunk Size: 64K
            • Dual controllers, cache is 2GB/2GB
            • 8 Gb Fibrechannel

             

            Blade

            • HP BL460 G6, Dual 6-core CPU, 144GB RAM
            • Local disks are 72GB 15,000 RPM in a RAID 10

             

            VM

            • VMFS Block Size: 2MB
            • Vista SP1, 150GB C:\
            • 200 GB drive used as a PHYSICAL device in IOMETER (although the results were still good when formatted with NTFS)
            • 1 CPU
            • 1024 MB RAM (hard set in ESX not to go over)

             

            IOmeter Settings:

            • 1 Worker
            • Using physical drive, 64,000,000 sectors (although I didn't see a difference with 8,000,000 or 32,000,000)
• 32 Outstanding I/Os
            • Test Connection Rate (not checked)
            • Transfer Request Size: 0MB, 4KB
              • Vista Block Size was 4096, so this seemed right.
            • 100% Random I/O
            • Reply Size (no reply)
            • 30% Write, 70% Read
            • Transfer Delay (0 ms)
            • Burst Length: 1 I/O
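
A rough sanity check on the theoretical number (an editor's back-of-the-envelope sketch; the ~140 IOPS per 10k RPM SAS spindle and the RAID 10 write penalty of 2 are rule-of-thumb assumptions, not measured values):

# Rule-of-thumb IOPS estimate for a RAID 10 LUN under a mixed read/write load.
# Assumptions: ~140 IOPS per 10k RPM SAS spindle, RAID 10 write penalty of 2
# (each host write costs two backend writes, one per mirror side).
def raid10_iops(disks, iops_per_disk, read_fraction):
    raw = disks * iops_per_disk
    return raw / (read_fraction + (1 - read_fraction) * 2)

print(round(raid10_iops(6, 140, 0.70)))  # ~646 IOPS for 6 disks at 70% read

Anything far above that spindle-bound estimate (like the 1650 reported here) usually means the array cache is absorbing part of the load - the P2000's 2GB mirrored write-back cache would do exactly that on a short 3-minute run.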
            • 273. Re: New  !! Open unofficial storage performance thread
              SteveEsx Novice

I think I am getting strange random performance because I used a thin-provisioned disk in this test? 18.49 ms latency on local server disks in the "Random-8k" test, with only 2658 IOPS, seems very wrong?

              No other load or any other VM on the server

              iometer version: 2006.07.27

               

               

SERVER TYPE: Dell PowerEdge R710
CPU TYPE / NUMBER: 2 x 6 core Intel
HOST TYPE: VM w2k3 r2 enterprise x64 thin disk, paravirt scsi
STORAGE TYPE / DISK NUMBER / RAID LEVEL: local 6 x sas 15k on perc raid controller

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 2.45 | 23690 | 740 | 36%
RealLife-60%Rand-65%Read | 16.95 | 2896 | 22 | 2%
Max Throughput-50%Read | 1.36 | 39750 | 1242 | 53%
Random-8k-70%Read | 18.49 | 2658 | 20 | 3%
Max Throughput-100%Read | 1.33 | 39542 | 1235 | 55%
RealLife-60%Rand-65%Read | 16.84 | 2921 | 22 | 6%
Max Throughput-50%Read | 1.35 | 40095 | 1252 | 53%
Random-8k-70%Read | 18.48 | 2663 | 20 | 7%

               

               

              Old style VMTN communities table:

              SERVER TYPE:Dell PowerEdge R710
              CPU TYPE / NUMBER: 2 x 6 core Intel
              HOST TYPE: VM w2k3 r2 enterprise x64 thin disk, paravirt scsi
              STORAGE TYPE / DISK NUMBER / RAID LEVEL: local 6 x sas 15k on perc raid controller
              
              |*TEST NAME*|*Avg Resp. Time ms*|*Avg IOs/sec*|*Avg MB/sec*|*% cpu load*|
              |*Max Throughput-100%Read*|2.45|23690|740|36%|
              |*RealLife-60%Rand-65%Read*|16.95|2896|22|2%|
              |*Max Throughput-50%Read*|1.36|39750|1242|53%|
              |*Random-8k-70%Read*|18.49|2658|20|3%|
              |*Max Throughput-100%Read*|1.33|39542|1235|55%|
              |*RealLife-60%Rand-65%Read*|16.84|2921|22|6%|
              |*Max Throughput-50%Read*|1.35|40095|1252|53%|
              |*Random-8k-70%Read*|18.48|2663|20|7%|
              
              • 274. Re: New  !! Open unofficial storage performance thread
                Dyr Lurker
                SERVER TYPE: Windows 2008 Server, 16Gb RAM, iSCSI via 4x1Gb ethernet MPIO RR, not under virtualization
                CPU TYPE / NUMBER: 2x X5620
                HOST TYPE: Supermicro X8DTU, Intel E1G44HT Quad Gigabit NIC
                STORAGE TYPE / DISK NUMBER / RAID LEVEL: NexentaStor, 8x2Tb Hitachi SATA in RAIDZ2, SSD for cache, 16Gb RAM
Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 0.00 | 13010 | 406 | 0%
RealLife-60%Rand-65%Read | 6.35 | 2215 | 17 | 0%
Max Throughput-50%Read | 315.48 | 19267 | 602 | 1%
Random-8k-70%Read | 5.77 | 2344 | 18 | 0%

                 

I'm slightly confused by the results. Probably it's because the test file was as big as RAM (32,000,000 sectors ≈ 15.26 GiB).
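
The size arithmetic, for reference (a quick check, assuming Iometer's usual 512-byte sectors):

# Test-file size implied by Iometer's "maximum disk size" setting, in 512-byte sectors.
sectors = 32_000_000
print(sectors * 512 / 2**30)  # ~15.26 GiB, roughly the size of the server's RAM

With the test file no bigger than RAM on either side, reads can be served largely from cache, which would also explain the 0.00 ms latency rows above.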

                • 275. Re: New  !! Open unofficial storage performance thread
                  m1kkel84 Enthusiast
                  SERVER TYPE: ESX 4.1 / VM:Server 2008 R2 - 4 GB mem
                  CPU TYPE / NUMBER: 1
                  HOST TYPE: Fujitsu Siemens RX300S4 20 GB mem With brocade 815 HBA
                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP MSA P2000G3 8GB FC / 12 Disks SAS 10K 300 GB / Raid5
                  

                   

                   

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 3.06 | 19160 | 598 | 65%
RealLife-60%Rand-65%Read | 153.71 | 253 | 1 | 54%
Max Throughput-50%Read | 4.36 | 13370 | 417 | 47%
Random-8k-70%Read | 97.74 | 266 | 2 | 75%

                   

                   

                  Old style VMTN communities table:

                  SERVER TYPE: ESX 4.1 / VM:Server 2008 R2 - 4 GB mem
                  
                  CPU TYPE / NUMBER: 1
                  
                  HOST TYPE: Fujitsu Siemens RX300S4 20 GB mem With brocade 815 HBA
                  
                  STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP MSA P2000G3 8GB FC / 12 Disks SAS 10K 300 GB / Raid5
                  
                   
                  |*TEST NAME*|*Avg Resp. Time ms*|*Avg IOs/sec*|*Avg MB/sec*|*% cpu load*|
                  |*Max Throughput-100%Read*|3.06|19160|598|65%|
                  |*RealLife-60%Rand-65%Read*|153.71|253|1|54%|
                  |*Max Throughput-50%Read*|4.36|13370|417|47%|
                  |*Random-8k-70%Read*|97.74|266|2|75%|
                  

                   

                   

                   

                   

Kind of weird that the CPU load on my VM is so high, and that the amount of I/O is so high in the read test and so low in the write test...

                  • 276. Re: New  !! Open unofficial storage performance thread
                    Dyr Lurker

I've made a few tests to investigate the performance of iSCSI-exported storage. I tried the following configurations: a standalone (physical) Windows 2008 Server install with the iSCSI initiator in MPIO mode, and VMware ESXi 4.1 with iSCSI-exported storage in MPIO mode (round-robin policy, iops=3) running the previous standalone Win2008 Server converted to a virtual machine. That let me test the storage in two modes: as a virtual disk on ESXi's iSCSI-connected datastore, and as a "direct" connection to the SAN vSwitch, using the native Windows iSCSI initiator from inside the virtual machine. The results are below; I hope they will be interesting for all:

                    SERVER: Supermicro 6016T-NTRF, X8DTU-F, Xeon E5620, 16Gb RAM, Intel E1G44HT I340-T4 Quad Gigabit NIC (82580) 
                    STORAGE: Supermicro CSE-836E16-R1200B, X8DTH-iF, 2x Xeon E5506,  16Gb RAM,  8x2Tb Hitachi 7k2000 SATA, 80Gb Intel X25-M SSD, Intel E1G44HT I340-T4 Quad Gigabit NIC (82580)
                    STORAGE: TrinityNAS (based on NexentaStor),  RAIDZ2 pool, SSD cache
                    
Jumbo frames are on everywhere.
                    
                    Tested on Iometer 1.1.0 with OpenPerformance32.icf pattern.
                    
                    

                     

                    Standalone Windows 2008 Server, 16Gb RAM, iSCSI via 4x1Gbe MPIO round-robin policy
                    
Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 0.00 | 13010 | 406 | 0%
RealLife-60%Rand-65%Read | 6.35 | 2215 | 17 | 0%
Max Throughput-50%Read | 315.48 | 19267 | 602 | 1%
Random-8k-70%Read | 5.77 | 2344 | 18 | 0%

                     

                     

                     

                     

                    Vmware ESXi 4:  Windows 2008 Server, converted from standalone; 4Gb RAM, iSCSI via 4x1Gbe MPIO least queue depth policy
                       
Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 0.00 | 6085 | 190 | 0%
RealLife-60%Rand-65%Read | 2.22 | 771 | 6 | 0%
Max Throughput-50%Read | 159.68 | 9751 | 304 | 0%
Random-8k-70%Read | 1.53 | 619 | 4 | 0%

                     

                     

                     

                     

                    Vmware ESXi 4:  Windows 2008 Server, converted from standalone; 4Gb RAM, Vmware iSCSI via 4x1Gbe MPIO with roundrobin iops=3
                       
Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 0.00 | 12626 | 394 | 0%
RealLife-60%Rand-65%Read | 3.09 | 1073 | 8 | 0%
Max Throughput-50%Read | 327.01 | 19979 | 624 | 0%
Random-8k-70%Read | 2.15 | 875 | 6 | 0%

                    Any comments?

                    • 277. Re: New  !! Open unofficial storage performance thread
                      SteveEsx Novice

Table of results:

                       

                       

(Columns 6-9: Max Throughput-100% Read. Columns 10-13: RealLife-60%Rand-65% Read.)

Storage | Raid | Phys Host | vscsi type | vmdk | Latency | Avg iops | Avg MBps | cpu load | Latency | Avg iops | Avg MBps | cpu load
Local disks | Perc H700 with 6 x 600gb SAS 15k 3.5" - Raid 10 | Dell PowerEdge R710 | LSI Logic SAS | 40 gb thick | 2.66 | 22429 | 700 | 88 | 16.18 | 2957 | 23 | 80
Local disks | Perc H700 with 6 x 600gb SAS 15k 3.5" - Raid 10 | Dell PowerEdge R710 | LSI Parallel | 40 gb thick | 3.00 | 19621 | 613 | 0 | 16.04 | 3018 | 23 | 27
Local disks | Perc H700 with 6 x 600gb SAS 15k 3.5" - Raid 10 | Dell PowerEdge R710 | Vmware Paravirtual | 40 gb thick | 2.15 | 27769 | 867 | 32 | 15.90 | 3011 | 23 | 7
iSCSI SAN | Md3000i 4 disk dg/vd - Raid 5 | Dell PowerEdge R710 | LSI Logic SAS | 40 gb thick | 15.38 | 3907 | 122 | 16 | 36.71 | 1108 | 8 | 45
iSCSI SAN | Md3000i 4 disk dg/vd - Raid 5 | Dell PowerEdge R710 | LSI Parallel | 40 gb thick | 15.32 | 3904 | 122 | 0 | 35.71 | 1131 | 8 | 25
iSCSI SAN | Md3000i 4 disk dg/vd - Raid 5 | Dell PowerEdge R710 | Vmware Paravirtual | 40 gb thick | 15.14 | 3967 | 123 | 1 | 35.07 | 1119 | 8 | 18
iSCSI SAN | Md3000i 2 disk dg/vd - Raid 1 | Dell PowerEdge R710 | LSI Logic SAS | 40 gb thick | 15.17 | 3958 | 123 | 17 | 52.25 | 902 | 7 | 34
iSCSI SAN | Md3000i 14 disk dg/vd - Raid 10 | Dell PowerEdge R710 | LSI Logic SAS | 40 gb thick | 17.14 | 3520 | 110 | 16 | 15.45 | 3696 | 28 | 18
iSCSI SAN | Md3000i 14 disk dg/vd - Raid 5 | Dell PowerEdge R710 | LSI Logic SAS | 40 gb thick | 17.06 | 3535 | 110 | 16 | 19.49 | 2542 | 19 | 29
Local SSD | no raid - ESB2 Intel - Crucial RealSSD C300 2.5" 128gb | Dell Precision T5400 | n/a | n/a | 7.15 | 8243 | 257 | 11 | 6.68 | 8629 | 67 | 9
Local SSD | no raid - ICH9 Intel - Intel 80gb G2 M | Dell Latitude E6400 | n/a | n/a | 9.31 | 6402 | 200 | 35 | 16.26 | 3305 | 25 | 56
Local disks | Perc 5/i - 4 disks 300gb sas 15k raid 5 | Dell PowerEdge 2950 | n/a | n/a | 3.64 | 17175 | 536 | 5 | 37.42 | 1197 | 9 | 3

                       

                       

                      Hosts used in test:

                       

Host: vSphere server
Model: Dell PowerEdge R710
Cpu: 2 x Intel Xeon X5680 3.33 GHz 6 core, 12M cache, 6.40 GT/s QPI, 130W TDP, Turbo, HT
Memory: 96 GB for 2 cpus (12 x 8 GB Dual Rank RDIMMs) 1333 MHz
I/O controller: Perc H700 Integrated, 1 GB NV Cache, x6 Backplane
Local disk(s): 1 x SDcard; 6 x 600 GB SAS 6 Gbps 15K 3.5", raid 10
OS: VMware ESXi 4.1.0 build 348381 on SDcard
NIC: Embedded Broadcom GbE LOM with TOE and iSCSI offload (4 port) & Intel Gigabit ET Quad port server adapter PCIe x4 & Intel X520 DA 10GbE Dual Port PCIe x8

Host: Workstation
Model: Dell Precision T5400
Cpu: 1 x Xeon E5440 2.83 GHz quad core
Memory: 16 GB fully buffered DIMM
I/O controller: Intel 5400 chipset (Intel ESB2 SATA raid controller)
Local disk(s): 1 x Crucial RealSSD C300 2.5" 128GB SATA 6 Gb/s
OS: Windows 7 Enterprise x64
NIC: Broadcom 57xx & Intel Pro 1000 PT dual SA

Host: Laptop
Model: Dell Latitude E6400
Cpu: 1 x 2.53 GHz Intel Core 2 Duo
Memory: 4 GB
I/O controller: Intel ICH9
Local disk(s): 1 x Intel 80gb SSD gen2 M
OS: Windows 7 Enterprise x64
NIC: Intel 82567

Host: Physical server
Model: Dell PowerEdge 2950
Cpu: 2 x Intel Xeon 5150 - 2.66 GHz dual core, 4MB L2 cache
Memory: 16 GB 533 MHz
I/O controller: Perc 5/i
Local disk(s): 4 x 300 GB 15k SAS, raid 5
OS: Windows 2008 R2 Enterprise
NIC: Broadcom BCM5708C NetExtreme II & Intel Pro/1000 PT Dual Port SA

                       

                      iSCSI SAN used in test:

                       

                      Dell PowerVault MD3000i – 15 x 600gb sas 15k (one global hotspare)

                      2 x PC5424 Dell PowerConnect switches (2 isolated iscsi subnets as recommended for MD3000i)

                       

                      LAN switches:

                       

                      Cisco 2960 series and Nexus 5010

                       

                      Virtual Machines used:

                       

VM | OS | vcpu | scsi | vmdk | Memory | NIC | VM HW vers
Iometer01 | Windows 2008 R2 SP1 (x64) | 1 (default) | LSI Logic SAS (default) | 40 gb thick | 4 gb | Vmxnet 3 | 7
Iometer02 | Windows 2003 R2 SP2 x64 | 2 | LSI Logic Parallel | 40 gb thick | 8 gb | Vmxnet 3 | 7
Iometer03 | Windows 2008 R2 SP1 (x64) | 2 | Paravirtual | 40 gb thick | 8 gb | E1000 (default) | 7

                       

                       

                      Comparison between virtual scsi types and guest OS iometer performance

                       

Comparison of VMs Iometer01, 02 and 03 running on local server disks, to see whether different guest OSes and virtual SCSI adapter types make a noticeable difference:

                       

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc H700 with 6 x 600GB SAS 6gbps 15k 3.5" - Raid10

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 2.66 | 22429 | 700 | 88%
RealLife-60%Rand-65%Read | 16.18 | 2957 | 23 | 80%
Max Throughput-50%Read | 1.38 | 42340 | 1323 | 63%
Random-8k-70%Read | 17.52 | 2745 | 21 | 38%

                       

SERVER TYPE: VM iometer02 - W2K3 R2 SP2 x64 - LSI Parallel
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc H700 with 6 x 600GB SAS 6gbps 15k 3.5" - Raid10

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 3.00 | 19621 | 613 | 0%
RealLife-60%Rand-65%Read | 16.04 | 3018 | 23 | 27%
Max Throughput-50%Read | 1.34 | 39659 | 1239 | 0%
Random-8k-70%Read | 17.56 | 2751 | 21 | 26%

                       

SERVER TYPE: VM iometer03 - W2K8 R2 SP1 x64 - VMware Paravirtual
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc H700 with 6 x 600GB SAS 6gbps 15k 3.5" - Raid10

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 2.15 | 27769 | 867 | 32%
RealLife-60%Rand-65%Read | 15.90 | 3011 | 23 | 7%
Max Throughput-50%Read | 1.22 | 48797 | 1524 | 48%
Random-8k-70%Read | 17.50 | 2738 | 21 | 7%

                      Logs: iometer01-local-01, iometer02-local-01, iometer03-local-01


                      Comparison of same VMs on MD3000i SAN:

                       

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 15.38 | 3907 | 122 | 16%
RealLife-60%Rand-65%Read | 36.71 | 1108 | 8 | 45%
Max Throughput-50%Read | 12.40 | 4816 | 150 | 17%
Random-8k-70%Read | 40.56 | 1103 | 8 | 41%

                       

SERVER TYPE: VM iometer02 - W2K3 R2 SP2 x64 - LSI Parallel
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 15.32 | 3904 | 122 | 0%
RealLife-60%Rand-65%Read | 35.71 | 1131 | 8 | 25%
Max Throughput-50%Read | 16.82 | 3644 | 113 | 0%
Random-8k-70%Read | 40.50 | 1107 | 8 | 27%

                       

SERVER TYPE: VM iometer03 - W2K8 R2 SP1 x64 - VMware Paravirtual
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 15.14 | 3967 | 123 | 1%
RealLife-60%Rand-65%Read | 35.07 | 1119 | 8 | 18%
Max Throughput-50%Read | 12.44 | 4791 | 149 | 1%
Random-8k-70%Read | 41.34 | 1105 | 8 | 12%

                      Logs: iometer01-san-01, iometer02-san-01, iometer03-san-01

                       

                       

                      Comment:

Different Windows Server versions and virtual SCSI adapter types do not change the performance dramatically (adding disks or changing RAID levels has a much larger impact).

However, it looks like LSI Logic SAS uses a lot more CPU than the other virtual SCSI adapter types. I only ran each test once, so more tests may be needed to confirm that.

Local server disks are much faster than the cheap MD3000i iSCSI SAN for single-VM performance.

Note: the SAN disk group and virtual disk used only 4 disks on the SAN box, so the results cannot be directly compared to the local server disks.

                       

                       

                       

                      Dell PowerVault Md3000i iSCSI SAN iometer performance with different configurations

                       

                      Comparison to see the effect with various raid and diskgroups:

                       

First I tested with a small diskgroup & virtual disk/LUN of 4 drives using raid 5. Then I also tested a 2-drive raid 1 LUN on the iSCSI SAN.

After that I tested with a 14-drive disk group (the virtual disk does not fill all the space then, because of the VMware 2 TB limit). I tested the 14-drive disk group with both raid 5 and raid 10.

                       

1MB block on 4 spindles, raid 5, iSCSI SAN:

                       

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 4 disks using Raid 5 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 15.18 | 3954 | 123 | 17%
RealLife-60%Rand-65%Read | 34.52 | 1141 | 8 | 47%
Max Throughput-50%Read | 12.45 | 4798 | 149 | 17%
Random-8k-70%Read | 41.05 | 1114 | 8 | 38%

                       

                      Raid 1

                       

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 2 disks using Raid 1 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 15.17 | 3958 | 123 | 17%
RealLife-60%Rand-65%Read | 52.25 | 902 | 7 | 34%
Max Throughput-50%Read | 12.40 | 4803 | 150 | 17%
Random-8k-70%Read | 59.63 | 919 | 7 | 23%

                       

                      Raid 10 – 14 disks

                       

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 14 disks using Raid 10 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 17.14 | 3520 | 110 | 16%
RealLife-60%Rand-65%Read | 15.45 | 3696 | 28 | 18%
Max Throughput-50%Read | 14.29 | 4144 | 129 | 17%
Random-8k-70%Read | 13.66 | 3936 | 30 | 24%

                       

                      Raid 5 – 14 disks

                       

SERVER TYPE: VM iometer01 - W2K8 R2 SP1 x64 - LSI Logic SAS
CPU TYPE / NUMBER: 2 x Intel X5680 3.33GHz
HOST TYPE: Dell PowerEdge R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PowerVault MD3000i iscsi SAN, diskgroup & virtual disk 0 with 14 disks using Raid 5 (database i/o type 128k segment)

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 17.06 | 3535 | 110 | 16%
RealLife-60%Rand-65%Read | 19.49 | 2542 | 19 | 29%
Max Throughput-50%Read | 14.38 | 4112 | 128 | 17%
Random-8k-70%Read | 16.16 | 2754 | 21 | 38%

                       

                      Comment:
As expected, raid 10 gives better performance at the cost of less space. Random I/O sees a big improvement with more disks added to the raid (sketched below).
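
The direction of that result matches the classic write-penalty arithmetic (an editor's rule-of-thumb sketch; the ~175 IOPS per 15k SAS spindle figure and the penalties of 2 for raid 10 and 4 for raid 5 are textbook assumptions, not MD3000i measurements):

# Effective random IOPS for n spindles under a 65%-read load, by RAID write penalty.
def effective_iops(disks, iops_per_disk, read_fraction, write_penalty):
    raw = disks * iops_per_disk
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

for name, penalty in (("raid10", 2), ("raid5", 4)):
    print(name, round(effective_iops(14, 175, 0.65, penalty)))
# raid10 ~1815, raid5 ~1195: roughly the same raid10-over-raid5 ratio as the
# measured 3696 vs 2542 IOPS, with controller cache lifting both measured numbers.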

Note: I also tested with different block sizes, and with and without Storage I/O Control, but these settings did not seem to have much impact on performance. Storage I/O Control should only kick in when multiple VMs generate I/O load, so that is as expected, I think. I will use a default block size of 8MB on all my datastores.

                      Note:

All tests were done without jumbo frames on the iSCSI traffic.

                       

                       

                      Physical test results for comparison

                       

SERVER TYPE: Physical Dell Prec T5400 - Win 7 Enterprise x64
CPU TYPE / NUMBER: 1 x Intel Xeon E5440
HOST TYPE: Physical Dell Prec T5400
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Crucial RealSSD C300 2.5" 128gb sata

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 7.15 | 8243 | 257 | 11%
RealLife-60%Rand-65%Read | 6.68 | 8629 | 67 | 9%
Max Throughput-50%Read | 11.13 | 5067 | 158 | 11%
Random-8k-70%Read | 5.45 | 10427 | 81 | 30%

                       

SERVER TYPE: Physical Dell Latitude E6400 - Windows 7 Enterprise x64
CPU TYPE / NUMBER: 1 x 2.53 GHz Intel Core 2 Duo
HOST TYPE: Physical Dell Latitude E6400 - Windows 7 Enterprise x64
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Intel 80gb SSD gen2 M

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 9.31 | 6402 | 200 | 35%
RealLife-60%Rand-65%Read | 16.26 | 3305 | 25 | 56%
Max Throughput-50%Read | 62.80 | 903 | 28 | 21%
Random-8k-70%Read | 10.58 | 4996 | 39 | 37%

                       

SERVER TYPE: Physical Dell PowerEdge 2950 - W2K8 R2 x64
CPU TYPE / NUMBER: 2 x Intel Xeon 5150 2.66 GHz dual core
HOST TYPE: Physical Dell PowerEdge 2950 - W2K8 R2 x64
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Perc 5/i - 4 x 300 gb sas 15k - raid 5

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 3.64 | 17175 | 536 | 5%
RealLife-60%Rand-65%Read | 37.42 | 1197 | 9 | 3%
Max Throughput-50%Read | 4.91 | 12721 | 397 | 3%
Random-8k-70%Read | 40.15 | 1161 | 9 | 1%

                       

                      Comment:

As expected, a server with a RAID of many disks is faster than a single SSD on sequential throughput, but slower on random I/O.

                       

Note: these are physical tests, and since virtualization adds some overhead they are usually faster than the virtualized servers (iometer01-03) in similar configurations. Also note that I tested some mainstream SSD disks here, which are not usually used in servers (server SSDs cost a lot more); still, it is an interesting comparison when, for example, a developer has to choose between running a virtual machine in VMware Workstation on an SSD laptop/workstation or using a shared VMware Lab Manager server with SAN storage. The PE2950 server tested is a generation 9 Dell server, much older than the generation 11 R710 servers; but that is one of the advantages of virtualization: you can buy new servers each year and move virtual servers to the new hosts to upgrade the speed (over time, virtualization might actually be faster than the old model of buying a dedicated server for a solution and running it for 4 years).

                      • 278. Re: New  !! Open unofficial storage performance thread
                        fbonez Expert
                        vExpert

I am out of the office with limited access to email.

I will be back on 21/03/2011.

For urgent matters, please contact technical support on 045 8738738.

                         

                        Francesco Bonetti

                        RTC SpA

                        --
                        If you find this information useful, please award points for "correct" or "helpful". | @fbonez | www.thevirtualway.it
                        • 279. Re: New  !! Open unofficial storage performance thread
                          Gabriel Chapman Enthusiast

                          SteveEsx:

                           

I'm calling serious BS on your DAS results. It's physically impossible to get 22k IOPS out of a 6-disk RAID 10 config. Also, some of your results show 0% processor utilization; exactly how does that work?

                          • 280. Re: New  !! Open unofficial storage performance thread
                            SteveEsx Novice

                            Hi Gabriel,

                             

I don't use DAS anywhere; I have tested local server disks and SAN boxes. I guess by DAS you are referring to the internal RAID with 6 disks on the R710 server? Is 22k IOPS not normal for an internal RAID 10 with 6 spindles? I don't have enough experience with this test to know, and I would love to hear if I have done something wrong in the test.

                             

I have followed what I think is the normal procedure here: created a virtual machine with different versions of Windows Server, installed Iometer 2006, then ran the test. The numbers are from the logfiles parsed by the web page http://vmktree.org/iometer/

                             

I am not posting here to get the "best numbers"; I want to find out what normal performance looks like with a test that is repeated by other people, so I can use it as a reference for finding I/O issues at other sites.

                             

                            This is the procedure I have done:

                             

                            1. created 3 virtual machines "iometer01", "iometer02", and "iometer03" (using 3 different VM scsi types)
                            2. installed windows server and patched them with windows update
                            3. installed iometer-2006.07.27.win32.i386-setup.exe (I'm not sure if this is the version everyone is using or not?)
                            4. Opened the configuration file that contains the tests, added them to "assigned access specifications"
                            5. Clicked the green flag to start test and created log file
                            6. Parsed the logfile with  http://vmktree.org/iometer/

                             

                            If there are some other tests you would like me to do to verify the numbers I'm happy to do so.

                             

Why some tests say 0% CPU load I have no idea; I'm not an Iometer expert. I have attached the logfile from one test, "iometer02-san01.csv", which seems to have the lowest CPU load, so maybe you can see if something is not right there?

                             

Thanks for your input; it would be interesting to know if the test numbers are sane.

                            • 281. Re: New  !! Open unofficial storage performance thread
                              EllettIT Hot Shot

                              My last day with Ellett Brothers will be March 25th, please call 803-345-3751 to be directed to the appropriate person.

                               

                              Thanks!

                              • 282. Re: New  !! Open unofficial storage performance thread
                                larstr Virtuoso
                                vExpert

                                Steve,

When using Iometer inside a VM, there are several things that may affect your results. One is disk and disk-controller cache, which can inflate the numbers. Using a larger Iometer test file will solve this problem.

                                 

Another thing that may affect the results is the clock inside the guest not being correct. This happens especially under high CPU load. In 3.5 the solution to this issue was the descheduled time service, but that service is no longer available on newer ESX versions.

                                 

                                You're also running your tests on the local C: drive which typically is the drive where the OS is installed. It would be better to use a dedicated disk drive.

                                 

Regarding the CPU load: by studying the csv file we can see that the load is indeed higher than 0%, but since you're using multiple workers (2 CPUs), only the load of the second CPU is reported by that page.
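
If anyone wants to double-check that from the csv, here is a minimal sketch that averages across all CPU rows instead of reading just one (the "PROCESSOR" row tag and the "CPU Utilization" header text are assumptions about the result-file layout, so verify them against your own file):

import csv

def avg_cpu(path):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    # Locate the header row and column; the "CPU Utilization" text is assumed.
    header = next(r for r in rows if any("CPU Utilization" in c for c in r))
    col = next(i for i, c in enumerate(header) if "CPU Utilization" in c)
    # Average over every per-CPU row; the "PROCESSOR" row tag is assumed.
    cpu_rows = [r for r in rows if r and r[0] == "PROCESSOR"]
    return sum(float(r[col]) for r in cpu_rows) / len(cpu_rows)

print(avg_cpu("iometer02-san01.csv"))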

                                 

                                Lars


                                  • 284. Re: New  !! Open unofficial storage performance thread
                                    SteveEsx Novice

                                    Hi Lars,

                                     

Yes, I agree with everything you are saying, but I think this is the most efficient way to keep a simple Iometer reference test (a "standard" test that is easy to teach others to do). I would follow your suggestions if my goal were a performance competition between SAN products, but my goal is simply to spot systems with major random or sequential I/O problems at other sites. My company frequently installs heavy database solutions on old SANs that often perform very badly, which is a waste of time and money; it is nice to have some numbers that show what's wrong in an efficient way.

                                     

I guess one option would be to have a virtual machine and add extra VMDK disks on other LUNs to test them; that would also be efficient. But I wonder what most people do when they post results here? I did my first tests on a server with no other VMs, on a LUN with no other VMs; my goal was to get the best result in a non-busy environment, so I can see the effects later when the environment is busy.

                                     

I think the test file is large enough in this test; the VM has 8 GB of RAM, and the PERC and MD3000i controllers do not have more than 1 GB of cache. However, I'm rather new to Iometer; before, I used simpler tools like HDTune and HDTach. Those tools are not as good at showing random I/O performance as I think Iometer is, and it is very nice to have some reference numbers and comments from this community. It might also influence my own storage purchases in the future, I guess.

                                     

Do you think 22k IOPS is impossible with 6 drives, as Gabriel says? For comparison I ran the test on a physical PE2900 host with 8 drives, and it got these numbers:

                                     

                                    W2k8 r2 enterprise

                                    Raid 10

                                    8 x SAS Seagate 300 gb sas 15k

                                    Perc 6/i

                                    SEP11 AV installed

SERVER TYPE: Dell PowerEdge 2900 III
CPU TYPE / NUMBER:
HOST TYPE: Dell PowerEdge 2900 III
STORAGE TYPE / DISK NUMBER / RAID LEVEL: R10 8xSAS 15K Seagate 3.5" Perc 6/i

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 2.98 | 18170 | 567 | 2%
RealLife-60%Rand-65%Read | 15.79 | 3079 | 24 | 0%
Max Throughput-50%Read | 3.06 | 19046 | 595 | 3%
Random-8k-70%Read | 17.55 | 2813 | 21 | 0%

                                     

However, on this host the test file might have been too small, I guess, as the server has 32 GB of RAM and W2K8 R2 does a lot of strange I/O caching.

Just for fun I also tested a PE2900 server with mainstream Intel 80GB G2 SSD drives, and as expected it had awesome performance on the random I/O:

                                     

                                    W2k8 r2 ent

                                    Raid 5

                                    8 x intel ssd 80gb gen2

                                    Perc 6/i

                                    no AV

SERVER TYPE: Dell PowerEdge 2900 III
CPU TYPE / NUMBER:
HOST TYPE: Dell PowerEdge 2900 III
STORAGE TYPE / DISK NUMBER / RAID LEVEL: R5 8xIntel 80gb SSD gen2 Perc 6/i

Test name | Latency | Avg iops | Avg MBps | cpu load
Max Throughput-100%Read | 3.04 | 19950 | 623 | 2%
RealLife-60%Rand-65%Read | 4.59 | 12480 | 97 | 1%
Max Throughput-50%Read | 3.12 | 18678 | 583 | 3%
Random-8k-70%Read | 5.21 | 10396 | 81 | 0%

                                     

I know it's impossible to make the perfect test, but at least I hope to avoid some really bad mistakes that would make the numbers meaningless. And I'm hoping these numbers can be compared to others in this forum.

                                     

I didn't know about the results only showing one CPU; that's interesting.

                                     

                                    Thanks for your input,

                                    S
