VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

The old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver!) to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Regards,

Christian

574 Replies
mikeyb79
Enthusiast

We saw similar numbers on our PS4000E until we ran the configuration script included with the multipathing extension module on each of the hosts. High latency and low IOPS across the board. Afterwards nearly half the latency and double the IOPS. Have the results at the office.
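If you want to confirm the MEM is actually steering the paths before and after its setup script runs, a quick PowerCLI check along these lines should do it (a sketch only - the host name and NAA filter are placeholders, and DELL_PSP_EQL_ROUTED is, as far as I recall, the PSP the EqualLogic MEM registers):

# List each device with the path selection policy currently claiming it;
# with the MEM active you'd expect DELL_PSP_EQL_ROUTED rather than VMW_PSP_FIXED/RR
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.local")
$esxcli.storage.nmp.device.list() |
    Where-Object { $_.Device -like "naa.*" } |
    Select-Object Device, PathSelectionPolicy, StorageArrayType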

xevrebyc
Contributor

@mikeyb79

Thanks for the tip. I will try this very soon. I plan on installing an additional 4 pNICs into each host this week. I guess this was perfect timing.
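In case it helps anyone doing the same thing, this is roughly what the extra NICs need on the vSphere side: one VMkernel port per new pNIC on the iSCSI vSwitch, then port binding to the software iSCSI adapter. A rough PowerCLI sketch (host name, vSwitch/portgroup names, addresses and vmhba number are all placeholders for illustration):

$vmhost  = Get-VMHost "esx01.example.local"            # placeholder host
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"

# One VMkernel port per new iSCSI pNIC (repeat per NIC)
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "iSCSI-3" -IP 10.10.10.13 -SubnetMask 255.255.255.0
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "iSCSI-4" -IP 10.10.10.14 -SubnetMask 255.255.255.0

# Each iSCSI port group still needs its teaming overridden to exactly one active vmnic,
# and the new vmk ports bound to the software iSCSI adapter on the host, e.g.:
#   esxcli iscsi networkportal add -A vmhba38 -n vmk3
#   esxcli iscsi networkportal add -A vmhba38 -n vmk4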

I will post the results when finished.

mikeyb79
Enthusiast

Here are the numbers we saw from our PS4000E using the standard testing:

Dell EqualLogic PS4000E (No MEM Installed)

Access Specification Name    IOps         MBps (Binary)  Average Response Time
Max Throughput-100%Read      3493.163964  109.161374     17.178869
RealLife-60%Rand-65%Read     932.360549   7.284067       50.225846
Max Throughput-50%Read       4878.569157  152.455286     12.162984
Random-8k-70%Read            839.927753   6.561936       56.769025

Dell EqualLogic PS4000E (MEM Installed)

Access Specification Name    IOps         MBps (Binary)  Average Response Time
Max Throughput-100%Read      6729.109541  210.284673     8.874006
RealLife-60%Rand-65%Read     970.692039   7.583532       47.552126
Max Throughput-50%Read       6419.513468  200.609796     9.181681
Random-8k-70%Read            837.034044   6.539328       56.821859

xevrebyc
Contributor

Do you have post-MEM results?

mikeyb79
Enthusiast

Wow, that formatting turned out horrible. Both non-MEM and MEM results are there; the read tests show large improvements. Random is fairly even between the two, with slightly better performance on the RealLife bench on the MEM-enabled system.

fredlr
Contributor

Hi all,

I'm getting strange results with http://vmktree.org/iometer/ and IOMeter 1.1.0

While analyzing the .csv, it seems the CSV output isn't being interpreted as expected: the CGI takes the 9th column, "Write MBps (Decimal)", instead of the 12th, "Average Response Time", for the latency.
An easy way to see this is that I'm getting 0 ms for the 100% read test.
I also suspect the CPU load isn't picked up from the right column. Could someone check on their side?

Attached is one of my CSVs showing the faulty interpretation.

attached CSV

Solved! IOMeter 1.1.0 (devel) doesn't produce the expected output format. Use the regular releases.
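For anyone else hitting this, a workaround on the parsing side is to pick columns by header name instead of by position, since the position is exactly what moves between IOMeter builds. A rough PowerShell sketch (it assumes the usual results.csv layout, with a header row containing 'Access Specification Name' and the aggregated rows tagged 'ALL' in the 'Target Type' column; column names such as 'MBps' vs 'MBps (Binary)' also differ between builds, hence the wildcards):

# IOMeter prefixes its label/header rows with an apostrophe, so strip that first,
# then find the real header row by name instead of assuming a fixed position.
$raw = (Get-Content "results.csv") -replace "^'", ""
$headerLine = ($raw | Select-String -SimpleMatch "Access Specification Name" | Select-Object -First 1).LineNumber
$table = $raw[($headerLine - 1)..($raw.Count - 1)] | ConvertFrom-Csv

# Keep only the aggregated ALL rows and pull the columns by name
$table | Where-Object { $_.'Target Type' -eq 'ALL' } |
    Select-Object 'Access Specification Name', 'IOps', 'MBps*', 'Average Response Time*'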

zgz87
Contributor

a

xevrebyc
Contributor

Added the Dell MEM and upgraded the host to ESXi 5.1.

SERVER TYPE: Windows 2008 R2
CPU TYPE / NUMBER: Xeon E5620 @ 2.40GHz, 2
HOST TYPE: Dell R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS6000E + PS6100E, 36 disks, RAID 50

Test name                  Latency  Avg iops  Avg MBps  cpu load
Max Throughput-100%Read    8.50     6896      215       3%
RealLife-60%Rand-65%Read   22.73    1949      15        28%
Max Throughput-50%Read     7.27     7952      248       7%
Random-8k-70%Read          24.91    1805      14        27%

Previous results

SERVER TYPE: Windows 2008 R2
CPU TYPE / NUMBER: Xeon X5677 @ 3.47GHz, 2
HOST TYPE: Dell R710
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS6000E + PS6100E, 36 disks, RAID 50

Test name                  Latency  Avg iops  Avg MBps  cpu load
Max Throughput-100%Read    31.32    1852      57        4%
RealLife-60%Rand-65%Read   21.40    2050      16        17%
Max Throughput-50%Read     21.02    2793      87        2%
Random-8k-70%Read          23.91    1838      14        18%

**** The new test was run on a different host.

zgz87
Contributor

Dear sender,

Thanks for your e-mail. I am currently on holidays with no e-mail access. I will be back on the 8th of January.

In urgent cases, send me an SMS at 00491743194824.

Merry Christmas and a happy New Year!

Manuel

oparcollet
Enthusiast

SERVER TYPE: VM Windows 2008 R2
CPU TYPE / NUMBER: Xeon E5645 @ 2.40GHz, 2
HOST TYPE: Dell R610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: 1 XtremIO node / 16 SSD disks / RAID 1
Test name                  Av Resp Time  Av IO/sec  Av MB/sec  % CPU
Max Throughput-100%Read    0.47          12456.25   389.45     2.75
RealLife-60%Rand-65%Read   0.72          62790.45   486.74     9.37
Max Throughput-50%Read     0.96          19924.08   622.28     4.23
Random-8k-70%Read          0.79          62002.10   484.36     9.39

HBA Qlogic 2462, Qdepth : 256

XtremIO is an all-flash platform with inline dedup.

fredlr
Contributor

These results might be of some interest. They exercise the current 60-slot Engenio-based (now NetApp) controllers: Dell MD3660, NetApp E2660, SGI IS5000, IBM DCS3700 Base.

Here is a Dell MD3660F: dual controller, dual 8Gb FC attachment, 2GB of cache per controller.

1 Worker

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 2vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 4+1 (5 discs)/ALUA Round Robin / Cache block size 4k / Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.62               22408        700         69%
RealLife-60%Rand-65%Read   23.81              2152         16          6%
Max Throughput-50%Read     4.39               13604        425         50%
Random-8k-70%Read          25.28              1966         15          6%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 2vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 4+1 (5 discs)/ALUA MRU/ Cache block size 4k/ Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.63               22393        699         68%
RealLife-60%Rand-65%Read   24.38              2086         16          7%
Max Throughput-50%Read     4.25               14092        440         51%
Random-8k-70%Read          25.42              1931         15          7%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 4+1 (5 discs) / ALUA Round Robin / 1 Worker / Cache block size 8k / Dynamic cache prefetch disabled

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.60               23163        723         1%
RealLife-60%Rand-65%Read   25.67              2034         15          1%
Max Throughput-50%Read     2.89               20747        648         2%
Random-8k-70%Read          27.04              1881         14          15%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 2vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 8+1 (9 discs)/ALUA Round Robin/ Cache block size 4k/ Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.62               22407        700         69%
RealLife-60%Rand-65%Read   13.72              3544         27          8%
Max Throughput-50%Read     4.23               14176        443         51%
Random-8k-70%Read          14.20              3185         24          10%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 2vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 8+1 (9 discs)/ALUA MRU/ Cache block size 4k/ Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.62               22401        700         70%
RealLife-60%Rand-65%Read   14.13              3430         26          6%
Max Throughput-50%Read     4.29               13959        436         52%
Random-8k-70%Read          14.09              3215         25          9%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 2vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID6 DDP (11 discs)/ALUA Round Robin/ Cache block size 4k/ Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.62               22426        700         69%
RealLife-60%Rand-65%Read   14.17              3743         29          7%
Max Throughput-50%Read     4.05               14779        461         53%
Random-8k-70%Read          14.39              3466         27          8%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 2vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / NLSAS 2TB 7200rpm / RAID6 8+2 (10 discs)/ALUA Round Robin/ Cache block size 4k/ Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.64               22249        695         11%
RealLife-60%Rand-65%Read   29.23              1505         11          33%
Max Throughput-50%Read     4.28               14010        437         7%
Random-8k-70%Read          28.48              1420         11          37%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300GB 15krpm / RAID10 11+11 (22 discs) / ALUA Round Robin / Cache block size 4k / Dynamic cache prefetch / 2x8Gb FC attachments

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.70               21991        687         12%
RealLife-60%Rand-65%Read   8.17               6688         52          8%
Max Throughput-50%Read     2.99               20107        628         12%
Random-8k-70%Read          8.02               6583         51          7%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / NLSAS 2TB 7k2rpm / RAID10 11+11 (22 discs) / ALUA Round Robin / Cache block size 4k / Dynamic cache prefetch / 2x8Gb FC attachments

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.71               21991        687         13%
RealLife-60%Rand-65%Read   15.45              3450         26          7%
Max Throughput-50%Read     2.91               20784        649         12%
Random-8k-70%Read          14.76              3511         27          7%

2 Workers

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 8+1 (9 discs) + RAID5 4+1 (5 discs) /ALUA Round Robin / 2 Workers/ Cache block size 4k/ Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.94               40411        1262        25%
RealLife-60%Rand-65%Read   18.38              5405         42          10%
Max Throughput-50%Read     5.60               21272        664         27%
Random-8k-70%Read          19.57              4915         38          9%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 8+1 (9 discs) + RAID5 4+1 (5 discs) / ALUA Round Robin / 2 Workers / Cache block size 8k / Dynamic cache prefetch

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.84               41844        1307        26%
RealLife-60%Rand-65%Read   19.59              5088         39          10%
Max Throughput-50%Read     4.23               28051        876         27%
Random-8k-70%Read          19.89              4794         37          9%

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 8+1 (9 discs) + RAID5 4+1 (5 discs) / ALUA Round Robin / 2 Workers / Cache block size 8k / Dynamic cache prefetch disabled

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.85               41649        1301        26%
RealLife-60%Rand-65%Read   19.30              5153         40          10%
Max Throughput-50%Read     3.96               29539        923         27%
Random-8k-70%Read          19.91              4821         37          9%

Cache block size 8k with dynamic cache prefetch disabled clearly shows better use of the controller's limited cache for this bench.

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 14+1 (15 discs) + RAID5 14+1 (15 discs) / ALUA Round Robin / 2 Workers / Cache block size 8k / Dynamic cache prefetch disabled

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    0.89               43114        1347        30%
RealLife-60%Rand-65%Read   11.09              9015         70          15%
Max Throughput-50%Read     3.73               30744        960         28%
Random-8k-70%Read          10.70              8656         67          12%

3 Workers

SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610 x 3
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / NLSAS 2TB 7k2rpm / RAID6 8+2 (10 discs): 3 volumes / ALUA Round Robin / 3 Workers / Cache block size 8k / Dynamic cache prefetch disabled / Only 2 FC8 attachments

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    2.76               46675        1458        16%
RealLife-60%Rand-65%Read   35.44              3909         30          9%
Max Throughput-50%Read     4.28               33055        1046        13%
Random-8k-70%Read          36.11              3807         29.6        10%


6 Workers


SERVER TYPE: VM Windows 2008 R2 64bits / ESX 4.1U3 / 4vCPU
CPU TYPE / NUMBER: X5680 / 2
HOST TYPE: M610 x 3
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell DS3660F / SAS 300 15krpm / RAID5 4+1 (5 discs): 6 volumes / ALUA Round Robin / 6 Workers / Cache block size 8k / Dynamic cache prefetch disabled / Only 2 FC8 attachments

TEST NAME                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    7.08               50364        1573        32%
RealLife-60%Rand-65%Read   25.87              12120        94.8        15%
Max Throughput-50%Read     9.45               38939        1216        15%
Random-8k-70%Read          26.57              11742        91          15%

FC bandwidth limitation on the 100% Read test: two 8Gb FC links give roughly 1,600 MB/s combined, and the ~1,573 MB/s measured here is essentially at that ceiling.

pinkerton
Enthusiast

I'm out of office until January 7th 2013, please contact support@mdm.de instead.

tim_k
Contributor

Hello all,

Here are some results from our NetApp C-Mode system, currently being put through its paces. So far I am pretty happy with the performance, but there is still some tweaking to do before we go into production with it.

First up, results with 2x SAS 600GB 15k shelves (38 data disks):

SERVER TYPE: Win2008 R2, 1GB mem

CPU TYPE / NUMBER: 1 vCPU E5540, 2.5GHz

HOST TYPE: HP DL380G6, 2x E5540, 2.5GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp 3240, ONTAP 8.1.2 C-Mode / 38x 600GB SAS 15K, RAID-DP

10G NFS through Nexus 5010

Test name                  Latency  Avg iops  Avg MBps  cpu load
Max Throughput-100%Read    5.30     11347     354       0%
RealLife-60%Rand-65%Read   3.70     11658     91        70%
Max Throughput-50%Read     5.10     11476     358       0%
Random-8k-70%Read          3.33     12590     98        73%

Next, results using SATA drives + SSD in a Flash Pool (SSD cache in front of the SATA drives). Only 12 SATA drives are in this aggregate, but we will likely increase that to the full 24-drive shelf, which should help the RealLife test result.

SERVER TYPE: Win2008 R2, 1GB mem

CPU TYPE / NUMBER: 1 vCPU E5540, 2.5GHz

HOST TYPE: HP DL380G6, 2x E5540, 2.5GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp 3240, ONTAP 8.1.2 C-Mode / 12x 3TB SATA 7.2K + 6x 100GB SSD (Flash Pool), RAID-DP

10G NFS through Nexus 5010

Test name                  Latency  Avg iops  Avg MBps  cpu load
Max Throughput-100%Read    4.66     12890     402       42%
RealLife-60%Rand-65%Read   7.53     5279      41        64%
Max Throughput-50%Read     4.85     12066     377       42%
Random-8k-70%Read          2.83     17612     137       66%

Thanks,

Tim

sscheller
Contributor

Dear Sir or Madam,

I am currently on vacation.

For urgent questions, please contact

Mr. Becker, tel.: 06027409030.

Kind regards,

Sven Scheller

lakekeman
Contributor

Server Type: Win 2008 R2 on ESX 5.1
Host Type: Dell R320
Storage Type: local PERC H310, RAID 10, 4x SAS 600GB 15k

Test name                  Avg Resp. Time ms  Avg IOs/sec  Avg MB/sec  % cpu load
Max Throughput-100%Read    13                 4482         140         26
RealLife-60%Rand-65%Read   64                 891          7           18
Max Throughput-50%Read     67                 861          27          18
Random-8k-70%Read          63                 907          7           18

These values are very low for a local storage test, right? Especially the 50% test? What could be the problem? Thanks!

abirhasan
Enthusiast

This is nice..

jb42
Contributor

SERVER TYPE: Windows Server 2008 R2 VM
CPU TYPE / NUMBER: Intel Xeon X5672 @ 3.20GHz / 2
HOST TYPE: Dell PowerEdge R510, Essentials Plus ESXi 5.1 w/Dell MEM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6100E / 24 (2x spare) / RAID 10

Test name                  Latency  Avg iops  Avg MBps  cpu load
Max Throughput-100%Read    8.70     6655      207       6%
RealLife-60%Rand-65%Read   14.74    3191      24        50%
Max Throughput-50%Read     9.60     6027      188       53%
Random-8k-70%Read          16.06    2901      22        58%

Dell MEM is officially unsupported on the Essentials Plus license. Going to switch over to Round Robin; logging this for comparison.

jb42
Contributor

SERVER TYPE: Windows Server 2008 R2 VM
CPU TYPE / NUMBER: Intel Xeon X5672 @ 3.20GHz / 2
HOST TYPE: Dell PowerEdge R510, ESXi 5.1 Essentials Plus, Round Robin
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6100E / 24 (2x spare) / RAID 10

Test name                  Latency  Avg iops  Avg MBps  cpu load
Max Throughput-100%Read    8.53     6704      209       10%
RealLife-60%Rand-65%Read   12.73    3535      27        18%
Max Throughput-50%Read     8.70     6606      206       29%
Random-8k-70%Read          12.91    3543      27        17%

Switched from the Dell multipathing module to Round Robin because of the Enterprise licensing requirement for third-party multipathing. Improvements across the board (see above!), but one test either way probably doesn't count as evidence.

jb

JonT
Enthusiast

Seems to me you didn't get that large an increase, but any improvement helps. Try setting the Round Robin policy limit type to IOPS, with the limit set to 1, then re-run your tests and see the difference. :)

This is what I run from PowerCLI to make the changes across the entire cluster and all the LUNs:

$Cluster = Read-Host "Enter Cluster Name"
foreach ($vmhost in Get-Cluster $Cluster | Get-VMHost) {
    # Connect directly to each host so Get-EsxCli can run against it
    Connect-VIServer -Server $vmhost.Name -User root -Password <yourpasswd>
    $esxcli = Get-EsxCli -VMHost $vmhost.Name

    # Make Round Robin the default PSP for new devices claimed by this SATP
    $esxcli.storage.nmp.satp.set($false, "VMW_PSP_RR", "VMW_SATP_SYMM")

    # Switch every existing naa.* device to Round Robin and set the IOPS limit to 1
    foreach ($lunpath in $esxcli.storage.nmp.device.list() | Where-Object { $_.Device -like "naa.*" }) {
        $esxcli.storage.nmp.device.set($null, $lunpath.Device, "VMW_PSP_RR")
        $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set($null, $lunpath.Device, 1, "iops", $null)
    }
}
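
To spot-check a single device afterwards, something along these lines against the same Get-EsxCli object should echo the policy and the IOPS setting back (a sketch only; substitute a real naa. ID from your environment):

$esxcli.storage.nmp.device.list("naa.xxxxxxxxxxxxxxxx") | Select-Object Device, PathSelectionPolicy
$esxcli.storage.nmp.psp.roundrobin.deviceconfig.get("naa.xxxxxxxxxxxxxxxx")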
