tdubb123
Expert

Iometer: NetApp and EMC

I ran some Iometer tests on both a NetApp VMDK and a CX3-80 VMDK, and the NetApp outperforms the CX3-80 by far. I am not sure why this is.

I do not use PowerPath on our ESXi servers, so while there are 4 active paths, only one shows as I/O active.

On the NetApp it is set to Round Robin, and both paths have active I/O.

But why would the NetApp outperform the EMC by so much?

vmroyale
Immortal

Hello.

Note: This discussion was moved from the VMware ESXi 5 community to the VMware vSphere™ Storage community.

What NetApp model are you using? What storage protocols? How many disks are backing the datastores? Which firmware versions are in use on each storage system? The answers to any of these might help explain it, as there are many factors that could lead to this.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
tdubb123
Expert

The NetApp is a 3210 with an aggregate of 14 disks in RAID-DP.

The CX3-80 has 2 LUNs on 2 RAID groups:

RG1 is RAID 5, 7 SATA disks.

RG2 is RAID 5, 5 FC disks.

Both are accessed over FCP.

mcowger
Immortal

Well, the single active path would definitely impact that.

If you configure the pathing properly so you have equivalent paths, and then also put a similar number of disks behind the LUNs, you'll see performance get closer.

What you've asked is: "I have a Corolla with a V6 and a Maxima with an inline-4 - why is the Corolla so much faster?"

--Matt VCDX #52 blog.cowger.us
EdWilts
Expert

The NetApp is a 3210 with an aggregate of 14 disks in RAID-DP.

The CX3-80 has 2 LUNs on 2 RAID groups:

RG1 is RAID 5, 7 SATA disks.

RG2 is RAID 5, 5 FC disks.

Both are accessed over FCP.

The number of IOPS you can do is directly proportional to the number of spindles you throw at it.  The NetApp should bury the CX3-80 simply because the CX3-80 doesn't have enough spindles.  This is doubly true if the NetApp aggregate is using SAS disks.

Assume that a 7.2K SATA disk is capable of 100 IOPS and a 15K SAS drive is capable of 180 IOPS.

If your 3210 has an aggregate made up of SATA disks, it's going to top out at 12 * 100 = 1,200 IOPS (14 disks minus 2 RAID-DP parity disks leaves 12 data spindles).  If they're SAS, the high end is 12 * 180 = 2,160 IOPS.

Your CX3-80 RG1 will top out at 600 (6 data spindles * 100) and RG2 will top out at 720 (4 data spindles * 180).

It's not a fair fight between the controllers, since both controllers are capable of far more I/O, but you've constrained the testing by the number of available spindles.
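For reference, here's that spindle math as a minimal Python sketch. The disk counts and parity assumptions come from the posts above; the per-disk IOPS figures are the rule-of-thumb values assumed earlier, not measurements:

    # Rough random-IOPS ceilings from spindle counts. Per-disk figures:
    # ~100 IOPS for a 7.2K SATA disk, ~180 IOPS for a 15K SAS/FC disk.
    PER_DISK_IOPS = {"sata_7.2k": 100, "15k": 180}

    def max_iops(total_disks, parity_disks, disk_type):
        """Only data spindles serve I/O, so subtract the parity disks."""
        return (total_disks - parity_disks) * PER_DISK_IOPS[disk_type]

    # NetApp 3210 aggregate: 14 disks, RAID-DP (2 parity disks)
    print(max_iops(14, 2, "sata_7.2k"))  # 1200 if the aggregate is SATA
    print(max_iops(14, 2, "15k"))        # 2160 if it's 15K SAS

    # CX3-80 RG1: 7-disk SATA RAID 5 (1 parity disk)
    print(max_iops(7, 1, "sata_7.2k"))   # 600
    # CX3-80 RG2: 5-disk 15K FC RAID 5 (1 parity disk)
    print(max_iops(5, 1, "15k"))         # 720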

.../Ed (VCP4, VCP5)
tdubb123
Expert

I am using MRU (the native VMware policy) to the CX3-80. Is this the best option? I do not have PowerPath.

tdubb123
Expert

Thank you for the clarification. That helps a lot.

EdWilts
Expert

I am using MRU (the native VMware policy) to the CX3-80. Is this the best option? I do not have PowerPath.

MRU is rarely the best option although I'm not a VMware/EMC expert.  Try Round Robin instead.

MRU says to use the same path over and over again until it fails.  Round Robin spreads the I/O across your available paths.
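Conceptually, the two policies differ like this (a simplified Python sketch of the idea, not VMware's actual implementation; the path names are just examples):

    from itertools import cycle

    class MRU:
        """Most Recently Used: keep issuing I/O on one path until it fails."""
        def __init__(self, paths):
            self.paths = list(paths)
            self.current = self.paths[0]

        def select(self, failed=None):
            if self.current == failed:  # fail over only when the current path dies
                self.current = next(p for p in self.paths if p != failed)
            return self.current         # otherwise the same path gets every I/O

    class RoundRobin:
        """Round Robin: rotate I/O across all active paths."""
        def __init__(self, paths):
            self._next = cycle(paths)

        def select(self, failed=None):
            return next(self._next)     # each I/O goes to the next path in turn

    paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
    mru, rr = MRU(paths), RoundRobin(paths)
    print([mru.select() for _ in range(4)])  # one path carries all the I/O
    print([rr.select() for _ in range(4)])   # I/O alternates across both paths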

.../Ed (VCP4, VCP5)
mcowger
Immortal

Round robin is NOT supported on the CX3.

--Matt VCDX #52 blog.cowger.us
tdubb123
Expert

I am doing a similar test on a VMDK that sits on a CX3-40, in a 6 TB RAID group of 14 disks (SATA, 4 Gb/s) in RAID 5.

I ran Iometer with 16 KB I/Os at 75% read,

and the performance is still bad compared to the NetApp.

On the CX3-40:

total IOPS: about 500

total throughput: 8.6 MB/s

average I/O response time: 58 ms

maximum I/O response time: 2,266 ms

Terrible.

The 6 TB RAID group is carved up into 10 other LUNs, but those other LUNs are very minimally used - pretty much empty LUNs.

On the NetApp I get:

over 20,000 IOPS

320 MB/s

1.5 ms average response time

115 ms maximum response time

There is such a huge difference.
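As a quick consistency check on those numbers (assuming the 16 KB I/O size from the test above), IOPS times block size should roughly match the reported throughput:

    # Sanity check: throughput ~= IOPS * block size (16 KB I/Os assumed).
    BLOCK_BYTES = 16 * 1024

    def throughput_mb_s(iops):
        return iops * BLOCK_BYTES / 1e6

    print(throughput_mb_s(500))    # ~8.2 MB/s, close to the reported 8.6 MB/s on the CX3-40
    print(throughput_mb_s(20000))  # ~328 MB/s, close to the reported 320 MB/s on the NetApp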
