VMware Cloud Community
thebloc
Contributor

Expected Iometer performance on a PS5000 / vSphere 5

Hello all. I know there is a whole spate of storage performance threads from people troubleshooting performance issues, so I apologize if starting a new thread is a bit presumptuous; I'm not really trying to troubleshoot, but I couldn't quite find the appropriate spot. I'm not doing everything 100% optimally, as I'm in an existing network that prevents me from doing a few things I would like (like enabling jumbo frames).

Bottom line: the absolute best performance I have seen is about 5,500-6,000 IOPS and about 165 MB/sec.

Is this about what I should expect? Would enabling jumbo frames (which in my case would be disruptive) be a substantial improvement?

Setup:

Single PS5000 array with 7200 RPM drives and 64 MB cache; all 3 interfaces dedicated to storage; running firmware 5.2.1; RAID 50.

All ports connected to a single blade on a Cisco 4006 switch with a Sup IV running relatively crusty code.

EqualLogic array, vSphere 5 host, and VMkernel ports all on the same subnet.

No jumbo frames (so I understand there is some hit there).

vSphere 5 all patched up; Dell MEM plugin configured with Dell's Perl script (1:1 VMkernel port to physical adapter), 4 ports on a single quad-port Intel 82575GB.

Windows 2008 R2 guest pointing to a brand-new VMFS-5 LUN, eager-zeroed thick (just a 40 GB disk for Iometer testing).

Iometer set up for 1 worker, 64 threads, 32 KB transfers, 50/50 read/write, 50/50 random/sequential, 10 burst I/Os (see the quick sanity-check sketch after this list).

Tried various settings in ehcmd.conf.

Nothing else is running on the array or the ESX host.

On the array I see data counts incrementing about evenly from all the VMkernel IPs (as well as from my management IP on my other vSwitch, so I explicitly denied that one on the array).
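As a sanity check on how the two headline numbers relate: at a 32 KB block size, throughput is just IOPS times block size, so the ~165 MB/sec and ~5,500-6,000 IOPS figures above are two views of the same ceiling. A minimal sketch in plain Python, with illustrative numbers rather than anything measured here:

```python
# Sanity check: at a fixed block size, throughput and IOPS are two views
# of the same limit. All inputs are illustrative, not measured values.

BLOCK_SIZE_KB = 32

def throughput_mb_per_sec(iops: float, block_kb: float = BLOCK_SIZE_KB) -> float:
    """Convert an IOPS figure to MB/sec at a fixed block size (1 MB = 1000 KB)."""
    return iops * block_kb / 1000.0

for iops in (5500, 6000):
    print(f"{iops} IOPS x {BLOCK_SIZE_KB} KB = ~{throughput_mb_per_sec(iops):.0f} MB/sec")

# 5500 IOPS -> ~176 MB/sec and 6000 IOPS -> ~192 MB/sec, in the same
# ballpark as the ~165 MB/sec observed: the run is IOPS-bound at 32 KB,
# not bandwidth-bound.
```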


thanks very much

5 Replies
thebloc
Contributor

As a slight addendum: because I was curious, I ran a quick test on this array.

I disabled 2 of the 3 network interfaces and, after letting everything sit for a few minutes, found virtually no difference: the same Iometer performance results. It's clear from both SAN HQ and the array itself that all the interfaces are being used, but throughput simply won't scale beyond a certain point, if that makes any sense. I have an otherwise identical setup on Cisco 3750 switches (also no jumbo frames) with virtually identical performance.
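For what it's worth, the per-link math says the wire had headroom anyway. A rough sketch, assuming standard 1500-byte frames and a ~10% TCP/iSCSI overhead (both assumptions, not measurements from this setup):

```python
# Per-link ceiling math for 1 GbE iSCSI with standard (non-jumbo) frames.
# The 10% protocol overhead is an assumption, not a measurement.

GBE_RAW_MB_PER_SEC = 125.0      # 1 Gb/s on the wire = 125 MB/s
PROTOCOL_OVERHEAD = 0.10        # assumed TCP/IP + iSCSI header cost
LINKS = 3
OBSERVED_MB_PER_SEC = 165.0     # best result reported above

per_link_ceiling = GBE_RAW_MB_PER_SEC * (1 - PROTOCOL_OVERHEAD)
print(f"per-link ceiling  : ~{per_link_ceiling:.0f} MB/sec")
print(f"{LINKS}-link ceiling    : ~{per_link_ceiling * LINKS:.0f} MB/sec")
print(f"observed per link : ~{OBSERVED_MB_PER_SEC / LINKS:.0f} MB/sec")

# ~55 MB/sec per link against a ~112 MB/sec per-link ceiling: the network
# is nowhere near saturated, which points at the array (the spindles)
# rather than the wire as the limiting factor.
```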

thanks

AndreTheGiant
Immortal

On vSphere, at any one time, a host will use only one path to reach a volume.

So you are really just testing the speed of one interface.

To get an aggregate test you can use the MEM multipathing module.

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
thebloc
Contributor

I am using MEM; the array, vSphere, and SAN HQ all show a roughly equal distribution of traffic across all 3 interfaces, but the throughput never goes higher than about 160 MB/sec.

AndreTheGiant
Immortal

That is already good ... you cannot expect much more from a single system.

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
joshuatownsend
Enthusiast

It's all about the IOPS. Your 7200 RPM drives can't do much more; check out my blog post here: http://vmtoday.com/2009/12/storage-basics-part-ii-iops/ to learn how to do the math on what you can expect.
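As a teaser, here's a minimal sketch of the kind of math that post walks through. The spindle count, per-drive IOPS, and RAID write penalty below are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope random-IOPS estimate for a RAID 50 set of 7200 RPM drives.
# All inputs are illustrative assumptions, not measured or vendor-confirmed.

DRIVES = 14                 # assumed data-bearing spindles in the array
IOPS_PER_DRIVE = 80         # typical ballpark for a 7200 RPM SATA drive
WRITE_PENALTY = 4           # RAID 5/50: each front-end write costs ~4 back-end I/Os
READ_FRACTION = 0.5         # 50/50 read/write mix, as in the Iometer profile above

raw_iops = DRIVES * IOPS_PER_DRIVE

# Front-end IOPS the spindles can sustain once the write penalty is applied:
#   raw = frontend * (read% * 1 + write% * penalty)
frontend_iops = raw_iops / (READ_FRACTION + (1 - READ_FRACTION) * WRITE_PENALTY)

print(f"raw back-end IOPS    : {raw_iops}")
print(f"front-end random IOPS: ~{frontend_iops:.0f}")

# This lands far below the ~5,500-6,000 IOPS reported above because the
# Iometer mix is half sequential and the controller cache absorbs a lot;
# the estimate is for the purely random, cache-miss worst case.
```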

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Please visit http://vmtoday.com for News, Views and Virtualization How-To's. Follow me on Twitter - @joshuatownsend