ruddy001
Contributor

Horrible SAN I/O

When we create vmdk files on local storage we see speeds of 150-200 MB/s. When we create vmdks on our SAN we see only 30 MB/s.

We used esxtop to get these stats. Any ideas why throughput to the SAN is so much worse?

Note: there seem to be no issues on the storage backend. We have a minimum of 100 spindles in each RAID group and have presented volumes of 100-500 GB.

We see no queue issues, and we get the same speeds with dd if=/dev/zero.
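For anyone wanting to reproduce this kind of comparison, a sequential-write test with dd might look like the sketch below. The `TARGET` path is an assumption; point it at a file on the datastore under test (e.g. under /vmfs/volumes/). Note that the busybox dd shipped with ESXi may not support GNU options such as oflag=direct, so writes can be absorbed by caching.

```shell
#!/bin/sh
# Hypothetical sequential-write test. TARGET is an assumption -- set it to a
# file on the local or SAN datastore you want to measure, for example
# TARGET=/vmfs/volumes/san-ds/dd-test.bin
TARGET="${TARGET:-/tmp/dd-test.bin}"

# Write 64 MB of zeros in 1 MB blocks; dd prints a throughput summary
# on stderr, which we capture and show as the last line.
dd if=/dev/zero of="$TARGET" bs=1M count=64 2>&1 | tail -n 1

# Clean up the test file.
rm -f "$TARGET"
```

Running the same script against the local datastore and the SAN datastore gives a rough apples-to-apples number, though real vmdk workloads are rarely pure sequential writes.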

Thanks

3 Replies
Dave_Mishchenko
Immortal

Your post has been moved to the Performance forum

Dave Mishchenko

VMware Communities User Moderator

ejward
Expert

What type of SAN? We get very different speeds with different SANs: our EMC Clariion gets roughly 40 MB/s, our EqualLogic iSCSI about 50 MB/s, and our EMC DX about 10 MB/s.

williambishop
Expert

Definitely. I would look at what type of SAN it is, how you're connected, and so on. What are you connected through? What kind of cache does the array have, and what's the cache hit rate? If you're running FC through an old StorageWorks switch, it obviously won't live up to a newer Brocade or MDS switch. Also, 100 spindles in a single RAID group doesn't sound good; that's actually way overkill, and performance can dip to the other side.

Lots of factors to consider. Give us more detail, then we'll start breaking down into what to check for.

Strike so that he feels himself die