When we create VMDK files on local storage we see speeds of 150-200 MB/s. When we create VMDKs on our SAN we see speeds of 30 MB/s.
We used esxtop to get these stats. Any ideas why the SAN speed is so poor?
Note: there seem to be no issues on the storage backend. We have a minimum of 100 spindles in each RAID group and have presented volumes from 100-500 GB.
We see no queue issues, and we get the same speeds with dd if=/dev/zero.
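For reference, the dd test we ran was along these lines, from the ESXi shell (the datastore names are placeholders for our actual datastores, and without a sync the numbers can be inflated by write caching):

    # Write 1 GB of zeros to the SAN-backed datastore (path is a placeholder)
    time dd if=/dev/zero of=/vmfs/volumes/san_datastore/ddtest bs=1M count=1024

    # Same test against local storage for comparison
    time dd if=/dev/zero of=/vmfs/volumes/local_datastore/ddtest bs=1M count=1024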
Definitely worth digging into. I would look at what type of array it is, how you're connected, etc. What are you connected through? What kind of cache do you have on the array? What's the hit rate on that cache? If you're on FC through an old StorageWorks switch, it's obviously not going to keep up with a newer Brocade or MDS switch. And 100 spindles in a single RAID group doesn't sound right; that's way overkill, and past a point performance can actually drop off rather than improve.
Lots of factors to consider. Give us more detail, then we'll start breaking it down into what to check for.
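In the meantime, one thing worth capturing is per-adapter and per-LUN latency in esxtop. The keystrokes below are the standard esxtop interactive views; the interpretation is a rough rule of thumb, not a hard threshold:

    esxtop    # start esxtop on the ESX host
    # Press 'd' for the disk adapter view:
    #   DAVG/cmd = latency at the device/array side
    #   KAVG/cmd = time spent in the VMkernel (driver/queueing)
    # High DAVG points at the array or fabric; high KAVG points at the host.
    # Press 'u' for the disk device view to see per-LUN stats,
    # including QUED (commands sitting in the queue).

Posting those numbers alongside the fabric and cache details will make it much easier to tell whether the bottleneck is the array, the switch, or the host.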