I have asked other, more pointed questions around this general topic that led me to this one. Sorry for the vague subject, but it just about sums things up.
I have been trying to work out why we see poor performance in an ESX 4.0.0 cluster with 8 nodes, all connected to an EqualLogic iSCSI SAN with four members. Here are some performance highlights:
SAN avg. read latency: 12 ms
SAN avg. write latency: < 1 ms
SAN avg. IOPS: 2200
ESX physical device read latency: 20-30 ms
Here's what I see for reads/writes from the guest, depending on the underlying disk:

Initiator in guest OS:
read: 45 MB/s
write: 223 MB/s

RDM/VMDK:
read: 17.9 MB/s
write: 118 MB/s
I am concerned about the large difference between the in-guest initiator and the RDM/VMDK figures, and about how far my read performance lags behind writes. I know I have one open gap in that Round Robin path selection is not being used, but I'm not convinced that alone would cause this. Any hints on where I should start focusing my attention?
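For reference, on ESX 4.0 the path selection policy can be checked and changed per LUN from the vSphere CLI / service console. This is a sketch, not a recommendation for your environment; the `naa.` device ID below is a placeholder you'd replace with your own:

```shell
# List all devices and their current "Path Selection Policy"
# (Fixed vs. VMW_PSP_RR for Round Robin)
esxcli nmp device list

# Switch one LUN to Round Robin (device ID is a placeholder)
esxcli nmp device setpolicy --device naa.6090a0XXXXXXXXXX --psp VMW_PSP_RR
```

Note that this must be applied per device on each host, so if you have many LUNs it is usually scripted in a loop over the output of `esxcli nmp device list`.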
What kind of network do you have between the ESX hosts and the iSCSI SAN? Gigabit? What model of switches? VLANs?
How are the vmkernel ports configured on the hosts? How many pNICs, and what load-balancing policy?
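To gather that information, roughly these commands on each ESX 4.0 host will show the vmkernel ports, physical NICs, and vSwitch layout (the `vmhba33` adapter name in the last line is a placeholder; yours may differ):

```shell
esxcfg-vmknic -l     # vmkernel ports and their IP addresses
esxcfg-nics -l       # physical NICs: link speed, duplex, driver
esxcfg-vswitch -l    # vSwitch / port group to pNIC mapping

# Which vmknics are bound to the software iSCSI adapter
# (adapter name is a placeholder)
esxcli swiscsi nic list -d vmhba33
```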