VMware Cloud Community
SupportBDBC
Contributor

Write speed faster than read? Equallogic iSCSI

Hi All,

Hoping someone can help me.

I have created a new ESXi server (4.1 U2) with different iSCSI networking to our other hosts.

The networking is standard on a vSwitch and trunk ports to our 6500's.

Our iSCSI has 6 access ports on a Port-Channel connected to two dell switches in a stack. The portchannel and ethernet ports are jumbo frame enabled.

The VM I am testing on has a dedicated NIC for iSCSI with an MTU of 1500. If I set this to 9000 it cripples the performance on small block sizes for some reason. (Tested with ATTO.)
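As an aside, jumbo frames only work if every hop on the path honours them, so it may be worth a don't-fragment ping at near-9000 size before blaming anything else. A minimal sketch from the ESXi 4.1 console, assuming a placeholder group IP of 10.0.0.10:

```shell
# Sketch: verify jumbo frames end-to-end (10.0.0.10 is a placeholder for
# the EqualLogic group address). 8972 = 9000-byte MTU minus 28 bytes of
# IP + ICMP headers; -d sets the don't-fragment bit, so the ping fails
# if any hop on the path is still at 1500.
vmkping -d -s 8972 10.0.0.10

# Equivalent test from inside a Windows guest with a 9000-MTU vNIC:
# ping -f -l 8972 10.0.0.10
```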

The write speed on an 8 MB block with MPIO on the initiator is fantastic (396 MB/s), but the read is only 214 MB/s.

MPIO disabled on the initiator: write 266 MB/s, read 304 MB/s.

(See attached screenshots)

The EqualLogic pool it is running from consists of two PS6000 arrays (one E, one X) in RAID 50.

The load isn't particularly high on this pool, so I would expect decent read speeds.

The VMs run from a 6500.

The tests on the C: drive of a VM yield almost the same results.

Appreciate any advice.

Thanks

Ian

4 Replies
Realitysoft
Enthusiast

Hi,

Not had this type of experience unfortunately, but is the virtual NIC configured differently or restricted in some way? Are you reading many files or a single large file, and do the MTU settings match across the links? If it's a Windows OS, is anything such as anti-virus involved by any chance? You could also test read/write purely at the host level to confirm speeds.

Not an ideal answer, just trying to think of areas where the bottleneck could be.

Thanks, Jim
SupportBDBC
Contributor

Hi,

Thanks for the reply.

The vNIC has an MTU of 1500; if I use 9000 it runs slower on both read and write.

If I use MPIO I get a good write, slow read.

If I don't use MPIO I get a good read, slow write.

The only difference in the settings is the Dell switches use 9218 MTU as their standard, and the rest of the path is 9000.
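For reference, the MTU each layer is actually using can be listed from the ESXi 4.1 console. A sketch (the Dell switch side would have to be checked separately on its own CLI):

```shell
# List the configured MTU on each vSwitch and each VMkernel port;
# a mismatch anywhere here vs. the physical switches is enough to
# force fragmentation on one direction of the traffic.
esxcfg-vswitch -l
esxcfg-vmknic -l
esxcfg-nics -l    # physical NIC link speed/duplex as a sanity check
```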

I may have to set up a physical host with load-balanced NICs to confirm speeds. It just seems odd that changing small settings on the VM makes such a difference in performance.

Thanks

Ian

SupportBDBC
Contributor

I have built up a physical Windows 2008 R2 box with 4 NICs connected to the same iSCSI switches as our VMware hosts.

With jumbo frames enabled I get 450 MB/s write, 430 MB/s read, which I consider pretty much perfect.

With VMware I get 235 MB/s write, 190 MB/s read.

The vSwitch is set up with six 1 Gb NICs in a port channel, with LACP off and the MTU set to 9216.

Any ideas?

Thanks

rickardnobel
Champion

SupportBDBC wrote:

The write speed on an 8 MB block with MPIO on the initiator is fantastic (396 MB/s), but the read is only 214 MB/s.

That you get higher write speed could be "caused" by the cache on the SAN, which can collect your write I/O and send back an instant acknowledgement before the write to the actual disks has taken place. Reads of uncached data, by contrast, have to wait for the spindles.

My VMware blog: www.rickardnobel.se