VMware Cloud Community
tijz
Contributor

iSCSI in ESXi 5.0 performance poor, MS iSCSI initiator fast

Hi all,

I suppose I'm not the only one posting about performance issues. I really did try a search, but couldn't find anything useful.

So, here it goes.

I have an ESXi 5.0 box configured with a software iSCSI adapter (bound to an Intel T340 (82580) NIC).
Starwind iSCSI target is installed on a physical box with Windows 2008 R2 SP1.
I created a standard virtual disk target on a RAID 5 array.

Using the vSphere client to copy a VM to the iSCSI datastore, I get about 150 Mbps (~18 MB/s) to the Starwind server.

I also have another physical box with Windows 2008 R2. Using the MS iSCSI initiator I connected to the same iSCSI target, formatted it as NTFS, and copied an ISO image. Now I get 800 Mbps!
Also, using CIFS to the same box (and to the same RAID 5 volume) I get almost 1 Gbps throughput.

Why is vSphere so slow? Is there something basic I forgot?
Switch statistics don't show any errors or collisions. Enabling or disabling jumbo frames doesn't change anything. (I configured jumbo frames on the Windows NICs of the Starwind server, on the switch, and on the vSwitch and VMkernel port.)
I didn't even bother with jumbo frames with the MS iSCSI initiator test which ran super fast.
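As a side note, the jumbo frame path can be verified end-to-end from the ESXi shell with vmkping (this assumes SSH/Tech Support Mode is enabled; the IP below is a placeholder for the Starwind target's iSCSI address):

```shell
# 8972-byte payload = 9000-byte MTU minus 28 bytes of IP/ICMP headers.
# -d sets "don't fragment", so this fails if any hop along the path
# drops jumbo frames while a plain vmkping still succeeds.
# 192.168.0.10 is a placeholder -- use the actual target IP.
vmkping -d -s 8972 192.168.0.10
```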

I also mounted a VMDK to a Windows VM; the VMDK was stored on the iSCSI LUN. I copied an ISO file within the VM to the iSCSI volume and still got only 150 Mbps throughput...

Any ideas? Thanks in advance.

4 Replies
Linjo
Leadership

You should not measure performance using the upload feature in the vSphere client; it is not designed to be a high-speed transfer function.

There is more going on than just the file transfer, such as SSL encryption in the vSphere client.

You should use a tool like VMmark or IOmeter to measure disk performance properly.
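For a very rough first check before setting up a proper benchmark, a sequential write with dd from inside a Linux guest gives a ballpark throughput figure. The mount path below is a placeholder; this is only a sketch, not a substitute for IOmeter:

```shell
# Write 1 GB sequentially to the iSCSI-backed disk and report throughput.
# conv=fsync forces the data to disk before dd prints its timing, so the
# result is not just page-cache speed. The path is a placeholder.
dd if=/dev/zero of=/mnt/iscsi-test/ddtest.bin bs=1M count=1024 conv=fsync
rm /mnt/iscsi-test/ddtest.bin
```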

// Linjo

tijz
Contributor

Yes, you're probably correct.

But as I wrote, I'm also getting slow performance to a VMDK stored on the LUN.

I'm getting the EXACT same throughput as when copying using the vSphere client.

Also, copying to other datastores with the vSphere client is much faster.

rickardnobel
Champion

tijz wrote:

I'm getting the EXACT same throughput as when copying using the vSphere client.

Does this VM have several VMDK disks? Where is the ISO stored that you copied within the VM?

My VMware blog: www.rickardnobel.se
tijz
Contributor

The VM has two VMDKs, one on local storage and one on the SAN.

I copy the ISOs from the VMDK on the local storage.

I did find something strange, though:

I have a redundant connection from the ESX host to the SAN. When I pull one of the cables during a copy, the throughput increases to about 250 Mbps.

Then, when I reconnect the cable, the speed drops back to a steady 150 Mbps...

I got this idea from this post

http://communities.vmware.com/thread/253625

However, no solution was posted. That was also a much older version of ESX, of course. But it still seems strange to see an increase in throughput when using only one path.

I have two vSwitches, each with one VMkernel port and only one vmnic.

The VMkernel ports are bound to the software iSCSI adapter.

I'm not using Round Robin multipathing for now.
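For reference, the current path selection policy can be inspected and, if the array vendor supports it, switched to Round Robin from the ESXi shell. The device identifier below is a placeholder; this is a sketch to adapt, not a blanket recommendation:

```shell
# Show each device's current Path Selection Policy (PSP) and its paths
esxcli storage nmp device list

# Switch one LUN to Round Robin (replace the naa identifier with the
# actual device ID taken from the listing above)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```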
