VMware Cloud Community
sahinsu
Contributor

Poor network performance with iSCSI

Hi Experts,

We've just installed ESXi 6.0 (Essentials kit) on an HP DL380 G9 with 6 NICs (4x 1 Gbit, 2x 10 Gbit) and 2 CPUs with 10 cores each. We have a QNAP 431 with dual Gbit network adapters attached to the same physical network. I have mounted the NAS as a separate datastore on ESXi via the iSCSI software adapter, so I now have two datastores: one on iSCSI (vmds1) and one on the local SCSI array (vmds2). I have set the MTU on both the ESXi vSwitch and the NAS to 9000 (jumbo frames).
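In case it helps, this is roughly how the jumbo-frame settings can be applied and verified from the ESXi shell (vSwitch1, vmk1 and the NAS IP are placeholders for my names, so adjust to your own):

# set MTU 9000 on the iSCSI vSwitch and on its VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# verify jumbo frames end to end: 8972-byte payload plus headers = 9000, don't fragment
vmkping -I vmk1 -d -s 8972 <NAS_IP>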

When I copy files from vmds1 to vmds2, the performance is much lower than I expected; it takes an hour or more to copy a 100 GByte file. I've been trying different network configurations but cannot get any higher than 65 Mbit/s. Just to compare, I created an iSCSI volume on the same NAS and mounted it on my personal Windows computer (with the MTU set to 9000 on the PC as well), and I was able to see 300 Mbit/s from time to time when copying files.


On ESXi I have two vSwitches.
vSwitch0 is for VM networking and the management network.

vSwitch1 is for the iSCSI adapter.

* First, I assigned only one NIC (1 Gbit/s) to vSwitch1 so I could see the throughput of a single NIC. I saw an average of 65 MB/s when I checked the performance statistics via vSphere Client -> Performance -> Network/Realtime (with only the NIC assigned to vSwitch1 selected). I'm also reading the network utilization on the QNAP interface.
Even though the performance is very low for a single NIC, I didn't give up and at least tried to double it by NIC teaming on the vSwitch.

* Then I assigned the second NIC (the other 1 Gbit/s port) to the vSwitch. I tried various options: a single VMkernel port with both NICs assigned to it and different load-balancing policies, and then a second VMkernel port with each NIC assigned to only one of the VMkernel ports (by moving the other NIC to the "unused adapters" section). A rough command-line sketch of this second variant follows below.
No matter what I do, the total throughput for one or two NICs never goes above an average of 65-70 MB/s.
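For reference, the two-VMkernel-port variant above is what the VMware docs call iSCSI port binding. A rough esxcli sketch is below; vmk1/vmk2, vmnic2/vmnic3, the port group names, vmhba33 and the LUN ID are all placeholders for my setup, and the QNAP of course has to expose the target on both of its interfaces:

# each iSCSI port group gets exactly one active uplink
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

# bind both VMkernel ports to the software iSCSI adapter and rescan
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli storage core adapter rescan --adapter=vmhba33

# switch the LUN to Round Robin so both paths actually carry I/O
esxcli storage nmp device set --device=<naa.id_of_the_QNAP_LUN> --psp=VMW_PSP_RR

# sanity check: per-NIC counters while a copy is running
esxcli network nic stats get -n vmnic2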

Can you think of anything? What am I doing wrong? I'm using the same cabling, the same physical switch, and the same infrastructure to connect the QNAP to both my PC and ESXi, yet the bandwidth between my PC and the QNAP is five times better than between ESXi and the QNAP.

Thanks in advance,

Sahin

3 Replies
DavoudTeimouri
Virtuoso

Hi friend,

I think there is nothing wrong.

Your device can't achieve more than 75 MB/s: https://www.qnap.com/i/uk/product_x_performance/product.php?II=156

Maybe LACP on your switch will help give more bandwidth to your ESXi network ports.
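Just to sketch the ESXi side of that: as far as I know, a standard vSwitch (which is what the Essentials licensing gives you) doesn't support LACP, so the usual pairing is a static port channel on the physical switch plus "Route based on IP hash" teaming on the vSwitch, something like:

# hypothetical example - vSwitch1 is assumed to be the iSCSI vSwitch
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

Keep in mind that IP hash only spreads traffic per source/destination IP pair, so a single iSCSI session to one target IP still rides one link.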

-------------------------------------------------------------------------------------
Davoud Teimouri - https://www.teimouri.net - Twitter: @davoud_teimouri Facebook: https://www.facebook.com/teimouri.net/
sahinsu
Contributor

Thank you Davoud,

My point is that the same NAS gives more throughput to my PC, but almost half of that with ESXi.

But your link helped.

Thank you

cesprov
Enthusiast

Two things that I seemingly have to disable with pretty much every ESXi/iSCSI implementation:

1.)  DelayedAck

2.)  VAAI

Whether to turn either of them off will largely depend on the storage manufacturer, but in my experience no one seems to implement these right, and having either one enabled results in slow performance. I would try disabling DelayedAck first, as it doesn't require a host reboot to shut it off, whereas disabling the three VAAI settings does.
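From memory, the command-line equivalents look roughly like this (vmhba33 is a placeholder for the software iSCSI adapter; "esxcli iscsi adapter list" shows the real name):

# disable DelayedAck on the software iSCSI adapter (placeholder adapter name)
esxcli iscsi adapter param set --adapter=vmhba33 --key=DelayedAck --value=false

# disable the three VAAI primitives via the host's advanced settings
esxcli system settings advanced set --option=/DataMover/HardwareAcceleratedMove --int-value=0
esxcli system settings advanced set --option=/DataMover/HardwareAcceleratedInit --int-value=0
esxcli system settings advanced set --option=/VMFS3/HardwareAcceleratedLocking --int-value=0

Double-check the exact parameter names against your array vendor's and VMware's docs before changing anything.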
