VMware Cloud Community
sdeshpande
Contributor

iSCSI throughput and Network bonding/teaming IP hash

Hello,

I would like to ask whether I am making a fundamental mistake in my understanding. I have a Dell PowerEdge server, a bit old but still working fine. It has an on-board Gigabit network interface plus two dual-port Gigabit cards (Broadcom 5709 Dual-Port), so five Gigabit interfaces in total, all connected to an 8-port HP ProCurve Gigabit switch. My Synology NAS has a dual-port Gigabit interface connected to the same ProCurve switch. I have configured network bonding (LACP) on the Synology, and the corresponding ports are configured as a trunk on the ProCurve switch.

I have three 500 GB iSCSI LUNs configured on the Synology, which are mounted on ESXi as VMFS datastores. I have created 3 vSwitches as follows:

vSwitch1: the on-board network interface, used for the management network

vSwitch2: the 2 interfaces from one card, used for the virtual machine network

vSwitch3: the 2 interfaces from the second card, used for the VMkernel port for iSCSI (teaming set to route based on IP hash), not for redundancy, so I assume it will act as network bonding

On top of this runs a CentOS Linux VM with 3 disks placed on the iSCSI VMFS datastores.

Question:

When I start writing to a disk on an iSCSI datastore, I currently get up to 105 MB/s (max), which is reasonable for a single Gigabit interface. My question is: since the Synology is bonded and my iSCSI VMkernel side has 2 network interfaces teamed with route based on IP hash, why aren't both network cards on vSwitch3 active at the same time, giving me 200+ MB/s write speed?

Shouldn't I expect both Gigabit ports to be active at the same time, giving me double the throughput? I see only the first network interface of the VMkernel port active, topping out at about 105 MB/s.

I would appreciate it if you could clarify this for me.

Thanks in advance

Sameer

5 Replies
scott28tt
VMware Employee

How many IP connections (unique combinations of source and destination addresses) do you have with that network and storage configuration?


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
berndweyand
Expert

It is a common misunderstanding that LACP will double the speed. LACP does not split the packets of a single stream across multiple interfaces. For example, a single TCP stream will always send/receive its packets on the same NIC.
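
Roughly speaking, IP-hash teaming picks the uplink from the source/destination IP pair and nothing else. Here is a simplified sketch of the idea (not the exact hash ESXi uses, and the addresses are made-up examples):

```python
# Simplified illustration of "Route based on IP hash" uplink selection.
# The real ESXi hash differs in detail, but the key property is the same:
# the chosen uplink depends only on the source and destination IP addresses.
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % n_uplinks

vmk_ip = "192.168.72.10"       # example iSCSI vmkernel IP (made up)
synology_ip = "192.168.72.20"  # example Synology bond IP (made up)

# Ten parallel streams between the same two IPs all land on the same uplink,
# because TCP ports and the number of streams are not part of the hash.
for stream in range(10):
    print(f"stream {stream} -> uplink {ip_hash_uplink(vmk_ip, synology_ip, n_uplinks=2)}")
```

So no matter how many parallel writes you start, all traffic between that one vmkernel IP and that one Synology bond IP keeps hashing to the same uplink.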

sdeshpande
Contributor

I have just a single network segment, 192.168.72.x/24 (255.255.255.0), so a maximum of 254 connections.

sdeshpande
Contributor

Thanks for the clarification. However, even when I start multiple write streams of large data, I still see only one interface active. It is fine that LACP will not split a single stream, but why don't the other parallel sessions use the other interface? It all happens from the same source host; I just start the writes as background jobs. Do I also need bonding on the source? I assume that since I am using a VMFS disk on an iSCSI datastore, the interfaces on the hypervisor should handle this?

scott28tt
VMware Employee

That wasn't what I meant.

I'm referring to IP address <> IP address connections between ESXi and your storage device.

Have a look at this thread: Increase throughput for iSCSI with multiple nics
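
In other words: with a single iSCSI vmkernel IP on ESXi talking to a single bond IP on the Synology, there is exactly one source/destination combination, so no teaming or bonding policy has anything to spread onto the second NIC. A rough sketch of the counting (all addresses are made-up examples on your 192.168.72.x segment):

```python
# Count the unique source/destination IP pairs between ESXi and the Synology.
# One vmkernel IP and one target IP give exactly one pair, so only one uplink
# can ever carry that traffic. Adding a second bound vmkernel port and/or a
# second target portal IP (hypothetical addresses below) creates additional
# pairs that storage multipathing can spread across both NICs.
from itertools import product

today = list(product(["192.168.72.10"], ["192.168.72.20"]))
with_port_binding = list(product(
    ["192.168.72.10", "192.168.72.11"],   # two bound iSCSI vmkernel ports
    ["192.168.72.20", "192.168.72.21"],   # two Synology target portal IPs
))

print(len(today), "path today:", today)
print(len(with_port_binding), "paths with port binding:", with_port_binding)
```

To actually use both NICs for iSCSI, the usual approach is to drop IP hash on that vSwitch, create two VMkernel ports (each with a single active uplink), bind them to the software iSCSI adapter, and let a multipathing policy such as Round Robin spread the I/O across the resulting paths.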


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog