
    iSCSI throughput and Network bonding/teaming IP hash

    sdeshpande Novice

      Hello,

       

      I would like to ask whether I am making a fundamental mistake in my understanding. I have a Dell PowerEdge server, a bit old but fortunately still working fine. It has one on-board Gigabit network interface and two dual-port Gigabit network cards (Broadcom 5709 Dual-Port), so five Gigabit interfaces in total, all connected to an 8-port HP ProCurve Gigabit switch. My Synology NAS has a dual-port Gigabit interface connected to the same HP ProCurve switch. I have configured network bonding (LACP) on the Synology, and the corresponding ports are configured as a trunk on the HP ProCurve switch.

       

      I have three 500 GB iSCSI LUNs configured on the Synology, which are connected to ESXi as VMFS datastores. I have then created 3 vSwitches as follows:

       

      vSwitch1 -) the on-board network interface, used for the management network

      vSwitch2 -) the 2 interfaces from one card, used for the virtual machine network

      vSwitch3 -) the 2 interfaces from the second card, used for the VMkernel ports for iSCSI (with "Route based on IP hash" teaming), no redundancy, so I assume it will act as network bonding.

       

      Then there is a CentOS Linux VM which has 3 disks configured on the iSCSI VMFS datastores.

       

      Question:

       

      When I start writing to a disk on one of these iSCSI datastores, I currently get up to 105 MB/s (max), which is reasonable for a single Gigabit interface. My question is: when my Synology has bonding and my iSCSI VMkernel ports have 2 network interfaces configured with "Route based on IP hash" teaming, why aren't both network cards on vSwitch3 active at the same time, giving me 200+ MB/s write speed?
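
      (For context, my back-of-the-envelope arithmetic for a single Gigabit link is the small Python sketch below; the overhead percentage is just my own rough assumption, not a measured figure.)

      # Rough ceiling of a single Gigabit Ethernet link (overhead figure is my own guess).
      line_rate_mbit = 1000                  # 1 Gbit/s
      raw_mbyte_per_s = line_rate_mbit / 8   # 125 MB/s theoretical
      overhead = 0.12                        # assumed Ethernet/IP/TCP/iSCSI overhead
      print(f"practical ceiling ~ {raw_mbyte_per_s * (1 - overhead):.0f} MB/s")  # ~110 MB/s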

       

      Shouldn't I expect 2 Gigabit ports to be active at the same time, giving me double the speed? I only see the first network interface of the VMkernel port group active, giving a maximum of 105 MB/s.
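
      For reference, my rough mental model of how the "Route based on IP hash" policy picks an uplink is the Python sketch below (a simplified illustration only; I am assuming an XOR of source and destination addresses modulo the number of uplinks, and the IP addresses in the example are made up). If that model is right, a single initiator/target IP pair would always map to the same uplink, and I would like to understand whether that is what limits me to one link:

      import ipaddress

      def ip_hash_uplink(src_ip: str, dst_ip: str, uplink_count: int) -> int:
          # Simplified model of IP-hash teaming: XOR the two addresses and
          # take the result modulo the number of active uplinks, so one
          # source/destination pair always lands on the same uplink.
          src = int(ipaddress.ip_address(src_ip))
          dst = int(ipaddress.ip_address(dst_ip))
          return (src ^ dst) % uplink_count

      # Hypothetical addresses: one iSCSI VMkernel port and the Synology bond IP,
      # two uplinks on vSwitch3 -- the chosen uplink never changes for this pair.
      print(ip_hash_uplink("192.168.1.50", "192.168.1.20", 2))
      print(ip_hash_uplink("192.168.1.60", "192.168.1.20", 2))  # a different source IP may pick the other uplink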

       

      I would appreciate it if you could clarify my doubt.

       

      Thanks in advance

       

      Sameer