VMware Cloud Community

iSCSI traffic is not load balancing

I have hosts with Dell QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (iSCSI) CNAs, connected to two separate Juniper EX4600 switches, which in turn connect to some NetApp storage arrays.

Reviewing this old guide https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-multipathing-conf...

I have a vSwitch set up with two separate port groups attached to it and to the physical adapters.

[screenshot: vSwitch and port groups]

Both adapters are assigned to each port group, with one active and one standby, and the active adapter differs between the two port groups.

[screenshot: port group config]

Under the storage adapters, I have just one vmk attached to one vmhba. This is the only part of my configuration that differs from the guide, because I have two actual hardware vmhbas, while the guide only covers the software iSCSI adapter. Is this where my configuration is wrong, causing all traffic to go over one NIC at a time? If so, would adding the other vmk to both vmhbas cause any issues if done in production? Would it cause any issues given that I have separate SANs, i.e. SAN A and SAN B? If there are any guides on this kind of setup (two SANs), I would appreciate links; all I can find are old single-SAN documents.

[screenshot: storage adapters]


1 Reply


Read this best practice document: Best Practices For Running VMware vSphere On iSCSI | VMware

You need to bind iSCSI to multiple VMkernel ports for multipathing and load balancing.
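A rough sketch of what that port binding looks like from an ESXi shell. All adapter, vmk, and device names below are placeholders for your environment, and this assumes dependent hardware iSCSI adapters where each vmhba gets its own bound VMkernel port:

```shell
# Show the current iSCSI port bindings:
esxcli iscsi networkportal list

# Bind one VMkernel port to each hardware iSCSI adapter
# (vmhba64/vmhba65 and vmk1/vmk2 are placeholder names):
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk2

# Rescan so new paths are discovered:
esxcli storage core adapter rescan --all

# Optionally set Round Robin on a LUN so I/O is spread across paths
# (naa.xxxxxxxx is a placeholder device identifier):
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
```

Each bound vmk should sit in a port group with exactly one active uplink (the others set to unused, not standby), which matters for port binding to work cleanly.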

Davoud Teimouri - https://www.teimouri.net - Twitter: @davoud_teimouri Facebook: https://www.facebook.com/teimouri.net/