Hello there,
I have a stretched cluster deployment with 4+4 nodes, each with two 10 Gb NICs.
By design, even when using load balancing based on physical NIC load, each node can use only one card at a time: a single MAC can't be active on both NICs.
I don't want to use link aggregation like LACP; it adds unwanted overhead and has been seen here to cause very strange issues.
So I created another vmknic for vSAN traffic, on a different subnet and VLAN.
I adjusted the policies on my vDS so that vmk1 uses uplink1 (uplink2 is "unused") and vmk2 uses uplink2 (uplink1 is "unused").
All my ESXi hosts have vSAN enabled on both vmk1 (192.168.0.0/24, VLAN 1) and vmk2 (192.168.1.0/24, VLAN 2).
VLAN 1 and VLAN 2 are present on both sites; replication is intended to stay L2.
Connectivity between ESXi hosts over both vmk1 and vmk2 is OK.
But so far, I can't get traffic to go to both vmknics at the same time, and I don't understand why.
It seems possible (https://webcache.googleusercontent.com/search?q=cache:rd2gFU3NDQgJ:https://vtricks.com/working-with-... ) (sorry for the webcache link, but the main site is not working ATM), but not easy to do, and possibly buggy (see the red lines at the end).
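For reference, here is roughly how the two vmknics were tagged for vSAN from the ESXi shell. The interface names (vmk1/vmk2) match my setup above, and the ping target IPs are just examples; adjust to your own hosts. These commands must be run on each ESXi host.

```shell
# Tag both vmknics for vSAN traffic
# (vmk1/vmk2 and the subnets come from my setup described above)
esxcli vsan network ip add -i vmk1
esxcli vsan network ip add -i vmk2

# Confirm both interfaces are registered for vSAN traffic
esxcli vsan network list

# Check L2 connectivity on each vSAN subnet separately
# (target IPs below are examples; use a peer host's vmknic IPs)
vmkping -I vmk1 192.168.0.12
vmkping -I vmk2 192.168.1.12
```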
Thanks for any ideas.
Hi,
please check your configuration and the notes in this KB: VMware Knowledge Base
I also recommend reading https://storagehub.vmware.com/t/vmware-vsan/vmware-r-vsan-tm-network-design/advanced-nic-teaming/
Please note that vSAN has no load-balancing mechanism to differentiate between multiple vmknics, so the vSAN I/O path chosen is not deterministic across physical NICs.
A simple I/O test performed in our labs:
–120 VMs
–70:30 read/write ratio
–64K block size
–four-node all-flash vSAN cluster
It can be clearly seen that vSAN makes no attempt to balance the traffic.
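If you want to observe this on your own hosts, a simple way is to watch per-uplink throughput while vSAN is under load. This is a sketch, not from the lab test above; run it in an SSH session on an ESXi host:

```shell
# Launch esxtop, then press 'n' for the network view; compare the
# MbTX/s and MbRX/s columns per vmnic. Under vSAN load you will
# typically see nearly all traffic on a single physical NIC.
esxtop

# Cross-check which vmknics vSAN has registered. Both can be listed,
# even though the I/O path chosen between them is not deterministic.
esxcli vsan network list
```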
Hello,
Thank you for sharing this. Like you, I also managed to get a small part of the vSAN traffic going via the second vmknic.
I wonder if any multipathing implementation is coming for vSAN that would let us use multiple vmknics with load balancing. I want to avoid link aggregation at all costs!
Regards