Bayu,
These will be the network loads for each set of vmnics:
| Port Type and Port # | vmnic# | Physical switch | Networks |
|---|---|---|---|
| Copper port #1 | vmnic2 | Enterasys 7100 | vMotion, Fault Tolerance, vSAN, and NFS storage |
| Copper port #2 | vmnic4 | Enterasys 7100 | vMotion, Fault Tolerance, vSAN, and NFS storage |
| Fibre port #1 | vmnic0 | Extreme 670 | 5 different VM networks, ESX management, backup storage |
| Fibre port #2 | vmnic1 | Extreme 670 | 5 different VM networks, ESX management, backup storage |
Separate port groups will be created for each network, with VMkernel ports for vMotion, FT, ESX management, vSAN, and NFS storage (the NFS VMkernel port is not tagged for a specific service).
All ports are 10Gb. I am not planning any link aggregation on the physical ports, only VLAN trunking.
Right now, my main concern is the NFS traffic. I read through the information you pointed me toward, as well as several other sources, including the Wahl Network whitepaper series "NFS on vSphere" (Part 2: Technical Deep Dive on Same Subnet Storage Traffic).
If I stick strictly with LBT (route based on physical NIC load) and no link aggregation, I understand that NFS traffic will not be balanced across the two physical host ports to the 7100 switch stack. Considering we have two 10Gb copper ports on each of the four new hosts, and a 4x10Gb LAG from the 7100 to the storage, I am not sure this is a problem, since each host has 20Gb of aggregate bandwidth to the 7100 stack. I am assuming that the distributed vSwitch will move workloads other than NFS to the other vmnic if one becomes saturated. Is this a correct understanding?
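To sanity-check my reasoning, here is a back-of-envelope sketch. It assumes VMware's documented default LBT trigger (an uplink's mean utilization exceeding 75% over a 30-second window); the traffic figures are hypothetical, not measurements from our environment:

```python
# Back-of-envelope check of when LBT would shift other port groups' traffic
# off the uplink carrying NFS. Assumes VMware's documented default trigger:
# mean uplink utilization > 75% over a 30-second window.
UPLINK_GBPS = 10          # each copper uplink to the 7100 stack
LBT_THRESHOLD = 0.75      # default LBT rebalance threshold (assumption)

def lbt_would_rebalance(load_gbps: float) -> bool:
    """True if this uplink's load would trigger LBT to move *other*
    workloads (vMotion, FT, vSAN) to the less-loaded uplink; the NFS
    session itself stays pinned to its current vmnic."""
    return load_gbps / UPLINK_GBPS > LBT_THRESHOLD

# Hypothetical loads on the uplink carrying NFS:
print(lbt_would_rebalance(9.0))  # heavy NFS burst -> True, others move off
print(lbt_would_rebalance(5.0))  # moderate load   -> False, no rebalance
```

If that matches your understanding, NFS itself tops out at one 10Gb uplink, but LBT should keep vMotion/FT/vSAN from competing with it during bursts.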
Is there any reason I should consider route based on originating virtual port rather than route based on physical NIC load, in your opinion?
I have yet to dig into vSAN, but I have some time, since that is a future use case. I hope this architecture will be sufficient for it as well.
What are your thoughts?
Thanks,