Replying to:
TheKenApp
Contributor

Bayu,

These will be the network loads for each set of vmnics:

Port type and port #   vmnic    Physical switch   Networks
Copper port #1         vmnic2   Enterasys 7100    vMotion, Fault Tolerance, vSAN and NFS storage
Copper port #2         vmnic4   Enterasys 7100    vMotion, Fault Tolerance, vSAN and NFS storage
Fibre port #1          vmnic0   Extreme 670       5 different VM networks, ESX management, backup storage
Fibre port #2          vmnic1   Extreme 670       5 different VM networks, ESX management, backup storage

Separate port groups will be made for each network, with vmkernel ports created for vMotion, FT, ESX management, vSAN, and NFS storage (not tied to a specific service).
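To keep the layout straight, here is the plan written out as data (just a sketch for discussion; the port group names are placeholders, not actual dvSwitch object names):

```python
# Planned port-group -> uplink mapping on the distributed vSwitch.
# Port group names are placeholders; uplink/vmnic assignments are from the
# table above. "vmkernel" marks where a vmkernel port will be created.
COPPER = ["vmnic2", "vmnic4"]   # to the Enterasys 7100 stack
FIBRE = ["vmnic0", "vmnic1"]    # to the Extreme stack

port_groups = {
    "vMotion":     {"uplinks": COPPER, "vmkernel": True},
    "FT":          {"uplinks": COPPER, "vmkernel": True},
    "vSAN":        {"uplinks": COPPER, "vmkernel": True},
    "NFS":         {"uplinks": COPPER, "vmkernel": True},  # not tied to a service
    "Management":  {"uplinks": FIBRE,  "vmkernel": True},
    "Backup":      {"uplinks": FIBRE,  "vmkernel": False},
    "VM-Networks": {"uplinks": FIBRE,  "vmkernel": False},  # 5 VM port groups
}

# Sanity check: storage/vMotion/FT traffic stays on copper,
# VM, management and backup traffic on fibre.
for name, pg in port_groups.items():
    assert pg["uplinks"] in (COPPER, FIBRE), name
print("layout consistent")
```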

All ports are 10Gb. I am not planning any link aggregation on the physical ports, only VLAN trunking.

Right now, my concern is the NFS traffic. I read through the information you pointed me towards, as well as several other sources, including the "NFS on vSphere" series on Wahl Network (Part 2 - Technical Deep Dive on Same Subnet Storage Traffic).

If I stick strictly with LBT (route based on physical NIC load) and no link aggregation, I realize that NFS traffic will not be shared across the two physical host ports to the 7100 switch stack. Given two 10Gb copper ports on each of the four new hosts, and a 4x10Gb LAG from the 7100 to the storage, I am not sure this is a real issue, since each host has 20Gb of bandwidth to the 7100 stack. I am assuming that the distributed vSwitch will move workloads other than NFS to the other vmnic if one becomes saturated. Is that a correct understanding?
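For my own sanity, here is my mental model of the LBT rebalance decision as a sketch (the names and structure are illustrative, not VMware's implementation; the 75% threshold over a 30-second window is VMware's documented default for "route based on physical NIC load"):

```python
# Sketch of the "route based on physical NIC load" (LBT) rebalance decision.
# Illustrative only -- VMware's real implementation differs in detail.
# Documented default: an uplink counts as saturated when its mean utilization
# exceeds 75% over a 30-second window; LBT may then move virtual ports
# (e.g. vMotion or FT traffic) to a less-loaded uplink. An NFS vmkernel
# port's own traffic still rides a single uplink at any given time.

LINK_SPEED_GBPS = 10          # each copper port to the 7100 stack is 10Gb
SATURATION_THRESHOLD = 0.75   # LBT default: 75% mean utilization

def rebalance(uplink_load_gbps: dict) -> "tuple | None":
    """If any uplink is saturated, return (from_uplink, to_uplink)."""
    saturated = {u: g for u, g in uplink_load_gbps.items()
                 if g / LINK_SPEED_GBPS > SATURATION_THRESHOLD}
    if not saturated:
        return None
    busiest = max(saturated, key=saturated.get)
    idlest = min(uplink_load_gbps, key=uplink_load_gbps.get)
    return (busiest, idlest) if busiest != idlest else None

# Example: NFS pushes ~8Gb on vmnic2, so other port groups get moved to vmnic4.
print(rebalance({"vmnic2": 8.0, "vmnic4": 2.0}))  # ('vmnic2', 'vmnic4')
print(rebalance({"vmnic2": 5.0, "vmnic4": 5.0}))  # None
```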

Is there any reason I should consider route based on originating virtual port, rather than route based on physical NIC load, in your opinion?

I have yet to dig into vSAN, but I have some time since that is a future use case. I hope this architecture will be sufficient for it as well.

What are your thoughts?

Thanks,
