ikt
Enthusiast

Team ESXi 6.5 NICs with HPE 1920S and 1910 switches

Hi,

I'm struggling to get NIC teaming (aggregation) working between an ESXi 6.5 host and HPE switches.

Here are my vSwitch settings (the interfaces inherit their settings from the vSwitch); a CLI sketch for checking them follows the list.

Link discovery, mode: Listen

Link discovery, protocol: Cisco discovery protocol

Security, Promiscuous mode: Reject

Security, MAC address changes: Reject

Security, Forged transmits: Reject

NIC teaming, Load balancing: Route based on IP hash

NIC teaming, Network failover detection: Link status only

NIC teaming, Notify switches: Yes

NIC teaming, Failback: Yes

NIC teaming, Failover order:

  vmnic4 - 1000Mbps - Active

  vmnic5 - 1000Mbps - Active

  vmnic6 - 1000Mbps - Active

  vmnic7 - 1000Mbps - Active

Traffic shaping: Disabled.
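For reference, the same policy can be checked and set from the ESXi shell. A minimal sketch, assuming the vSwitch on my host is named vSwitch1 (adjust the name to match):

    # Show the current teaming/failover policy of the vSwitch
    esxcli network vswitch standard policy failover get -v vSwitch1

    # Set load balancing to "Route based on IP hash" (iphash);
    # the other accepted values are portid, mac and explicit
    esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash

    # List the vSwitches and their uplinks to verify vmnic4-vmnic7 are attached
    esxcli network vswitch standard list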

I think my VMware settings are more or less correct, or are they?

However, I need to know more about how to set up the HPE switches, e.g. whether or not LACP should be used...

First HP switch: HPE 1920S-24G - JL381A

Menu item: Trunks / Configuration

I have configured three trunks with 4 ports each, all with a number of tagged VLANs.

Here are the default trunk settings:

Admin mode: Enabled

STP Mode: Disabled

Static Mode: Enabled (should it be dynamic to allow LACP?)

Load balance (six choices; what do I select here? See my note after the list):

Source MAC, VLAN, Ethertype, Incoming port

Destination MAC, VLAN, Ethertype, Incoming port

Source+Destination MAC, VLAN, Ethertype, Incoming port

Source IP and Source TCP-UDP Port fields

Destination IP and Destination TCP-UDP Port fields

Source+Destination IP and Source+Destination TCP-UDP Port fields
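From what I have read, ESXi's 'Route based on IP hash' hashes only the source and destination IP addresses, so the closest match seems to be the last choice (Source+Destination IP and Source+Destination TCP-UDP Port fields), though I'm unsure whether the extra port fields cause a mismatch with ESXi's hash. My understanding of how ESXi picks the uplink (the exact byte handling is my assumption):

    # uplink_index = (src_ip XOR dst_ip) MOD number_of_uplinks
    # Example with my 4 uplinks and two hosts in the same subnet,
    # 192.168.1.10 and 192.168.1.50 (only the last octets differ):
    #   10 XOR 50 = 56
    #   56 MOD 4  = 0   -> this flow always uses the first uplink (vmnic4)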

Second switch:  HPE V1910-24G

Menu item: Network / Link aggregation

I have configured three trunks with 4 ports each, all with a number of tagged VLANs.

The default setting for each trunk is:

Aggregation interface type: Static

(should I enable LACP for the ports here?)

If I should use LACP, there is a separate menu item where I can select which switch ports to enable for LACP, and set the priority between the system and the LACP ports.

Menu item: VLAN / Modify Port:

I selected all trunks and changed their ports' 'Link type' from 'Access' to 'Trunk'.
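For reference, I believe this is roughly what the web UI configures for one static aggregation, expressed as Comware CLI (the interface numbers and the VLAN IDs 10/20 are just examples; for LACP one would apparently add 'link-aggregation mode dynamic' under the Bridge-Aggregation interface):

    interface Bridge-Aggregation 1
     port link-type trunk
     port trunk permit vlan 10 20
    interface GigabitEthernet 1/0/1
     port link-aggregation group 1
    interface GigabitEthernet 1/0/2
     port link-aggregation group 1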

I have experimented a lot with this and have ended up with either no traffic flow at all, or traffic via only one of the interfaces...

Thanks a lot for any comments on which settings are correct, or whether I need to do something else...

Tor

daphnissov
Immortal

LACP is only possible when using a virtual distributed switch (vDS). That aside, unless you have a very specific requirement and also know in a detailed fashion your network traffic flows, you should use load-based teaming (LBT) instead with a vDS and forget about LAGs.

a_p_
Leadership

Please take a look at https://kb.vmware.com/s/article/2006129 for pros and cons as well as a link to a sample configuration with HPE switches.

Based on this you can decide whether it makes sense to use it, or rather to go with a switch-independent configuration, which provides redundancy in case of e.g. a switch failure/reboot.


André

scott28tt
VMware Employee

Moderator: Moved to vSphere vNetwork


-----

Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity).
VMware Training & Certification blog
ikt
Enthusiast

Thanks for the comments. However, I am not in a position to upgrade so that I get a vDS now, and I have understood that I cannot use LACP unless I use a vDS.

I read the linked articles for HP and Cisco switches, but they used LACP too...

So isn't there really ANY possibility of doing NIC teaming/aggregation between my ESXi 6.5 vSwitch and the HPE switches with the properties I mentioned in my original post?

If not, can you suggest other switch model(s) where this is possible?

Thanks again

Tor

daphnissov
Immortal

ESXi has native teaming built-in and there is nothing else you need to do. Use of a LAG is not necessary if you want connection sharing and failover abilities.

a_p_
Leadership

It's basically not a limitation of the hardware switches, but of the vSphere software, so other switch models won't help.

Anyway, with multiple uplinks on a Standard vSwitch, the default policy (Route based on originating virtual port ID) assigns VMs to the uplinks in a round-robin fashion once they are powered on.
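If you switched the vSwitch to IP hash while testing, you can revert to the default from the ESXi shell. A minimal sketch, assuming the vSwitch is named vSwitch1:

    # Revert to the default "Route based on originating virtual port ID"
    esxcli network vswitch standard policy failover set -v vSwitch1 -l portid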


André

ikt
Enthusiast

Really, so I don't need to do anything on the switch side, just connect the ESXi teamed NICs to ordinary switch ports, and there will be no looping problems or the like?

That is really good news for me; I have always thought that teaming required configuration on both sides of the link...

regards Tor

daphnissov
Immortal

Yes, that's correct. ESXi will not create loops regardless of the teaming configuration, because a vSwitch never forwards frames received on one uplink out through another uplink.

ikt
Enthusiast

Great.  Thanks for that confirmation. 

However, do I really get an aggregated-bandwidth benefit from this solution (since it is 'active' from only one side of the link), or does it only give me a failover benefit?

best regards

Tor

a_p_
Leadership

With the default configuration the VMs are distributed across all uplinks (i.e. no aggregation in any way; each VM's traffic uses a single uplink at a time). This means that multiple uplinks are active, and in case of an uplink failure, the VMs assigned to that uplink will be transparently moved to another one. If needed, you can manage the behavior in the Teaming and Failover settings for the vSwitch and/or the port groups.
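As a sketch, assuming a port group named "VM Network", such an override can also be set from the ESXi shell:

    # Show the effective failover policy of the port group
    esxcli network vswitch standard portgroup policy failover get -p "VM Network"

    # Override the load balancing policy for just this port group
    esxcli network vswitch standard portgroup policy failover set -p "VM Network" -l portid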

André
