VMware Cloud Community
HRossvoll
Contributor

HPE ML350 Gen9 4-port 331i Adapter - NIC teaming issue

Setting up a single cable between switch and the server I have no slowness issues or anything.

However, when I set up a standard LAG and connect two or more cables, I immediately lose connection to the server.

The odd thing is that when I run the management network test from the console, it sometimes succeeds on all tests, sometimes can ping only 1 of the 2 DNS servers, and sometimes fails all tests.

I've seen several mentions of previous issues with the ntg3 module, but nothing specifically like this. I have also tried disabling the ntg3 module and running tg3 instead; no change.

Running the same setup on a different server with an Intel chipset, with the same configuration in the switch and ESXi, there are no issues.

Any thoughts?

1 Solution

Accepted Solutions
HRossvoll
Contributor

Whoops, my bad 🙂

Seems like I had forgotten to double-check the "Management Network" port group's default settings.

Once I set those to "inherit from vSwitch" (everything regarding security and NIC teaming), things started to work.

Discovered this when I decided to move a VM over to this new host and enable the LAG again.

Immediately I lost management connectivity, but connectivity to the VM kept running.

The issue was that the management network port group by default had vmnic0 set as the only active NIC; vmnic1 and 2 were set as standby.

Once all 3 were set to active (via inherit from vSwitch), voilà.
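For anyone hitting the same thing, the fix can also be sketched from the ESXi shell. This is a sketch, not verified on this exact host; it assumes the default port group name "Management Network" and uplinks vmnic0-2:

```shell
# Reset the port group's teaming override so it inherits from the vSwitch,
# which makes all vSwitch uplinks active for the management port group.
esxcli network vswitch standard portgroup policy failover set \
    -p "Management Network" --use-vswitch

# Alternatively, set the active uplinks on the port group explicitly:
# esxcli network vswitch standard portgroup policy failover set \
#     -p "Management Network" --active-uplinks=vmnic0,vmnic1,vmnic2
```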

BR
Helge


4 Replies
HRossvoll
Contributor

Forgot to mention, the chipset on the NIC is the BCM5719.

a_p_
Leadership

Welcome to the Community,

Running the same setup on a different server ...

Please provide the exact details of the setup, i.e. the physical as well as the virtual configuration.

Are you aware of https://kb.vmware.com/s/article/1004048?


André

HRossvoll
Contributor

Pretty simple and straight forward setup on standalone ESXi hosts.

ESXi side:

vSwitch0

     MTU - 1500

     Uplink1-3 - vmnic0-2

     NIC teaming

          Load balancing - Route based on IP hash

          Network failover detection - Link status only

          Notify switches - Yes

          Failback - Yes

          Failover order - vmnic0, 1, 2
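The vSwitch settings above can be verified from the ESXi shell. A sketch, assuming vSwitch0 and the default "Management Network" port group name:

```shell
# Show the vSwitch-level teaming policy (load balancing, active uplinks).
esxcli network vswitch standard policy failover get -v vSwitch0

# Show the port-group-level policy, which can silently override the vSwitch
# (this is where a port group may still have only one active uplink).
esxcli network vswitch standard portgroup policy failover get -p "Management Network"
```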

Switch side (HPE/Aruba 2530)

     trunk 3-5 trk5 trunk

then per VLAN I need I have: tagged trk5

"

show trunks

Load Balancing Method:  L3-based (default)

"
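For completeness, the switch side described above would look roughly like this in an ArubaOS-Switch (ProCurve 2530) config; the trunk name trk5 matches the description, and VLAN 10 is only an example:

```
trunk 3-5 trk5 trunk
vlan 10
   tagged trk5
   exit
```

The trailing `trunk` keyword creates a static (non-LACP) LAG, which is what "Route based on IP hash" on a standard vSwitch expects.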

The same switch has a second ESXi host running the exact same ESXi version, with the same setup on both the ESXi and switch side.

The only difference is the NIC: the problematic host uses an HPE 4×1G NIC with a Broadcom chipset, while the non-problematic one has an Intel I350-T2 NIC.

-Helge
