vSwitch issue with nPAR

Here's a scenario we recently encountered and how we addressed it. I'm writing it up here in the hope of helping future users.


Environment:

  • Dell PowerEdge M710HD
  • Two Broadcom 57712-k NICs
  • PowerConnect M8024-k 10GbE switches
  • ESXi 5.x Enterprise Plus

Each port on the Broadcom card can be partitioned into four partitions, for a total of eight partitions across the two switch ports:

  • Odd-numbered partitions become odd-numbered vmnics, connected to switch-01, port tengigabitethernet 1/0/1
  • Even-numbered partitions become even-numbered vmnics, connected to switch-02, port tengigabitethernet 1/0/1
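To make the odd/even layout concrete, here is a tiny Python sketch of the mapping just described (the function names and the odd/even rule simply restate this environment; nothing here is an ESXi or switch API):

```python
# Toy restatement of the NPAR layout above (illustrative names only).
# Odd-numbered vmnics uplink to switch-01, even-numbered to switch-02.

def vmnic_to_switch(vmnic: int) -> str:
    """Which physical switch a given vmnic's partition uplinks to."""
    return "switch-01" if vmnic % 2 == 1 else "switch-02"

def same_switch_port(vmnic_a: int, vmnic_b: int) -> bool:
    """Two vmnics share one physical switch port when they are partitions
    of the same physical NIC port, i.e. both odd or both even here."""
    return vmnic_a % 2 == vmnic_b % 2

print(same_switch_port(1, 3))  # True  -- vmnic1/vmnic3 share one switch port
print(same_switch_port(0, 3))  # False -- different NICs, different ports
```

This is only a bookkeeping aid for following the failure cases below, where the vmnic pairing turns out to be the deciding factor.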


Host ESXHOST-01 has vSwitch0 (Management Network, vmk0 on VLAN100), assigned vmnic0 and vmnic1.

Port group Management-VLAN100 is created on DVSwitch0, whose uplinks are vmnic2 and vmnic3.

Guest VCENTER-01 on ESXHOST-01 has a single interface in port group Management-VLAN100.

At this point, both the host and guest have working interfaces in VLAN100 with fully working configurations; however, VCENTER-01 cannot reach ESXHOST-01 when both are bound to vmnics on the same Broadcom NIC but different partitions (e.g. vmnic1 and vmnic3). When using partitions on different Broadcom NICs (e.g. vmnic0 and vmnic3) the connection is successful.

This occurs because the partitions behave as separate physical links that share a single physical switch port. When the guest issues an ARP request for the ESX host, the switch floods that broadcast out every port EXCEPT the one on which it arrived, so a partition sharing the same switch port never sees the lookup request. When the partitions are on different physical NICs, and therefore on different physical switch ports, the ARP broadcast is received.
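The behavior can be illustrated with a toy Python model of a switch that floods broadcasts with split horizon (all names here - the class, the port labels - are illustrative; for simplicity both physical ports hang off one toy switch, whereas in our setup the even partitions uplink to a second switch):

```python
# Toy model of split-horizon flooding: a switch floods a broadcast frame
# out every port EXCEPT the ingress port. Two NPAR partitions behind the
# same physical switch port therefore never see each other's ARP requests.

class ToySwitch:
    def __init__(self):
        # port name -> links attached to it (NPAR partitions share a port)
        self.ports = {}

    def attach(self, port, link):
        self.ports.setdefault(port, []).append(link)

    def flood(self, ingress_port):
        """Every link that receives a broadcast sent in on ingress_port."""
        received = []
        for port, links in self.ports.items():
            if port == ingress_port:
                continue  # split horizon: never flood back out the ingress port
            received.extend(links)
        return received

sw = ToySwitch()
# vmnic1 and vmnic3 are partitions of one NIC port -> same switch port
sw.attach("te1/0/1", "vmnic1")
sw.attach("te1/0/1", "vmnic3")
# vmnic0 sits on a different physical NIC port
sw.attach("te1/0/2", "vmnic0")

# ARP broadcast from the guest behind vmnic3 (ingress te1/0/1):
print(sw.flood("te1/0/1"))  # ['vmnic0'] -- vmnic1 never sees it
# A broadcast arriving on te1/0/2 does reach both partitions:
print(sw.flood("te1/0/2"))  # ['vmnic1', 'vmnic3']
```

The switch is behaving exactly as an 802.1D bridge should; the surprise is only that two "links" it cannot distinguish are hiding behind one port.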


There are two workarounds:

  1. Place the vCenter server (and/or any other guest used to monitor ESX hosts) on a different VLAN than the ESX hosts. This forces an extra routing hop, which directs the traffic out of the switch port and back in.
  2. Add a VM network to vSwitch0 on each host and place the VMs there instead of on the distributed switch (or any other vSwitch), which lets vCenter reach the host within the virtual switch.

We selected solution #2. It requires one extra step when defining the management network: adding a VM network alongside the vmkernel port.
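A toy sketch of why workaround #2 helps, under the assumption that a standard vSwitch delivers frames between its own ports without touching the uplink (illustrative names only, not the actual ESXi forwarding path):

```python
# Toy model of workaround #2: when the management vmkernel port and the
# VM's port group live on the same vSwitch, the vSwitch forwards frames
# between them internally, so the broadcast never has to hairpin through
# the shared physical switch port.

class ToyVSwitch:
    def __init__(self, name):
        self.name = name
        self.ports = []  # vmkernel and VM ports attached to this vSwitch

    def attach(self, port):
        self.ports.append(port)

    def broadcast(self, sender):
        """A broadcast from one port reaches every other port on the same
        vSwitch locally, without using the physical uplink."""
        return [p for p in self.ports if p != sender]

vswitch0 = ToyVSwitch("vSwitch0")
vswitch0.attach("vmk0")        # management vmkernel port
vswitch0.attach("VCENTER-01")  # VM placed in a VM network on vSwitch0

print(vswitch0.broadcast("VCENTER-01"))  # ['vmk0'] -- ARP reaches the host
```

Because the traffic stays inside the host, the split-horizon behavior of the upstream switch never comes into play.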


This issue is not limited to traffic between guests and ESX hosts. Essentially, two guests on a single ESX host would experience the same issue if:

  • Both guests are on the same VLAN


  • Guests are connected to different vSwitches


  • The vmnics for the different vSwitches are partitions on the same physical Broadcom card

Few administrators would create multiple virtual switches carrying the same VLAN, so this scenario is less likely to come up.

4 Replies

Thanks for the post; it saved me some research time. I noticed this behavior this week as I was setting up our blades and getting ready to start deploying VMs.


Almost three years later, I stumbled upon the very same issue on brand-new HP DL380 Gen9 servers with FlexFabric 534FLR-SFP+ NICs (HP-rebranded Broadcom 57810S, now owned by QLogic).

The funny thing is that I chose this NIC on purpose, because it seemed well supported and free of bleeding-edge side effects.

Neither of mbergeron's suggested workarounds works well for us, because 1) we don't want to add complexity and degrade performance by going through external routing, and 2) several VMs need to communicate with the vCenter as well as with other VMs.

Is anyone else using one of these NPAR NICs? Did you somehow manage to avoid or solve this issue, or is this a hardware bug that cannot be fixed?


I know this is really old but I wanted to thank you for posting this! I think I have been having the same issue for quite a long time and I think you have identified the root cause.

For those that might be reading this I would like to list the symptoms of my situation:

We have several Dell M620 ESXi hosts running NPAR on 4 physical adapters, for a total of 24 vmnics per host. In our case, the vCenter VM would begin to vMotion from the source ESXi host to the destination ESXi host and fail at about 65%. The destination host would show as "disconnected" in vCenter immediately after the failure, and the vMotion event would be listed as failed, even though the vCenter VM had actually completed the migration to the destination host. From the CLI of the destination host, pinging vCenter got no response, even though the VM was running on that very host and both were on the same network. A big tell-tale was the "(incomplete)" entries for the vCenter VM in the destination host's ARP table. From the vCenter VM, I could not ping the ESXi host it was running on either, yet it could ping and ARP every other ESXi host just fine. We have been using a workaround similar to option #2 described here, and now at least I understand why it works.




Hello mbergeron and others,

Have you found a non-workaround solution to this? I'm considering buying a QLogic 57810 Dual Port 10Gb NIC to use nPAR and have done some research into this.

It looks like built-in functionality to resolve this issue has existed since 2011 - see the following PPT, slide 18: the QLogic eSwitch should allow traffic to flow between PFs.


Also see the PDF below.


So my question now is why this feature didn't resolve your issues. I can think of the following possibilities:

- Broadcom NICs don't have this functionality - though maybe they do now that they're owned by QLogic?

- Perhaps the eSwitch doesn't pass ARP packets properly - my biggest concern.

