VMware Cloud Community
sToRmInG
Contributor

ESXCLI shows Receive Packets Dropped on vmk0. Is this normal?

Hi all

We are in the middle of troubleshooting connectivity issues and wanted to make sure that our ESXi servers are not the problem.

Our hosts are Dell XC730 series servers and we are using a LACP LAG that contains both 10 Gbit NICs. The uptime is approx. 3 months.

Now we are seeing a lot of "Receive Packets Dropped" when checking vmk0 (around 400,000 per ESXi host). However, there are no drops on the NICs themselves.

My question is whether this is normal or if we should be worried about it.

In our lab environment we are seeing similar behaviour (about 200,000 drops), but we are not seeing any connectivity issues there.

Thanks,

Manuel


4 Replies
MKguy
Virtuoso

Where are these drops reported? These may be false positives:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=20529...

Do you see any DRPRX in the esxtop network view?

What ESXi version are you running, and what are your physical NIC type, firmware, and driver versions? Please provide the output of the following commands from the ESXi shell to give us more information:

# vmware -vl

# esxcli network nic list

# esxcli network nic get -n vmnicX


Check the virtual port packet counters of your vmk0 in vsish as described here and post the output as well:

https://www.reddit.com/r/vmware/comments/3l306b/esxi_6_and_receive_packets_dropped_counter/
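
For reference, the pattern from that thread looks roughly like this (a sketch, not verbatim; DvsPortset-0 stands for whatever portset name esxtop shows under DNAME, and the numeric port ID is the PORT-ID esxtop lists for vmk0):

# vsish -e ls /net/portsets/

# vsish -e get /net/portsets/DvsPortset-0/ports/&lt;PORT-ID&gt;/outputStats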


Even if there were drops on your vmk0, this should technically have no impact on the separate VM network traffic.

-- http://alpacapowered.wordpress.com
sToRmInG
Contributor

Hi MKguy

The %DRPRX value in esxtop is 0.00% on vmk0.

Here is the additional information you requested:

vmware -vl

VMware ESXi 6.0.0 build-3568940

VMware ESXi 6.0.0 Update 1

esxcli network nic list

Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description

------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  ---------------------------------------------------------------

vmnic0  0000:01:00.0  ixgbe   Up            Down             0  Half    24:6e:96:08:31:c8  1500  Intel Corporation 82599 10 Gigabit Dual Port Network Connection

vmnic1  0000:01:00.1  ixgbe   Up            Down             0  Half    24:6e:96:08:31:ca  1500  Intel Corporation 82599 10 Gigabit Dual Port Network Connection

vmnic2  0000:0d:00.0  igb     Up            Down             0  Half    24:6e:96:08:31:cc  1500  Intel Corporation I350 Gigabit Network Connection

vmnic3  0000:0d:00.1  igb     Up            Down             0  Half    24:6e:96:08:31:cd  1500  Intel Corporation I350 Gigabit Network Connection

vmnic4  0000:8a:00.0  ixgbe   Up            Up           10000  Full    a0:36:9f:ac:ce:00  1500  Intel Corporation Ethernet 10G 2P X520 Adapter

vmnic5  0000:8a:00.1  ixgbe   Up            Up           10000  Full    a0:36:9f:ac:ce:02  1500  Intel Corporation Ethernet 10G 2P X520 Adapter

esxcli network nic get -n vmnic4

   Advertised Auto Negotiation: false

   Advertised Link Modes: 10000baseT/Full

   Auto Negotiation: false

   Cable Type:

   Current Message Level: 7

   Driver Info:

         Bus Info: 0000:8a:00.0

         Driver: ixgbe

         Firmware Version: 0x800007f5, 17.5.10

         Version: 3.21.4iov

   Link Detected: true

   Link Status: Up

   Name: vmnic4

   PHYAddress: 0

   Pause Autonegotiate: false

   Pause RX: true

   Pause TX: true

   Supported Ports: FIBRE

   Supports Auto Negotiation: false

   Supports Pause: true

   Supports Wakeon: false

   Transceiver: external

   Wakeon: None

esxcli network nic get -n vmnic5

   Advertised Auto Negotiation: false

   Advertised Link Modes: 10000baseT/Full

   Auto Negotiation: false

   Cable Type:

   Current Message Level: 7

   Driver Info:

         Bus Info: 0000:8a:00.1

         Driver: ixgbe

         Firmware Version: 0x800007f5, 17.5.10

         Version: 3.21.4iov

   Link Detected: true

   Link Status: Up

   Name: vmnic5

   PHYAddress: 0

   Pause Autonegotiate: false

   Pause RX: true

   Pause TX: true

   Supported Ports: FIBRE

   Supports Auto Negotiation: false

   Supports Pause: true

   Supports Wakeon: false

   Transceiver: external

   Wakeon: None

Additionally, part of esxtop's network view:

esxtop -> n

5:52:19pm up 97 days  5:47, 1626 worlds, 86 VMs, 214 vCPUs; CPU load average: 0.14, 0.14, 0.14

   PORT-ID              USED-BY  TEAM-PNIC DNAME              PKTTX/s  MbTX/s   PSZTX    PKTRX/s  MbRX/s   PSZRX %DRPTX %DRPRX

  33554433           Management        n/a vSwitch0              0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  50331649           Management        n/a vSwitchNutanix        0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  50331650                 vmk1       void vSwitchNutanix     1331.33    4.58  451.00    1031.88    0.62   78.00   0.00   0.00

  50331652 2774254:NTNX-55BWGD2       void vSwitchNutanix     1031.88    0.58   73.00    1352.31    4.59  444.00   0.00   0.00

  67108865           Management        n/a DvsPortset-0          0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  67108866        LACP_MgmtPort        n/a DvsPortset-0          0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  67108867                 lag1        n/a DvsPortset-0          0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  67108868                 vmk0      lag1* DvsPortset-0        337.60    6.33 2457.00     228.88    0.54  308.00   0.00   0.00

  67108869               vmnic4          - DvsPortset-0       1297.00    9.94 1004.00    3496.17   30.22 1132.00   0.00   0.00

  67108870     Shadow of vmnic4        n/a DvsPortset-0          0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  67108872               vmnic5          - DvsPortset-0       1033.78    9.39 1190.00    1457.21    8.05  724.00   0.00   0.00

  67108873     Shadow of vmnic5        n/a DvsPortset-0          0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

  67108874 2774254:NTNX-55BWGD2      lag1* DvsPortset-0       2223.97   19.39 1142.00    2464.29   41.41 2202.00   0.00   0.00

  67111572 16432447:troubleshoo      lag1* DvsPortset-0          0.00    0.00    0.00       0.00    0.00    0.00   0.00   0.00

And the actual drops on vmk0 (PORT-ID 67108868):

esxcli network port stats get -p 67108868

Packet statistics for port 67108868

   Packets received: 349099239

   Packets sent: 440281468

   Bytes received: 1533780114625

   Bytes sent: 324404970501

   Broadcast packets received: 1214237

   Broadcast packets sent: 7834

   Multicast packets received: 133

   Multicast packets sent: 8

   Unicast packets received: 347884869

   Unicast packets sent: 440273626

   Receive packets dropped: 430499

   Transmit packets dropped: 27

Thanks,

Manuel

*EDIT*

vsish output for vmk0:

vsish -e get /net/portsets/DvsPortset-0/ports/67108868/outputStats

io chain stats {

   starts:2987172879

   resumes:0

   inserts:0

   removes:0

   errors:0

   pktsStarted:3264226360

   pktsPassed:3263988094

   pktsDropped:238266

   pktsCloned:3111543

   functions:

        FILTER <Team_ReverseFilterPerList@<None:0x4301a989f220>

                pktsStarted:362298095

                pktsPassed:362299795

                pktsDropped:18446744073709549916

                pktsFiltered:0

                pktsQueued:0

                pktsFaulted:0

                pktsInjected:0

                pktErrors:0

        DVFILTER_DVPORT_OUT_GUEST <dvfilter-generic-vmware:0x43075553cb01>

                pktsStarted:3264174404

                pktsPassed:3264218050

                pktsDropped:18446744073709507970

                pktsFiltered:0

                pktsQueued:0

                pktsFaulted:0

                pktsInjected:0

                pktErrors:0

        DVFILTER_VNIC_OUT_GUEST <ESXi-Firewall:0x43075553e521>

                pktsStarted:3264203826

                pktsPassed:3263973664

                pktsDropped:230162

                pktsFiltered:0

                pktsQueued:0

                pktsFaulted:0

                pktsInjected:0

                pktErrors:0

        TERMINAL <vmk0:0x4307994547c0>

                pktsStarted:3263954386

                pktsPassed:0

                pktsDropped:3263954386

                no client stats maintained

}

MKguy
Virtuoso
Accepted Solution

Try this recently released patch, which fixes a false drop-reporting issue. While the KB mentions performance charts, it might help with your issue as well:

VMware ESXi 6.0, Patch ESXi-6.0.0-20160804001-standard (2145667) | VMware KB
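
If/when you get a maintenance window, applying it is the usual offline-bundle routine, something like the following (a sketch; the datastore path and bundle file name are placeholders for wherever you upload the patch, and the host needs maintenance mode plus a reboot):

# esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-201608001.zip -p ESXi-6.0.0-20160804001-standard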

A newer ixgbe driver version 4.4.1 is available as well:

https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXI60-INTEL-IXGBE-441&productId=491
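
If the driver download comes as an offline bundle zip (they usually do), it installs the same way (file name below is a placeholder for whatever the download page gives you):

# esxcli software vib install -d /vmfs/volumes/datastore1/ixgbe-4.4.1-offline-bundle.zip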

You should check for firmware updates while you're at it too.

Doing some math with the port stat numbers shows you have 347,884,869 (unicast) + 1,214,237 (broadcast) + 133 (multicast) = 349,099,239 RX packets total.

Meanwhile you have 430,499 dropped RX packets, which works out to a drop rate of about 0.12%.
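
Quick sanity check with plain shell arithmetic (same numbers, scaled by 10,000 to get two decimal places out of integer division):

# echo $(( 430499 * 10000 / 349099239 ))
12

i.e. 12/10000 = 0.12%.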

My bet is still rather on a reporting issue or filtering of irrelevant broadcast/multicast traffic, but even if it's real, this shouldn't cause noticeable issues, especially not on VMs, whose traffic is separate from the vmk interface.

-- http://alpacapowered.wordpress.com
sToRmInG
Contributor

Thank you for the quick reply.

Patching isn't possible at the moment, but since this looks like a cosmetic issue we will probably leave it until February next year.

Regarding the NIC driver, we will take it into consideration. :)

Thanks again for your help!
