VMware Cloud Community
TheKenApp
Contributor

esxtop and client performance chart discrepancy for receive packets dropped

I am having difficulty reconciling the data I see at the host level with what I see in the vSphere client. I need this to determine whether a change I made to a VM fixed a specific problem, and to find out whether there are other networking issues.

The VM in question is on an ESXi 5.1 host, and it is running Windows Server 2008 R2.

I upgraded the VM hardware from version 7 to version 9, removed the E1000 NIC, and installed a VMXNET3 NIC. I did this because there were a number of dropped receive packets for this VM, and we had been having issues with the DB that runs on it.

After I made the above changes, this VM is showing a rather large number of dropped receive packets when monitored in real time via the vSphere client. Compared to packets received, this appears to be about 43% loss: average packets received summation = 763, average receive packets dropped summation = 565, so 565 / (763 + 565) ≈ 43%.

However, when I monitor the host this VM is on with esxtop and view the network statistics after a 10 minute run, all values for %DRPRX are 0.00.

I am trying to evaluate the fix I made to this VM, because prior to that fix, I was seeing an average %DRPRX of about 2.65% as well as the dropped packets in the client.

Can someone explain to me why esxtop would show 0.00% receive packets dropped, while at the same time the performance charts in the vSphere client show what I believe to be about 43% loss?

Lastly, within the VMXNET3 device on the VM, the values for Rx Ring #1 Size and Rx Ring #2 Size are blank. Should I be setting these to the default of 1024?

Any help would be appreciated. Thanks,

Ken

5 Replies
rickardnobel
Champion

TheKenApp wrote:

Can someone explain to me why esxtop would show 0.00% receive packets dropped, while at the same time the performance charts in the vSphere client show what I believe to be about 43% loss?

One possible explanation is that esxtop shows "real time" data, i.e. only the values from the latest sampling interval, so you might have looked at esxtop at a moment when no packets were being dropped.
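
To rule out that sampling effect, you could run esxtop in batch mode and log the network counters over a longer window instead of watching interactively. A minimal sketch, using the standard esxtop batch flags (run from the ESXi shell; the output path is just an example):

# -b = batch mode, -d = sampling delay in seconds, -n = number of samples
# 60 samples at 10-second intervals = 10 minutes of data
esxtop -b -d 10 -n 60 > /tmp/esxtop-net.csv

The resulting CSV can be opened in Windows perfmon or a spreadsheet, where you can follow the %DRPRX column for the VM's port across the whole run rather than at a single moment.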

Also, 43% packet loss would indeed be a very high number; if that is accurate, I would guess the VM is not really working?

My VMware blog: www.rickardnobel.se
Murmansk
Contributor

We are seeing a discrepancy between esxtop and the client performance chart too.

We have some machines showing about 150 dropped receive packets in the network performance chart, but esxtop says 0% dropped for the same machines.

I found an article with some information about changing failover settings on distributed switches (http://www.null-byte.org/vmware/random-packet-loss-with-vmware-esxi-5-1-virtual-machines-using-vcni/). We do not use distributed switches, but we ran some tests changing the failover order, and the number of dropped receive packets went down to about 10, then slowly started to rise again.

Even after changing the failover order, there was still a discrepancy between esxtop (%DRPRX = 0.00) and the performance chart.

I looked at KB 1010071 too ("The output of esxtop shows dropped receive packets at the virtual switch"), but we have no dropped packets at the virtual switch, only on the virtual machine.

If we review the information in the guest OS (RHEL 6) with ifconfig, there are no dropped packets. We have one vNIC with two IPv4 addresses, one on eth0 and the second on eth0:1.
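
Besides ifconfig, the vmxnet3 driver's own counters and ring sizes can be checked from inside the guest with ethtool. A quick sketch, assuming the interface is eth0 (the exact counter names vary between vmxnet3 driver versions):

# driver-level receive statistics; look for any drop counters
ethtool -S eth0 | grep -i drop
# current versus maximum RX ring sizes
ethtool -g eth0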

We have upgraded our infrastructure to ESXi 5.1.0, build 1117900, and reinstalled VMware Tools using --clobber-kernel-modules=vmxnet3, but the issue still persists.
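
For reference, that reinstall was done along these lines; a sketch assuming the VMware Tools tarball has already been extracted:

# from the extracted vmware-tools-distrib directory, rebuild the vmxnet3 module
./vmware-install.pl --clobber-kernel-modules=vmxnet3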

Does someone have any clue?

jpgatt
Contributor

Hi, did you manage to solve this problem?

I am banging my head against this exact same problem!

PhillyDubs
Enthusiast

I'm also having this exact same issue. My 5.0 hosts do not exhibit it, only the 5.1 hosts.

VCP5
PhillyDubs
Enthusiast

VMware support came back to me and said it is a known bug in 5.1 that should be fixed in a future release. There is also a KB article for this that looks like it was updated July 24th. Not sure if that is due to me opening the case or if the KB has been out there and I never noticed it:

http://kb.vmware.com/kb/205291


VCP5