VMware Cloud Community
vmk2014
Expert

Packet Loss issues on NIC cards

Hi ALL,

We were experiencing packet drops on our Microsoft Lync servers, and as a result call quality suffered. We followed the KB articles below and the number of packet drops has decreased, but we are still seeing some drops on the servers.


I have found the following VMware KB articles that describe a very similar issue.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=101007...

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=203949...

Any suggestion will be much appreciated.

Thanks

vmk2014

6 Replies
ggautam7741
Enthusiast

We have a similar issue with the Exchange Hub/CAS servers in our environment. A case is open with VMware Tech Support. The initial investigation pointed to implementing http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=203949... which unfortunately did not resolve the issue.

Bleeder
Hot Shot

What version of VMware Tools is on the Lync servers?

Also, there is a bug in the vShield component of VMware Tools that may be related to your issue.  See here: VMware ESXi 5.5 U2 - TcpAckFrequency

vmk2014
Expert

Hi Bleeder,

We are using Lync Server 2013, and the VMware Tools version is 9.4.11, build 2400950.

Thanks

vmk2014

ShirinKumar
Enthusiast

Hi,

You can test this with the ping command's -l option (lowercase L), which sets the payload (buffer) size.

I had the same issue, where my server was getting RTOs (packet drops). From another server, I pinged the affected server:

ping IP Address -t -l 64

When I pinged with a 64-byte payload, packets were still dropping. When I increased the payload to 512 bytes, the drops gradually decreased (by default the NIC card property is set to 256 bytes). Finally, when I pinged with a 1024-byte payload, I received no packet loss at all.

I then changed the receive buffer size in the NIC card properties from 256 to 1024. You can set it with the steps below.

1. Open the NIC card properties.

2. Go to the Advanced tab.

3. Go to the Performance tab (every NIC card has different settings).

4. Change the Rx buffer size from 256 to 1024.

5. Click OK.

Please reply if this is useful.
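For reference, the payload sweep described above can be run in one loop from a Windows command prompt. This is just a sketch: <server-ip> is a placeholder for the affected server, and 20 echoes per size is an arbitrary sample count.

```
:: Sweep ICMP payload sizes to see at which size drops appear
:: (<server-ip> is a placeholder; -n 20 sends 20 echoes per size)
for %i in (64 512 1024) do ping <server-ip> -n 20 -l %i
```

Each pass prints its own loss summary, which makes it easy to compare drop rates across payload sizes.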

vmk2014
Expert

Hi all,

We increased the Small Rx Buffers and Rx Ring #1 values because the network driver was running out of receive buffers, causing packets to be dropped between the virtual switch and the guest OS driver:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=203949...

  1. Click Start > Control Panel > Device Manager.
  2. Right-click vmxnet3 and click Properties.
  3. Click the Advanced tab.
  4. Click Small Rx Buffers and increase the value. The default value is 1024 and the maximum is 4096.
  5. Click Rx Ring #1 Size and increase the value. The default value is 512 and the maximum is 8192.
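If you prefer to script this instead of clicking through Device Manager, recent Windows guests expose the same vmxnet3 settings through the NetAdapter PowerShell module. A sketch, assuming the adapter is named "Ethernet0" (check yours with Get-NetAdapter; the display names can also vary by driver version):

```
# List the tunable vmxnet3 properties for the adapter
Get-NetAdapterAdvancedProperty -Name "Ethernet0"

# Raise the receive buffers to the maximums from the KB article
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Small Rx Buffers" -DisplayValue 4096
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #1 Size" -DisplayValue 8192
```

Scripting it is handy when the same change has to be rolled out to several Lync servers.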


Microsoft said that since we are seeing discarded packets, we may encounter issues with poor media quality.

As per this report, the number of packet drops has decreased, but we are still seeing some drops on the servers. Finally, Microsoft advised us to open a case with VMware.


We opened a case with VMware, and their findings were:

1) We do see discards during snapshot creation and removal operations on the VM (this seems to happen once per day, when backups are running). This occurs while the VM is stunned during snapshot creation/removal (usually 1-2 seconds, though it can be less or more depending on the environment).


They reviewed the following information in real time:

1. Performance charts and esxtop showed no dropped packets for the VM at the ESXi host level in real time. There were periods of receive drops when snapshot removal and creation happened.

2. There is no memory contention on this virtual machine, and its memory is not being ballooned or swapped.

3. No issues were observed with storage performance.

4. CPU did not seem constrained or overloaded.
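For anyone wanting to reproduce the check in finding 1 themselves: host-level receive drops can be watched live in esxtop from an ESXi shell. The %DRPRX/%DRPTX columns in the network view show per-port drop percentages.

```
# On the ESXi host (SSH or local shell):
esxtop
# Press 'n' for the network view, then watch the %DRPRX and
# %DRPTX columns for the VM's port while a snapshot is being
# created or removed, to catch the transient drops.
```

This makes it easy to confirm whether the drops line up with the backup window, as VMware observed.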

Thanks

vmk2014


