VMware Cloud Community
aklassenns
Contributor

Standard vSwitch performance with only internal VMs

I have a standard vSwitch connecting 3 VMs, each using a vmxnet3 adapter. There is no physical NIC. Two of the VMs run Debian, and I am testing sending packets from them internally to the third VM. I am trying to push the packet rate as high as I can without drops on the receive side. On the Debian VMs I have the tx ring size set to its maximum, the tx queue length set to 8333, and 8 transmit queues. The host BIOS is set to virtualization high performance.
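For reference, this is roughly how I applied the guest-side tuning on the Debian VMs (the interface name ens192 is just an example; substitute whatever name your vmxnet3 interface has):

```shell
# Example interface name -- check yours with `ip link`
IFACE=ens192

# Raise the vmxnet3 tx ring to its maximum
# (check the supported maximum with `ethtool -g $IFACE` first)
ethtool -G "$IFACE" tx 4096

# Set the qdisc transmit queue length to 8333
ip link set dev "$IFACE" txqueuelen 8333

# Request 8 transmit queues, if the driver exposes separate tx channels
# (check what is available with `ethtool -l $IFACE`)
ethtool -L "$IFACE" tx 8
```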

When I send beyond a certain rate, the port stats in esxcli show transmit packets dropped for the sending VMs. On the receiving VM I do not see any receive packet drops, so it looks like the bottleneck is on the transmit side.
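This is how I am reading the drop counters on the host (WORLD_ID and PORT_ID are placeholders for the values the first two commands report):

```shell
# Find the world ID of a sending VM
esxcli network vm list

# List that VM's ports to get its port ID
esxcli network vm port list -w WORLD_ID

# Per-port stats; the "Transmit packets dropped" line is what climbs
# once I push past a certain rate
esxcli network port stats get -p PORT_ID
```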

In vsish, under net/portsets for the vSwitch, the ports of the sending VMs show droppedTx in their stats. But when I display the associated vmxnet3 txSummary, it does not show any transmit failures or any other errors that I can tell. Is it significant that vmxnet3 shows no errors while the higher level does? I have not tried any hypervisor-level changes such as exposing the IOMMU or locking the VM memory. I have seen mention of resource pool allocation, but that only seems to apply to distributed switches.
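For anyone wanting to reproduce what I am looking at, these are the vsish paths I am querying in the ESXi host shell (vSwitch0 and PORT_ID are placeholders; list the portsets and ports first to find yours):

```shell
# List the ports on the portset backing the vSwitch
vsish -e ls /net/portsets/vSwitch0/ports/

# Per-port stats -- this is where droppedTx shows up for the sending VMs
vsish -e get /net/portsets/vSwitch0/ports/PORT_ID/stats

# vmxnet3 transmit summary for the same port -- this shows no errors
vsish -e get /net/portsets/vSwitch0/ports/PORT_ID/vmxnet3/txSummary
```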

Thanks for any ideas. 
