First, let me say thank you for all of the knowledgeable people here providing free technical support, it is appreciated.
With that said, I have run into an issue that I have run out of ideas to solve.
It all started with two ESXi hosts, two Intel DA FA 2 10Gb NICs, and a direct-attach cable between the hosts.
I created my virtual switches with an MTU of 9000 and attached them to a VM on each host, using VMXNET3 as the NIC driver.
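One thing worth double-checking before anything else: for jumbo frames to actually work end to end, the MTU has to be 9000 on the vSwitch, any VMkernel ports, the physical switch, and inside the guest OS. A quick sanity check from the ESXi shell might look like this (the target IP is a placeholder, and 8972 assumes 28 bytes of IP/ICMP header overhead on a 9000-byte MTU):

```shell
# List standard vSwitches and confirm the MTU column shows 9000
esxcli network vswitch standard list

# Send a jumbo-sized ping with the don't-fragment bit set (-d), so an
# MTU mismatch anywhere on the path fails loudly instead of silently
# fragmenting. 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
vmkping -d -s 8972 <IP-of-other-host>
```

If the vmkping fails while a normal-sized ping works, some hop in the path is still at MTU 1500.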
One VM was Windows Server 2012 R2, the other Windows 10 1703 64-bit. Both VMs can sustain at least 250 MB/s reads and writes, and in reality it's much higher than that.
The tests were performed with iperf 3.1.3, and the results were not what I expected: I could never get the transfer above 1.2-1.3 Gbps.
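For what it's worth, 1.2-1.3 Gbps is suspiciously close to the ceiling of a single TCP stream with a roughly 64 KB effective window, since single-stream throughput is bounded by window size divided by round-trip time. A quick back-of-the-envelope check (the window and RTT values below are assumptions for illustration, not measurements from this setup):

```python
# Rough single-stream TCP throughput ceiling: window_size / RTT
window_bytes = 64 * 1024      # assumed ~64 KB effective window
rtt_seconds = 0.0004          # assumed ~0.4 ms round-trip time on the LAN

max_bps = window_bytes * 8 / rtt_seconds
print(f"{max_bps / 1e9:.2f} Gbps")  # ~1.31 Gbps, right in the observed range
```

If that is the bottleneck, running several parallel streams (or a larger window) should scale the total well past 1.3 Gbps; if the numbers don't move, the limit is more likely in the vNIC/driver path.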
From things I had read around the internet, I was sure the NICs were the problem, so I purchased two new 10 Gbps NICs, both Solarflare SFN5122F. I also purchased a switch to handle all of the 10 Gbps traffic, a Cisco SG500X with four SFP+ 10 Gbps ports.
I redid the test expecting much faster speeds, but was again met with 1.2-1.3 Gbps.
That is my story. Does anyone have any ideas? I have also tried many different combinations of the VMXNET3 advanced settings, with no improvement whatsoever.
Any help would be greatly appreciated.
Can you share the iperf outputs from your tests?
Also, can you provide details on the ESXi versions and the VM configurations (number of vCPUs, RAM)?
Have you checked esxtop during your tests for CPU usage, network throughput, and packet drops? Can you run the tests again and post some screenshots from there?
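While you are rerunning, it would also help to compare a few iperf3 variants to separate a per-stream limit from a link limit (the server address is a placeholder):

```shell
# Baseline: single stream, longer run for a stable average
iperf3 -c <server-ip> -t 30

# Four parallel streams: if the aggregate scales well past 1.3 Gbps,
# the bottleneck is per-stream (window size or single-core CPU), not the link
iperf3 -c <server-ip> -t 30 -P 4

# Single stream with a larger socket buffer, for comparison
iperf3 -c <server-ip> -t 30 -w 1M
```

Posting all three outputs side by side would narrow things down a lot.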