In my setup, it seems that all Win 10 guests running under Workstation 16 have degraded Ethernet uplink performance compared to the host, which is also running Win 10. Upload transfers to any other Ethernet-connected device on the LAN max out at about 300 Mbps. All other computers on the network, including the host, can transfer both ways at nearly line rate (1 Gbps), typically about 900 Mbps. A CentOS guest serviced by the same instance of VMware on the same host transfers data at full speed with no problem.
Is this typical with Workstation 16? I have tried many different types of network connections on the guest (bridged, NAT, etc.) and have done a deep dive on the TCP/IP settings. No luck. It's tempting to think this is a Win 10 problem, but the host runs the exact same version of Win 10 and it is fine, so it seems to be an interaction problem between VMware and Win 10.
Remember that network speed is limited by both the virtual network adapter used as well as available host CPU resources.
A lot of the work is done on the CPU, so if your guest has too many resources assigned (starving the host), the network will be slow.
Best performance is most likely achieved by:
- using the vmxnet3 virtual adapter.
Hi Wila and All Others... Happy New Year
Can you tell me how to install the vmxnet3 virtual adapter in the guest? The documentation I found is very out of date and I was not successful at installing it.
BTW: The guest is set up with a VMnet0 bridged connection that points to a dedicated (physical) Intel NIC. It also has the complete and latest VMware Tools package installed. The guest network connection's Ethernet adapter is shown as Intel(R) 82574L Gigabit Network Connection, and the corresponding device driver is provided by Microsoft (dated 2018). I would like to see if this vmxnet3 adapter works any better than the default adapter.
Additional notes if you wish to keep reading: The PC I'm using is very robust and is not heavily utilized (AMD 3900, 12 cores / 24 threads, 128 GB RAM, NVMe drives, etc.). I have done a lot more testing and gave the host and guest each 64 GB RAM and 12 cores. All other applications and services were turned off. Still, the guest Ethernet transfer rates are, well... terrible. The host always gets full line rate in both upload and download (usually 950 Mbps to 1.1 Gbps). The guest usually gets only 50-70% of rated speed and is very asymmetric. For example, sometimes the send rate is 20% of line rate and the receive rate is 40%, and a moment later the results will show a send rate of 80% and a receive rate of only 5%. I am testing in 3 ways using A) a file transfer benchmark package, B) real-time statistics from a NAS device, C) an Internet speed test.
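For anyone wanting to reproduce these numbers, iperf3 is the right tool, but a rough throughput check can also be scripted; here is a minimal Python sketch (the port is just iperf3's default, an assumption; run receive() on one machine and send() on the other):

```python
# Rough TCP throughput sanity check (not a replacement for iperf3).
# Run receive() on one machine and send() on the other; the port is
# an assumption (iperf3's default) -- pick any free port.
import socket
import time

PORT = 5201          # assumed free port
CHUNK = 64 * 1024    # 64 KiB per send
SECONDS = 2          # measurement window

def receive(bind_addr="0.0.0.0"):
    """Accept one connection, drain it, and report the rate."""
    with socket.create_server((bind_addr, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.monotonic()
            while (data := conn.recv(CHUNK)):
                total += len(data)
            elapsed = time.monotonic() - start
    print(f"received {total * 8 / elapsed / 1e6:.0f} Mbps")
    return total

def send(host):
    """Blast zeros at the receiver for SECONDS seconds."""
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.monotonic() + SECONDS
        while time.monotonic() < deadline:
            conn.sendall(payload)
```

Running it host-to-guest and guest-to-host separately makes the asymmetry described above easy to see.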
I have also verified the test using CentOS (v8) as a guest on the same host platform. The CentOS results show only 2-5% degraded performance, which seems fantastic.
Finally, there are 4 NICs in this computer and the host NIC and guest NICs have all been interchanged while repeating the tests. It makes no difference what NICs are being used.
Sorry for the long post...
Any suggestions are greatly appreciated.
PS: One final note. Tests were performed with Bridge connection to host IP and also NAT. Results are the same.
The drivers are already there. AFAIK vmxnet3 is nowadays even a Windows inbox driver; otherwise it is part of VMware Tools.
In order to switch virtual network cards, you'll have to edit the vmx file of your VM.
So with the VM shut down (not suspended) and preferably with VMware Workstation not running, edit the .vmx file of the virtual machine using a simple text editor like Notepad.
Then locate the line:
ethernet0.virtualDev = "e1000"
and change it into:
ethernet0.virtualDev = "vmxnet3"
If the virtualDev really is "e1000" and not "e1000e", then you have already found out why the network is slow, as "e1000" does not perform well. The "e1000e" is significantly faster, but the vmxnet3 one should give you the best performance.
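If you have several VMs to convert, the edit can be scripted too; here is a small sketch in Python (the helper name and example path are mine, not an official tool, and as above the VM must be powered off first):

```python
# Sketch: switch a powered-off VM's first virtual NIC to vmxnet3
# by rewriting its .vmx file. The example path is a placeholder.
import re
from pathlib import Path

def set_virtual_nic(vmx_path, device="vmxnet3", nic="ethernet0"):
    """Rewrite <nic>.virtualDev in a .vmx file (e.g. e1000e -> vmxnet3)."""
    vmx = Path(vmx_path)
    text = vmx.read_text()
    pattern = rf'^{nic}\.virtualDev\s*=\s*".*"$'
    new_line = f'{nic}.virtualDev = "{device}"'
    updated, count = re.subn(pattern, new_line, text, flags=re.MULTILINE)
    if count == 0:  # key absent: append it (Workstation defaults apply otherwise)
        updated = text.rstrip("\n") + "\n" + new_line + "\n"
    vmx.write_text(updated)

# Example (placeholder path):
# set_virtual_nic(r"C:\VMs\Win10\Win10.vmx")
```

Keeping a backup copy of the .vmx before rewriting it is a good idea either way.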
edit: Oh and Happy New Year to you too. 🙂
Thank you Wila...
Here's an update. The configuration was changed from "e1000e" to "vmxnet3". Full testing has not been done, but basically, performance seems to be about the same. Win 10 under VMware is not capable of operating Ethernet at 1 Gbps. The best it will (sometimes) do is 700 Mbps. It usually operates between 300-600 Mbps.
I suspect this issue has no solution right now. My system has been this way since it was established in early December. This setup is used for personal purposes. I did not notice the degraded speeds until some very large backup files were taking a VERY long time to transfer to the NAS. On the host, I occasionally back up the VMs to Amazon S3, and it takes only a few hours with 1 Gig Internet service. Some smaller local backups from the guest to the NAS were taking longer to complete. rsync is used in both cases. I wasted much time fruitlessly debugging the NAS, as I assumed the ZFS NFS disks were running slow for some reason.
Do you happen to know if all VMware solutions have this problem? If I switch from Workstation to vSphere with ESXi, will that have the same problem?
Ray (PS: I didn't use notepad to edit the file since I've been using "vi" ever since it replaced "ed") LOL...
PS: PS: I also created another VM and used TCP Optimizer (from SpeedGuide.net). This increased speeds a tiny bit but not much, maybe 5%.
re. Notepad, well you mentioned "Windows 10 host", so if I started suggesting vi / nano / emacs then that might be confusing.
The VM never gets direct access to the real NIC, so it is expected that the performance is degraded in comparison to the host. There should be differences between e1000e and vmxnet3, but they are not as drastic as the difference between e1000 and e1000e.
If it can do 700 Mbps sometimes then I'd argue that it isn't the NIC that is your bottleneck.
It might be:
- available host CPU resources
- disk speed
- guest CPU resources
- something else?
If you're using rsync, are you perhaps also using encryption? If so, is the encryption properly accelerated by the CPU?
re. vSphere. It's a different platform that does not have an underlying host OS that can meddle with performance; instead, VMware controls the whole platform.
Having said that, it has the same basics and yes, (mostly) the same problems. Although on vSphere you have more options for things like TCP offloading, and you could, for example, also use PCIe passthrough to give the VM direct access to the NIC. Not that I would recommend using either option on vSphere.
Just so you know, I'm not going to lose sleep over this problem. It's an issue of curiosity more than anything else. At this juncture, I'm very curious if other people are having the same issue. That will tell me if this can be solved or not.
I don't think the problem is related to CPU resources or disk speed. The disks are 2 TB NVMe drives operating at about 64 Gbps. Also, the same test was conducted on the same machine using a CentOS guest VM. The CentOS guest VM's Ethernet operates at 90-95% of rated line speed. The CentOS guest VM environment is set up identically to the Win 10 guest. It's loaded on the same host, same NVMe drives, etc. If all my programs were available on Linux, I would never have noticed this problem.
Tests were also performed with 2 temporary Win 10 Guest VMs. One with Win 10 Home and the other Win 10 Pro. These were brand new and fresh with minimal installation options. The same problem exists on them.
It seems that VMware Workstation has 2 fundamental issues with guests: 1) Ethernet performance is only about 50% of what the host can get. 2) For both Linux and Windows, only USB 2.0 can be used if audio is passed over the data path. If USB 3 is used, the audio gets terribly distorted. I guess there is a 3rd issue too. Everyone knows that the SVGA driver introduces about a 40% performance hit in graphics processing. These issues seem to be the fundamental weaknesses of VM technology. It's possible that tweaking various interrupt levels or various RX/TX buffer parameters in the drivers might improve the situation. I'm not ambitious enough to go down that path.
BTW: I spent a great many years writing in assembly language and wrote many device drivers (over 35 years ago, when pSOS and VxWorks were the prevailing RTOSes). I can certainly understand the difficulties associated with VM technology. If I can find an easy solution, great. Otherwise, I'm happy to let some young hot-shot programmer figure out what is causing these problems :+).
I hope VMware folks are reading this and hope they are inspired to investigate.
on a Lenovo ThinkPad X1 Yoga 6th Generation with a 2 TB SSD, the fastest processor selectable, and 32 GB RAM...
some 800 Mbit/s on Ubuntu 21.04 (current patch level, 2021-05-12), using vmxnet3 and vmnet8 (NAT)
This is FAR too slow, as Windows 10 reports a 10 Gbit/s connection on the adapter.
There are frequent losses of network connection, with network manager tasks starting on the host. This leads to major issues with audio and the cam when using Teams on the guest. There are "x86/split lock detection: #AC: vmx-vcpu-1" messages in the syslog.
There is an issue with VMware somewhere... and as we pay for the licence... they should fix it 😉
To be honest, VMware was dropped from my Macs, as Parallels was more performant at the time...
I don't think that Workstation has 10Gbit/sec support for its virtual NICs?
At least I've never seen them claim that it has. I know it is available on VMware vSphere, have never seen anyone report they could get that speed in a VM on workstation. I might certainly be wrong on this.
Also beware that the virtual NIC speed is completely dependent on available host CPU resources, so make sure to leave enough CPU resources available for the host. IOW, don't assign too many vCPU cores to the guest.
I checked - my Windows 10 Pro states 10/10 Gbps in Windows with the VMware vmxnet3 driver.
The guest has 2 cores from the 8 the physical processor offers, leaving 6 cores to the rest of the machine. Yes, the host is "idle". Guest has 16 GB, host has 32 GB.
Maybe there is an issue with how the processors are configured: number of processors 1, number of cores 2, VT and IOMMU ticked, counters not.
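For reference, those GUI options map onto .vmx keys; a sketch of the corresponding entries (key names as commonly seen in Workstation .vmx files: numvcpus is the total vCPU count, cpuid.coresPerSocket the cores per socket, vhv.enable the VT-x/EPT option, vvtd.enable the IOMMU option, vpmc.enable the performance counters):

```
numvcpus = "2"
cpuid.coresPerSocket = "2"
vhv.enable = "TRUE"
vvtd.enable = "TRUE"
vpmc.enable = "FALSE"
```

If those lines in your .vmx match what the GUI shows, the processor configuration itself is probably not the culprit.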
Yeah... the vmxnet3 driver is capable of 10Gbit/s on a supported platform, but I'm not convinced that VMware Workstation is that supported platform. AFAIK, the vmxnet3 driver isn't a default virtual NIC on any of the guest OS's either.
Out of curiosity, does your Lenovo Yoga have a 10Gbit NIC?
yes - but I guess VMware Workstation 16 Pro 16.1.1 is able to deliver 1 Gbit/s and not only 800 Mbit/s... (yes - I know that Ubuntu 21.04 might not be an officially supported host). I will be happy if 1 Gbit/s is reached.
The Lenovo Yoga does not physically have a 10 Gbit NIC - but all my Debian Buster hosts using KVM/qemu have a host-internal "NIC" doing well over 10 Gbit/s between hosts and virtualized guests. On OS X Catalina and Parallels with a Windows 10 guest, the network was never an issue or a reason for further investigation. I expect the same with VMware on Ubuntu 🙂
The reason using VMware on this laptop is: KVM/qemu is a pain with graphics and usability if you use Windows 10 GUI.
I use vmware-tools - they seem to be the source of the driver. I am happy to use any other driver which is more performant on my setup.
FYI: The engineering department addressed the ticket I had open for this and several other performance issues. After a lot of troubleshooting on the phone, there are no solutions. Also, the NIC and USB 3.0 performance issues are known problems that will apparently be addressed sometime in the future. For now, only USB 2.0 is reliable for real-time data such as voice or video. Variable network speeds will happen no matter what host OS (Windows or Linux) is used. Certain functions of DirectX are not implemented in the VM graphics driver. And so it shall be!
When it comes to virtual machines and device drivers (such as NICs, Video Cards, USB etc) there will always be a performance loss. It's just a bit more than I expected. After a lot of testing on the same hardware (direct comparison with and without VMWare) I'm seeing typical performance differences of 10 to 30% across the board.
There are certain benefits to using VMs that offset the performance hits so, consider that when deciding to use VMs. In my particular case, I've decided to convert my machine back to a Win 10 stand-alone system for all primary purposes (CAD/CAM, Video and Photo Editing) and only use VMs for scratchpad and test environments.
Same issue here, and it took me days to reach a potential solution.
Like most of you, I am running VMware Workstation Pro 16 on Windows 10 as a host. I am confident there are sufficient system resources: I run 2 x E5-2690 v4 with 128 GB memory and multiple SSDs.
Confirmed 1000e was defined within the VMX file. The VM NIC always syncs at 1 Gbps. Tested all VM NIC settings: Bridged / NAT / Host-only / Custom.
Believe it or not, I only get about 30 to 60 MB/s, that is, at most about 500 Mbps (as the opening post described).
I found that the following tweak (jumbo frames) brought network performance back to 100% for me.
Please mind that enabling jumbo frames can lead to other issues. "Dumb" network devices may drop oversized packets if they can't handle them.
For example, I was casting video from my computer to a "smart" TV when I suddenly found the screen frozen but the audio playing fine. Yes, I was playing with jumbo frames. I disabled jumbo frames, and casting went back to normal.
(I think) the symptom was also present in VMware Workstation Pro 15. I'm not 100% sure which version it started in, but it's definitely annoying.
Good Luck, everyone!
You don't mention what kind of NIC you have, but if you only have a 1Gbit NIC then the better solution is to use the vmxnet3 adapter.
The virtual intel e1000 adapter is very slow, the virtual e1000e is a bit faster, but not nearly as fast as the paravirtualized vmxnet3 adapter.
If OTOH you have a 10Gbit network adapter and you managed to get 10Gbit line speed then that would be very wonderful.
I thought I did mention vNIC is the 1000e. Sorry, I missed 'e' at the front.
Yes, it is ethernet0.virtualDev = "e1000e" in VMX file
Network Speed Test (iPerf) When VMX set to ethernet0.virtualDev = "vmxnet3"
Host to Guest about 900 Mbps
Guest to Host about 300 Mbps (Jumbo Frame disabled in host's NIC)
Guest to Guest about 1.9 Gbps (guest's OS showed the NIC negotiated at 10 Gbps)
Other than increasing frame size on both guest's and host's NIC, I don't have other workaround to improve guest to host network performance.
Network speed test from guest to host (iPerf):
Jumbo Frame at 1514 about 300 Mbps (similar to Jumbo Frame disabled, as MTU default at 1500)
Jumbo Frame at 4088 about 600 Mbps
Jumbo Frame at 9000 about 1.3 Gbps
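Incidentally, throughput scaling almost linearly with frame size like this suggests a per-packet bottleneck rather than a bandwidth one; a quick back-of-the-envelope check in Python (the three data points are taken from the results above):

```python
# If throughput scales with frame size, the limit is packets per
# second, not bits per second. (frame_bytes, measured_mbps) pairs
# are the guest-to-host iperf results quoted above.
results = [(1514, 300), (4088, 600), (9000, 1300)]

for frame_bytes, mbps in results:
    pps = mbps * 1e6 / (frame_bytes * 8)
    print(f"frame {frame_bytes:5d} B: ~{pps / 1000:.0f}k packets/s")
# All three land in the same rough 18-25k packets/s band, which
# points at per-packet CPU overhead in the virtual NIC path.
```

That would explain why bigger frames help: fewer packets are needed to move the same number of bytes.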
Happy New Year Everyone,
How did you all go on this?
Issue persists for me, currently running VMware Workstation 16 Pro (16.2.1 build-18811642).
The issues with degraded Ethernet and USB 3.0 speeds will always exist until PC hardware/architecture fundamentally changes. Because of the way virtualization works, that data needs to be processed by the CPU and then delivered to/from the VMs. There's no CPU fast enough to do that and still maintain proper timing of all the other work it needs to do. About 6 months after I reported the problem, an engineer from VMware called and confirmed there is no solution in sight for video, Ethernet, and USB data in real time.
I eventually gave up on VMware and VMs in general. It's a cool idea that works well primarily for web-server and database applications, but it does not work well for individual users who want dedicated desktop powerhouses. In my case, CAD/CAM software and video-processing applications suffered tremendously.
Thanks for sharing.
For me, tweaking jumbo frames (as mentioned above) restored network performance from guest to host.
I used a dedicated NIC on my host and enabled jumbo frames. I also enabled jumbo frames on the guest (bridged host network).
No solution for the NAT network at this stage, although I confirmed the NAT VM NIC on the host has jumbo frames enabled.
For sharing files between a Windows guest and Windows host, I created a shared folder via the guest settings, and file copies worked at full disk speed. There seems to be no network throttling there at all.
I can recall this issue was introduced after a certain version of VMware Workstation, without OS or hardware changes on my host. I suspected it was artificial nerfing, or a mistake that was never corrected. However, I made the painful choice to stay on the latest VMware Workstation version and live with this issue.
All the best to you all.