I find it difficult to get results from Google searches. Has anyone been able to get 10Gb networking to run at decent performance (>1.5Gbps) in VMware Workstation?
I have two machines with 10Gb HW network adapters connected point-to-point.
1. FreeNAS server with a samba share.
2. Win10 host running Workstation 15, with another FreeNAS server running as a VM.
Networking is set up as bridged inside the VM; the adapter hardware is VMXNET3, set by editing the .vmx file.
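For reference, the .vmx edit in question looks like this (adapter entry `ethernet0` assumed; match whichever ethernetN your .vmx already defines):

```
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"
ethernet0.virtualDev = "vmxnet3"
```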
In normal operation, copying from the machine 1 samba share to local disk on machine 2 runs at 1.15GB/s. Both machines are reading/writing to SSD media, and they are able to saturate the 10Gb link copying to/from shares in the host OS.
Copying data from the machine 1 samba share to the guest VM's samba share on machine 2, initiated from the host OS, results in 100Mb/s at best; the average is less than that. The guest OS configures the VMXNET3 card just fine, since it comes with vmware-tools preinstalled.
The guest VM has 4 cores allocated and 32GB of RAM. The host CPU is an i7-9800X @ 3.8GHz base clock.
I know VMXNET3 on WS is not really supported but I would expect at least higher than 1Gbps speeds.
Any help is greatly appreciated.
On machine 1 there's a stripe over two RAID5 arrays backed by SATA SSDs; machine 2 hosts the VM on a stripe of two M.2 Samsung 970 EVOs. Like I've said, copying anything from/to the FreeNAS server over samba runs at 1.15GB/s (close to the 10G maximum).
The VM serves its share from a similar layout: two RAID5 arrays of three vdisks each, striped (2×3-vdisk RAID5, striped).
Realize that the networking in a virtual machine is all performed by the host CPU, not offloaded to a network card/chip the way the host PC's traffic is. The link speed the guest sees is merely a label.
CPU is a problem if you're running into CPU-ready issues from running too many VMs or very high usage. If your overall CPU usage is low, I doubt this is the problem. I did some research and this seems to be common and may be an undocumented limit. Nothing is technically limiting it, but other forum posts show the same thing.
The VM is the only one running and I’m not running compute intensive stuff on the host.
The copy operation never goes above ~100Mb/s. This tells me there's a cap somewhere.
I would expect the copy speed to bounce up and down if the CPU were the limiting factor.
Bear in mind the CPU is not an Atom or some low-power part. It should be able to push throughput; the only hit I'd expect is latency, but that's of no concern in this case. I just want to shuffle data to/from this VM.
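One way to rule out samba and disk I/O entirely is a memory-to-memory throughput test between host and guest (iperf does this too). Below is a minimal Python sketch, not anything VMware-specific: in a real test you'd run the receiver half in the guest and point the sender at the guest's bridged IP; the host/port here are placeholders, and for demonstration it runs both ends in one process over loopback.

```python
import socket
import threading
import time

CHUNK = 1 << 20       # 1 MiB per send
TOTAL = 64 * CHUNK    # 64 MiB per run

def receiver(srv):
    # Accept one connection and drain it, counting bytes received.
    conn, _ = srv.accept()
    received = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    return received

def run_test(host="127.0.0.1", port=5201):
    # In the real setup, split this: run the listener inside the guest
    # and the sender on the host, aimed at the guest's bridged IP.
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    result = {}
    t = threading.Thread(target=lambda: result.update(n=receiver(srv)))
    t.start()

    cli = socket.create_connection((host, port))
    payload = b"\0" * CHUNK   # all-zero buffer: no disk involved
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        cli.sendall(payload)
        sent += CHUNK
    cli.close()
    t.join()
    elapsed = time.monotonic() - start
    srv.close()
    print(f"{sent / elapsed / 1e9 * 8:.2f} Gbit/s "
          f"({result['n']} bytes received)")
    return sent, result["n"]

if __name__ == "__main__":
    run_test()
```

If this caps at the same figure as the samba copy, the bottleneck is the virtual network path, not the storage stack.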
I'm able to achieve 4.1Gbit/s from a physical server to a VM over the wire, and 2.1Gbit/s in the VM-to-server direction. A Ryzen 1600 @ 3.6GHz hosts the VM on Workstation 14, running Windows Server 2016. The other machine is physical, also running Server 2016.
Have you correctly bound the virtual NIC to the physical 10G card on the host? What does the status of the virtual NIC say in the VM?
I'm also frustrated that I can't use the full bandwidth (or close to it) of the 10G link from a VM. I turned on RSS (Receive Side Scaling) in the VM (it was off), but it doesn't seem to have made any difference.
I have similar problems.
I have an i9 host (10 cores/20 threads) with a 10G NIC. Machine-to-machine (host to host) iperf shows 9Gbps+ (each machine has 64GB of RAM and a 2TB NVMe SSD).
My Windows 10 guest will only get about 1.5G from Guest to Host.
The only adapter the Windows 10 guest shows as available is the 82547L.
The host is Ubuntu 19.04 with vmware tools 10.3.10 installed.
The network devices show as vmnet0, vmnet1 and vmnet8.
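On a Linux host with multiple NICs, it's worth confirming which physical interface vmnet0 is actually bridged to. On my Workstation installs this lives in /etc/vmware/networking (directive name as it appears in my own file; verify against yours, and stop the services with `vmware-networks --stop` before editing, then `vmware-networks --start` after):

```
# /etc/vmware/networking (excerpt) - pin vmnet0's bridge to a specific NIC
# "enp1s0f0" is a placeholder; substitute your 10G interface name
add_bridge_mapping enp1s0f0 0
```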
VirtualBox did much better, but didn't have working 3D graphics drivers.
Hello, you have described an old Workstation problem.
I've been looking for an answer to it for years, but VMware doesn't want to answer the question officially.
Every Workstation Pro instance on a single host allows up to about 1Gb/s of network bandwidth, while disk access bandwidth is unlimited.
The problem is neither the 10Gb network cards nor the NIC configuration; the limit is imposed internally and is not documented.
And you can't blame the CPU or RAM either; people blame the CPU and RAM when they don't know what else to say.
I have machines with 20 cores, 256GB of RAM, several 10Gb NICs, and SSDs doing 3000MB/s, running Workstation Pro on Windows 10, with the virtual NICs configured as bridged VMXNET3.
- If I copy data over the network between two Windows Server 2016 systems running inside the same Workstation Pro host, the data doesn't even go through the physical NIC; it runs through Workstation's internal engine, and the bandwidth still stops at about 1Gb/s.
- If I add more VMs and start copies in them too, the combined bandwidth of all the VMs still doesn't exceed 1Gb/s.
- The only case where you can exceed 1Gb/s is when Workstation Pro serves the data from the Windows 10 RAM cache; as soon as the cache is empty, the bandwidth immediately drops back to 1Gb/s.
1) If you use VirtualBox, these problems don't exist. If you have a 10Gb NIC, I assure you the network runs at 800-900MB/s.
2) Or install ESXi 6.0, 6.5, or 6.7.
This question of yours has been around for years, but nobody from VMware provides an official answer. The usual useless replies blame the user or the hardware, when the real problem is a hidden internal limiter. I've been looking for a solution for years without finding one. Meanwhile, on the same hardware, VirtualBox or ESXi has no such limit and your 10Gb cards work at 100%.
VMware won't respond officially about this limit even through an SR. They only provide the funniest answers, and neither confirm nor deny it.
No network operation you can do will exceed about 120MB/s on a single host computer running Workstation Pro. You can have one 1Gb NIC or eight 10Gb NICs and it doesn't change anything; you can also have a PC with two Xeon CPUs and 256GB of RAM and nothing changes.
The only solution I use is ESXi.
Or use VirtualBox, but VirtualBox currently doesn't support nested virtualization on Intel, only on AMD.
As soon as VirtualBox adds Intel nested support: remove VMware.