I come to you because I have a network problem with Workstation 10 and Windows 8 / 8.1.
In fact, if I copy files from or to a VM with my Windows 8 host through Windows shares (SMB / CIFS), it is very, very slow: unusable.
Example: my host is Windows 8 x64 and the guest is Windows 8 x64 too. The transfer speed is about 100-200 KB/s, and often 0 KB/s (the transfer seems to pause).
With a Windows 8.1 guest, same problem.
After many tests, I also tried with Windows PE 5 (same kernel as Windows 8.1): same problem, unusable.
But, miracle: if I use the "MSI-X OFF utility" from Intel and reboot, the problem disappears.
Someone in this community said he has the same problem, and the workaround was to not use the Intel e1000e but the e1000 instead: I admit, it works.
With e1000, speed is about 30 to 60 MB/s and stable.
But, by default, Workstation 10 sets e1000e when I create a new Windows 8.1 VM, so it's a problem... I don't want to change it manually for each new VM and explain that to my students.
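For anyone who needs the workaround right away: the NIC model is set in the VM's .vmx file. Assuming the first adapter is ethernet0 (the usual default; adjust the number if you have several NICs), the change is one line, edited while the VM is powered off:

```
ethernet0.virtualDev = "e1000"
```

By default Workstation 10 writes `"e1000e"` there for a Windows 8/8.1 VM, which is exactly the line to change.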
Please, can you reproduce that bug and fix it?
Thank you very much.
I have to run the "MSI-X OFF utility" in each guest (each VM), so it's not very practical...
Also, I tried with 1 CPU core and with 2 cores: it doesn't change anything.
Maybe with 1 core it's even slower: 1 MB is sent if I wait 2-3 minutes. The problem is the same.
If I switch the virtual device from e1000e to e1000, even with only 1 CPU core, the speed is normal: about 50 MB/s.
I can reproduce the problem in host-only mode; NAT and bridged modes are OK. You can try using NAT or bridged mode, or use the shared folder feature to copy files between the host and the guest.
I do have the problem in bridged mode, and I need to be on the physical network.
Also, the VMware shared folder is not always possible (with Windows PE, for example), and it doesn't solve the very slow network (it's slow with every protocol, such as HTTP, FTP, etc., not only Windows shares).
I use an AMD A10-5800K CPU: would you like me to test with a laptop that has an old Intel CPU (it supports 32-bit guests)?
I have news, sorry for the delay.
I tried 3 different configurations and found more details on how to reproduce the bug.
All 3 PCs run VMware Workstation 10.0.1:
- AMD A10-5800K with 16 GB RAM, host OS: Windows 8 x64
- Intel Core i5-2300 with 8 GB RAM, host OS: Windows 7 SP1 x64
- AMD Athlon II X2 240 with 8 GB RAM, host OS: Windows 7 SP1 x64
With an existing or new VM configured for a Windows 8 x64 guest OS in the VMware settings (so the e1000e virtual NIC is selected, and I use bridged mode), I can run Windows 8 or 8.1 as the guest OS, or even Windows PE 5 (same Windows core), and I get a specific bug.
My host OS has the IP 192.168.0.2 (Windows 7 or 8).
My guest OS has the IP 192.168.0.3 (Windows 8 or PE 5).
A server on the physical network has 192.168.0.4 (Windows 7 or Server 2008 R2).
If my host OS (192.168.0.2) tries to use the network with the guest OS (192.168.0.3), with any protocol, it is very slow and unusable.
But between my guest OS (192.168.0.3) and the physical server (192.168.0.4), the network is OK.
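To show the slowdown is independent of SMB, a raw TCP test between the two machines is enough. Here is a minimal Python sketch of that measurement (my own hypothetical helper, not a VMware tool; the function names and the loopback self-test are mine). To test host-to-guest, you would run the receiving half on one machine and point the sender at its real IP (e.g. 192.168.0.3) instead of 127.0.0.1:

```python
# Raw-TCP throughput probe: sends a fixed number of 1 MiB chunks to a
# listener that drains them, and reports MiB/s. Run standalone to
# self-test over loopback.
import socket
import threading
import time

PAYLOAD = b"x" * (1 << 20)  # one 1 MiB chunk
CHUNKS = 32                 # 32 MiB total

def drain(listener):
    """Accept one connection and read everything until the peer closes."""
    conn, _ = listener.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure(host="127.0.0.1"):
    """Measure send throughput (MiB/s) to a listener started on `host`."""
    listener = socket.socket()
    listener.bind((host, 0))           # port 0: let the OS pick a free port
    listener.listen(1)
    receiver = threading.Thread(target=drain, args=(listener,))
    receiver.start()

    sender = socket.create_connection(listener.getsockname())
    start = time.monotonic()
    for _ in range(CHUNKS):
        sender.sendall(PAYLOAD)
    sender.close()                     # signals EOF to the drain loop
    receiver.join()                    # wait until everything is received
    listener.close()
    return CHUNKS / (time.monotonic() - start)

if __name__ == "__main__":
    print(f"throughput: {measure():.1f} MiB/s")
```

On loopback this should report hundreds of MiB/s; between host and an e1000e guest showing the bug, the same probe should collapse to the KB/s range.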
Also, if I change the virtual network card from e1000e to e1000, the speed is OK in all cases.
Do you get the same results?
If yes, can you please fix it?
People go round and round with this easily reproduced issue, and ultimately discover it's a problem with the emulated e1000e.
Might be a nice thing to get fixed.
I would like some news about that bug... it is still here.
Two months ago (mid-December 2013) I opened a support ticket about it, because I had 30 days of support.
On that subject, thank you to "Rahul Jha" for his help; he called me at home to get more information, and he told me the developers are working on it.
Now, two months after that ticket, there is still no new Workstation version that fixes the bug, which is present in 10.0.1.
Also, after many tests, I found that even directly between 2 VMs, the network is very slow and very unstable with e1000e.
I teach in a training center and I admit it's a big problem for us; the students don't systematically think to change the virtual network device manually in the VMX file.
It would be a good option to be able to change the virtual network device directly in the configuration GUI of a specific VM.
I hope VMware will consider and fix that bug. Thank you.
Skimming through the postings I can't help but wonder: is this a hypervisor or an OS problem? ... especially as I haven't seen a proper fix for it yet.
Would very much like to have it fixed though.
Nice find, Speed9!
The more I read about Offloading problems like this, the more I am starting to consider just proactively disabling those features on all of my physical and virtual systems.
Reminds me of this Reddit thread from a few months ago: