The host system is Debian Linux and the guest is Windows XP, running under VMware Player.
I am using the bridged networking option and everything works great except for one very annoying thing.
Network transfers moving files from host->guest are incredibly slow, something under 1 KB/s.
This is independent of protocol; I've tried scp, ftp, and samba.
Transferring files using the above protocols from the guest->host is fine, getting something on the order of 4000-7000 KB/s.
guest->Any other network host is fine
host->Any other network host is fine
Any other network host ->host is fine
Any other network host ->Guest is fine
So now I'm quite puzzled. If it works fine in one direction but not the other, I doubt it's a problem with the disk being read and then written to straight away.
Any ideas ?
Thanks in advance.
After running vmware-config.pl to set up bridged networking, should I be able to see vmnet0 as a standard interface using ifconfig?
Do I also need, during vmware-config.pl, to set up another virtual interface connected to the lo device of my host system to get decent host->guest speeds?
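For what it's worth, a quick way to check which virtual interfaces the config script created is just to grep the interface listing. (As I understand it, bridged vmnet0 usually has no host-visible interface of its own; host-only and NAT networks show up as vmnet1, vmnet8, and so on.) The interface names here are illustrative, not from your setup:

```shell
# List any VMware virtual interfaces visible on the host:
#
#   ifconfig -a | grep -o 'vmnet[0-9]*' | sort -u
#
# The filter itself, shown on a captured listing so it can be tried anywhere:
vmnets() { grep -o 'vmnet[0-9]*' | sort -u; }

sample='eth0      Link encap:Ethernet
lo        Link encap:Local Loopback
vmnet1    Link encap:Ethernet
vmnet8    Link encap:Ethernet'
printf '%s\n' "$sample" | vmnets
```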
I have managed to solve it myself, despite the lack of any help here so far.
For anyone else with a similar problem:
The problem is caused by TCP Segmentation Offload (TSO) on the Intel e1000 network card I'm using (although the problem occurs on other 1 Gb and 10 Gb cards too). Even though the information I found suggests the relevant bugs were fixed in kernel 2.6.12 (I'm using 18.104.22.168), I got instant results turning it off.
The other thing I had problems with is HOW to turn it off. On Windows it's quite easy: look under the advanced network properties of the card for the setting called 'Large Send' and set it to none.
On Linux, download a useful tool called 'ethtool' and run 'ethtool -K eth0 tso off', where eth0 is whatever network interface you are using.
Doing this will (or should) reduce your network throughput, although it will greatly increase stability.
My tests actually show this has improved throughput by 10% on my system ....
The hard way.... I looked through every hardware/software problem that caused network throughput problems in both Windows and Linux. I started off assuming it was an MTU problem and worked my way up from there on groups.google.co.uk.
Although you listed this under VMware Player, it is not specific to the Player; it happens in VMware Workstation too. I first reported the problem here back in November but gave up on it.
Your "fix" works fine for Workstation, too. Thanks for chasing this down....
Thanks very much for this answer. You are an absolute champion and have saved me hours of wasted time I'm sure. I'd already wasted at least 4 hours until I found your solution.
The ethtool command fails with the error "Cannot set device tcp segmentation offload settings: Operation not supported". The driver probably does not support such a change. Any suggestions or workarounds?
The host uses Fedora Core 5 with kernel 2.6.16-1.2080_FC5. The NIC is the nVidia Corporation CK8S Ethernet Controller. The guest system is a Windows XP Pro.
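When the toggle is refused like this, one thing worth checking is what the driver exposes at all, via `ethtool -k eth0`. Newer ethtool versions tag settings the driver cannot change with "[fixed]" (older versions simply error out, as seen here), and with forcedeth the workaround reported later in this thread was switching to nVidia's nvnet driver instead. A small parsing sketch, shown on a captured listing so it runs without the actual NIC:

```shell
# Extract the offload settings a driver marks as unchangeable ("[fixed]")
# from `ethtool -k` output:
fixed_offloads() { grep '\[fixed\]' | cut -d: -f1; }

sample='tcp-segmentation-offload: on [fixed]
generic-segmentation-offload: on'
printf '%s\n' "$sample" | fixed_offloads
```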
Have a look here:
You guys absolutely rock. This has been my biggest barrier to using VMware Player to run WinXP as a guest on my Linux host box. I had posted this thread asking for help, but without success. Installing nvnet, disabling forcedeth, and using the settings in the linked thread worked beautifully.
Unfortunately this did not work for me. For forcedeth, ethtool said:
Cannot set device tcp segmentation offload settings: Operation not supported
The problem came up when moving from 22.214.171.124 to 126.96.36.199 :-(. I had to patch VMware using vmware-any-any-update101 to build the modules anyway.
It looks like I am facing this problem again. I've just upgraded the box to Fedora Core 6 and I don't think there is a way to get nvnet installed under FC6. Once more, any attempt to communicate between my FC6 host and WinXP guest through the network is cripplingly slow. I was hoping that somewhere between the latest kernel in FC6 and the latest VMWare Player v1.02, this would have gotten fixed.
I looked for the nvnet drivers on nVidia's page, but they don't seem to be there anymore. In fact, nVidia explicitly suggests and provides the forcedeth drivers now. For my setup (Linux host, WinXP guest), any attempt to communicate from the guest to the host drags the guest to an unusable crawl.
Given the number of nForce boards out there, this would seem to be a serious problem. I am working around it for now with an old dusty NetGear PCI card I hadn't gotten around to throwing away yet, and now the guest/host networking works great! The nVidia networking on my nForce3 board is 10/100 so it's no big deal, but it would really be unfortunate to lose the integrated gigabit port on all of the nForce4 boards.
VMWare, I hope you are listening!
I've always said: you get what you pay for. AMD may have cheaper CPUs, and nVidia is trying to get into other areas besides graphics cards, but a cheaper price often means a cheaper product. I've been burned by bad hardware too many times; I only buy Intel chips and reputable motherboards with known chipsets.
As an additional data point, I upgraded to a nForce4 board with the integrated gigabit Ethernet LAN. I see the same problem with this chipset as well. I suppose this indicates the problem is in the software, not the hardware.
Yes! Thanks. This saved the day.
I'm running a FC6 host and a FC6 guest. The problem was immediately solved when applying the solution to my host. No reboot needed.
Thanks a lot Crispy for sharing the solution.