After running vmware-config.pl and setting up bridged networking, should I be able to see vmnet0 as a standard interface using ifconfig?
Do I also need, during vmware-config.pl, to set up another virtual interface connected to the lo device of my host system to get decent host->guest speeds?
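For what it's worth, in a typical VMware install only the host-only (vmnet1) and NAT (vmnet8) interfaces show up in ifconfig; the bridged vmnet0 is driven by the vmnet-bridge process and normally doesn't appear as a regular interface. A quick way to check (a sketch, assuming a standard install):
  # list all interfaces, including ones that are down
  /sbin/ifconfig -a
  # see which vmnet services (bridge, NAT, DHCP) are running
  ps aux | grep vmnet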
I have managed to solve it myself, despite the lack of any help here so far...
For anyone else with a similar problem:
The problem is caused by TCP Segmentation Offload (TSO) on the Intel e1000 network card I'm using (although the problem occurs on other 1 Gb and 10 Gb cards too). Even though the information I found suggests the relevant bugs were fixed in kernel 2.6.12 (I'm using 22.214.171.124), I got instant results by turning it off.
The other thing I had problems with is HOW to turn it off in Linux. Windows is quite easy: look under the advanced network properties of the card; the setting is called 'Large Send' and can be set to none.
On Linux, download a useful tool called 'ethtool' and run 'ethtool -K eth0 tso off', where eth0 is whatever network interface you are using.
Doing this will (or should) reduce your network throughput, although it will greatly increase stability.
My tests actually show this has improved throughput by 10% on my system ....
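To recap the Linux steps in one place (a sketch; eth0 is just an example interface name, and the persistence part depends on your distro):
  # show the current offload settings, including TSO
  ethtool -k eth0
  # turn TCP segmentation offload off
  ethtool -K eth0 tso off
  # this does not survive a reboot; one option is to append the -K line
  # to a boot script such as /etc/rc.local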
Thank you very much. How did you find out?
This also works for me, though I have a Marvell Yukon card.
Much better, much faster!
The hard way... looking through every hardware/software problem that caused network throughput problems in both Windows and Linux. I started off assuming it was an MTU problem and worked my way up from there on groups.google.co.uk.
Although you listed this under VMware Player, it is not specific to the Player; it happens in VMware Workstation too. I first reported the problem here back in November but gave up on it.
Your "fix" works fine for Workstation, too. Thanks for chasing this down....
Thanks very much for this answer. You are an absolute champion and have saved me hours of wasted time I'm sure. I'd already wasted at least 4 hours until I found your solution.
The ethtool command fails with the error "Cannot set device tcp segmentation offload settings: Operation not supported". The driver probably does not support such a change. Any suggestions or workarounds?
The host runs Fedora Core 5 with kernel 2.6.16-1.2080_FC5. The NIC is the nVidia Corporation CK8S Ethernet Controller. The guest system is Windows XP Pro.
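One thing worth checking before giving up (a suggestion, assuming your ethtool is recent enough): ask the driver what it actually exposes.
  # list the offload features the driver reports
  ethtool -k eth0
If 'tcp segmentation offload' is already shown as off, or the -K call is rejected as above, the driver simply doesn't let you toggle it, and the workaround has to come from somewhere else (a different driver, or changing the offload settings inside the guest).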
You should try to disable any firewall settings on your host computer.
Have a look here:
You guys absolutely rock. This has been my biggest barrier to using VMware Player to run WinXP as a guest on my Linux host box. I had posted this thread (http://www.vmware.com/community/thread.jspa?messageID=406667) asking for help, but without success. Installing nvnet, disabling forcedeth, and using the settings in the linked thread worked beautifully.
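For anyone else attempting the same module swap, it amounts to something like this (a sketch; FC5-era paths, and nvnet must already be built and installed for your kernel):
  # in /etc/modprobe.conf:
  # keep the kernel from grabbing the NIC with forcedeth
  blacklist forcedeth
  # bind the first interface to nVidia's nvnet driver instead
  alias eth0 nvnet
After that, rebuild the module dependencies with 'depmod -a' and reload networking (or reboot).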
Unfortunately this did not work for me. ethtool said this for forcedeth:
Cannot set device tcp segmentation offload settings: Operation not supported
The problem came up when moving from 126.96.36.199 to 188.8.131.52 :-(. I had to patch VMware with vmware-any-any-update101 to get the modules to build, anyway.
It looks like I am facing this problem again. I've just upgraded the box to Fedora Core 6, and I don't think there is a way to get nvnet installed under FC6. Once more, any attempt to communicate between my FC6 host and WinXP guest over the network is cripplingly slow. I was hoping that somewhere between the latest kernel in FC6 and the latest VMware Player v1.02, this would have been fixed.
I looked for the nvnet drivers on nVidia's website, but they don't seem to be there anymore. In fact, nVidia now explicitly recommends and provides the forcedeth driver instead. For my setup (Linux host, WinXP guest), any attempt to communicate from the guest to the host drags the guest down to an unusable crawl.
Given the number of nForce boards out there, this would seem to be a serious problem. I am working around it for now with an old, dusty NetGear PCI card I hadn't gotten around to throwing away yet, and now the guest/host networking works great! The nVidia networking on my nForce3 board is only 10/100, so it's no big deal for me, but it would be really unfortunate to lose the integrated gigabit port on all the nForce4 boards.
VMware, I hope you are listening!