VMware Communities
Crispy
Contributor

Very slow network throughput host->guest, but not guest->host

The host system is Debian Linux and the guest is Windows XP, running under VMware Player.

I am using the bridged networking option and everything works great except for one very annoying thing.

Moving files from host->guest over the network is incredibly slow, something like <1 KB/s.

This is independent of protocol; I've tried scp, FTP, and Samba.

Transferring files with the same protocols from guest->host is fine, getting something in the order of 4000-7000 KB/s.

guest -> any other network host is fine

host -> any other network host is fine

any other network host -> host is fine

any other network host -> guest is fine

So now I'm quite puzzled. Since it works fine in one direction but not the other, I doubt it's a problem with the disk being read and then written to straight away.

Any ideas?

Thanks in advance,

Chris

34 Replies
Crispy
Contributor

After running vmware-config.pl and setting up bridged networking, should I be able to see vmnet0 as a standard interface using ifconfig?

Do I also need to set up another virtual interface during vmware-config.pl, connected to the lo device of my host system, to get decent host->guest speeds?
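
For reference, a quick way to see which virtual interfaces exist on the host side (a sketch only; the interface names depend on how vmware-config.pl was answered):

    # list every host-side interface, including ones that are down
    /sbin/ifconfig -a

Usually only the host-only and NAT adapters (vmnet1 and vmnet8, when configured) show up here; the bridged vmnet0 switch does not normally appear as a regular host interface.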

Crispy
Contributor

I have managed to solve it myself, despite the lack of any help here so far...

For anyone else with a similar problem:

The problem is caused by TCP Segmentation Offload (TSO) on the Intel e1000 network card I'm using (although the problem occurs with other 1 Gb and 10 Gb cards too). Even though the information I found suggests the relevant bugs were fixed in kernel 2.6.12 (I'm using 2.6.15.4), I got instant results by turning it off.

The other thing I had problems with was HOW to turn it off in Linux. Windows is quite easy: look under the advanced properties of the network card, find the setting called 'Large Send' and set it to none.

On Linux, download a useful tool called 'ethtool' and run 'ethtool -K eth0 tso off', where eth0 is whatever network interface you are using.
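
For anyone following along, the full sequence on the host looks roughly like this (a minimal sketch, assuming the NIC is eth0 and you are running as root; substitute your own interface name):

    # show the current offload settings
    ethtool -k eth0
    # turn TCP segmentation offload off; takes effect immediately
    ethtool -K eth0 tso off
    # confirm the change
    ethtool -k eth0 | grep segmentation

Note that the setting does not survive a reboot, so the ethtool line has to be re-run at boot time; where to put it (rc.local, an interface config script, etc.) depends on the distribution.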

In theory, doing this will (or should) reduce your raw network throughput, although it will greatly increase stability.

My tests actually show it has improved throughput by about 10% on my system...


Jo_Deisenhofer
Contributor

Thank you very much. How did you find out?

This also works for me, though I have a Marvell Yukon card.

Much better, much faster!

Crispy
Contributor

The hard way... looking through every hardware/software problem that causes network throughput issues in both Windows and Linux. I started off assuming it was an MTU problem and worked my way up from there on groups.google.co.uk.

rbroberts
Contributor

Although you listed this under VMware Player, it is not specific to the Player; it happens in VMware Workstation too. I first reported the problem here back in November but gave up on it.

http://www.vmware.com/community/thread.jspa?threadID=26962&tstart=0

Your "fix" works fine for Workstation, too. Thanks for chasing this down....

HappyGod
Contributor

Thanks very much for this answer. You are an absolute champion and have saved me hours of wasted time I'm sure. I'd already wasted at least 4 hours until I found your solution.

Good stuff!

jander
Contributor

The ethtool command fails with the error "Cannot set device tcp segmentation offload settings: Operation not supported". The driver probably does not support such a change. Any suggestions or workarounds?

The host runs Fedora Core 5 with kernel 2.6.16-1.2080_FC5. The NIC is the nVidia Corporation CK8S Ethernet Controller. The guest system is Windows XP Pro.
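
A quick way to confirm which driver the kernel has bound to the card and whether it exposes a TSO setting at all (a sketch; eth0 is an assumption, substitute the actual interface name):

    # show the driver behind the interface (forcedeth, nvnet, ...)
    ethtool -i eth0
    # list the offload features the driver reports
    ethtool -k eth0

If the driver refuses the TSO operation entirely, switching drivers (the nvnet route discussed further down) may be the only option, assuming TSO really is the culprit on this hardware.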

Thanks

lurulf
Contributor

You should try to disable any firewall settings on your host computer.

psyk
Contributor

jander wrote:
"The ethtool command fails with the error 'Cannot set device tcp segmentation offload settings: Operation not supported'. The driver probably does not support such a change. Any suggestions or workarounds? The host uses Fedora Core 5 with kernel 2.6.16-1.2080_FC5. The NIC is the nVidia Corporation CK8S Ethernet Controller. The guest system is Windows XP Pro."

Have a look here:

http://forums.fedoraforum.org/forum/showthread.php?t=105185&highlight=nvnet

chrispitude
Contributor

psyk, crispy,

You guys absolutely rock. This has been my biggest barrier to using VMware Player to run Windows XP as a guest on my Linux host box. I had posted this thread asking for help, but without success. Installing nvnet, disabling forcedeth, and using the settings in the linked thread worked beautifully.
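
For anyone repeating this, the driver swap looks roughly like the following (a sketch only; the nvnet package itself and the exact modprobe config file vary by distribution and kernel, so treat the paths as assumptions, and see the thread psyk linked for building nvnet itself):

    # stop the in-kernel driver from grabbing the NIC at boot
    # (the exact file name under /etc/modprobe.d/ is distro-dependent)
    echo "blacklist forcedeth" >> /etc/modprobe.d/blacklist
    # swap drivers for the running session
    modprobe -r forcedeth
    modprobe nvnet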

- Chris

deadcow
Contributor

Thank you guys!

This should definitely go into the VMware FAQ!

BTW, just to note: I needed to restart the guest (WinXP) for the change to take effect.

Cheers,

Mario

Harri
Contributor

Unfortunately this did not work for me. Ethtool said for forcedeth:

Cannot set device tcp segmentation offload settings: Operation not supported

The problem came up when moving from 2.6.15.7 to 2.6.16.18 :-(. I had to patch VMware with vmware-any-any-update101 to get the modules to build anyway.

chrispitude
Contributor

It looks like I am facing this problem again. I've just upgraded the box to Fedora Core 6 and I don't think there is a way to get nvnet installed under FC6. Once more, any attempt to communicate between my FC6 host and WinXP guest through the network is cripplingly slow. I was hoping that somewhere between the latest kernel in FC6 and the latest VMWare Player v1.02, this would have gotten fixed.

- Chris

chrispitude
Contributor

Hi all,

I looked for the nvnet drivers on nVidia's site, but they don't seem to be there anymore. In fact, nVidia now explicitly recommends and provides the forcedeth driver. For my setup (Linux host, WinXP guest), any attempt to communicate from the guest to the host drags the guest to an unusable crawl.

Given the number of nForce boards out there, this would seem to be a serious problem. I am working around it for now with an old dusty NetGear PCI card I hadn't gotten around to throwing away yet, and now the guest/host networking works great! The nVidia networking on my nForce3 board is 10/100 so it's no big deal, but it would really be unfortunate to lose the integrated gigabit port on all of the nForce4 boards.

VMware, I hope you are listening! :)

- Chris

gpshead
Contributor

Don't blame VMware; blame nVidia for a bad driver.

RDPetruska
Leadership

I've always said: you get what you pay for. AMD may have cheaper CPUs, and nVidia is trying to get into other areas besides graphics cards... but cheaper cost often means cheaper products. I've been burned by crap hardware too many times; I only buy Intel chips and reputable motherboards with known chipsets.

chrispitude
Contributor

The problematic driver is open source. The one nVidia had provided worked...

- Chris

chrispitude
Contributor

As an additional data point, I upgraded to an nForce4 board with integrated gigabit Ethernet. I see the same problem with this chipset as well, which suggests the problem is in the software, not the hardware.

- Chris

mrcoffee11
Contributor

Yes! Thanks. This saved the day.

I'm running an FC6 host and an FC6 guest. The problem was solved immediately when I applied the fix on my host. No reboot needed.

Thanks a lot Crispy for sharing the solution.
