After installing Fusion on my Mac I get very low transfer speeds when copying files from my host computer to another server on the network. Uploads to the server cap at ~9 MB/s, but pulling a file from the server I get ~65 MB/s. This is the only machine on the network with this problem, and the only machine with Fusion (or any other hypervisor) installed. Does anyone know what could be causing this? I'm not sure what info helps, but here is some in case it is needed.
Machine info:
Mac Pro 8-Core @ 2.8 GHz
OS X 10.5.5
ifconfig output:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
ether 00:1f:5b:39:ba:a8
media: <unknown type> status: inactive
supported media: autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
inet6 fe80::21f:5bff:fe39:baa9%en1 prefixlen 64 scopeid 0x5
inet 192.168.2.12 netmask 0xffffff00 broadcast 192.168.2.255
ether 00:1f:5b:39:ba:a9
media: autoselect (1000baseT <full-duplex>) status: active
supported media: autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
lladdr 00:21:e9:ff:fe:c8:62:4a
media: autoselect <full-duplex> status: inactive
supported media: autoselect <full-duplex>
vmnet8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
inet 172.16.95.1 netmask 0xffffff00 broadcast 172.16.95.255
ether 00:50:56:c0:00:08
vmnet1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
inet 192.168.179.1 netmask 0xffffff00 broadcast 192.168.179.255
ether 00:50:56:c0:00:01
en2: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:22:41:d0:38:a1
media: autoselect status: inactive
supported media: none autoselect 10baseT/UTP <half-duplex>
Route table:
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 192.168.2.200 UGSc 145 3 en1
127 localhost UCS 0 0 lo0
localhost localhost UH 1 66 lo0
169.254 link#5 UCS 0 0 en1
172.16.95/24 link#7 UC 1 0 vmnet8
172.16.95.255 link#7 UHLWb 2 42 vmnet8
192.168.2 link#5 UCS 11 0 en1
xserve.edit.oma 0:d:93:9d:de:cd UHLW 27 4521655 en1 1020
192.168.2.11 0:14:51:67:8e:1c UHLW 1 326 en1 228
192.168.2.12 localhost UHS 14 105436 lo0
192.168.2.13 0:14:51:67:8e:21 UHLW 6 5256639 en1 1000
192.168.2.14 0:14:51:11:7f:94 UHLW 4 11025 en1 241
192.168.2.15 0:d:93:58:9f:34 UHLW 1 31489 en1 1000
192.168.2.21 0:14:51:65:ff:48 UHLW 7 6306001 en1 535
192.168.2.22 0:1f:5b:2f:72:b9 UHLW 6 11309593 en1 39
192.168.2.33 0:14:51:67:8d:ea UHLW 7 6572612 en1 227
192.168.2.121 0:c:29:59:47:d6 UHLW 1 25 en1 1192
192.168.2.200 0:1d:7e:43:d0:74 UHLW 146 2167198 en1 1133
192.168.2.255 link#5 UHLWb 2 205 en1
192.168.179 link#8 UC 1 0 vmnet1
192.168.179.255 link#8 UHLWb 2 42 vmnet1
Internet6:
Destination Gateway Flags Netif Expire
localhost link#1 UHL lo0
fe80::%lo0 localhost Uc lo0
localhost link#1 UHL lo0
fe80::%en1 link#5 UC en1
r2.local 0:1f:5b:2f:72:b9 UHLW en1
e2.local 0:1f:5b:39:ba:a9 UHL lo0
ff01:: localhost U lo0
ff02:: localhost UC lo0
ff02:: link#5 UC en1
(Sorry the formatting is so bad)
Try using Bridged mode as a workaround.
I believe the upload slowness is partially fixed in 2.0, but the bug report notes that very fast connections may not see much of an improvement. I think you're near the borderline: you should see a noticeable improvement in 2.0, but not all the way up to matching the download speed.
Sorry, I should have been more clear. I am using Fusion Version 2.0 (116369). The slow transfer speeds are on the host machine, not the guest machine. The guest is in bridged mode.
I would shut down the guest, quit Fusion, and stop the Fusion networking daemons with sudo /Library/Application\ Support/VMware\ Fusion/boot.sh --stop, then check what upload speeds you're getting. If you still see this slowness, I don't think it's related to Fusion.
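A rough sketch of that isolation test. The server address is a placeholder, and the throughput measurement here is deliberately coarse (whole-second resolution); swap the local `cat > /dev/null` for a pipe to your server to test the real network path:

```shell
# First stop Fusion's networking daemons (guest shut down, Fusion quit):
#   sudo "/Library/Application Support/VMware Fusion/boot.sh" --stop

# Push 64 MiB of zeros and time it; reading /dev/zero avoids disk-read bias.
# Replace `cat > /dev/null` with e.g. `ssh user@192.168.2.21 'cat > /dev/null'`
# (placeholder address) to exercise the actual upload path.
bytes=$((64 * 1024 * 1024))
start=$(date +%s)
dd if=/dev/zero bs=1048576 count=64 2>/dev/null | cat > /dev/null
end=$(date +%s)
elapsed=$((end - start)); [ "$elapsed" -eq 0 ] && elapsed=1
echo "$((bytes / elapsed / 1000000)) MB/s"

# Bring Fusion networking back when done:
#   sudo "/Library/Application Support/VMware Fusion/boot.sh" --start
```

If the upload rate is still ~9 MB/s with the daemons stopped, Fusion is off the hook.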
This is quite embarrassing, but the problem is not Fusion, although your steps led me in the right direction to find it.
As a side question though: I have two network interfaces on my Mac. Normally I would link-aggregate them, but that does not appear to be supported in Fusion for bridged networks. So: can I dedicate one NIC to the host and use the other for bridged connections on my virtual machines? Will this give me any better performance than just sharing the host's NIC? I transfer large files (video) most of the day, so any extra performance I can squeeze out of the system helps.
This is quite embarrassing, but the problem is not Fusion, although your steps led me in the right direction to find it.
What was the problem?
As a side-question though: I have two network interfaces on my Mac. Normally I would link aggregate them, but this does not appear to be supported in Fusion for use with bridged networks.
Right, Fusion doesn't currently understand bonded NICs.
My question then is can I dedicate one NIC for my host computer and use the other for bridged connections on my virtual machines?
It's possible to specify which NIC guests use by modifying boot.sh and the .vmx config file. Note the host must still have access to the second NIC (or else Fusion, and by extension the virtual machines, won't be able to use it).
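For illustration only, the guest-side half of that change usually means switching the virtual NIC from automatic bridging to a custom vmnet. I haven't verified the exact boot.sh edit for your Fusion version, and `vmnet2`/`en0` below are placeholders for whatever vmnet and physical interface you actually pair up:

```
# In the VM's .vmx file: use a custom vmnet that boot.sh has been
# configured to bridge onto the second NIC (en0 here).
# "vmnet2" and "en0" are assumptions -- adjust to your setup.
ethernet0.connectionType = "custom"
ethernet0.vnet = "vmnet2"
```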
Will this give me any better performance over just using the host's NIC?
No idea, but I think it might work.
The problem: the volume I was testing with is mounted from my login items, but the login item used an old IP address, which is now on the 100 Mbit subnet. Mounting with the correct IP, I am on the gigabit network.
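That explains the numbers in the original post: a 100 Mbit/s link has a theoretical ceiling of 12.5 MB/s, so ~9 MB/s after TCP and file-sharing overhead is about right, while gigabit allows the ~65 MB/s seen on downloads. The quick arithmetic:

```shell
# Link speed in Mbit/s divided by 8 gives the MB/s ceiling (1 MB = 10^6 bytes)
awk 'BEGIN { printf "%.1f\n", 100/8 }'    # 100 Mbit/s  -> prints 12.5
awk 'BEGIN { printf "%.1f\n", 1000/8 }'   # 1000 Mbit/s -> prints 125.0
```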
Bonded NICs: Do you know if there are future plans to allow this?
I will take a look into modifying the files. Thanks for the help!