vaxman
Contributor

Fusion 13 Intel on ARM and Network Performance on Monterey and Ventura

Despite what is being said on these forums, it is possible to precompile Intel binaries to run inside VMs on Apple Silicon with the (now mandatory) macOS built-in Hypervisor framework. This means VMware could theoretically create some sort of converter utility, though it would probably need to rebuild the source VM into a special target rather than allowing it to be passed back and forth between Intel and ARM64 hosts. See also:

https://developer.apple.com/documentation/virtualization/running_intel_binaries_in_linux_vms_with_ro...

But it might not matter at all, because the network performance just isn't there in Fusion 13, and network performance is fundamental to almost every task one would resort to a VM for! As a test, I dedicated a wired Gigabit Ethernet adapter in the host Intel-based Mac Pro 6,1 to "Fusion Bridge". (If I first configure it with an IP address in macOS, it can sustain 970 Mbps upload and download in Speedtest on the Intel Mac host.) If I instead let it be accessed exclusively by Fusion 13's "Fusion Bridge" (which in turn uses Apple's vmnet.framework to provision a vmnet with Apple's bridged-networking entitlement, or at least root permission, which obviates the need for said entitlement), then whether or not the adapter has an IP address configured, its performance drops to about 212 Mbps upload and 65 Mbps download under the best network conditions. (That's mega-bits, not mega-bytes!)

How does one produce software so bad that performance drops from 970 Mbps to 212 or 65 Mbps? Granted, this is likely Apple's fault, but who really knows. Maybe the Fusion developers aren't exchanging a full window of packets per system call (and/or are using bad algorithms to determine window size), resulting in excessive overhead and kernel-mode trips. We can only speculate, and the issue can only be solved by a company with a contract with Apple to use the bridged entitlement, meaning VMware. About the only thing I can do is try the same testing under Parallels (slim chance it will perform better; it may suffer the same issue as Fusion, or the problem may be down inside Apple's vmnet.framework).
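That window-per-syscall speculation fits on the back of an envelope. The 50 µs per-call figure below is purely an illustrative assumption, not a measurement of vmnet.framework, but it shows how strongly the throughput ceiling depends on amortizing per-call overhead across a batch of packets:

```python
# Back-of-envelope throughput ceiling when every trip into the kernel
# (or hypervisor) costs a fixed amount, amortized over a batch of packets.
# The per-call overhead figure is an assumption for illustration only.

MTU_BYTES = 1500             # standard Ethernet payload
PER_CALL_OVERHEAD_S = 50e-6  # assumed cost of one syscall/VM-exit round trip

def ceiling_mbps(packets_per_call: int) -> float:
    """Upper bound in Mbps if each call moves `packets_per_call` packets."""
    calls_per_sec = 1.0 / PER_CALL_OVERHEAD_S
    bits_per_call = packets_per_call * MTU_BYTES * 8
    return calls_per_sec * bits_per_call / 1e6

print(f"1 packet per call:   {ceiling_mbps(1):.0f} Mbps")   # 240 Mbps
print(f"32 packets per call: {ceiling_mbps(32):.0f} Mbps")  # 7680 Mbps
```

With one packet per call, the ceiling lands in the same ballpark as the ~212 Mbps observed, while even modest batching blows past gigabit, so the speculation is at least plausible arithmetic-wise.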

6 Replies

That's for applications inside a Linux guest, not an entire operating system. Very different requirements and capabilities (it's essentially bridging Rosetta into Linux guests), and not at all suited for a virtualization product like Fusion.

The performance and stability of the network stack dropped substantially when Fusion moved from their own stack to Apple's. Parallels still uses kexts and probably its own network stack (though that advantage is going to disappear, likely in October, when I believe Apple will finally deprecate them).

I've been a Fusion user/supporter/fan since before 1.0, but I don't have blinders on. Fusion on ARM seemed to be somewhat in limbo last year. No insider knowledge, but I suspect it's a combination of distractions from Broadcom, pandemic staffing, Microsoft Windows licensing debates (aka their largest market), and the use of shared code (especially guest tools) between ESXi, Workstation, and Fusion. If that last guess is correct, it would explain a lot, as neither the ESXi cash cow nor Workstation has an ARM release yet. ESXi for ARM is in beta, so if that becomes real, we should see a rapid catch-up.

Parallels bypassed all those things because they had no choice: the bulk of their business is Windows on Mac, and without that, it's game over. They've got a head start, but honestly that's been the case for years. Fusion was the rock-solid, stable option versus the aggressive (and less configurable/more annoying) competition. I can tell you, using both for Linux workloads on M1, that while Parallels has the more seamless install, Fusion is hands-down more stable and scalable (as well as configurable). One big example: Parallels appears to use Electron, so when a guest hangs, Slack, 1Password, and every other Electron app on the machine hang with it (including all other Parallels guests). Aside: that's one of the (many) reasons I detest Electron apps. More to the point, I've yet to find how many guests I need to run to really impact the host with Fusion (I've run as many as 8), but it's definitely more than Parallels.

That's why I'm hanging in there.  I have hope that we'll get a release in the next few months that closes some of the UX gap.  I'd really like to drop back down to a single hypervisor, and Fusion is my preference hands-down.

vaxman
Contributor

Hi, some notes -


@ColoradoMarmot wrote:

That's for applications inside a linux guest - not an entire operating system.


VMware or its competitors on Apple Silicon could create a VM-compiler: instead of picking an ISO and then entering the operating system (after it has booted) to select applications, the user would pick a distribution (Debian, Arch, Fedora, etc.) and then pick the applications (KDE, Apache, MySQL, etc.) to run in the distribution (before it has booted). From there, the VM-compiler would likely use container images to quickly assemble the desired configuration for ARM and pre-compile any Intel apps that have no ARM port yet to run in it. I hope it is VMware that gets out ahead of this before Docker or someone else comes up with it. Whoever puts this together had bloody well better use digitally signed, malware-screened containers, though.
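The pick-a-distro, pick-your-apps assembly step could be sketched roughly like this. Everything here is hypothetical (the base-image names, the install commands, the very idea of emitting a Containerfile); it's only meant to show how thin the "compiler" front end could be when container images do the heavy lifting:

```python
# Hypothetical sketch of the "VM-compiler" front end: the user picks a
# distribution and a list of applications up front, and the tool emits a
# container build file that assembles that configuration before first boot.

BASE_IMAGES = {
    "debian": "debian:bookworm",
    "fedora": "fedora:latest",
}
INSTALL_CMDS = {
    "debian": "apt-get update && apt-get install -y",
    "fedora": "dnf install -y",
}

def build_containerfile(distro: str, packages: list[str]) -> str:
    """Emit a Containerfile assembling the chosen distro plus packages."""
    return "\n".join([
        f"FROM {BASE_IMAGES[distro]}",
        f"RUN {INSTALL_CMDS[distro]} {' '.join(packages)}",
    ])

print(build_containerfile("debian", ["apache2", "mariadb-server"]))
```

A real tool would also have to sign and malware-screen the result and arrange the Rosetta-style translation for the Intel-only packages, which is where all the actual work lives.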

 


@ColoradoMarmot wrote:

I've been a Fusion user/supporter/fan since before 1.0, but don't have blinders on.


I've run VMware commercially since their pre-1.0 Intel Workstation days and migrated to Fusion from Connectix VirtualPC because it could trade VMXes with the servers. VMware has always had serious technical and security issues (my favorite was ESXi falling out of rev with Apple BSP firmware on the Mac Pro 6,1, turning it into more of a dumpster fire than a trash can, haha), but they had the best hypervisor for a loooong time. I have no allegiance to any of these tech companies, but also no desire to encourage happy hipsters to take red pills.

 

@ColoradoMarmot wrote:

Parallels bypassed all those things because they had no choice

Concerning Parallels 18 on Monterey/Intel as a possible solution: that's a big Nope. No KEXTs are loaded by Parallels 18 on a fresh install of Monterey, and it also does not seem to use vmnet.framework, which means, intended or not, it no longer appears to offer Bridged networking. Its Shared Networking performance, though, is superb. That said, the EULA and Privacy Policy of Parallels 18 essentially allow them to scrape anything they want off your machine, anything your machine is connected to, or even machines you might remotely log in to that aren't connected to it! They make a BFD of their Trustpilot rating (which is only 80%) and the millions of people who use their app (I have visions of a massive data lake at Amazon with all of their private info in it!), all of which are red flags. I had to sit there in Murus terminating outbound connections to keep my stuff from floating away during the hours and hours I spent reaching the stated conclusion.

 

Back to the performance problem with Fusion 13 on Monterey/Intel: I spun up a Windows 11/Intel VM and saw that performance on the Bridged network was really bad (like what I was seeing in Linux VMs), but it was also obvious that the VMware Tools that shipped with Windows 11 were not working. After updating (using Fusion 13's Reinstall VMware Tools), I started seeing download speeds of 70-200 Mbps and upload speeds of 800-970 Mbps! I repeated the tests, reconfirmed all the network settings, hardware, etc., and reconfirmed the speeds. So this clearly points to the major network performance problem being in the Linux VMs (across different distributions, under both e1000 and vmxnet3 drivers).

For now I'm assuming that the open-vm-tools package included in at least Debian 11 Bullseye/Intel is not fully compatible with Fusion 13/Intel on Monterey 12.6.2 using e1000 or vmxnet3, with or without IOMMU enabled and with or without side-channel mitigations enabled. Nowadays the device drivers for the funky virtual devices like vmxnet3 are actually part of the Linux kernel, and open-vm-tools is just userland mojo. But clearly something is up, since Windows 11 Home with VMware Tools from Fusion 13 is dramatically faster than Linux VMs running 6-series or 3-series kernels. Got to find my hammer.

Technogeezer
Champion


@vaxman wrote:

Hi, some notes -


@ColoradoMarmot wrote:

That's for applications inside a linux guest - not an entire operating system.


VMware or its competitors on Apple Silicon could create a VM-compiler: instead of picking an ISO and then entering the operating system (after it has booted) to select applications, the user would pick a distribution (Debian, Arch, Fedora, etc.) and then pick the applications (KDE, Apache, MySQL, etc.) to run in the distribution (before it has booted). From there, the VM-compiler would likely use container images to quickly assemble the desired configuration for ARM and pre-compile any Intel apps that have no ARM port yet to run in it. I hope it is VMware that gets out ahead of this before Docker or someone else comes up with it. Whoever puts this together had bloody well better use digitally signed, malware-screened containers, though.

 


Containers might be the most viable avenue for this kind of compilation. Pre-compilation of user-mode applications is a much better-defined domain than dealing with operating systems that dynamically load code and with all the hardware-architecture intricacies.

Apple is already down this path, IMO, with the extension of Rosetta into Linux virtual machines running under the Virtualization framework. Now if we could only get VMware off their high horse to support virtio virtual devices like every other virtualization engine seems to do. But no, compatibility with ESXi and Workstation is way more important, and things that could really be useful to Mac users are pushed aside.

vaxman
Contributor

Apple's docs still say we can have KEXTs, but they made it quite painful (this is in addition to the user having to authorize the initial load of the kext after boot). They deprecated most of the networking kernel APIs, though, so that's not going to work out. I still have some hope after seeing a Win11/Intel VM pull 200+/900+ Mbps down/up (on a 980/980 down/up wired connection) using Bridged networking on Monterey 12.6.2 with no KEXTs, but I'm wasting my life away trying different configs to replicate that performance in a Linux/Intel VM.

In macOS 11 and later, the loading of a kext requires the user to take actions outside of your installation package. When loading a kext, tell the user to perform the following one-time setup on Apple silicon:

  1. Reboot your Mac with Apple silicon into Recovery mode.

  2. Set the security level to Reduced security.

  3. Allow the loading of third-party kexts.

  4. Reboot back to macOS.

Technogeezer
Champion

Not only has Apple made kexts harder to load by forcing a downgrade to Reduced Security, but the dynamic loading and unloading of kexts also seems to be a thing of the past in macOS 11 and beyond, according to Apple's own documentation:

"In macOS 11 or later, if third-party kernel extensions (kexts) are enabled, they can’t be loaded into the kernel on demand. They require the user’s approval and restarting of the macOS to load the changes into the kernel, and they also require that the secure boot be configured to Reduced Security on a Mac with Apple silicon."

What Fusion used to do, loading and unloading its kexts when Fusion started, would have to be totally rethought. And those kexts would be running whether you were using Fusion or not.

They obviously do not want developers using kexts any more, especially on Apple Silicon. I am anticipating the day they announce that kexts are no longer supported.

vaxman
Contributor

Apple still hasn't come through with any APIs to let devs write filesystems, though. They know that's going to kill performance on their machines relative to Linux and Windows (and introduce security issues)... still, that might not stop them from shutting off kernel extensions altogether. Now that they're really embracing Linux internally, they want all of those use cases off the Mac, because those use cases constrain Apple's ability to put out high-$ services that are critical to making their numbers on Wall Street (ever notice newsD running on your Mac servers? roflmao) and because they indirectly increase hardware support costs (e.g., a dude walks into an Apple Store with an under-warranty MacBook that won't boot because it was designed without a cooling system that can throttle fast enough to quiet kernel-mode I/O doing things Apple never expected in the lab). You know, a few years back guys were velcroing Raspberry Pis to the back cover of their iPads, and I expect people will be doing something similar with Macs before too long.
