VMware Communities
DeltaTango11
Contributor

Running X86 Windows in VMWare Fusion in Macbook Pro M1

Hi Guys,

How can I run x86 Windows (7 or 10) on my MacBook Pro M1 with VMware Fusion? I use my Windows VMs for malware analysis, and since almost all malware targets x86 systems, the sandbox VM needs to be x86. Any help would be appreciated.


Thanks!

Reply
0 Kudos
35 Replies
scott28tt
VMware Employee

You can’t. (EDIT: Using only VMware software)

There is a Tech Preview version of Fusion for M1 Macs, but it doesn’t offer emulation.

Windows for ARM is not supported, never mind Windows for x86.

See here for details: https://communities.vmware.com/t5/Fusion-for-Apple-Silicon-Tech/ct-p/3022

Oh, and you should expect a moderator to move your thread to the area for the Tech Preview too, now that I have reported it, since you’re not asking anything about a vSphere security advisory.


----------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
Reply
0 Kudos
Technogeezer
Immortal

Well, I wouldn’t say you can’t.

You definitely cannot virtualize and run an x86 operating system on either the Tech Preview or Parallels running on an Apple Silicon (M1) Mac. If you want to do that, you will have to look at something that emulates an Intel-architecture processor, such as QEMU or its friendlier derivative UTM. Just don't expect the performance or features to be on a par with either physical hardware or commercial virtualization solutions. Might be OK for what you want, though.

- Paul (Technogeezer)
Editor of the Unofficial Fusion Companion Guides
Reply
0 Kudos
ColoradoMarmot
Champion

With the half-implemented options, you're going to have issues. It won't be a like-for-like environment, won't have things like VMware Tools, and won't be anywhere near as performant or stable, or even a full x86 implementation, as you probably need for those use cases. You really do need an Intel machine.

Reply
0 Kudos
Technogeezer
Immortal

Take @ColoradoMarmot's comment seriously. If you need to get serious work done with an Intel operating system, use an Intel CPU.

As a project, I'm in the middle of trying to get Windows 10 x64 up and running on UTM on an M1 Mac.

Yes it installs (not as easily as I'd like, but it does install). Yes it runs. I can change the screen resolution. I can access a folder on my host Mac from Windows.

But wow, it's slow as molasses. It does not run anywhere close to native M1 performance. Period. I did not starve the UTM VMs or the Mac host for CPU or memory. It's not disk-bound either.

And it's indeed a science project.  I've tried every suggested emulation tuning trick posted on the web, and yes I've installed all the QEMU drivers and SPICE tools. That improved the speed, but not to where I consider it usable.

My ancient 12-year-old 2-core Dell Core i7 laptop running Windows 10 eats the emulated Windows 10 x64 on the M1 for lunch.

File this whole experiment under "yes, you can do it, but really, should you if you're serious?" I think I can safely say that x86 emulation on ARM will leave you sorely disappointed.

As a side note, this exercise highlights why nobody should expect x86 emulation to be built into virtualization products so that you can continue to run those x86_64 operating systems.  It simply doesn't perform.


- Paul (Technogeezer)
Editor of the Unofficial Fusion Companion Guides
Reply
0 Kudos
Romain_Petges
Contributor

Maybe you could try whether the built-in x64 emulation in Windows 11 on ARM works for your malware analysis.

Reply
0 Kudos
Mikero
Community Manager

I don't get this use case... 

If you're doing malware analysis, and you're not on the same architecture as the exploit or analysis tools, how can you guarantee their accuracy?

Let's say I write an exploit that takes advantage of a known memory location of an app or service and can escalate privilege or something. 

Doesn't the address of that memory pointer change when the chipset is emulated?

Wouldn't you just get false results? Or at the very least, results that you couldn't 100% trust?

And if you're not getting 100% trustworthy results, what's the point of the test at all?

-
Michael Roy - Product Marketing Engineer: VCF
Reply
0 Kudos
Technogeezer
Immortal

Flaws in the operating system that grant elevated privileges are one thing. Flaws that bank on exploiting an architectural weakness are another. And unless the emulation is accurate down to the microarchitecture level (thinking of things like Spectre here), you may not get what you would on a physical CPU.

- Paul (Technogeezer)
Editor of the Unofficial Fusion Companion Guides
Reply
0 Kudos
ColoradoMarmot
Champion

Exactly my thoughts in the earlier post. Malware analysis is one of those things that needs to be as close to real as possible. A lot of really sophisticated malware even detects that it's running in a VM and alters its behavior... and that's on native hardware, let alone emulated.

Maybe for a one-off with relatively unsophisticated malware it'd work, but for real corporate/enterprise-level analysis, the double hop really isn't going to work out practically. Just like pouring the fake butter from a movie theater into a diesel engine: you can drive, and your exhaust may smell like popcorn, but that doesn't mean it's actually popping kernels, let alone good for the car (is that enough mixed metaphors?).

And all that sets aside the horrible performance and stability issues, the faked-out device drivers, and the licensing issues.


Reply
0 Kudos
Smith_67
Contributor

I understand that right now it's not possible to run a usable Intel-based VM on a Mac with an M1 CPU (much to my dismay), but I'd like to know if that is likely to change at some point in the future. Is it something that could happen if software was developed to allow it, or is it something that can never happen because of the differences in hardware?

Reply
0 Kudos
dempson
Hot Shot


@Smith_67 wrote:

I understand that right now it's not possible to run a usable Intel-based VM on a Mac with an M1 CPU (much to my dismay), but I'd like to know if that is likely to change at some point in the future. Is it something that could happen if software was developed to allow it, or is it something that can never happen because of the differences in hardware?


Virtualisation of x86 on ARM is not possible because by definition virtualisation requires the same processor architecture on the host and guest: most CPU instructions in the guest OS and applications are executed directly by the host processor.

If the guest and host have different architectures (as with x86 vs ARM), the closest you can get is if the host implements a software emulation of the full instruction set of the guest CPU, interpreting each instruction as it is executed. This is a lot slower than virtualisation.
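The interpretation overhead can be sketched with a toy emulator loop. Everything here is invented for illustration (a made-up two-register ISA, not real x86): the point is that every guest instruction costs a full fetch/decode/dispatch cycle of many host operations, whereas virtualisation runs guest instructions directly on the CPU.

```python
# Toy instruction-set emulator: a minimal fetch/decode/execute loop.
# The ISA ("load", "add", "halt") is invented purely for illustration.

def emulate(program):
    regs = {"a": 0, "b": 0}        # toy guest registers
    pc = 0                         # guest program counter
    while pc < len(program):
        op, *args = program[pc]    # fetch + decode one guest instruction
        if op == "load":           # dispatch: one host branch per opcode
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] += regs[args[1]]
        elif op == "halt":
            break
        pc += 1
    return regs

result = emulate([("load", "a", 2), ("load", "b", 3),
                  ("add", "a", "b"), ("halt",)])
# result["a"] is now 5 - but each guest "add" cost dozens of host
# instructions. Real emulators like QEMU amortise this with dynamic
# translation, yet the overhead never disappears entirely.
```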

It boils down to a question of how fast the software in the VM is expected to run. An Apple Silicon Mac can probably emulate an x86 PC running an OS from the early 2000s at a speed resembling the performance of a real PC of that era. If you want to run a modern OS and applications in the guest (say anything from the last decade) then its performance will be a lot slower than a real PC. Anything time critical will not work, and other software will be painfully slow.

The emulation may also be incomplete, e.g. it may not cover some advanced features of the processor, or some other components in the computer, so applications which depend on those won't work.

VMware have stated they are not intending to provide full x86 emulation for VMware Fusion on Apple Silicon Macs: it would require a lot of development effort and there is not likely to be a sufficient market of people willing to pay for such a feature. (Especially once you factor in the performance limitations.) Parallels and VirtualBox don't seem likely to do this either.

There is at least one existing open source emulation product (UTM) but it appears to be missing functionality and is not stable enough for serious use.

Running an ARM guest OS with x86-to-ARM code translation inside it seems a more viable solution for running arbitrary x86 applications, as this is a simpler problem to solve. Microsoft already provides an implementation of this for ARM Windows 11.

That won't help for use cases such as running older versions of macOS and Windows. For those (my main reason for using VMware Fusion since version 1), I've kept an Intel Mac running VMware Fusion, but it is no longer my primary Mac.

ColoradoMarmot
Champion

I've tried using QEMU, and the performance is really bad (and it's not stable at all).  Crossover is good performance-wise, but it's not a full guest OS, so not everything runs.

That said, Windows 11 is running quite well, and while that internal emulation isn't perfect, the only thing I've found that doesn't work right are a handful of games (not just performance issues - simply won't run).

Just as you can't pull an EV into a gas station for a quick fill-up, and to echo the earlier comment, don't ever expect this to change. If you really need to run x86 guests, run an x86 host.


Reply
0 Kudos
Technogeezer
Immortal

I'll echo what @dempson and @ColoradoMarmot have said. I'll never say never, but there would have to be some kind of technological breakthrough because current emulation technologies just don't overcome the performance issue.

And before you say "well Rosetta does it", remember

  • Rosetta implements only the subset of the Intel instruction set that user-mode applications use. It does not implement the full Intel instruction set, with the privileged instructions and other low-level architectural features that an Intel operating system would need in order to run. That's a much harder problem to solve.
  • Rosetta cheats a bit. Apple built some Apple Silicon hardware features that make memory access more efficient for Rosetta-translated applications.
  • Rosetta gets most of its performance by performing a one-time translation of Intel code to ARM code. Most applications are translated from Intel to ARM only once; from then on it's ARM code that is running. That's not an easy thing to do for an operating system that has a lot of moving parts under the hood, moving in unpredictable directions.
  • All of the macOS frameworks and system libraries (for example, all of the graphical user interface) run natively in ARM code, regardless of whether the application has been translated via Rosetta from an Intel binary or is a native ARM binary.
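The "translate once, then run the translated code" idea can be sketched as a translation cache. Everything here is invented for illustration: the guest "instructions" are made-up names, and a Python closure stands in for generated ARM machine code.

```python
# Minimal sketch of translate-once-run-many binary translation.
# A real translator emits host machine code; here a closure stands in.

translation_cache = {}   # guest code block -> "translated" host code

def translate(block):
    """Pretend-compile a tuple of guest instructions into host code."""
    ops = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}  # toy ISA
    steps = [ops[insn] for insn in block]
    def native(x):                 # the "generated host code"
        for step in steps:
            x = step(x)
        return x
    return native

def run(block, value):
    code = translation_cache.get(block)
    if code is None:               # translate only on first execution
        code = translation_cache[block] = translate(block)
    return code(value)             # later runs skip translation entirely

first = run(("inc", "dbl"), 3)    # translation happens here; result is 8
second = run(("inc", "dbl"), 10)  # cache hit, no re-translation; result is 22
```

An operating system kernel is much harder to treat this way than an application, since it self-modifies, takes interrupts, and executes privileged code that has no user-mode equivalent.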
- Paul (Technogeezer)
Editor of the Unofficial Fusion Companion Guides
Reply
0 Kudos
dempson
Hot Shot


@Technogeezer wrote:
  • All of the macOS frameworks and system libraries (for example, all of the graphical user interface) run natively in ARM code, regardless of whether the application has been translated via Rosetta from an Intel binary or is a native ARM binary.

I'm pretty sure the above point is not correct - it certainly wasn't how the original Rosetta worked for PowerPC to x86. The developer documentation for the x86 to ARM version uses the same description, e.g.

https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment

Note this bit:

The system prevents you from mixing arm64 code and x86_64 code in the same process. Rosetta translation applies to an entire process, including all code modules that the process loads dynamically.

That means all system library/framework code which is called from an x86 process must also be x86, translated to ARM for execution (at least the system code only needs to be translated once per system update due to caching).

Other parts of the OS which run outside the application process (e.g. the entire kernel and kernel level drivers) are ARM, as can be separate processes invoked from the translated code via methods such as XPC. Drawing code for the GUI is probably x86 translated code, but the rendering of windows to the display is done by WindowServer, which is a separate process and will be native ARM.

As for the question of relying on Rosetta or on Intel Mac support, consider Apple's history in these transitions.

The x86 code in the system is needed both for Rosetta on Apple Silicon Macs and for macOS booting on Intel Macs. Once we reach the point that a future macOS version drops support for the last Intel Mac models, Apple could drop Rosetta 2 in the same version to save space and development/testing work for the x86 code. If the last Intel Macs are discontinued later this year, the Intel Mac cutoff point might be as little as three years away. Security updates let you stretch either cutoff point another two years.

For the previous PowerPC to Intel transition:

  • August 2006: last PowerMac G5 discontinued
  • November 2006: last Xserve G5 discontinued (final PowerPC model)
  • August 2009: Snow Leopard 10.6 introduced, dropped support for the last PowerPC Macs
  • July 2011: Lion 10.7 introduced, dropped Rosetta
  • September 2013: final security update for Snow Leopard (and systems including Rosetta)

There was a gap of three years between the last mainstream PowerPC model being discontinued and a new macOS version not running on that model (less than three years for the Xserve). I'd regard that as a minimum, given AppleCare and hardware support timeframes.

There was a gap of one version and almost two years between PowerPC hardware support and Rosetta being dropped, but we can't assume Apple will follow a similar pattern this time.

Reply
0 Kudos
Smith_67
Contributor

Thanks for the explanation. I ordered a 2019 Intel based MacBook Pro yesterday as I figured this was going to be the answer. I now also have 2 MacBook Pro machines, one with M1 Max and one with Intel core i7. Not ideal, or a cheap solution, but it has solved my problem.

Reply
0 Kudos
SvenGus
Expert

BTW, I still remember Connectix/Microsoft Virtual PC for Mac, in the PowerPC days (x86 guest emulation on a PPC host): certainly not a supersonic Concorde, nor an A380/B747, but IIRC it was almost decent with the productivity applications most people use. Today, with M-series processors that should be far more evolved than PPC, wouldn't it be possible to at least equal the old Virtual PC for Mac performance? UTM, while evolving quite positively, of course depends on QEMU (still not so Mac-friendly) - but Oracle, Parallels and VMware could perhaps develop their own, more powerful solutions, if only it were economically viable (which for some reason doesn't seem to be the case, and thus we return to virtualisation only)? Wishful thinking…

Reply
0 Kudos
Technogeezer
Immortal


@dempson wrote:

@Technogeezer wrote:
  • All of the macOS frameworks and system libraries (for example, all of the graphical user interface) run natively in ARM code, regardless of whether the application has been translated via Rosetta from an Intel binary or is a native ARM binary.

I'm pretty sure the above point is not correct - it certainly wasn't how the original Rosetta worked for PowerPC to x86. The developer documentation for the x86 to ARM version uses the same description, e.g.

https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment

You could be right, although I thought I had read somewhere about the frameworks running native. In some sense, having the framework code (which should be shared libraries, right?) in one architecture is pretty attractive. Maybe Apple learned something between Rosetta and Rosetta 2?

That raises another interesting question: if the original Rosetta required PPC frameworks yet Rosetta 2 uses the native ARM objects, would that mean the engineering effort to maintain Rosetta 2 is not as great, and therefore might that hint at Rosetta 2 being around longer (since it doesn't cost as much to maintain)?


- Paul (Technogeezer)
Editor of the Unofficial Fusion Companion Guides
Reply
0 Kudos
ColoradoMarmot
Champion

Operating systems are vastly more complex, and tied far more tightly to hardware, today than back then. They're also a much heavier load - XP was < 600 MB, for example (and Windows 95 could fit on floppies).

For example, we've seen that the old N-1 CPU core guidance is no longer really sufficient for the last couple of macOS versions - it's clear that N-2 is a much more realistic limit.

There's unlikely to be a real market for desktop OS-level emulation - the market is moving towards cloud hosting as a preferred solution.

Reply
0 Kudos
dempson
Hot Shot

@Smith_67 wrote:

Thanks for the explanation. I ordered a 2019 Intel based MacBook Pro yesterday as I figured this was going to be the answer. I now also have 2 MacBook Pro machines, one with M1 Max and one with Intel core i7. Not ideal, or a cheap solution, but it has solved my problem.


That was effectively my solution as well, but only by accident - I bought a 2019 16-inch MacBook Pro when it was released (planning for it to be my long-term replacement for the 2013 15-inch model), then Apple announced the Apple Silicon transition several months later. I got a 16-inch M1 Pro MacBook Pro after they were introduced, which will be my long-term computer, with the Intel 16-inch model being relegated to secondary tasks, including running x86 VMs.

Reply
0 Kudos
dempson
Hot Shot


@Technogeezer wrote:

@dempson wrote:

I'm pretty sure the above point is not correct - it certainly wasn't how the original Rosetta worked for PowerPC to x86. The developer documentation for the x86 to ARM version uses the same description, e.g.

https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment

You could be right, although I thought I had read somewhere about the frameworks running native. In some sense, having the framework code (which should be shared libraries, right?) in one architecture is pretty attractive. Maybe Apple learned something between Rosetta and Rosetta 2?

The fundamental issue with the PowerPC-to-Intel transition was different calling conventions between the two processor architectures. The translation can't rewrite the logic of the code; it just reimplements the instructions and remaps the registers. This means PowerPC-to-Intel translated functions had different rules from Intel-native functions, rendering them unable to call each other.

It wouldn't surprise me if the same issue arises with Intel to ARM, as again we are dealing with CISC vs RISC instruction sets, with one architecture having a lot more registers than the other.

Intel-to-ARM translation may be easier than PowerPC-to-Intel and perform better because it is going from an architecture with "few" registers to one with "many" registers, allowing all the Intel registers to be held in ARM registers, whereas PowerPC-to-Intel translation would have needed some memory-based method to store all the PowerPC registers.
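That register-count argument can be sketched numerically. This is a deliberately simplified mapper (real translators also reserve host registers for their own bookkeeping, and the register names are just labels):

```python
# Sketch: pinning guest registers to host registers during translation.
# x86-64 has 16 general-purpose registers and AArch64 has 31, so every
# guest register gets a dedicated host register. 32-bit x86 has only 8,
# so a PowerPC-to-Intel translator (32 guest GPRs) had to spill most
# guest registers to memory, making every access slower.

def map_registers(guest_regs, host_regs):
    """Greedily pin guest registers to host registers; overflow spills."""
    mapping, spilled = {}, []
    for i, reg in enumerate(guest_regs):
        if i < len(host_regs):
            mapping[reg] = host_regs[i]   # lives in a fast host register
        else:
            spilled.append(reg)           # must live in memory instead
    return mapping, spilled

x86_64 = [f"r{i}" for i in range(16)]
aarch64 = [f"x{i}" for i in range(31)]
_, spilled_x86 = map_registers(x86_64, aarch64)
# spilled_x86 == [] : every x86-64 register fits in an ARM register

ppc = [f"r{i}" for i in range(32)]
x86_32 = [f"e{i}" for i in range(8)]
_, spilled_ppc = map_registers(ppc, x86_32)
# len(spilled_ppc) == 24 : most PowerPC registers forced into memory
```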

Reply
0 Kudos