I just upgraded a Windows 7 virtual machine to Windows 10, host is OS X, Fusion 8.1.1.
Everything was going great until I reinstalled VMware tools, which I had uninstalled per instructions to get past the video driver error.
Upon restart, it crashed while booting with DRIVER_IRQL_NOT_LESS_OR_EQUAL.
I restarted it and it did get into Windows, but it soon crashed again, and after that it wouldn't start at all. It hit the above error several times, plus an error about corrupted driver pool memory, plus some others.
Finally I tried changing the number of CPU cores from 2 to 1. And that appears to have fixed it: it has booted up and hasn't crashed so far (crossing fingers).
Hi msschmitt,
Welcome to Fusion Community.
I will try to reproduce your issue. Could you provide some more information about your Mac?
1. What are the number of processors and total number of cores in your host machine?
2. What OS X version are you running?
3. Could you try the following steps?
a. Uninstall VMware Tools
b. Set the CPU cores from 1 to 2
c. Install VMware Tools again
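(For reference, if it's easier to change outside the UI: as far as I know, the guest CPU count corresponds to the numvcpus key in the VM's .vmx file — worth verifying against your own .vmx before editing it by hand, with the VM powered off:)

```
numvcpus = "2"
```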
Thanks for your help and post.
Cheers
Host machine has quad-core Intel Core i7 (Haswell). I'm running OS X 10.10.5 Yosemite.
I've noticed that even with Fusion set to one core, starting up Windows 10 is very unstable; it will crash several times in a row before it finally starts. Once it is started, it is fine.
I'm suspecting that the problem is with restarts, as opposed to a cold start.
I'll try the experiment; I just hope I can get it restarted!
Hmm, maybe 2 CPUs was a red herring.
So we should amend this: Windows 10 + VMware Tools = frequent crash on restart (or startup; I'm not sure which).
At least it is a reproducible problem.
Would it help if I sent a crash dump?
Hi msschmitt,
Could you let me know your Number of Processors and Total Number of Cores in host machine?
Your host should have more processors and cores than your guest.
Cheers
As I said, 1 processor, 4 cores.
Hi msschmitt,
I tried using a "1 processor, 4 cores" MacBook Pro to reproduce your issue. No matter whether I set the CPU count to 1, 2, or 4, it works fine with VMware Tools installed.
My Guest version is : SW_DVD9_Win_Pro_10_1511.2_64BIT_English_MLF_X20-99426
My host configuration is shown in the attached screenshots (2 CPU and 4 CPU).
As I said in a previous message, now I think the 2 CPUs was a red herring.
But if it works for you, we don't know what setting is making it crash on restart.
My VM is set to:
I could collect a crash dump but I'm not sure how to do it in Windows 10.
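(Edit: I think I found it — as far as I can tell, the dump type is set under Control Panel > System > Advanced system settings > Startup and Recovery > "Write debugging information", which maps to registry values under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl, roughly:)

```
; Key: HKLM\SYSTEM\CurrentControlSet\Control\CrashControl
; CrashDumpEnabled (REG_DWORD):
;   0 = no dump, 1 = complete memory dump, 2 = kernel memory dump, 3 = small memory dump (minidump)
CrashDumpEnabled = 2
; By default the dump is written to %SystemRoot%\MEMORY.DMP (the DumpFile value)
```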
Does the issue persist if you install a fresh copy of Win10?
(trying to isolate Fusion from your system from your VM)
Trying now.
I configured the test VM to resemble the other one. I see that I have a couple of other non-default settings:
OK, fresh Win 10 Pro x64 install doesn't crash on restart.
We don't know if it is a difference in the VM config (perhaps carried over from some previous Fusion version, I started with Fusion 2), or something in the Windows guest itself.
I do see quite a few differences in the .vmx file but I don't know what is significant.
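For anyone else comparing .vmx files the same way: a .vmx is just `key = "value"` lines, so a quick script can list the keys that differ. This is a rough sketch with hypothetical inline file contents for illustration, not the actual files from my VM:

```python
# Sketch: list keys that differ between two .vmx configs.
# A .vmx file is plain text, one 'key = "value"' setting per line.

def parse_vmx(text):
    """Parse .vmx-style 'key = "value"' lines into a dict of key -> value."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

def diff_vmx(a, b):
    """Return {key: (value_in_a, value_in_b)} for keys whose values differ.

    A value of None means the key is absent from that file.
    """
    sa, sb = parse_vmx(a), parse_vmx(b)
    return {k: (sa.get(k), sb.get(k))
            for k in sorted(sa.keys() | sb.keys())
            if sa.get(k) != sb.get(k)}

# Hypothetical contents standing in for the old and fresh-install .vmx files:
old = '''
ethernet0.virtualDev = "e1000"
vmotion.checkpointFBSize = "4194304"
acpi.smbiosVersion2.7 = "FALSE"
'''
new = '''
ethernet0.virtualDev = "e1000e"
vmotion.checkpointFBSize = "67108864"
'''

for key, (before, after) in diff_vmx(old, new).items():
    print(f"{key}: {before!r} -> {after!r}")
```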
Was the VM originally a physical machine conversion or a clean VM?
Might have some strange driver leftover from a P2V conversion.
The original VM was a clean VM.
If I recall correctly, it was an install to a new virtual disk from the Windows 7 Professional DVD, most likely with Fusion 4. I don't think I used the "easy setup"; I think I did it as a manual configuration.
What I'm running now is a Windows 7 to Windows 10 in-place upgrade.
Hi msschmitt
I will try to reproduce your problem. In the meantime, could you please provide your log files?
Thanks and Regards
Hi msschmitt,
Really appreciate your help and files. We are tracking it now.
Thanks and Regards
I'm thinking that since it is getting BSOD on boot, and appears to be directly connected to the VMware tools drivers, what we need is a kernel dump from one of the crashes.
To review: about 80% of the time, Windows 10 would crash with various errors on boot. Once it got past that, it was generally stable.
That was an inconvenience, but the problem came to a head when Windows decided to install the feature update to version 1607.
The first problem was when Windows rebooted to install the update. It went into a reboot loop: it would reboot, the windows logo would appear, and then a few seconds later it would reboot again. Over and over, forever.
The only solution I could find was to force it into the advanced boot settings, by powering off at the start of the boot 3 times in a row, and then choosing Troubleshoot > Advanced > Go back to the previous build. That put me back to what it was before it tried installing the feature update. But of course there's no stopping Windows 10 once it decides it wants to install an update...
I removed the VMware Tools and let it try the feature update install. This time it could reboot and do the install.
My theory is that the reboot loop is at the point where Windows was trying to change the video mode; for some reason it couldn't do that with the VMware Tools installed. I don't know if this has anything to do with the other problem, or if it affects anyone else.
So, feature update problem solved, right? Wrong. Windows went through the entire hours-long process of performing the update and got to the very last reboot: the one that would log in as the new build. And remember, boots into Windows 10 are only successful about 20% of the time. So sure enough, that boot got a BSOD.
But rather than just trying again, it appears that when Windows is doing a feature update, it treats a BSOD as a showstopper issue. So it immediately backed out the entire update, putting me back to square one. Which means the BSOD problem must be fixed.
So thinking it might be something in this VM's VMX file, I compared it to the one created in the 7/25/2016 test, and guessed at which of the 22 differences could be significant.
First I tried removing:
acpi.mouseVMW0003 = "FALSE"
acpi.smbiosVersion2.7 = "FALSE"
Result: DRIVER_CORRUPTED_EXPOOL
Then I tried changing ethernet0.virtualDev = "e1000" to "e1000e".
Result: DRIVER_CORRUPTED_EXPOOL
Then I tried changing vmotion.checkpointFBSize = "4194304" (4 MB) to "67108864" (64 MB).
Result: no BSOD! Nor has there been any boot crash since then!
Except this doesn't make any sense, because as it turns out, Fusion sets this value itself each time the VM is started. So my change doesn't stick; it's back to what it was before.
So what's going on here? I can think of several possibilities:
And there's a possibility that it wasn't fixed directly by the VMX changes but as a side effect. Right now if I compare the current VMX to a backup from before I meddled with it, I see 4 differences: the acpi and ethernet0.virtualDev changes above, plus two more:
If what I did had (or has) not fixed the BSODs, next on my list to try was:
I'm crossing my fingers that the instability problem is in fact fixed, even if it isn't clear what fixed it.