Hi!
I can't start my Debian 11 VM with the 5.10.0-13-arm64 Linux kernel; I have to boot 5.10.0-11-arm64 instead. Does anyone know how to fix this?
Maybe the latest kernel needs to load a specific VMware module, I don't know...
When I try to boot the 5.10.0-13-arm64 kernel, I get stuck here:
Thx!
Which VMware product are you using?
I use VMware Fusion (up-to-date)
The Tech Preview version of Fusion on an M1 Mac?
Yes, it's this one.
Thread reported so moderators know it should be moved to the area for the Fusion Tech Preview.
Yes, I just ran into this problem too; I have to boot into older versions of the kernel.
I read in another post here that a recent kernel security update has caused our VMware virtualization to hang, and that we need to wait for VMware to address the "bug".
So it looks like manually booting into old kernel versions is the way to go for now, assuming I was reading good info. 😉
Thx for the info 🙂
If you know what kernel security feature prevents booting the new kernel version, I would be interested. 🙂
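For anyone who needs the stop-gap, pinning the older kernel as the GRUB default on Debian looks roughly like this. The exact submenu/entry title below is an assumption; list the titles on your own system first and substitute them:

```shell
# List the installed kernel menu entries (titles vary by release):
#   awk -F"'" '/menuentry /{print $2}' /boot/grub/grub.cfg

# /etc/default/grub -- the "submenu>entry" title below is an assumption;
# substitute the exact strings printed by the awk command above
GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 5.10.0-11-arm64"

# Regenerate the config so the change takes effect:
#   sudo update-grub
```

That saves having to pick the old kernel by hand at every boot.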
The PM posted that this has been escalated as an issue. Terrible timing, coming a week after the update.
> what kernel security feature prevents booting the new kernel version
There's a pointer in the forum threads; if you can't find it, somebody can repost it, time permitting.
@Mikero is that failure to release the firmware fb video device the root cause behind all of this, or are Spectre fixes to blame? I'm hearing some mixed messages on that one...
What about this one? https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008497
I have a feeling this is the same problem as the Ubuntu issue reported here:
From reading that, it looks like the solution is both a Linux kernel patch and an update to VMware, both of which have apparently been done.
Now it's a matter of the distributions putting out the new kernel release, and VMware pushing out a new version of Fusion.
That's from my memory though which I wouldn't trust for a second. 😉
Same issue here... any idea how to solve this?
Well, I solved it by switching to UTM instead. I know that's not a solution for VMware, but I just couldn't wait anymore.
I was very skeptical about UTM at first, but it's been performing flawlessly. I don't think it would do very well in a GUI environment, though. Not sure.
It's easy to use if you're familiar with QEMU/KVM in Linux.
Known issue - check here: Tips/Techniques/Gotchas for the Tech Preview
It's a bug in Linux that was introduced a while back.
It looks like the same, or a similar, bug is back. Kali recently updated to the 5.18 kernel, and I believe with 5.17 we had been reverting the patches. We are not reverting the patches anymore, as I'd thought VMware had put out a release with the fix. It seems that either the same issue still exists, or, due to us not reverting those patches anymore, it's finally rearing its head. This does not affect other virtualization systems.
I changed the boot options to add earlycon=efifb console=efifb and I see:
Editing to add: I ran faddr2line vmlinuxfile __cpuinfo_store_cpu+0x84 and it just spits out
__cpuinfo_store_cpu+0x84/0x260:
?? ??:0
So I'm not sure where to go from here...
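The ?? ??:0 output usually just means the vmlinux you pointed faddr2line at has no debug info. A sketch of getting a symbolized decode on Debian follows; the package name and debug vmlinux path are assumptions (check what the dbgsym package actually installs), and faddr2line itself ships in the kernel source tree, not in a binary package:

```shell
# faddr2line lives in the kernel source tree under scripts/:
#   apt source linux   (or clone the matching kernel tree)

# Debug symbols come from the matching dbgsym package, which needs the
# debian-debug apt repo enabled -- package/path names are assumptions:
#   sudo apt install linux-image-5.10.0-13-arm64-dbgsym

# Then run the decode against the debug-info vmlinux:
#   ./scripts/faddr2line /usr/lib/debug/boot/vmlinux-5.10.0-13-arm64 \
#       __cpuinfo_store_cpu+0x84
```

With debug info present it should print a file:line instead of ??:0.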
I think there are two issues that VMware has not really described accurately.
One issue is related to the console graphics adapter memory provided by EFI. The other issue, which prevents boot and is far more pervasive across distributions including Ubuntu, is not a Linux bug: it's the failure of the Tech Preview to implement access to a valid ARM architecture CPU capability register. The Linux kernel appears to have started accessing this register with the security updates in March. @Mikero hints at this in one of the recent posts but doesn't do it justice as the real reason all of our latest Linux distributions have stopped working.
Actually no, it's the same issue, hitting ID_AA64ISAR2_EL1 causes the panic.
Sorry @Technogeezer, the "actually no" was meant as a reply to my own response, not yours!
In Kali, we had been reverting those patches, and I figured that there had been a release to fix that already.
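For context on why an MRS on ID_AA64ISAR2_EL1 matters: the kernel reads each AArch64 feature ID register at boot (__cpuinfo_store_cpu), so the hypervisor has to either implement the register or trap the read sanely. The kernel then unpacks 4-bit feature fields from the raw value; here's a rough sketch of that unpacking on a hypothetical value. The field names and offsets are my reading of the Arm ARM, so treat them as assumptions:

```shell
# Hypothetical raw ID_AA64ISAR2_EL1 value; each feature is a 4-bit nibble
reg=$(( 0x0000000000000011 ))

# Extract the 4-bit field starting at a given bit offset
field() { echo $(( (reg >> $1) & 0xF )); }

# Assumed field layout: WFxT [3:0], RPRES [7:4], GPA3 [11:8], APA3 [15:12]
echo "WFxT=$(field 0) RPRES=$(field 4) GPA3=$(field 8) APA3=$(field 12)"
# prints: WFxT=1 RPRES=1 GPA3=0 APA3=0
```

On hosts that implement the register, this is the kind of decode the kernel does on the value the MRS returns; on the Tech Preview, the MRS itself appears to be what panics.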