EDITED! Workaround for Ubuntu 22.10 displaying blank screen after applying latest updates
Many people on the forum have reported that their VMs have "stopped working" or "failed to boot" after applying the latest updates to Ubuntu 22.10. The kernel that the latest Ubuntu update installs, 5.19.0-31, has broken the console display under Fusion.
Until Ubuntu fixes this (a bug report has been filed with them about this), the workaround is to boot an older kernel. Most Ubuntu updates leave the last two older kernels installed for just such an occasion. Also, do not apply any further updates to the Ubuntu VM until they fix this issue.
Here's how to boot an older kernel:
If you initially installed Ubuntu Server with its default disk partitioning, it's relatively straightforward. The boot process will display the GRUB boot loader prompt upon initial power up:
Click in the window and use the keyboard down arrow (not the mouse) to select "Advanced options for Ubuntu" and press "return". The following screen will appear:
The first entry (Ubuntu, with Linux 5.19.0-31-generic) is the default kernel that would have been booted, and is the cause of the problem. Use the keyboard down arrow to select the next kernel down - that is the kernel that was replaced when the updates installed the broken one. In my case, that was 5.19.0-29-generic.
Once you highlight the kernel you wish to boot, press "Return" or "Enter" to boot that kernel. The VM should now boot and the console should display as it did before.
If you installed from an Ubuntu 22.10 daily build and took the default disk layout, the GRUB boot loader menu does not appear automatically, and it's difficult (if not impossible) to reliably get to the boot loader prompt using the procedures that Ubuntu recommends. This default behavior can be changed to match what is seen in Ubuntu Server - that is, the GRUB boot loader menu can be configured to appear on each boot of the VM.
If you're seeing the symptom described in this post, boot the Ubuntu VM and never mind that the screen is blank. Wait a couple of minutes for the VM to finish booting.
Next, find the IP address of the VM. That should be found by highlighting the running VM in the left hand panel of the Virtual Machine Library, and then looking toward the bottom of the right hand panel, where the IP address should be visible. It should look something like this:
Next, on the Mac open the Terminal app, and then use the ssh utility to remotely log into the VM:
ssh username@ip-address
Where username is your username in the Linux VM, and ip-address is the IP address of the VM.
You may be asked about the authenticity of the fingerprint for the VM. If asked, accept the fingerprint.
Provide the user's password, and you should be logged into the VM.
Once logged into the VM, sudo into a root shell, and edit the file /etc/default/grub.
Two changes need to be made:
First, find the line that reads
GRUB_TIMEOUT_STYLE=hidden
Change that to
GRUB_TIMEOUT_STYLE=menu
This change configures the GRUB boot menu to always display at boot.
Next, find the line that reads:
GRUB_TIMEOUT=0
Change it to
GRUB_TIMEOUT=5
This change sets the boot loader to automatically boot the first entry in the boot loader menu if no input is received for 5 seconds. Using the keyboard up/down arrows or entering any GRUB boot loader command cancels the 5-second timer. (You can increase this timer if you desire.)
After making these two changes, save the file and exit the editor.
Now rebuild the GRUB boot configuration with the command
update-grub
Power down the VM after the GRUB update is finished:
systemctl poweroff
Now power back on the VM, wait until the screen flashes, and you should now see the GRUB boot loader menu. Follow the instructions earlier in this post for Ubuntu Server to select an older kernel that does not exhibit the issues.
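For reference, the two /etc/default/grub edits and the rebuild can be scripted with sed. This is only a sketch: it runs against a scratch copy so it's safe to try anywhere; on the real VM, set FILE to /etc/default/grub and run the commands as root.

```shell
# Sketch of the two /etc/default/grub edits, applied with sed.
# FILE defaults to a scratch copy here; on the VM use FILE=/etc/default/grub.
FILE="${FILE:-/tmp/grub.demo}"
printf 'GRUB_TIMEOUT_STYLE=hidden\nGRUB_TIMEOUT=0\n' > "$FILE"   # stand-in for the real file
sed -i 's/^GRUB_TIMEOUT_STYLE=hidden/GRUB_TIMEOUT_STYLE=menu/' "$FILE"
sed -i 's/^GRUB_TIMEOUT=0/GRUB_TIMEOUT=5/' "$FILE"
cat "$FILE"
# On the VM, follow with:  update-grub && systemctl poweroff
```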
Editor of the Unofficial Fusion Companion Guides
Thanks - I know there's a way to prevent the kernel from updating to begin with, but for the life of me I can't find the instructions to do so. Any suggestions/ideas?
Can't edit my last post. I did one better and updated grub in advance of the update.
Mine are 23.04 and the grub lines to edit are a bit different:
GRUB_TIMEOUT is the one to set to 5. GRUB_DEFAULT is for which menu item to boot by default.
I also changed GRUB_TIMEOUT_STYLE to menu to remove the need to hit the key.
For ubuntu:
apt-mark hold <package-name>
I think the following should keep the kernel and module packages from updating further once 5.19.0-31 is installed.
apt-mark hold linux-image-5.19.0-31-generic linux-modules-5.19.0-31-generic linux-modules-extra-5.19.0-31-generic
If you have not updated yet, then mark the linux-image, linux-modules, and linux-modules-extra package versions that are working for you.
There are other suggested options including using Synaptic package manager, but this one will work on any Ubuntu system and does not require Synaptic to be installed.
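If you're unsure of the exact package names to hold, here's a small sketch that builds the hold command from the currently running kernel version (this assumes the standard Ubuntu -generic package naming); review the output, then run it under sudo.

```shell
# Build the apt-mark hold command for the currently running kernel.
KVER="$(uname -r)"   # e.g. 5.19.0-29-generic
PKGS="linux-image-${KVER} linux-modules-${KVER} linux-modules-extra-${KVER}"
echo "apt-mark hold ${PKGS}"   # review, then run this under sudo
# To list current holds:       apt-mark showhold
# To release them later:       apt-mark unhold <same packages>
```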
Editor of the Unofficial Fusion Companion Guides
Ouch... you are correct about the GRUB entry. It should be GRUB_TIMEOUT and not GRUB_DEFAULT.
My eyes saw one thing and my fingers typed another.
I'll edit the post, and I'll include the suggestion on the GRUB_TIMEOUT_STYLE.
By the way, I have not seen this behavior with 22.04 or 23.04... yet. I keep the VMs updated religiously for the most part, and am currently running 5.15.0-57-generic in my 22.04 VM and 5.19.0-21-generic in my 22.10 VM. Perhaps putting the kernel on hold on both would not be such a bad idea...
Editor of the Unofficial Fusion Companion Guides
Yeah, I'm going to do that because knowing Ubuntu they'll issue three more updates before fixing it, and push the working kernel off the stack.
Thinking they might have done something with the vmwgfx driver, I was able to blacklist it once I got the VM running and ssh'd into it with the 5.19.0-31 kernel.
Even with the vmwgfx driver blacklisted, the kernel still fails to display the console.
I think that pretty much proves that Ubuntu borked something in this kernel since it will no longer display using the standard Linux frame buffer.
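For anyone who wants to repeat the experiment, the blacklist step looks roughly like this. Sketch only: the file is written to a scratch path here so it's safe to run; on the VM it belongs in /etc/modprobe.d/ and must be followed by an initramfs rebuild.

```shell
# Tell modprobe not to load vmwgfx. CONF points at a scratch path so this
# sketch is safe to run; on the VM use /etc/modprobe.d/blacklist-vmwgfx.conf.
CONF="${CONF:-/tmp/blacklist-vmwgfx.conf}"
echo "blacklist vmwgfx" > "$CONF"
cat "$CONF"
# On the VM, follow with:  update-initramfs -u  (as root), then reboot.
```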
Editor of the Unofficial Fusion Companion Guides
@ColoradoMarmot wrote: "Yeah, I'm going to do that because knowing Ubuntu they'll issue three more updates before fixing it, and push the working kernel off the stack."
I'm taking a different approach. I just installed the latest mainline kernel 6.1.12-060112-generic using the good ol' 'mainline' utility that I used back in the early tech preview days. It works fine. It took me a few iterations to install all the dependent utilities and packages needed to build it, but once I did that, the install and reboot went well.
That kernel won't get overwritten until I remove it.
As Mr. Spock would say: "Fascinating". The 'mainline' Linux kernels, sourced from upstream and built using Ubuntu configuration files, work fine, but the kernel they ship with their other modifications doesn't. Inquiring minds want to know what they did.
Editor of the Unofficial Fusion Companion Guides
If you could point us at the Ubuntu bug report, or else forward this information to them, we can see what we can do to help.
One of our Linux engineers looked into this and this was their analysis:
Ubuntu kernel maintainers backported the following change:
5e0137612430 ("video/aperture: Disable and unregister sysfb devices via aperture helpers")
It's change 89314ff239e1933357419fa91b20190150f114a8 in their kinetic kernel.
This change was part of a much larger series that made sure fb devices properly release their PCI resources, which is necessary for the proper drivers to load. Because the backport misses all those other changes, it breaks PCI resource release for efifb and prevents specific drivers (e.g. vmwgfx) from being loaded. It doesn't affect many other arm64 systems because most arm64 systems don't have dedicated PCI GPUs, as vmwgfx is, which is presumably why they missed it. This change either needs to be reverted in the Ubuntu kernel, or the Ubuntu maintainers need to backport all the other changes that were part of the fb/PCI resource handling rework.
With way too much time on my hands, I dug around a bit in the kernel boot logs, comparing a kernel that works against the borked Ubuntu kernels (which now include 5.19.0-35 - still not fixed), and found an interesting issue.
The kernels that don't work fail to load the vmwgfx driver, with an error message saying the driver couldn't get access to a region of memory. The kernels that do work load the driver normally.
Doing a web search on the error message indicated an eerily similar error reported to the Linux kernel maintainers last year. It seems to point to a kernel bug where EFI firmware console driver memory isn't released so that the Linux frame buffer (and vmwgfx) drivers can get access to it.
That particular bug was fixed in 5.19.
If this is indeed what's going on, why on earth did the Ubuntu kernel developers drop fixes from 5.19.0-31 and later that they obviously had in 5.19.0-29?
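For anyone who wants to reproduce the comparison, here's one way to pull the relevant messages from the journal. This is a sketch, not the exact commands from my session; journalctl -k limits output to kernel messages, and grep may match nothing at all on a healthy boot.

```shell
# Show kernel messages mentioning vmwgfx from the current boot.
# "|| true" keeps the exit status clean when there are no matches.
journalctl -k -b 0 2>/dev/null | grep -i vmwgfx || true
# Compare against the previous boot, if it's still in the journal:
journalctl -k -b -1 2>/dev/null | grep -i vmwgfx || true
```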
Editor of the Unofficial Fusion Companion Guides
Because they're ubuntu.
Agreed. I still cannot fathom why that distribution is so highly regarded. Canonical must be engaging in payola.
Editor of the Unofficial Fusion Companion Guides
Short update.
Kernel 5.19.0-38 released... but the bug still exists!
Is there an existing Ubuntu bug report (on bugs.launchpad.net) about this specific issue, or do we need to file a new one? Based on banackm's post above it sounds like Ubuntu needs to cherry-pick some additional commits into their kernel to fix the issue. A bug report will be the best way to get them to do that, short of sending the patches to their kernel mailing list directly.
I encourage anyone experiencing this issue to open a bug report on bugs.launchpad.net. I have one open at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2007001, but I've gotten zero notice or action on it. If this is impacting anyone else, pile on the bug reports until someone takes notice.
I personally think they've borked a back-port of some patches, perhaps by taking an incomplete set. No other 5.19 kernel from any other distro has this issue. 23.04 at this point looks like it will ship with a 6.2 kernel that they haven't messed up yet.
22.04 LTS is also in real danger of becoming unusable on Fusion, so the bug reports may help there as well. Their 22.04 HWE 5.19 kernel exhibits the same issue. If they switch the 22.04.3 installers to this HWE kernel, even the 22.04 series will prove difficult if not impossible to install.
Daily builds of 22.10 also have this borked kernel. But 22.10 is only going to be supported for about another 4 months (3 months after 23.04 gets released), so is Ubuntu really interested in fixing it?
Editor of the Unofficial Fusion Companion Guides
Thanks. I commented there and linked to banackm's post above with VMware's analysis, which does suggest an incomplete backport. Kai-Heng Feng, who commented there earlier, works for Canonical, so I think they did take note.
Thanks for your comments on the bug report. But just because a Canonical employee noticed it does not give me any hope that they are doing anything with it. At least not from the comments.
I hope I am wrong.
Editor of the Unofficial Fusion Companion Guides
BTW, the official Ubuntu 23.04 betas have been released. An interesting new entry is Ubuntu Cinnamon as an official flavour (joining the previously added Ubuntu Unity). Desktop images seem to be available only for Intel (Arm is available, as usual, as Server, onto which the desired desktop can be installed afterwards)…
@SvenGus thanks for the heads-up on the 23.04 beta. Ubuntu Server arm64 installed without issue on Fusion 13 on Apple Silicon. I'm in the process of adding and re-configuring to get a "desktop conversion", and so far nothing has been out of the ordinary.
Editor of the Unofficial Fusion Companion Guides
An update on the bug report submitted to Ubuntu indicates that they have indeed traced the 22.10 issue to an incomplete patch series for releasing firmware console drivers and memory. They look to be in the process of reverting the update that caused the issue in the first place. I'll reply to this when I see them release a kernel that boots properly.
I'd forget 22.10 right now unless I had to use it. 23.04 was just released. They indicate that 23.04 does not have this issue, as 23.04 is using a 6.2 kernel that has the complete set of fixes for the issue already in Linus' kernel sources (no back-port required).
Editor of the Unofficial Fusion Companion Guides