I just upgraded from Workstation Pro 16 to 17.
I've only tested two VMs so far: one (Ubuntu 22.04) has this problem, the other (Windows 11) does NOT.
When the Ubuntu VM boots, after the VMware logo disappears, the screen stays black for a minute, with no disk or CPU activity to speak of. Then the OS starts booting, and from there on everything runs as smoothly as it did in version 16.
Host: Core i9, 9th generation (8 cores + hyperthreading), 32 GB RAM, Windows 10 22H2 Business
VM: Ubuntu 22.04 (originally 20.04, upgraded in the VM long before the Workstation upgrade from 16 to 17); 8 GB RAM, 8 cores (i.e. all the host's physical cores, not counting HT; this was never a problem for the host with v16).
Monitoring the logfile during boot, this is where the delay occurs:
2023-01-05T09:43:00.341Z In(05) vcpu-0 AHCI-USER: Already in check condition 02 3a 00
2023-01-05T09:43:09.372Z In(05) vmx VNET: MACVNetLinkStateTimerHandler: 'ethernet0' state from 1 to 5.
2023-01-05T09:43:10.176Z In(05) vcpu-0 SVGA: FIFO is already mapped
2023-01-05T09:43:10.176Z In(05) svga SVGA enabling SVGA
2023-01-05T09:43:10.176Z In(05) svga SWBScreen: Screen 0 Destroyed: xywh(0, 0, 720, 400) flags=0x3
2023-01-05T09:43:10.178Z In(05) svga SVGA-ScreenMgr: Screen type changed to RegisterMode
2023-01-05T09:43:10.178Z In(05) svga SWBScreen: Screen 1 Defined: xywh(0, 0, 640, 480) flags=0x2
2023-01-05T09:43:53.110Z In(05) vmx GuestRpcSendTimedOut: message to toolbox-dnd timed out.
2023-01-05T09:44:08.838Z In(05) svga SVGA disabling SVGA
2023-01-05T09:44:08.838Z In(05) svga SWBScreen: Screen 1 Destroyed: xywh(0, 0, 640, 480) flags=0x2
2023-01-05T09:44:08.844Z In(05) svga SWBScreen: Screen 0 Defined: xywh(0, 0, 720, 400) flags=0x3
2023-01-05T09:44:09.658Z In(05) vcpu-0 Syncing WHP TSCs took 93 us. Threshold is 1000 us.
2023-01-05T09:44:10.382Z In(05) vcpu-1 CPU reset: soft (mode Emulation)
I upgraded the virtual hardware version from v14 to v17; the problem remains.
I've done some experimenting with the CPU, RAM and display (VGA) configuration etc., to no avail. Even in a new full clone it remains the same.
After some reconfiguration in the guest itself, I now see that it *does* start booting the OS, but something seems completely off with the timing.
The GRUB boot menu appears after reconfiguring it as not hidden, but the timeout ("automatically in n seconds") counts down at roughly 1 second per 15-20 seconds of real time. Once it reaches zero and the actual OS starts booting, speed is normal.
It didn't do this the last time I ran that VM in Workstation 16, yet I have other VMs (Debian instead of Ubuntu) that also use GRUB as boot loader and do not exhibit this behavior.
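In case anyone wants to reproduce this: unhiding the menu was just the usual /etc/default/grub change inside the guest (the values below are examples), followed by running update-grub.

```shell
# /etc/default/grub inside the Ubuntu guest (example values)
GRUB_TIMEOUT_STYLE=menu   # show the menu instead of hiding it
GRUB_TIMEOUT=10           # countdown in seconds
```

Then `sudo update-grub` to apply it.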
I don't get it anymore 😞
I created a new VM with the same configuration and attached the original VM's virtual disk to it: exactly the same.
That, together with the Windows and Debian VMs running OK, would make you believe it's something on that virtual disk.
But "boot to firmware" does it too: it takes half a minute before the BIOS screen appears, and it reacts to every keypress only after 1-2 seconds, sometimes longer.
So I created a new VM, set the OS type to "Ubuntu 64 bit", did not install an OS, just left the disk empty, and did "Boot to firmware".
Again the same: half a minute before the BIOS screen appeared, and reaction to keypresses takes seconds.
Powered it off, changed the OS type from Ubuntu 64 bit to Debian 10.x 64 bit, and STILL the same.
But I have an existing Debian 10.x VM that boots just fine, and doesn't seem to be affected by this problem at all.
There are some threads in this forum which discuss a similar hang problem, and one of the 'fixes' mentioned talks about adding a line in the vmx file for the clock.
Sorry, I don't have either Wks 17 or Win 11, so I haven't encountered the issue and don't remember the exact fix.
Thanks, I had tried to look for something like that but didn't find anything yet.
In the meantime I have been experimenting some more.
I created new VMs, selecting different target OSs, each time leaving the virtual disk empty and starting them with "boot to firmware".
Of those I tried so far, only "Linux / Ubuntu 64-bit" had the problem.
Other settings I tried, such as "Other / DOS" and "Linux / Debian 11.x", were fine.
Changing the target OS after creating the VM had no effect: once bad, it remained bad.
RDPetruska, I still didn't find the threads you mentioned, but now that I could create both good and bad new VMs, it was just a matter of comparing the VMX files side by side.
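For anyone wanting to do the same comparison, a minimal sketch ("good.vmx" and "bad.vmx" are placeholder names; the printf lines just create tiny stand-ins so the commands can be run as-is):

```shell
# Create two stand-in config files (in a real case these would be
# the .vmx files of a working and a non-working VM).
printf 'mks.enable3d = "FALSE"\n' > good.vmx
printf 'mks.enable3d = "TRUE"\n'  > bad.vmx

# Show only the lines that differ between the two configs.
# diff exits non-zero when the files differ, hence the trailing "|| true".
diff good.vmx bad.vmx || true
```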
This line is the culprit:
mks.enable3d = "TRUE"
Remove it from the Ubuntu/64 vmx file or change to "FALSE", and the problem goes away.
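To be explicit, the line in the .vmx file should end up as below (edit it with the VM powered off, otherwise Workstation can overwrite the change on shutdown):

```
mks.enable3d = "FALSE"
```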
As far as I could tell in a minute, everything in the VM still works.
Yep, suffering from the exact same issue and changing the line to "FALSE" fixes this for me.
As far as I can tell, nothing is affected (apart from a faster boot!), so it's all looking good.
Thank you for the fix.