munrobasher's Posts

Just a heads up more than a question: after upgrading to 17.5.0, the VMware auto start service has its startup type reset to Manual, irrespective of what it was before the upgrade. If you use auto start, you'll have to change it back to Automatic.
Same here. Tried with Windows 10 and Windows 11 hosts. I suspect it might be AMD/USB/motherboard related.
>2023-08-04T16:08:26.332Z In(05) vmx Monitor Mode: ULM

Yes, that means that VMware Workstation is working with Hyper-V/VBS, not its own hypervisor. My experience shows it's slightly slower. I don't fully understand what happens under the hood, but in order for VMware Workstation to work with Hyper-V, the Windows Hypervisor Platform also has to be installed - which suggests it's using an API. The other thing ULM mode introduces is side channel mitigation settings: VMs with side channel mitigations enabled may exhibit performance degradation (79832) (vmware.com)
>vmware.log

That, I think, is always the current log.
>The problem for me is this discrepancy between virtualizing using vmware on ubuntu vs virtualizing using vmware on windows 11.

As posted by somebody else above, by default VMware Workstation ends up using the Hyper-V API on Windows 11 because a small part of Hyper-V gets enabled even if the Hyper-V feature isn't installed - Credential Guard uses VBS (virtualisation based security). There are several threads about performance problems in this mode, i.e. when not using the native VMware Workstation virtualisation technology. For example, taking snapshots can be problematic (slow). You can tell which mode your VM is running in by looking at the log file in the VM folder and searching for "monitor mode". I'm going to guess that Linux isn't doing virtualisation based security, which might explain some of the performance differences - but not a factor of four. Can you look at the log on Ubuntu for that "monitor mode" line? It would be interesting to know what mode it's running in on Linux.
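To save digging by hand, you can pull that line straight out of the log - a quick sketch, run from the VM's folder (on Windows, findstr or PowerShell's Select-String does the same job):

```shell
# Show which monitor mode the VM last started in.
# "ULM" means it's going via the Hyper-V/Windows Hypervisor Platform API;
# "CPL0" means VMware's own hypervisor.
grep -i "monitor mode" vmware.log
```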
Even with a 16GB test file, the sequential read speeds inside the VM are faster than the raw host speeds. Sequential writes are 14% slower inside the VM. For the 1st random test, reads are 36% slower in the VM, but the 2nd random test is 20% faster in the VM. For the random writes, the VM is 46% slower. So the VM is clearly slower at disk I/O generally, but still workable. VMware Workstation isn't really designed for production use, is it? I see it more as a test bed/server backup maybe. I wonder how this compares with a bare metal hypervisor? On my machine, VMware Workstation is running in its own hypervisor, i.e. CPL0, as the host is Windows 10. I've got dual boot into Windows 11 which has the Hyper-V mode enabled because of Credential Guard etc. I'll do a test in that host...

VM C: drive: (benchmark screenshot)
Host F: drive: (benchmark screenshot)
Interestingly, I'm getting some speeds that are better in the VM than on the host. Maybe there is some caching - I'll run again with a bigger test file. With 1GB, some caching might be at play for sure.

Virtual machine C: drive, configured as a 200GB NVMe disk: (benchmark screenshot)
Host F: drive, a SATA-3 1TB SSD: (benchmark screenshot)
I've got a background task to revisit VirtualBox and maybe even Hyper-V. I've got a problem with my main development PC so I'm currently rebuilding it - whilst I do this, I'm working primarily in a Windows 11 virtual machine. Most of the time, performance is perfectly fine for the admin/text-based tasks I regularly do. I'm running 16GB of RAM with 8 vCPUs. There are two slowdowns that I notice. First, I use AutoHotkey to paste commonly used text (e.g. client email addresses) into web browsers and apps - inside the VM, you can see it typing the characters; outside the VM, it's instant. Second is cropping an image in photos.google.com - a lot of lag dragging the resize handles. I've never benchmarked disks inside a VM, but I will now.
>Workstation's virtualization of the Intel processor.

I'm getting the same issues on an AMD Ryzen 5, so it's not limited to Intel CPUs.
Hardly VMware's fault though is it? Microsoft have made a breaking change that means Hyper-V is now running on most Windows 11 installations (for slightly dubious reasons). It's not surprising that you can't have two hypervisors running on the same CPU.
>There appear to be two issues here - firstly, on my system, it sits at "Saving state 0%" for a very long time. Not specifically the iowait setting. Just about to try it though.

No, using the following doesn't help with saving state stuck at 0%:

mainMem.ioBlockPages = "4096"
mainMem.iowait = "0"

Although when it finally clears 0%, it certainly speeds up the rest of the snapshot. Interestingly, it's not every VM that has this problem. My Windows 11 test VM doesn't get stuck at 0%, just my Windows 10 test VM. In terms of knowing whether you're running the native VMware Workstation hypervisor or host VBS, I wonder if the presence of this side channel checkbox shows this?
>There really ought to be an obvious indicator in VMware Workstation of whether it is running with the VMware Hypervisor ("Traditional Mode") or with Hyper-V ("Host VBS Mode").

I wondered why I got warnings I'd not seen before about Hyper-V when I rebuilt my main PC, or that the above was even a thing. I had no idea that VMware Workstation was working with Hyper-V at all when the Hyper-V feature wasn't installed. I'd skim-read articles and news about Device/Credential Guard but it didn't sink in what it all actually meant. There appear to be two issues here - firstly, on my system, it sits at "Saving state 0%" for a very long time. Not specifically the iowait setting. Just about to try it though.
FYI: The "nogui" parameter is mandatory when you use -vp switch.
Years later and the same problem with WS v17. The problem I have is that I lose the ability to use any USB devices in the virtual machine until I log off the host and back on.
FYI: the VMware v17.0.2 upgrade process doesn't maintain the current configuration of the VmwareAutostartService service. Specifically, it sets the start-up type back to Manual. You'll have to configure it again after the upgrade if you use auto start VMs. Not the end of the world, but a little sad it got through testing.
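A quick way to put it back after the upgrade - a sketch from an elevated command prompt, assuming the service name is exactly as reported above (worth confirming first with sc.exe query):

```
:: Set the autostart service back to Automatic and start it
:: (the space after "start=" is required by sc.exe syntax)
sc.exe config VmwareAutostartService start= auto
sc.exe start VmwareAutostartService
```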
Yes, same here with 17.0.2. That said, I think Workstation has had this problem for years. I had to kill vmware.exe manually after making sure all VMs (including the auto start one) were shut down. Surely they just need to spawn a separate process to do the update, which would allow you to close the console?
I appreciate this is a real edge case, but I'm interested as to the reason. I have a small physical server set up at home for development, running Windows Server 2019. On this server is a Hyper-V CentOS Linux VM for testing (a web server). I back up the entire server and its virtual machines to an external hard disk using the community edition of Veeam Backup & Replication. I was pondering what process I would go through to get that VM back up and running if the physical server failed. So I thought I'd spin up a Windows Server 2019 VM in VMware Workstation on my big PC, install Hyper-V & Veeam in there, and then do an "Instant restore". I actually have two VMs - one powered up fine (so I effectively got a restore of the entire VM in 5 minutes) but the other didn't boot. It took me far too long to twig that the difference between the two was that the working one was a Gen 1 Hyper-V VM and the other was a Gen 2. I spun up an old laptop with Windows Server 2019 and Hyper-V - it worked on there. So it's something to do with Gen 2 Hyper-V VMs running on Hyper-V inside a VMware Workstation VM. Any ideas? Oh, the other point is that my PC is an AMD Ryzen 5 3600 and the original server & laptop are Intel. Could that make a difference?
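For what it's worth, nested virtualisation has to be explicitly passed through for Hyper-V to run inside a Workstation guest at all. A sketch of the relevant .vmx setting (this is the standard virtualised VT-x/AMD-V switch; whether it has any bearing on the Gen 2 failure specifically is just my assumption):

```
vhv.enable = "TRUE"
```

The same option appears in the VM's settings under Processors as "Virtualize Intel VT-x/EPT or AMD-V/RVI".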
Part of my problem is that Windows 11 on my host PC isn't as stable as Windows 10. I get occasional blue screens on that as well (about one a week). BSODs in VMs are much more common though. I've been investigating the cause of that, which has taken me down the route of looking at memory timings. The root cause is four sticks of RAM in a B450 motherboard - this chipset has varying support for the later Ryzen CPUs. Anyway, after a lot of reading, I've tweaked the RAM voltage up a bit and the system is a lot more stable, including VMware. This could be a total red herring for the original post, but I thought I'd mention it. Could be that virtualisation stresses a borderline memory controller too far.
I suspect it's the "auto-bridging" option? It might be picking the wired Ethernet network adapter. You can edit the settings and force it to use the Wi-Fi network adapter.
Ahh yes, well Windows 10 used to be stable on VMware Workstation as well. I suspect it's something in Windows 11.