ralish's Posts

Frustrating to see that this was reported during the beta and, despite the numerous replies in this thread, still hasn't had a response. For anyone from VMware who can assist with getting this resolved, here is a stack trace of the thread on which the high CPU activity occurs, in which it appears to be stuck in an infinite loop:

ntoskrnl.exe!KiSwapContext+0x76
ntoskrnl.exe!KiSwapThread+0xab5
ntoskrnl.exe!KiCommitThreadWait+0x137
ntoskrnl.exe!KeWaitForSingleObject+0x256
ntoskrnl.exe!KiSchedulerApc+0x23e
ntoskrnl.exe!KiDeliverApc+0x2f6
ntoskrnl.exe!KiCheckForKernelApcDelivery+0x2b
ntoskrnl.exe!ExReleaseResourceAndLeaveCriticalRegion+0xf1
win32kbase.sys!UserSessionSwitchLeaveCrit+0x137
win32kfull.sys!NtUserMsgWaitForMultipleObjectsEx+0x529
win32k.sys!NtUserMsgWaitForMultipleObjectsEx+0x20
ntoskrnl.exe!KiSystemServiceCopyEnd+0x25
wow64win.dll!NtUserMsgWaitForMultipleObjectsEx+0x14
wow64win.dll!whNtUserMsgWaitForMultipleObjectsEx+0x90
wow64.dll!Wow64SystemServiceEx+0x164
wow64cpu.dll!ServiceNoTurbo+0xb
wow64cpu.dll!BTCpuSimulate+0xbb5
wow64.dll!RunCpuSimulation+0xd
wow64.dll!Wow64LdrpInitialize+0x12d
ntdll.dll!_LdrpInitialize+0xe7
ntdll.dll!LdrpInitializeInternal+0x6b
ntdll.dll!LdrInitializeThunk+0xe
win32u.dll!_NtUserMsgWaitForMultipleObjectsEx@20+0xc
USER32.dll!MsgWaitForMultipleObjectsEx+0x51
USER32.dll!_MsgWaitForMultipleObjects@20+0x1f
vmnat.exe+0x3348
vmnat.exe+0x15391
ntdll.dll!___RtlUserThreadStart@8+0x2b
ntdll.dll!__RtlUserThreadStart@8+0x1b

Taken from VMware Workstation Pro v17.5.0 on Windows 11 23H2.
Are there any plans to support nested virtualisation under the Windows Hypervisor Platform? If Hyper-V is installed, or any of several security features are enabled on the host OS (Virtualisation Based Security), then the Windows Hypervisor Platform is used instead of VMware's native hypervisor. This breaks support for VMs which require nested virtualisation (e.g. ESXi guests, or Windows guests that themselves have VBS enabled). The underlying support appears to be present in the Windows Hypervisor Platform, given nested virtualisation can be used on VMs created directly with Hyper-V, but VMware Workstation does not have any awareness of it. I've previously posted on these topics but have not received any comment from VMware:

Nested hypervisor support under VBS (inc. Device Guard)
Nested hypervisor support under VBS
I'm not a VMware employee, so I have no visibility into what's happening in the company or the team that develops VMware Workstation. There are doubtless things happening I'm not aware of, but as a VMware customer, it's really not a great look that a Technical Preview is launched to solicit customer feedback and there's effectively no engagement by VMware. I've personally surfaced an issue and a feature request, both unaddressed; the issue is that the survey links for this preview are broken. That was two months ago! Presumably it doesn't matter, as if anyone cared about survey responses, they'd have noticed there aren't any and fixed it by now. The lack of any engagement by VMware seems to be a common experience among posters. What's the point of a Technical Preview if the feedback is left not merely unresolved but not even acknowledged? People are taking the time to give constructive feedback and it's simply ignored. Doubtless Workstation Pro 17 will be a paid upgrade, and at least for me it will be a bitter feeling if I do fork out the upgrade cost knowing my and others' input has simply been ignored.
Looking at the Windows Hypervisor Platform headers, it does appear to be possible. There's the NestedVirtSupport bit of the WHV_PROCESSOR_FEATURES1 structure, which is passed as the input buffer to WHvSetPartitionProperty with the WHvPartitionPropertyCodeProcessorFeaturesBanks property code. There's also the WHvPartitionPropertyCodeNestedVirtualization property, which appears to take a BOOL as the input buffer to the same function. I'm not clear on how these two approaches differ, or how one affects the other, but the WHvPartitionPropertyCodeNestedVirtualization property feels the most promising. The Data Types documentation for the function notes that nested virtualisation is supported since Windows 10 19H2. This is from a very quick look at the API documentation and header files, so may not be 100% accurate, but overall it appears promising.
Adding a note that, somewhat curiously, this doesn't affect Windows 11. It's entirely specific to Windows Server 2022 (Build 20348); it doesn't affect the latest Windows 10 release (Build 19044) or Windows 11 (Build 22000).
I haven't yet had the chance to test this on the latest tech preview, but Windows Server 2022 guests with recent cumulative updates will bluescreen with UNSUPPORTED_PROCESSOR if the guest is configured with multiple vCPUs. The only known workaround is to limit the guest to a single vCPU (one socket and one core), with the obvious potential performance impact. Other users have documented this here, but there's been no response yet from VMware, nor is it documented in the release notes as a known issue or in the knowledge base. It's definitely still an issue in the latest stable release (v16.2.4). Hopefully this will be fixed in the next release, and ideally the fix should be backported to supported older releases. If anyone with the tech preview installed is able to confirm whether the issue is still present, that'd be very helpful; otherwise I'll try to find some time to test myself soon.
Seeing as the survey links are broken, at least for me (see here), I'm posting on the board instead. A feature I haven't seen discussed but which would be extremely useful is nested hypervisor support under Hyper-V-enabled hosts (i.e. using the Windows Hypervisor Platform). I've posted some thoughts about this before here, but to summarise: if running on a host which is Hyper-V enabled, you can't run guests under VMware Workstation which expose Intel VT-x/EPT. I assume the same issue is present when exposing AMD-V/RVI, but I don't have such a system to test on. Virtualising the IOMMU does work. The impact is that you can't run nested virtualisation scenarios on a system with Hyper-V enabled, be it because you actually use Hyper-V alongside VMware Workstation or because it's a dependency of other features like Device Guard. Where this is particularly frustrating is that it blocks running VBS-enabled guests, as they require VT-x/AMD-V. This limitation doesn't appear to apply to Hyper-V itself, as such configurations work fine on Hyper-V VMs, which suggests it's technically possible.
Are the survey links broken for everyone else? I was going to respond to one of them, but both the short and detailed survey links aren't actually hyperlinks. They look like hyperlinks, but hover over them and you'll see they don't link to anything. Tested on multiple browsers.
Bumping one time for any input from a VMware employee. Another area this causes problems is network labs using tools like GNS3.
Fixed in v16.2.1.
Some results from a very quick test using Hyper-V on a Windows 10 v21H1 x64 host with VBS enabled. All testing was performed in a Generation 2 VM with a fresh Windows 10 v21H1 x64 installation:

Enabling VBS (a.k.a. Core Isolation) worked with no additional changes. All that was required was enabling Core Isolation via the Windows Security app and rebooting for the requisite Windows support to be installed and enabled. I've attached a screenshot from System Information post-reboot showing VBS enabled in the VM.

Nested virtualisation also works with a few extra steps, which are documented by Microsoft here. To summarise, you need to enable nested virtualisation for the (outer) VM, disable dynamic memory for the (outer) VM, and enable Hyper-V in the (inner) VM. I was then able to launch a Hyper-V VM inside the guest VM.

So it clearly is possible under Hyper-V to use both VBS-enabled VMs and nested virtualisation (including simultaneously), even on hosts which themselves have VBS enabled. It being technically possible, the next question is whether Microsoft exposes the necessary public APIs for third parties to leverage these configurations. Is anyone from VMware able to comment on whether such support is on the development roadmap, and if there are any major blockers to adding it?
I suppose the best way to confirm the current state of affairs is to see if a Hyper-V VM can be launched with VBS enabled in the guest while VBS is enabled on the host. If the answer is no, then it's almost certainly not possible under VMware either when using Hyper-V as the virtualisation backend. If the answer is yes, the question is whether the relevant support is exposed through documented APIs.
Is anyone from VMware able to acknowledge this issue and confirm a fix is forthcoming? At least in my scenario it's reproducible every time.
I'm seeing the exact same issue. I've attached a stack trace, in case it's helpful, from reproducing the crash with a debugger attached. If someone from VMware is viewing this thread, feel free to reply with any extra details that would be helpful for getting out a fix.
Are any VMware developers monitoring this forum able to comment on future support for nested hypervisors on systems with Hyper-V (or dependent features like Device Guard)? When host VBS support was introduced back in Workstation v15.5.5, my understanding was that this feature was missing due to limitations in the Windows Hypervisor Platform API. Is this still the case? Are there any plans to add support, and what are the roadblocks to doing so? If at all helpful as background, my use case is testing VBS configurations in VMs on a host which itself uses VBS, as well as testing ESXi configurations in a VM. Both of these scenarios require Intel VT-x/EPT virtualisation (or AMD-V/RVI for AMD CPUs), which isn't supported with host VBS. Thanks in advance!
Regarding your additional two Qs:

Windows 10 v2004 Enterprise x64 (final release; i.e. Build 19041). Only the 2004 release has the support required for Hyper-V interop (excluding Insider builds).

Right now I'm not sure I'd call it a bug in either. I don't know enough about WDAG to say whether keeping locks on those catalog files is expected behaviour, and the VMware Workstation Pro installer presumably just isn't expecting other processes to be maintaining references to those files. Updating the installer to at least check for this case would likely be the simplest path forward (along with updating the release notes), but it's not necessarily a bug, just a system configuration that should be handled.
Regarding your Qs:

I don't expect it would affect new installations, as there are no existing driver catalog files at that point for the WDAG container to be locking.

I'd definitely expect it to affect future updates, alongside the existing v15.5.6 update, unless there's some change in WDAG behaviour (via Microsoft) or the installer handling (via VMware).

It could conceivably affect updates from older versions as well, but as the pre-v15.5.5 releases aren't compatible with Hyper-V (and thus WDAG), you'd be updating from a non-functional installation. So possible, but uncommon. Maybe if someone intentionally installed WDAG before updating to v15.5.5+, knowing the latter will fix the incompatibility (i.e. updated both, but in the "wrong" order).

In all cases, the answers above assume the underlying system has the WDAG feature enabled. If not, the circumstances needed to encounter the issue won't be present.
Posting to document an issue I encountered while updating to VMware Workstation Pro v15.5.6 from v15.5.5 which was not trivial to track down.

Now that VMware Workstation is compatible with Hyper-V and features which rely on it (e.g. Device Guard, Credential Guard, etc.), it's possible to have those features installed side-by-side and use them in tandem. One of those features is Windows Defender Application Guard (WDAG). The WDAG container (a lightweight Hyper-V VM) maintains open handles to many driver catalog files located at C:\Windows\System32\CatRoot. This includes catalog files for VMware drivers which need to be updated during the upgrade process. The result is that the upgrade will fail with any of several possible messages. I personally witnessed errors referencing vsock (as a pop-up dialogue during the upgrade), but after the upgrade failure and subsequent rollback, I only saw generic failure messages without any pop-ups on subsequent attempts. In both cases the issue was due to driver catalog files being held open by the WDAG container.

The workaround is fortunately simple, but not obvious: stop the Application Guard Container Service (hvsics) before attempting the upgrade. This stops the WDAG VM process from maintaining the open file handles. Once the upgrade is complete, the service can be safely restarted.

I'm not sure if the VMware Workstation Pro installer can better handle this case, but if not, it probably at least merits a reference as a Known Issue in the release notes to save others potentially a lot of trouble.
Sorry for the delayed response! I thought I had subscribed to notifications on this thread. It's not a stupid question at all. The answer is that there are some cases where transferring a file to/from a VM is desired but the VM deliberately doesn't have a network connection, or is on a network segment which is inaccessible from the host. The 10GB size was just an example, and of course at the more extreme end. Regardless, it appears there's some "low-hanging fruit" in the handling of VMware Tools copy support which would yield a substantial performance increase in such scenarios.
When copying files from the guest to the host with VMware Tools (not using SMB, FTP, etc.), the mechanism is very inefficient. Consider the following scenario, assuming a 10GB file to be copied from the guest to the host:

1. VMware Workstation copies the file from the guest to a temporary location on the host system.
2. Windows Explorer *copies* the file from the temporary location to the desired location on the host.

Assuming the destination is on the same volume as the temporary location, twice as much storage space as should be needed is used. In addition, the temporary file is not removed, so the wasted storage space is retained indefinitely until a background process performs clean-up of temporary files. For large files, the copy takes around twice as long as it should, given the redundant second step.

I'd suggest the following changes:

1. Where the destination resides on the same volume as the temporary storage location, a move should be performed, resulting in no wasted space and a halving of the time required.
2. Where the destination resides on a different volume from the temporary storage location, the data should be removed from the temporary location after the copy is completed.

Ideally the file would be copied directly to the destination instead of via an intermediate temporary location, but I expect there are additional factors here. The above improvements should apply to the vast majority of copy operations while being relatively simple to implement.

All testing performed on Windows 10 v1903 x64 w/ VMware Workstation v15.5.0. Thanks in advance.