I can see the meta.yml error; we have seen this error before.
Is this logging from the writable volume only or are appstacks also attached?
What we found out is that if you copy an application into an AppStack (so no installer, just moving files from A to B into the AppStack), it doesn't create the meta.yml file. The App Volumes Agent uses this file to reorder the volumes on the machine and give priority to the writables and AppStacks.
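A quick way to spot-check this is to look for meta.yml at the root of each attached volume. This is just a hypothetical sketch; the mount paths are placeholders for wherever your AppStacks and writables are mounted in your environment, not documented App Volumes paths:

```python
import os

def find_missing_meta(volume_roots):
    """Return the volume roots that have no meta.yml at their top level.

    volume_roots: list of directory paths where AppStacks/writable
    volumes are mounted (placeholder paths; adjust for your setup).
    """
    return [root for root in volume_roots
            if not os.path.isfile(os.path.join(root, "meta.yml"))]
```

Any volume this reports could be one that was populated by copying files in rather than by a captured install.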
What happens if you recreate the writable volume? Do you see exactly the same issue with the meta.yml file?
Finally had a chance to get back to this issue....
As a test I removed all AppStacks and only have a writable volume.
My setup is as follows:
- My master template only has the Horizon and App Volumes agents installed, no other applications.
- The writable volumes are assigned to the computer, not the user.
Here's a test that I ran:
- Delete the busted writable volume and recompose the linked-clone VM
- This recreates the writable volume and assigns it to the computer account
- Login to the system and confirm the writable volume is working as expected
- Logoff the system
- The system is automatically refreshed at logoff
- Attempt to login to the system again but I only get a black screen; I don't even make it to the login screen
- Looking at the logs I see that meta.yml error present
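For anyone else digging through the agent logs for this, here's a small helper to pull out the meta.yml-related lines. It's a hypothetical sketch; the filtering is a plain substring match, and it makes no assumptions about the exact App Volumes log format:

```python
def meta_yml_errors(log_text):
    """Return the log lines that mention meta.yml (case-insensitive)."""
    return [line for line in log_text.splitlines()
            if "meta.yml" in line.lower()]
```

Feed it the contents of the agent log and you get just the lines worth comparing between a working and a broken session.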
On another note, I have the exact same setup (including same master template) where things are assigned to the user, not the computer, but I'm not experiencing this issue at all.
Sorry, no idea. I have seen versions (but that's way back) where the writable was corrupted after first use and worked on second use.
Do you delete the machine after use or refresh it?
I do have the policy set to refresh after every logoff.
Let me guess, using Windows 10 as the guest OS? We had a similar problem, only with Windows 10 and AppVol 2.12, and the issue definitely followed the writable volume; re-creating the writable would fix it every time.

What we found that fixed the issue (and I still do not know why) is that we had to push the Windows Firewall ports for Horizon to our GPO for our agent VMs. I basically looked at a sample VM that had the Horizon client installed and copied the rules verbatim into a GPO that we push to all of our VMs. We discovered this after working a VMware ticket on the issue; they were not able to figure out why it was happening, but for some reason they said to check the Windows Firewall, and it has worked since we did this.

The part that still puzzles me is that the actual rules the Horizon client creates were present on the VMs having the issue, yet the GPO containing those same entries would get the client to connect. My theory is that it's a Microsoft issue somehow tied to writable volumes, but we never did figure out why it was happening, only the fix for it.
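If it helps anyone replicate the rules into a GPO, you can dump the rules on a working VM with `netsh advfirewall firewall show rule name=all` and then filter for the Horizon ones. Below is a rough sketch of such a filter; the "Rule Name:" line format matches netsh's usual output, but the rule names themselves are just examples, not the exact names the Horizon client creates:

```python
def horizon_rule_names(netsh_export, keyword="Horizon"):
    """Extract rule names containing `keyword` from a
    'netsh advfirewall firewall show rule name=all' text dump.

    netsh prints each rule as a block starting with a
    'Rule Name:' line; we only need those lines here.
    """
    names = []
    for line in netsh_export.splitlines():
        if line.startswith("Rule Name:"):
            name = line.split(":", 1)[1].strip()
            if keyword.lower() in name.lower():
                names.append(name)
    return names
```

Once you have the names, you can inspect each rule's ports/programs on the sample VM and recreate them one-for-one in the GPO, which is essentially what we did by hand.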
Has this issue been resolved in the latest version of AppVol, or do you still have this fix applied? If so, could you advise in more detail how you added those rules for the Windows Firewall? Should it be possible to add them via UEM as well?
We are still encountering the issue with writables and Win10 use; the volume gets corrupted after a couple of uses, not sure why. Most of the time it happens when you leave the VM overnight (you can see it connected in Horizon Admin View) and log in to the machine again the next day, then log off. I see the machine doesn't actually log off in the admin view, so I do it manually there, and then the writable becomes corrupted: my next login gets stuck on the App Volumes service. After I remove or recreate the writable, everything works perfectly again.
Thanks for the advice.