VMware Horizon Community
UCPR
Contributor

AppStack attached 50% of the time with user assignment

Hey everyone,

We are in the process of deploying App Volumes to certain users (for Adobe Pro). I'm seeing odd behavior once the AppStack is deployed: if we assign the AppStack by user, it only attaches every other login. Here's the sequence:

1. Initial login: the AppStack is available. The user signs out and the VM reboots (due to RebootAfterDetach).

2. The user logs back in and the AppStack is unavailable.

3. The user signs out, the VM reboots, the user logs back in, and the AppStack is available again.

Watching the tasks in vSphere after each sign-out, I saw that at step 1 the VHD is detached from the VM, at step 2 it is never re-attached, and at step 3 it attaches again. So the problem is step 2, where the VHD is simply never attached to the VM.
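
In case anyone wants to check the same thing from the vSphere side without clicking through the task list, something like this rough pyVmomi sketch can list the disks attached to the VM after each login (the vCenter host, credentials, and VM name are placeholders for my environment):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (host/user/pwd are placeholders).
ctx = ssl._create_unverified_context()  # lab shortcut; use proper certs normally
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

# Find the desktop VM by name (hypothetical name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "WIN10-TEST-01")

# Any attached AppStack or writable shows up here as a cloudvolumes/... VMDK.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        print(dev.backing.fileName)

Disconnect(si)

Running that after step 2's login shows only the base disks, which matches what the task list says: the AppStack VHD was never attached.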

I've tried to replicate the issue with computer assignment (with detach on shutdown) and it works perfectly; the VHD is always re-attached to the VM.

We are using version 2.14.

Has anyone had the same issue?

Thank you,

6 Replies
techguy129
Expert

Review the manager and agent logs to see what is going on. That will give you better insight into the issue:

The log locations are documented in the VMware Knowledge Base.
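
For example, a quick Python sketch like this can pull the interesting lines out of the agent's svservice.log (the path below is the usual 2.x default; confirm it against the KB article for your version):

from pathlib import Path

# Usual App Volumes 2.x agent log location; verify against the KB.
log = Path(r"C:\Program Files (x86)\CloudVolumes\Agent\Logs\svservice.log")

# Print error lines and anything mentioning attach activity.
for line in log.read_text(errors="replace").splitlines():
    if "ERROR" in line or "attach" in line.lower():
        print(line)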

jsinclair
Enthusiast

Are you using writable volumes? We had a similar issue where the writables weren't getting detached properly, which then prevented them from reattaching. If the writable volume failed, it wouldn't attach any AppStacks either. After working with VMware support for the past couple of months, they found a bug in the 2.14.x agent versions. We downgraded our agents to 2.13.3 and haven't had the issue yet. This is supposed to be fixed in 2.15, which will be released in November. Hope that helps.

Ray_handels
Virtuoso

Hey,

Did VMware state what was causing the bug? Was it a specific version of Windows that could trigger it? We are also looking into moving to the 2.14.2 agent, but seeing this (we also use writable volumes) I'm kinda afraid to move up to 2.14. How often did this happen, and in what situation did you see it happening the most?

jsinclair
Enthusiast

We actually still have this case open with VMware, as we are still experiencing some issues. I'm not sure what the actual bug was, but the support person I've been dealing with mentioned it to me. Since downgrading to 2.13.3 the problem has occurred less often, but it still happens. We have a script that logs everyone off their desktops on Saturday mornings, so on Monday morning, when everyone logged in again, it started happening. We went from about 30% of writables failing to now only 5-6 out of 200.

Ray_handels
Virtuoso

I would suggest checking the svservice.log file on the machines that fail to attach the writable.

Do you always see this with the same users? Do you happen to have computer-assigned AppStacks? It seems it might not be a bug, as we don't see this behavior at all with over 700 writable attachments per day.

You could also look into the manager log to see what is happening there. That way you can check whether the agent is actually able to connect to the manager.
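
A quick connectivity sketch like this one (the manager hostname is a placeholder) can at least rule out basic network or TLS problems between agent and manager:

import socket, ssl

MANAGER = "appvolumes-manager.example.com"  # hypothetical hostname
PORT = 443

try:
    with socket.create_connection((MANAGER, PORT), timeout=5) as sock:
        ctx = ssl._create_unverified_context()  # skip cert checks for a quick test
        with ctx.wrap_socket(sock, server_hostname=MANAGER) as tls:
            print("Connected, TLS version:", tls.version())
except OSError as exc:
    print("Cannot reach manager:", exc)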

jsinclair
Enthusiast

Thanks for the suggestion. I will try that. We have been capturing the manager logs (with debugging turned on) and the errors mostly looked like this:

[2018-11-05 14:04:34 UTC]   P3812R6152  INFO   RvSphere: File []/vmfs/volumes/5b55e801-27d490c4-517e-246e96aeaa18/cloudvolumes/writable/XXXXX!20!on!20!W10x64.vmdk was not found

[2018-11-05 14:04:34 UTC]   P3812R6152 ERROR   RvSphere: Failed to reconfigure VM "VP7026" (5010551b-2f91-29fd-50ac-4749032224fc):

[2018-11-05 14:04:34 UTC]   P3812R6152 DEBUG   RvSphere: VM reconfig returned with failure during mounting - attempting to detect partial mount status

[2018-11-05 14:04:34 UTC]   P3812R6152 DEBUG   RvSphere: Failed to mount: [["[P0.VIEW.AVWRITE1.DS28.SPA] cloudvolumes/writable/XXXXX!20!on!20!W10x64.vmdk", "failed"], ["[P0.VIEW.APPVOL1.DS19.SAN2] cloudvolumes/apps/Infrastructure_5-8-17.vmdk", "failed"], ["[P0.VIEW.APPVOL1.DS19.SAN2] cloudvolumes/apps/Microsoft_Office_2016x_64_7-17-18.vmdk", "failed"], ["[P0.VIEW.APPVOL1.DS19.SAN2] cloudvolumes/apps/Arc_Connect_Agent_6.2_10-19-17.vmdk", "failed"], ["[P0.VIEW.APPVOL2.DS20.SAN2] cloudvolumes/apps/SangIt_13_3-14-18.vmdk", "failed"], ["[P0.VIEW.APPVOL1.DS19.SAN2] cloudvolumes/apps/Starship_7-20-18.vmdk", "failed"], ["[P0.VIEW.APPVOL2.DS20.SAN2] cloudvolumes/apps/EDI_Notepad_8.vmdk", "failed"], ["[P0.VIEW.APPVOL1.DS19.SAN2] cloudvolumes/apps/PowerBI_July_7-24-18.vmdk", "failed"], ["[P0.VIEW.APPVOL1.DS19.SAN2] cloudvolumes/apps/Default_10-31-18-update.vmdk", "failed"]]

Even though the writable was there and accessible, the failures were very random and wouldn't hit the same people or even the same pool. We have two pools configured identically, except one has NVIDIA GRID vGPUs.
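
To double-check that the failures really were random and not clustering on particular volumes or datastores, something like this rough Python sketch can tally the VMDKs appearing in the manager's "Failed to mount" lines (the production.log path below is the usual manager default; adjust for your install):

import re
from collections import Counter

failures = Counter()
# Usual App Volumes Manager log location; adjust for your install.
with open(r"C:\Program Files (x86)\CloudVolumes\Manager\log\production.log",
          errors="replace") as f:
    for line in f:
        if "Failed to mount" not in line:
            continue
        # Each failed entry ends with a quoted path like .../vol.vmdk"
        for vmdk in re.findall(r'(\S+\.vmdk)"', line):
            failures[vmdk] += 1

# Most frequently failing volumes first.
for vmdk, count in failures.most_common():
    print(count, vmdk)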

However, we made a change last week to the snapvol.cfg file that is pushed to the writables. We use Palo Alto Traps for malware/exploit protection, and I added exceptions for its directory and registry keys; so far we haven't had an issue. I'm not 100% convinced that was it, but it definitely seems plausible. We weren't seeing any alerts in Traps, but that doesn't always mean it isn't interfering. This coming Monday will be the true test, since everyone will have had the writable change this week and all the desktops get logged off on Saturdays.
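
For reference, the exclusions take roughly this shape in snapvol.cfg. The exact Traps paths come from the PA KB article linked below; the ones here are illustrative, not copied from it:

# Illustrative snapvol.cfg exclusions for an endpoint agent; verify the
# actual paths against the PA KB article before using them.
exclude_path=\Program Files\Palo Alto Networks\Traps
exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Palo Alto Networks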

Below is the PA KB article that referenced these changes:

https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClOcCAK

