I have an environment with VMware View 6.1.1 linked clones and App Volumes 2.9.
Applications are delivered through App Volumes AppStacks.
Users can install applications through App Volumes writable volumes (UIA only).
User profiles are handled by Persona Management.
When I assign a writable volume to a user and the user logs in to Horizon View, the user gets assigned a VDI from the pool, the writable volume and other AppStacks are attached to the assigned VDI, the logon completes and the user can install applications. So no problem there.
When the user logs out of the VDI, the writable volume and AppStack volumes are not detached automatically. I then detach the AppStacks and writable volume manually (via the vSphere client) and delete the desktop.
When the same user logs in to Horizon View again, he gets assigned a VDI from the linked clone pool, the writable volume (UIA only) and other AppStacks are attached to the assigned VDI, but the VDI itself displays the message "Please wait for the App Volumes Service" and the logon process never finishes.
This situation only occurs when users are assigned a writable volume and does not occur when users are assigned only appstacks.
Enabling or disabling the option "Prevent user login if writable is in use on another computer" has no effect on the problem.
A few things to check. First of all, this issue should not exist in 2.9 (see the App Volumes 2.9 Release Notes):
1)When the user logs out of the VDI, the writable volume and appstack volumes are not being detached automatically. I then detach the appstacks and writable volume manually (via the vSphere client) and delete the desktop.
Please do not do this, as it causes inconsistency in the App Volumes Manager database and makes it unable to track what is attached/detached.
2) What is the Horizon View pool policy for "Disconnect after logoff"? Is it set to Immediately?
3) Is this a fresh deployment or an upgraded environment?
4) Can you attach the Production.log file from the App Volumes Manager (located at C:\Program Files (x86)\CloudVolumes\Manager\Log)? You can search for something like "Failed to reconfigure VM".
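If it helps, this is a quick way to scan a copy of Production.log for those reconfigure failures (shown with grep here; on the Windows Manager server itself, findstr would do the same job). The log lines in the sample file are made up for illustration and are not the exact 2.9 log format:

```shell
# Sample log to illustrate the search (the real Production.log lives in
# C:\Program Files (x86)\CloudVolumes\Manager\Log on the Manager server;
# these lines are invented for the example).
cat > Production.log <<'EOF'
[2015-08-10 09:14:02] INFO  Attaching volume to vdi-042
[2015-08-10 09:14:07] ERROR Failed to reconfigure VM vdi-042
[2015-08-10 09:15:31] INFO  Attaching volume to vdi-043
EOF

# Show every reconfigure failure with its line number and timestamp.
grep -n "Failed to reconfigure VM" Production.log
```

The line numbers from `-n` make it easy to jump to the surrounding context in the full log.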
The reason I detach and delete the AppStacks and writable volume manually is that I have no other choice; the VDI just sits there and waits forever.
The View policy for "Automatically logoff after disconnect" is "after 120 minutes"
The View policy for "Delete or refresh machine on logoff" is "Delete Immediately"
As a test, I changed the View policy for "Automatically logoff after disconnect" to "Immediately". After this, the AppStacks and writable volume are detached automatically when a user logs out, but when the same user logs in to Horizon View again, he gets assigned a VDI from the linked clone pool, the writable volume (UIA only) and other AppStacks are attached to the assigned VDI, but the VDI itself displays the message "Please wait for the App Volumes Service" and the logon process never finishes.
The App Volumes environment is an upgrade from version 2.7. I did, however, re-import the 2.9 templates for writables and AppStacks.
Log files from both App Volumes Managers are attached.
I noticed in the log files that App Volumes cannot reach the VDI by its IP address (Unable to locate VM with IP xxxx).
I see in the vSphere client that a couple of seconds after the writable volume and the AppStacks are attached, VMware Tools stops running. I can, however, still ping the VDI.
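To see how widespread the "Unable to locate VM with IP" errors are, one option is to pull the affected IPs out of a copy of the Manager log and count them. The sample log lines below are invented for illustration (the real format may differ):

```shell
# Sample Manager log lines (format is illustrative only).
cat > Production.log <<'EOF'
[09:14] ERROR Unable to locate VM with IP 10.0.5.21
[09:15] INFO  Volume attached
[09:18] ERROR Unable to locate VM with IP 10.0.5.21
[09:20] ERROR Unable to locate VM with IP 10.0.5.37
EOF

# List each affected IP once, with how often it occurs, most frequent first.
grep -o "Unable to locate VM with IP [0-9.]*" Production.log \
  | awk '{print $NF}' | sort | uniq -c | sort -rn
```

If the same VDI IPs keep recurring, that points at those specific machines (or their VMware Tools reporting) rather than at the Manager itself.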
I will review the logs later for sure, but I have a slight clarification with regards to point 2. As per my understanding:
The View policy for "Automatically logoff after disconnect" is "after 120 minutes". That should mean the writable will get detached after 120 minutes.
If it is set to "Immediately", it detaches the writable immediately (expected), but the next login hangs (unexpected).
Thanks for taking the time to review the log files.
Even 120 minutes after a user disconnects, the AppStacks and writable are not detached.
When a user does a logoff (instead of a disconnect), the AppStacks and writable are not detached either.
Only when there is no writable are the AppStacks detached as expected.
We found a workaround for our issue with writable volumes. It's not a nice workaround, but it works:
1. Do not assign to a group but only to individual users
2. Log on so the writable volume is created and attached the first time
3. Log off
4. Restart the App Volumes services
Very strange behavior.
I would try refreshing the desktop after logoff instead of deleting it.
What happens at logoff is that the App Volumes Manager sends a command to the vCenter Server to reconfigure the machine. It should do this just after the user logs off and just before the machine is shut down.
What I did see happening is that the delete-machine command is sent to the vCenter Server just before the reconfigure command. That way you end up with an inconsistent machine, because the AppStacks cannot be deleted while the VM tries to remove all disks attached to itself.
Regarding not being able to connect to the machine: we had some issues with our VDI pool as well where we could not reach it. After removing the machine from the domain and installing the hotfix from the following KB article, the problem was solved: https://support.microsoft.com/en-us/kb/2550978
Do you happen to push specific firewall settings to your golden image using policies?
We have now tried refreshing the desktop at logoff instead of deleting it. Now the AppStacks are detached after logoff (but it still takes around 2 minutes).
We had already installed the MS KB 2550978 hotfix you mention.
There are no specific firewall settings in place.
What we see now is the following:
1. We create a writable volume (UIA only) for an AD user (or for an AD group the user is a member of)
2. User logs in (1st login), the writable volume is created and the user can work and install his own applications
3. User logs out, the appstacks and writable volume are being detached and the desktop is being refreshed
4. User logs in (2nd login); we see in vCenter that the writable volume and AppStacks are attached, but the VDI itself displays the message "Please wait for the App Volumes Service" and the logon process does not finish (we waited for more than an hour)
5. User disconnects from his session
6. We recompose the VDI
7. User logs in (3rd login), we see in vCenter that the writable volume and appstacks are being attached, the logon continues without any problem and the user can work and install his own applications
What I also saw is that during the 2nd logon there is only an svservice.log file in the directory C:\Program Files (x86)\CloudVolumes\Agent\Logs on the VDI.
During the 3rd login, in addition to the svservice.log file, there are also a cvfirewall.cfg and a cv_startup_postsvc.log file in C:\Program Files (x86)\CloudVolumes\Agent\Logs.
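That difference in agent log files can itself serve as a quick triage check. The file names are the ones observed above; the directory and its contents are simulated here so the sketch is runnable anywhere:

```shell
# Simulate the agent log directory; on a real VDI this would be
# C:\Program Files (x86)\CloudVolumes\Agent\Logs.
LOGDIR=$(mktemp -d)
touch "$LOGDIR/svservice.log"   # present in both the hung and the good case

# cv_startup_postsvc.log only appeared once the agent's startup processing
# had run, i.e. on the login that actually completed (the 3rd login above).
if [ -f "$LOGDIR/cv_startup_postsvc.log" ]; then
  echo "startup processing ran - logon completed"
else
  echo "startup processing missing - logon likely hung"
fi
```

In this simulation only svservice.log exists, matching what we saw during the hung 2nd logon.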
I am attaching the log files for reference.
There is also an SR open for this (SR# 15722287707) but we are not getting anywhere with this.
I have seen this issue before, of not being able to log in the second time when a writable volume is in use. I thought this was an issue with version 2.5 (the first version delivered as App Volumes instead of CloudVolumes), but that has been a while.
And you don't seem to be the only one who has to wait quite some time for answers.
My guess is that they are busy working on version 2.10 or busy with VMworld. It does kind of suck that some issues stay open for that long.
Reconfiguring after logoff should be done within a few seconds (we see a maximum of 8 seconds of reconfiguring during logoff, no matter how many AppStacks are attached).
What version is the agent you are using and what version is the template you are using for the writable volume?
Have you tried updating the template files or removing and recreating them?
Writable template version: 184.108.40.206
Agent version: 220.127.116.113
Yes we did remove/recreate/re-upload the prepackaged templates for both the writable (UIA only) and appstack, we even chose a different datastore.
Today, I was setting up an exact same environment in Brazil for the same company.
In Brazil, Writable volumes work as they should.
I have checked all settings and they are identical. The only difference I can see is that the setup in Europe was initially done with version 2.7 and afterwards upgraded to version 2.9. The setup in Brazil was done with version 2.9 from the start.
Could this be the source of our problems?
It could very well be the case.
The problem is that a writable volume, once created, isn't updated automatically when a new template is delivered. The only thing that happens is that new writable volumes are created from the new template.
There is an "update writable volume" option, so you can update the WVs with the newly created snapvol.cfg (which is usually the part that changes). I really think there should be some change in how applications are managed, so this is handled better than it is now (for the moment, just work with it and get the hang of it).
We are using a 2.6 template and still can work without issues.
I do agree with you though (regarding the other ticket between LWL and App Volumes) that support and documentation could be a lot better, and communication as well.
We worked with CloudVolumes before it was taken over by VMware and, to be honest, support was better in those days (oh, how we love the old days). Now it is often way too quiet on the other end of the ocean, and my guess is that a lot of stuff is actually happening; we just don't see it.
I would love to have some more interaction with the App Volumes guys on the forum for brainstorming and such... So hopefully they read this and chip in on discussions.
For me, though, App Volumes is still, hands down, one of the best products for our environment. We need dynamic composition because we have over 16,000 users and only 1,500 active sessions. Regarding integration and the AppStacks, it just works as intended. We have yet to find applications that don't work because of App Volumes; mostly, if something doesn't work, it is a firewall problem or something else. We do have all middleware installed in the golden image (.NET Framework and VC redistributables).
Yes we do.
I believe the current documentation states that the packaging machine should be the same (or at least as close as possible) as the golden image you will be using for your VDI pools.
What we do now, when we reconfigure or update our golden image, is clone that machine, rename it, uninstall the VMware agent (and the Direct Connect agent if installed) and use it as our packaging machine. This way we almost always have the exact same machine as our golden image.
If you have only installed Windows patches it's not an absolute must, to be honest, but if you update, for example, the JRE or install a new .NET Framework, I would strongly suggest also updating the packaging machine.
Strange. How would you ever work with plugins for Office if that Office version is in the base image but not on the packaging machine?
And installing Office in both the base image and an AppStack isn't really recommended.
For us this is the easiest way to go: just make sure all middleware is in both the base image and the sequencing machine.
We now have the same problem in the second environment, which was set up directly with version 2.9 (so no upgrade from an older version).
Therefore the problem is not related to an upgrade but to something else.
We are stuck 😞