After the user is logged in for a while, the svservice crashes and the Writable Volume and AppStacks detach. With the same template, same servers, and same AppStacks and Writable Volumes, everything stays attached when using the 2.13.3 agent.
Anyone else having this issue?
The issue ended up being that you cannot have two different App Volumes Managers connected to the same vCenter. I was migrating from 2.10 to 2.14, and when one environment scanned vCenter, it would detach the AppStacks and Writable Volumes managed by the other environment. Once I finished migrating my AppStacks to the new environment and shut down the old AV managers, the issue went away.
I am seeing this exact behavior after upgrading to 2.14. We are not using writable volumes, but everything else is the same as your situation.
No, never seen this behavior, and we have users that are logged in for days on end.
Does the App Volumes log or the Event log say anything?
We went back and completely uninstalled and reinstalled the AV agent after having this problem a few times. We also turned off all the new async mounts and queues.
Not sure which one did it, but we have not had a problem since then. We plan to continue running this way, as we saw no performance increase at all with async anyway.
To update on jlenag's post, the issue has returned and we have opened a support case with VMware about it. More details to follow.
Just curious - were you using a shared datastore path with these two different managers? Or, with the new manager, did you run an import pointing to the path that the old manager is using? I noticed that an "import" doesn't actually copy anything, even if you set a new path. It just creates links to the imported path.
I'm facing the same issue and I had it narrowed down to either being caused by sharing a datastore path, or sharing a vCenter. I really hope it isn't the shared vCenter issue, as you determined, because I don't like cutting over to new versions like that. Yesterday I did a manual copy of all my appstacks to a new folder and I'm testing that today. Fingers crossed.
Our setup has the same vCenter and the same datastores, but different folders within the datastores.
We copied only the AppStacks we wanted to use with the new managers to a new folder in the datastore and then imported them from there. We are still seeing this issue intermittently, and are still waiting on support for a full explanation of what is going on.
Ah, that's disappointing to hear! But appreciated.
And for an additional data point, this actually started happening when I began a migration from 2.11 to 2.13.2, so it isn't exclusive to 2.14. On top of that, even after changing my test pools back to the snapshot with the 2.11 agent, AppStacks are still disconnecting from them.
Just wanted to say that I haven't had any disconnects since I manually copied all of my AppStacks to a new folder on the datastore. So right now my setup is working with a single vCenter: App Volumes Manager 2.11 pointing to folder1, App Volumes Manager 2.14 pointing to folder2.
Sorry for the late reply. I had a similar setup to yours, jordanht: a 2.11 manager pointing to /cloudvolumes/apps and a 2.14 manager pointing to /cloudvolumes2/apps on the same vSAN datastore. I had copied the AppStacks from one folder to the other and then imported them into the new App Volumes Manager before assigning them to users.
The only thing that worked for me was to shutdown the old environment.
What support told me was that the AV manager scans vCenter, and if it sees App Volumes attached that it thinks shouldn't be, it detaches them.
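That scanning behavior is why a shared vCenter with two managers is risky: each manager only recognizes attachments from its own folder. One way to see what is actually attached where is to group each VM's attached VMDK backing paths by datastore folder. The sketch below is purely illustrative, not a VMware tool; the folder names mirror the /cloudvolumes and /cloudvolumes2 paths mentioned above, and actually collecting the backing paths from vCenter (e.g. via pyVmomi) is left out, so only the grouping logic is shown.

```python
# Hypothetical diagnostic helper: group attached VMDK backing paths by the
# datastore folder they live in, so you can spot which App Volumes Manager
# "owns" each attachment when two managers share one vCenter.
import re
from collections import defaultdict

def group_attachments_by_folder(vmdk_paths):
    """Map 'datastore + folder' -> list of attached VMDK paths.

    Assumes vSphere-style backing paths like
    '[vsanDatastore] cloudvolumes/apps/office.vmdk'.
    """
    groups = defaultdict(list)
    for path in vmdk_paths:
        m = re.match(r"\[(?P<ds>[^\]]+)\]\s*(?P<rel>.+)/[^/]+\.vmdk$", path)
        key = f"[{m.group('ds')}] {m.group('rel')}" if m else "(unrecognized)"
        groups[key].append(path)
    return dict(groups)

# Example input: one VM with stacks attached from both managers' folders
attached = [
    "[vsanDatastore] cloudvolumes/apps/office.vmdk",
    "[vsanDatastore] cloudvolumes2/apps/office.vmdk",
    "[vsanDatastore] cloudvolumes/writable/user1.vmdk",
]
for folder, disks in sorted(group_attachments_by_folder(attached).items()):
    print(folder, "->", len(disks), "disk(s)")
```

If attachments show up from a folder the currently running manager does not manage, that matches the detach behavior support described.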
Adam, can you share an SR# for when you contacted support on this issue?
Thanks, Jeremy
I sent it to you in a message.
When I asked why something like this wasn't documented, my support engineer said that engineering told him the config was neither supported nor a best practice, so they didn't think they needed to publish anything about it.