So this happens every so often, quite randomly.
When reviewing the logs for a particular user encountering this, the following is reported for the writable:
Cvo: Inspecting Cvo::MountResultSet (152125540) (from log block)
#<Cvo::MountResultSet:0x000000122280c8 @disabled_owner=false, @invalid_license=false, @writable_conflict=false, @writable_unprotected=false, @writable_error=true, @hypervisor_error=false, @error_count=1, @change_count=0, @mounted_count=0, @busy_count=0, @results=[#<Cvo::MountResult:0x00000012228028 @status="Failed", @snapvol=#<Snapvol id: 393, name: "domain\\user for \"desktop\" on W7x64", path: "", description: nil, created_at: "20170325 13:43:13", updated_at: "20170523 18:39:50", datastore_name: "", filename: "", enabled: true, writable: true, total_use_count: 21, provision_uuid: nil, provisioning: false, provision_completed_at: nil, provision_started_at: nil, size_mb: 3078, attachment_count: 0, assignment_count: 1, reachable: true, volume_guid: "{c3401bf9-1b41-4027-ab75-154764724921}", snapvol_version_id: nil, block_login: true, mount_prefix: "desktop", mounted_at: "20170523 18:39:45", defer_create: true, template_file_name: "[datastore] cloudvolumes/writable/domain!...", template_version: nil, missing: false, protected: false, agent_version: "2.12.0.32U", capture_version: nil, free_mb: 23255, total_mb: 25596>, @snapvol_file=nil, @writable=true, @volume_guid="{c3401bf9-1b41-4027-ab75-154764724921}", @message="Failed to mount because writable volume is missing">
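For anyone triaging similar reports in bulk, here is a rough, hypothetical Python sketch (not an App Volumes tool, just a regex over Manager-log inspect lines like the one above) that flags a MountResult whose Snapvol record has an empty path, which is the tell-tale sign in this failure:

```python
import re

def parse_inspect_line(line):
    """Return a dict of the quoted-string Snapvol attributes in an inspect line."""
    return {m.group(1): m.group(2)
            for m in re.finditer(r'(\w+): "([^"]*)"', line)}

def has_empty_path(line):
    """Flag the failure pattern above: a Snapvol whose path attribute is ""."""
    fields = parse_inspect_line(line)
    return fields.get("path") == ""

# Shortened sample based on the log block above
sample = ('@snapvol=#<Snapvol id: 393, name: "user", path: "", '
          'datastore_name: "", filename: "", volume_guid: '
          '"{c3401bf9-1b41-4027-ab75-154764724921}">')
print(has_empty_path(sample))  # True, since path is ""
```

Running this over a day's worth of log lines would surface affected writables before users report them; the attribute names are taken from the dump above, but the parsing approach is my own assumption, not anything App Volumes ships.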
The same pattern holds for the few other users who have encountered it. The mount clearly fails because the Snapvol record has an empty path (and filename), so the Manager treats the writable volume as missing.
After reviewing both the App Volumes database and the datastore holding the volume, I can confirm that the relevant information exists and is reachable, and there have been no outage events in recent history.
Steps taken to resolve:
Rescan writables — no effect.
Rescan storage locations — no effect.
Disable, then re-enable the writable — this appears to have worked (it resolved literally as I was typing this post).
I'm not sure disable/enable is the actual fix, since the rescan steps had resolved previous occurrences, but I'll make it the first step for future reports.
Anyway, any insights into why this is happening and how to prevent future occurrences?
Thanks.