VMware Horizon Community
jahos_
Enthusiast

Failed to lock file when recomposing desktops with computer AppStacks

Recomposing a floating desktop pool that has computer AppStacks assigned generates the error "cannot power on VM on Host in Datacenter. Failed to lock file" on every desktop being recomposed.

The desktops keep trying to power on, and the desktop pool never finishes recomposing.

14 Replies
jahos_
Enthusiast

Apparently, when computer AppStacks are assigned based on OU and "Allow non-domain entities" is enabled, the VMDK files get locked when the desktop pool is recomposed.

Disallowing non-domain entities fixed it.

This should not happen, and I hope it will be fixed in the next version.

Jason_Marshall
VMware Employee

So is this with a non-domain VM? How did you assign the AppStack? Or are these domain VMs where you just happen to have the non-domain option selected?

jahos_
Enthusiast

No, this happens to domain VMs with computer AppStacks that are being recomposed. Selecting the option "Allow non-domain entities" causes the error in vCenter: "failed to lock the file. File is being locked by a consumer on host xxxx with a read-only lock". The desktops try to power on, but are then powered off again because of this error. Then they are powered on again, powered off, and so on.

The computer assignments are based on OU membership.

Strangely enough, I really do have to allow non-domain entities, because when we access the View desktops from a NON-DOMAIN CLIENT, logons are slow and the user profile gets messed up. I really don't understand what App Volumes is doing here, because as far as I know App Volumes does nothing with the client machine.

Jason_Marshall
VMware Employee

This is interesting and should not be happening. Have you created a ticket with VMware support on this? If so, please send it to me.

joinerc
Contributor

I am having this exact same issue. I just tried calling support, but since we are using the trial of Horizon View they told me they couldn't offer me support! VERY FRUSTRATING, as we are a current vSphere customer.

We too have a floating desktop pool with AppStacks assigned to an AD OU, and we only see the error after recomposing. As you stated, the VMs try to power on over and over but fail because they "fail to lock file". I noticed that on the affected VMs, the AppStack VMDK that gets added after recomposing is changed from an independent nonpersistent disk to an independent persistent disk. If I manually change the hard disk back to nonpersistent, the VM boots fine and is happy.
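
In case it helps anyone else confirm the same behavior, this is roughly how I've been checking the disk modes with pyVmomi. The vCenter address, credentials, and VM name are placeholders for our environment, so treat it as a sketch rather than anything official:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find one of the linked clones from the pool (the VM name is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "floatpool-01")
view.DestroyView()

# Print the mode of every virtual disk. On the broken clones the AppStack VMDK
# shows up as independent_persistent instead of independent_nonpersistent.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        print(dev.deviceInfo.label, dev.backing.fileName, dev.backing.diskMode)

Disconnect(si)
```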

This sounds like a bug to me, which is why VMware offering me no help is a major FAIL!

JHT_Seattle
Hot Shot

Had the same issue here in testing, and the only resolution I found was to delete the VMs from View. So I probably won't be assigning any stacks to the OU again anytime soon. :)

Jason_Marshall
VMware Employee

There is a feature in App Volumes that locks the volume against deletion. This exists because if you delete a VM with an AppStack attached, ESX will delete all of the attached disks. While not recommended, this protection can be disabled by setting the system environment variable CV_NO_PROTECT=1 on the Manager and restarting the App Volumes Manager service.

PLEASE USE CAUTION if you do this. Again, if this is set and you delete a VM with an AppStack attached, the AppStack will also be deleted. However, this should allow you to recompose the VM with the AppStack attached and get past this issue.
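
If you want to script that change, something along these lines should work when run elevated on the Manager server. This is just a sketch: the service display name below is a placeholder, so check the Services console for what your App Volumes Manager service is actually called before using it:

```python
import subprocess

# Placeholder display name -- replace with the actual App Volumes Manager
# service name shown in services.msc on your Manager server.
SERVICE = "App Volumes Manager"

# Set the machine-level environment variable (CV_NO_PROTECT=1).
subprocess.run(["setx", "CV_NO_PROTECT", "1", "/M"], check=True)

# Restart the Manager service so it picks up the new variable.
subprocess.run(["net", "stop", SERVICE], check=True)
subprocess.run(["net", "start", SERVICE], check=True)
```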

jahos_
Enthusiast

Are you using NFS storage? I had this issue on Nutanix, which is NFS.

VMware support says they cannot reproduce this on VMFS storage.

jahos_
Enthusiast

Thanks for clarifying this, Jason. I was wondering why a lock is needed, because the AppStacks are read-only.

joinerc
Contributor

Hi Jason,

Thanks for the help and apologies for venting in a moment of frustration.

CV_NO_PROTECT seems to have helped somewhat, but unfortunately I'm still having the issue. It wasn't clear to me whether to set the system variable on the App Volumes server or on our desktop template, so I set it on both. We have 10 VMs in a floating desktop pool, and now when I recompose, 3 out of the 10 boot up normally and still have the AppStack attached. The other 7 still complain about locked files, so I had to remove the AppStack hard disk from those VMs for them to boot up and finish recomposing.

Here are the exact steps I followed:

1) Set CV_NO_PROTECT on the App Volumes server and on our desktop template

2) Shut down the template and took a snapshot of the VM

3) Restarted the App Volumes Manager service on the App Volumes server

4) Recomposed the desktop pool so the VMs in the pool would have the CV_NO_PROTECT variable (no AppStacks assigned at this point)

5) Assigned the AppStack to the OU for the desktop pool after the VMs finished recomposing successfully

6) Logged into the VMs to make sure the system variable and the AppStack were applied; they were.

7) Recomposed the desktop pool again

8) 7/10 VMs failed to boot after their final configuration because of the locked file issue.

9) Removed the AppStack hard disk from the 7 VMs and the VMs finished recomposing

10) Checked that the AppStack was automatically added back to the 7 affected VMs from the OU assignment, and it was

I tried recomposing again after those steps and I'm having the same issue again, except this time only 1 of the 10 VMs successfully recomposed. The one that succeeded this time is one that originally failed during the previous recompose.

I'm attaching 3 screenshots of the recompose process right around the failure:

1) Capture.jpg shows the status of the VMs in the View Admin Portal right before they are reconfigured the final time in vCenter.

2) Capture2.jpg shows the status in vCenter right after the VMs are reconfigured for the final time and vCenter is configuring the digest.

3) Capture3.jpg shows the error I receive right after the disk digest is done and the VMs try to boot up.

Hopefully all of that provides a little more insight into what's happening. I tried to be as detailed as possible! :)

Thanks for your help and guidance on this, Jason! I really do appreciate it!

P.S. Re jahos_'s question about NFS vs. VMFS: we are using VMFS5 LUNs on an HP 3PAR 7200.

Jason_Marshall
VMware Employee

Very perplexing. Can you try an AD user or group assignment? Does that make any difference? Looking to narrow down the possibilities at this point.

And no worries on the venting. We have all been there... ;)

joinerc
Contributor

Assigning to an AD user works fine.

The problem seems to occur when the VM is rebuilt after a recompose and the AppStack hard disk is reattached. Initially, when I attach an AppStack, there are 3 hard disks: Hard disk 1 - system disk, Hard disk 2 - disk descriptor disk (checkpoint.vmdk), Hard disk 3 - AppStack disk. At this point the AppStack disk is an independent NONPERSISTENT disk. After recomposing, the failed VMs (which are most of them) show: Hard disk 1 - system disk, Hard disk 2 - AppStack disk, Hard disk 3 - disk descriptor disk. At this point the AppStack disk is attached as an independent PERSISTENT disk. If I delete the AppStack disk or set it to nonpersistent, then the VM boots fine.
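
For reference, this is roughly the reconfigure call I've been using to flip the disk back between recompose attempts, using the same pyVmomi session as in my earlier sketch. The function name and the "Hard disk 2" label are just placeholders for how the disks show up in our environment:

```python
from pyVmomi import vim

def set_disk_nonpersistent(vm, disk_label):
    """Flip the named virtual disk back to independent_nonpersistent mode."""
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == disk_label)
    disk.backing.diskMode = "independent_nonpersistent"

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
    change.device = disk
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

# Example: on our broken clones the AppStack VMDK shows up as "Hard disk 2".
# set_disk_nonpersistent(vm, "Hard disk 2")
```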

Since my last post I have also tried the following:

1) Updated App Volumes Manager and the agent on our template to 2.6

2) Assigned the AppStack to an AD user

3) Assigned the AppStack to an individual AD workstation

4) Assigned the AppStack to an AD security group containing workstations

5) Our template and linked clones were on HW v8, so I upgraded them all to HW v10 (VMware Tools is also up to date)

The only solution that has worked is assigning to an AD user. The problem is that we have a local user that needs the apps, not an AD user. The only way I see to assign to a local user is to assign to machinename\user, which will obviously be different for every machine. If we could somehow assign an AppStack to any user with a particular username (AD or local) without specifying the machine name, that would also work... if that makes sense? I don't see a way to do that currently, though.

joinerc
Contributor

Latest update: as a hopeful temporary workaround, I went ahead and tried assigning an AppStack to a local user on a VM in the pool. Unfortunately, this didn't work either. After assigning and rebooting, the VMDK never attaches to the VM when the local user signs in. :(

Am I missing something? Assigning an AppStack to an AD workstation or to a local user on an AD workstation should both be supported options, correct? I don't see anything in the documentation that says otherwise.

joinerc
Contributor

OK, so the AppStack not attaching for a local user was my fault. I forgot I had disabled non-domain attachments in the App Volumes Manager configuration; I had disabled that while troubleshooting the OU assignment problem. After re-enabling it, I can now attach to a local user on the workstation. This at least provides a workaround until I can solve the issues with workstation OU assignments.

I still wonder, though, whether I'm just missing something with OU assignments. Is assigning to workstations not recommended or not supported?
