VMware Cloud Community
ICTAdmin
Contributor

Lock on VMX and VMDK files

Hello,

I had a VM running on an EqualLogic SAN. It had been running fine for a long time, but recently the VM hung. On further inspection I couldn't power down or migrate the VM at all. I also noticed that Storage Usage on the Summary tab was reported as 0.

As I couldn't do anything with the VM, I migrated the other VMs off the host and rebooted it. When the host came back up, the VM was marked as inaccessible. I removed it from the inventory and then went to re-add it, but the Register VM button was greyed out, and I couldn't register it with the vSphere Client wizard either.

I then tried to re-add it by logging into the host directly and using the Register VM wizard there. This ran through successfully, but it just added an inaccessible VM with a random numerical name.

I then investigated the datastore and found there was a vmname.vmx.lck file present. I tried to delete it and was unable to do so. I have tried various rm commands and run through all the unlocking documents using vmfsfilelockinfo, which reports that the file is locked in exclusive mode by the host that I rebooted. I rebooted that host again, followed by the other hosts in the cluster (thank god for 10Gb vMotion), and the lock still persists.
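For anyone following along, the lock-owner check with vmfsfilelockinfo looks roughly like this (the datastore name, VM name, vCenter address, and user below are placeholders, not the actual values from my environment):

```shell
# Ask which host holds the lock on the .vmx.lck file (run from an ESXi shell).
# -p = path to the locked file; -v/-u let the tool resolve the owning host via vCenter.
vmfsfilelockinfo -p /vmfs/volumes/MyDatastore/vmname/vmname.vmx.lck \
                 -v 192.168.1.10 -u administrator@vsphere.local
```

The output includes the lock mode (exclusive, in my case) and the MAC address / identity of the host holding it.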

I have also unmounted the datastore from the cluster and rebooted the SAN (only this VM was running on it). Still the lock persists.
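For reference, unmounting a VMFS datastore from the host's CLI goes something like this (the datastore label is a placeholder):

```shell
# List mounted filesystems to confirm the datastore label, then unmount it by label.
esxcli storage filesystem list
esxcli storage filesystem unmount -l MyDatastore
```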

I also cannot copy any of the VMDKs or the .vmx file from the datastore to another one; I am assuming this is down to the lock too.

Has anyone come up against this issue before?

Thanks,

4 Replies
Arvind_Kumar11
Enthusiast

Please have a look at the KB below to see if it helps:

https://kb.vmware.com/s/article/10051 
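For others reading: the process-kill part of that KB boils down to something like this on 6.x (the World ID below is a placeholder taken from the list output):

```shell
# Find the world ID of the stuck VM, then force-kill its process.
esxcli vm process list
esxcli vm process kill --type=force --world-id=1234567
```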

ICTAdmin
Contributor

Thanks for the reply. I have already gone through the steps outlined in that article. There are no processes associated with the locked VM. I have identified the host which holds the lock and rebooted it, but the problem persists.

continuum
Immortal

Sorry for the short answer - it's 4:30 in the morning here ....

In the worst case, VMFS locks can be patched. If you need help with that, feel free to call on Skype.

Ulli


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

ICTAdmin
Contributor

I managed to fix this; in case anyone else is struggling, here is what I did.

Using voma version 0.8 or above, sample commands to check for errors are:

voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxx:1
voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxx

and the big fix:

voma -m vmfs -f fix -d /vmfs/devices/disks/naa.xxxxxx:1
voma -m vmfs -f fix -d /vmfs/devices/disks/naa.xxxxxx

I ran each command in the sequence above. It reported that 0 errors had been fixed, but I was then able to register the VM and it booted fine.
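If it helps anyone, the naa.xxxxxx device name and partition number used in the commands above can be looked up like this (a sketch; the output format may vary by build):

```shell
# Map each VMFS datastore label to its backing naa device and partition number.
esxcli storage vmfs extent list
```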

Apparently ESXi 6.7 Update 2 (which we already had) gives you voma version 0.8, which is the version that works the magic.

The following third-party information helped massively:

https://bakingclouds.com/esxi-6-7-update-2-with-new-voma-version-0-8-now-supports-vmfs6/

The VMware tech doc is here:

https://kb.vmware.com/s/article/2036767

Hopefully this will save someone the hours I wasted on this!
