Good to know whom to contact in such a case!
I have the same issue. I moved a VM from a 6.0.0 cluster to a standalone ESXi 6.7U2.
Any help appreciated.
You were able to move a VM and then had locked files in the new environment?
That's strange and unexpected ...
Please create a header dump and send it my way so that I can look into it.
If possible, contact me via Skype: sanbarrow
Have you created a post with the steps to try and fix this?
I've just had this happen in my homelab. After an issue with the storage array, which led to a disk replacement, I now have 5-6 VMs which are inaccessible/orphaned, and their vmx is locked by one of the ESXi hosts. Of course I've pretty much tried everything: host reboots, storage array reboot, etc. It will not release the lock. Since it's my homelab, I'm pretty much open to trying anything.
This is what I do nowadays:
plan A - safest option :
extract the VMs with Linux
plan B - smartest option, but not for the faint of heart:
patch the .vh.sf file
Can you explain how to extract the VM with Linux?
Connect to the datastore with sshfs in read-only mode, then use ddrescue against the flat.vmdks.
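The sshfs + ddrescue step might look like the sketch below. The host name, datastore path, and VM name are placeholders for your environment, not values from this thread; it assumes SSH is enabled on the ESXi host and that sshfs and ddrescue are installed on the Linux box.

```shell
#!/bin/sh
# Sketch only - all names below are assumptions; adjust for your setup.
ESXI="root@esxi.example.local"        # ESXi host with SSH enabled (placeholder)
DS="/vmfs/volumes/datastore1"         # datastore path on that host (placeholder)
MNT="$HOME/esxi-ro"

mkdir -p "$MNT"
# Mount read-only so nothing is ever written to the damaged VMFS volume.
sshfs -o ro "$ESXI:$DS" "$MNT"

# Copy the virtual disk out with ddrescue. The third argument is the map
# file: it records progress, so an interrupted copy can be resumed and bad
# areas retried later without re-reading what already succeeded.
ddrescue -v "$MNT/myvm/myvm-flat.vmdk" ./myvm-flat.vmdk ./myvm-flat.map

# Unmount when done.
fusermount -u "$MNT"
```

The point of going through sshfs rather than copying on the host itself is that ddrescue tolerates read errors and keeps going, which a plain `cp` on the ESXi side will not.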
If that does not work - see if you can get the location of the fragments with vmkfstools -p 0 against the flat.vmdk
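The vmkfstools step runs on the ESXi host's own shell, not on the Linux box; the datastore and VM names below are placeholders. `-p 0` asks for the physical mapping of the file:

```shell
# On the ESXi shell (placeholder paths - adjust to your datastore/VM).
# Prints the file's extent mapping: one line per fragment, with the
# offset into the backing device where that fragment's data lives.
vmkfstools -p 0 /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk
```

With those offsets and lengths you can pull each fragment straight off the underlying device (e.g. with dd and matching skip/count values) and concatenate them into a usable flat file, even when VMFS itself refuses to hand over the file.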
If that does not work - try to get the location of the fragments by analysing the VMFS-metadata
If that does not work - find the first fragment with scalpel and hope that the flat.vmdks are allocated in one piece
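For the scalpel fallback: a flat.vmdk is a raw disk image with no magic header of its own, so you carve for a signature you expect inside the guest disk instead. A hypothetical scalpel.conf entry for a guest whose disk contains an NTFS partition (the extension name and size cap are assumptions; the byte pattern is the standard NTFS boot-sector signature):

```
# scalpel.conf fragment (sketch, not from the thread)
# extension  case-sensitive  max-carve-size  header
  ntfs       y               100000000000    \xeb\x52\x90NTFS
```

This finds the start of the NTFS volume, not the start of the disk, so you still have to subtract the partition offset to locate the first fragment of the flat.vmdk - and, as noted above, it only pays off if the flat file was allocated in one piece.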