VMware Cloud Community
sahara101
Contributor

Snapshot issues

Hi, 

 

we had a problem today: during a snapshot deletion, the ESXi host crashed. I could not start the VM anymore afterwards, so I created a new VM, added the existing disks, created and deleted a snapshot, and then could boot the VM. That was followed by some hours of troubleshooting Windows and VMware Tools issues.

Now it is asking for consolidation, but it cannot do it because it says it cannot lock the file. On every snapshot I try, I see the disks change to vm-00000x.vmdk.

Is there a solution for this? 

An error occurred while consolidating disks: Failed to lock the file.

Consolidation of disk node 'scsi0:1' failed: Failed to lock the file.

Consolidation of disk node 'scsi0:0' failed: Failed to lock the file.

 

Or this: 

An error occurred while consolidating disks: One or more disks are busy.

These are the files:

sahara101_0-1676742271933.png

And these are the files in the new folder for the new vmx:

sahara101_1-1676742390032.png


Thanks!

a_p_
Leadership

Please attach the VM's latest vmware.log to your next reply.
Maybe it contains useful information.

André

sahara101
Contributor

Will post later…

 

Another issue came up: the data the server had after the first snapshot was created is now missing. Can I somehow copy it back using the 000002 vmdks?

a_p_
Leadership

You cannot just read from a snapshot file without the parent/base files.
According to the screenshots, the VM has/had 5 active snapshots, each of them containing data blocks modified relative to its parent(s).

With "so I created a new one, added the existing disks," you most likely added the base .vmdk file, which means that the VM was reverted to the point in time before any snapshot was created.

It may be possible to recreate the snapshot chain by manually modifying some of the descriptor files.
However, it depends on what happened during snapshot deletion, the size of the snapshots, and how much the base virtual disk has been modified since it was used in the new VM, i.e. how long the new VM has been powered on with the base .vmdk.
To avoid more damage, shut down the new VM asap unless already done.
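For context, the snapshot chain mentioned above is recorded in the small plain-text .vmdk descriptor files: each snapshot's parentCID must equal the CID of its parent, and parentFileNameHint names the parent descriptor. A minimal sketch of one link in such a chain (file names and CID values here are made up):

```
# Disk DescriptorFile  (sketch of a hypothetical MyVM-000002.vmdk)
version=1
CID=a1b2c3d4
# parentCID must match the CID line inside MyVM-000001.vmdk
parentCID=0fedcba9
createType="seSparse"
parentFileNameHint="MyVM-000001.vmdk"

# Extent description
RW 209715200 SESPARSE "MyVM-000002-sesparse.vmdk"
```

If any link's parentCID no longer matches its parent's CID, ESXi refuses to open the chain, which is why manual descriptor edits can sometimes repair it.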

If you want me to take a look at this, I need some details.

Create a file listing of the original VM's folder/files by running ls -lisa > filelist.txt in the VM's folder. Then compress/zip the following files: filelist.txt, *.vmx, *.vmsd, *.vmdk (only the 12 small ones without flat, or sesparse in their file names), and the vmware*.log files.
Then attach the resulting .zip archive to your next reply.
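On the ESXi shell, the collection steps above can be sketched like this (mock file names for illustration; tar is used here in place of zip, which may not be available on the host):

```shell
# Mock VM folder to illustrate the commands (hypothetical file names);
# on the host you would cd into the real VM directory under /vmfs/volumes/.
mkdir -p /tmp/vmdir && cd /tmp/vmdir
touch MyVM.vmx MyVM.vmsd MyVM.vmdk MyVM-000001.vmdk \
      MyVM-flat.vmdk MyVM-000001-sesparse.vmdk vmware.log

# 1) Create the requested file listing.
ls -lisa > filelist.txt

# 2) Pack only the small descriptor/config/log files; the large
#    -flat/-sesparse extent files are deliberately excluded.
tar -czf support-files.tgz filelist.txt *.vmx *.vmsd vmware*.log \
    $(ls *.vmdk | grep -v -e '-flat' -e '-sesparse')
```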

Other than this, you may consider restoring the VM from its latest backup; that may mean you lose some data, but you will have a healthy VM.

André

sahara101
Contributor

Here is the zip file, thanks! The 00002 should be the one with the data. I changed something in the .vmx, so when I add it now it only shows "Remove from inventory", so it is totally wrong..

A restore is on the way, but it is missing a day; the problem started with a backup snapshot deletion.

 

LE: I managed to start it; will update if it keeps working.
