rpatz
Contributor

VM says it needs to consolidate snapshots, but won't boot

The VM says on the summary page "Virtual machine disks consolidation is needed". However, the disks will not consolidate; the option is grayed out. When the option was available and I ran it, I got the error "Detected an invalid snapshot configuration". What am I missing here?

4 Replies
Paltelkalpesh
Enthusiast

If consolidate/take snapshot is greyed out, there may be an active task currently running on the VM. Kindly go through the link below, which may give some hints to resolve your issue.

Consolidate VMdisk not working
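As a quick check from an SSH session on the host, something along these lines can show whether a task is still active on the VM (a minimal sketch; "VMName" and <Vmid> are placeholders):

# vim-cmd vmsvc/getallvms | grep -i VMName

(note the Vmid in the first column of the matching line)

# vim-cmd vmsvc/get.tasklist <Vmid>

(an empty list means no task is currently holding the VM; otherwise wait for the task to finish, or cancel it, before trying to consolidate)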

rpatz
Contributor

It's no longer grayed out; however, when I click Consolidate Snapshots it errors out and says "Detected an invalid snapshot configuration".

hussainbte
Expert

Is the VM Powered on?

Why was the snapshot taken? I mean, was it taken by a snapshot-based backup solution like Avamar, VDP, etc.?

If you found my answers useful please consider marking them as Correct or Helpful. Regards, Hussain https://virtualcubes.wordpress.com/
TheBobkin
Champion

Hello,

You likely have some disks pointing to broken snapshots (e.g. a backup job failed but its snapshot was left behind) or a split/inconsistent snapshot chain.

Check the consistency of the snapshot-chain:

1. Open an SSH session to the host that the VM is registered on.

2. Change directory to the location of the VM home folder, e.g.:

# cd /vmfs/volumes/DatastoreThatVMIsOn/VMFolderName

(You can check this via the vSphere/Web Client by right-clicking on the VM and checking the .vmx configuration file location or one of its hard disk paths, assuming these are in the same location.)
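(If you prefer to locate the folder from the SSH session instead, a simple search works too; "VMName" here is a placeholder. The directory containing the .vmx is the VM home folder:

# find /vmfs/volumes/ -name "VMName.vmx" )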

3. Check what disks/snapshots are being pointed to by using cat on the .vmx:

# cat VMName.vmx | grep scsi

(This will output a list of what file each disk is pointing to, e.g.

scsi0:0.fileName = "VMName.vmdk"

or, if pointing to a snapshot, it will look like:

scsi0:0.fileName = "VMName-000001.vmdk")

4. Check what each snapshot is pointing to:

# cat VMName-000001.vmdk | grep parentFileNameHint

(This tells you which disk this snapshot points to next in the chain; it may be the base disk or another snapshot. If it is another snapshot, cat that one in turn until you reach the base disk.)

What you need to work out here is whether you have multiple snapshots pointing to the same next snapshot/disk; if so, work out where the inconsistency is and which is the current 'good' state.
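For illustration, a consistent two-snapshot chain would look something like this (hypothetical file names):

# cat VMName-000002.vmdk | grep parentFileNameHint

parentFileNameHint="VMName-000001.vmdk"

# cat VMName-000001.vmdk | grep parentFileNameHint

parentFileNameHint="VMName.vmdk"

(i.e. 000002 -> 000001 -> base disk. If two snapshots both name the same parent, or a parent referenced in the chain is missing from the datastore, that is where the chain is broken.)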

5. A faster way of checking disk-chain consistency:

# vmkfstools -q -v10 VMName-000001.vmdk

Run this against the highest-level snapshot (the one listed as the disk in the .vmx); it will try to open and close all disks in the chain and tell you where, if anywhere, it is failing.

Another question:

How long has this VM been running on snapshots?

If not long, then as a worst-case scenario you could point it back to an older snapshot (in a consistent chain) or to the base disk (NOTE: ANY DATA ADDED/CHANGED ON THESE DISKS SINCE THE TIME THAT SNAPSHOT WAS TAKEN WILL BE PERMANENTLY GONE).
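A rough sketch of what that repointing might look like, assuming the disk is scsi0:0, the VM is powered off, and you keep a backup of the .vmx first (all file names below are placeholders):

# cp VMName.vmx VMName.vmx.bak

(keep a copy of the current configuration)

# vi VMName.vmx

(change e.g. scsi0:0.fileName = "VMName-000002.vmdk" to scsi0:0.fileName = "VMName.vmdk" for the base disk, or to an earlier snapshot in a consistent chain)

# vim-cmd vmsvc/getallvms | grep -i VMName

# vim-cmd vmsvc/reload <Vmid>

(reload the VM's configuration so the host picks up the edited .vmx before powering it back on)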

Bob

-o- If you found this comment useful please click the 'Helpful' button and/or select as 'Answer' if you consider it so, please ask follow-up questions if you have any -o-
