VMware Cloud Community
EMZak
Contributor

Consolidation of snapshots takes a very long time

Hi.

I started a snapshot consolidation, and it has now been running for 30 hours. Logging in to the console ends with a timeout. How long can consolidating 2.7 TB + 250 GB + 400 GB VMDKs take? Free space on the datastore was 20 GB ...

Thanks for the reply.

10 Replies
scott28tt
VMware Employee

Try this: VMware Knowledge Base


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
continuum
Immortal

You've got nerves!

With just 20 GB free on the datastore, the question is not "how long will it take" but rather "will it work at all without filling up the datastore".

I hope you are using thick-provisioned VMDKs.

How many snapshots do you have?

How much free space do you have on the datastore now?


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...

EMZak
Contributor

I am a newbie. It was a stupid mistake, I know that now. But it is too late...

The VMDKs are thick-provisioned. I have 3 snapshots.

How do I find the free space via SSH?

Consolidation has been running for about 72 hours... 😞

Thanks for your help.

a_p_
Leadership

You can find out by running the df -h command.

To find out the used disk space for the files in the VM's folder, please run ls -lisa, which reports the used space (in kB) in the second column.

Please run both commands and post the output as plain text, i.e. not as a screenshot.

André

EMZak
Contributor

[root@localhost:/vmfs/volumes/5d28c6d0-12b6a01a-7b3f-6c2b5991eaa5/ServerAD] df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-6       3.6T   3.6T     41.4G  99% /vmfs/volumes/datastore1
vfat       249.7M 150.7M     99.0M  60% /vmfs/volumes/0b23f27b-1d89ef9f-809d-be198fbc1324
vfat         4.0G  26.0M      4.0G   1% /vmfs/volumes/5d28c6dd-e02236b6-7b35-6c2b5991eaa5
vfat       285.8M 172.9M    112.9M  60% /vmfs/volumes/5d28c6c7-e334b514-2e17-6c2b5991eaa5
vfat       249.7M   4.0K    249.7M   0% /vmfs/volumes/73b19a7f-4d90b2b0-b289-ba6ebb50f7c1

[root@localhost:/vmfs/volumes/5d28c6d0-12b6a01a-7b3f-6c2b5991eaa5/ServerAD] ls -lisa
total 3643853952
    260    128 drwxr-xr-x    1 root     root         81920 May 19 20:54 .
      4   1024 drwxr-xr-t    1 root     root         73728 Jul 15  2019 ..
109052996 418762752 -rw-------    1 root     root     433970135040 May 19 20:54 ServerAD-000001-sesparse.vmdk
113247300      0 -rw-------    1 root     root           314 Aug 26  2019 ServerAD-000001.vmdk
247465028 9442304 -rw-------    1 root     root     21363023872 May 19 20:05 ServerAD-000002-sesparse.vmdk
251659332      0 -rw-------    1 root     root           321 Sep 10  2019 ServerAD-000002.vmdk
264242244 227478528 -rw-------    1 root     root     238969696256 May 19 20:49 ServerAD-000003-sesparse.vmdk
268436548      0 -rw-------    1 root     root           321 May 19 20:05 ServerAD-000003.vmdk
243270724 26009600 -rw-------    1 root     root     26633830400 Sep 10  2019 ServerAD-Snapshot2.vmem
239076420  20480 -rw-------    1 root     root      20490789 Sep 10  2019 ServerAD-Snapshot2.vmsn
260047940 26009600 -rw-------    1 root     root     26633830400 Sep 10  2019 ServerAD-Snapshot3.vmem
255853636  20480 -rw-------    1 root     root      20490789 Sep 10  2019 ServerAD-Snapshot3.vmsn
4195396 2936012800 -rw-------    1 root     root     3006477107200 May 22 22:46 ServerAD-flat.vmdk
37749828   1024 -rw-------    1 root     root        270840 May 19 20:49 ServerAD.nvram
8389700      0 -rw-------    1 root     root           452 May 19 19:26 ServerAD.vmdk
12584004      0 -rw-r--r--    1 root     root           788 May 19 19:26 ServerAD.vmsd
58721348      0 -rw-r--r--    1 root     root          3249 May 19 20:49 ServerAD.vmx
213910596   1024 -rw-r--r--    1 root     root        227651 Aug 26  2019 vmware-10.log
230687812   1024 -rw-r--r--    1 root     root        414550 Nov 15  2019 vmware-11.log
146801732   1024 -rw-r--r--    1 root     root        207672 Aug 16  2019 vmware-6.log
163578948   1024 -rw-r--r--    1 root     root        209396 Aug 16  2019 vmware-7.log
180356164   1024 -rw-r--r--    1 root     root        212131 Aug 19  2019 vmware-8.log
197133380   1024 -rw-r--r--    1 root     root        308459 Aug 26  2019 vmware-9.log
281019460   1024 -rw-r--r--    1 root     root        486759 May 19 20:49 vmware.log
218104900  88064 -rw-------    1 root     root      90177536 Aug 26  2019 vmx-ServerAD-356081836-1.vswp

a_p_
Leadership

According to the flat file's time stamp, the snapshot deletion process is still running. With snapshots this large, this may indeed take considerable time.

What you can do is check the current state/progress from the command line (https://kb.vmware.com/s/article/2146185).
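The KB's command-line check boils down to something like the following sketch. It assumes an ESXi host shell and uses the VM name ServerAD from this thread; verify the exact vim-cmd sub-commands against the KB for your ESXi version. On a non-ESXi shell it only prints a notice.

```shell
#!/bin/sh
# Sketch of the progress check from KB 2146185 (assumes an ESXi shell).
if command -v vim-cmd >/dev/null 2>&1; then
    # Look up the VM's ID by name (ServerAD is the VM from this thread)
    vmid=$(vim-cmd vmsvc/getallvms | awk '/ServerAD/ {print $1}')
    # List the tasks currently running against that VM ...
    vim-cmd vmsvc/get.tasklist "$vmid"
    # ... then inspect a specific one with: vim-cmd vimsvc/task_info <task-id>
else
    echo "vim-cmd not found: run this in the ESXi host shell"
fi
```

Note that this talks to hostd, so it will fail with the same timeout if hostd itself is unresponsive, as happened later in this thread.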


André

EMZak
Contributor

I have tried this, but it is not working for me. 😞

[root@localhost:/vmfs/volumes/5d28c6d0-12b6a01a-7b3f-6c2b5991eaa5/ServerAD] vim-cmd vimsvc/task_list

Failed to login: Operation timed out

Is it possible to stop consolidation via ssh?

a_p_
Leadership

It may be possible to kill the process, but I can't tell you whether this will harm the VM, so better avoid it if possible.

What you can do is:

  • monitor the flat file's time stamp to see whether it gets updated
  • if the time stamp is not being updated, check the state of the important services on the ESXi host, e.g. /etc/init.d/hostd status
  • press ALT-F12 on the console to see whether related errors are reported in the vmkernel log
  • use esxtop to monitor disk I/O
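The first bullet can be scripted. A minimal sketch, assuming a POSIX shell and a stat that supports -c %Y (busybox on ESXi does, but verify on your build); the path in the usage comment is the example from this thread, so adjust it to your VM:

```shell
#!/bin/sh
# Minimal sketch: report whether a file was written to within the last
# N seconds (used here to see if consolidation still touches the flat file).
check_recent_write() {
    file=$1
    window=$2
    now=$(date +%s)
    # stat -c %Y prints the modification time as a Unix epoch timestamp
    mtime=$(stat -c %Y "$file" 2>/dev/null) || { echo "missing"; return 1; }
    if [ $((now - mtime)) -le "$window" ]; then
        echo "recently written"
    else
        echo "idle"
    fi
}

# Hypothetical usage, with a 2-minute window:
# check_recent_write /vmfs/volumes/datastore1/ServerAD/ServerAD-flat.vmdk 120
```

Re-running this every few minutes tells you whether the merge is still making progress without touching the process itself.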

André

continuum
Immortal

> but I can't tell you whether this will harm the VM,

Same here.

André - if you ever have the chance to set up a test for such a scenario on real hardware, let's try to figure out a way to kill a consolidation process.
If we find a way to "pause" a consolidation without doing any harm, that would be something we could use quite often.

Ulli


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...

EMZak
Contributor

After five days, the consolidation is complete. 🙂 Phew...

Thank you all for your replies!
