Hi!
I have a machine where I wanted to change the disk size of a Linux VM. The value cannot be changed (grayed out). I noticed that there are several snapshots, so I started removing the first one. After 5 minutes it was at 52%, with no further progress. Next I tried to shut down the machine. Looking at the console, the last message is "Power Down.", but the machine still seems to be switched on. Meanwhile, the snapshot removal has advanced to 99%, where it has been hanging for 30 minutes now.
Is it normal that this takes so much time?
If I want to remove all snapshots... is it better to start by removing the last one?
regards
Detlef
Try to keep the number of snapshots low.
The best approach is to delete starting from the latest one.
vim-cmd vmsvc/snapshot.removeall [VMID]
(details about the command are described in the VMware KB article Committing snapshots when there are no snapshot entries in the snapshot manager)
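As a sketch of how this might be run over SSH on the ESXi host (the VM name `opsidemo` and the sample VMID are illustrative assumptions, not taken from the actual host):

```shell
# List all registered VMs; the first column is the VMID
# (the output line below is a hypothetical example):
vim-cmd vmsvc/getallvms
# Vmid  Name      File                                Guest OS         Version
# 12    opsidemo  [datastore1] opsidemo/opsidemo.vmx  otherLinuxGuest  vmx-08

# Pick out the VMID for the VM by name:
VMID=$(vim-cmd vmsvc/getallvms | awk '$2 == "opsidemo" {print $1}')

# Commit (remove) all snapshots for that VM:
vim-cmd vmsvc/snapshot.removeall "$VMID"
```

Since `vim-cmd` only exists on an ESXi host, treat this as a template: verify the VMID against the `getallvms` output before running `snapshot.removeall`.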
The "Delete all" command works better and faster than removing a single snapshot.
Oh yes, I missed that option. Thanks, mate.
Looks good! After 50 minutes the first snapshot was finished. After that the machine really switched off. Now I have selected to remove all snapshots; I'll see how long that takes 🙂
but the machine still seems to be switched on. Meanwhile, the snapshot removal has advanced to 99%, where it has been hanging for 30 minutes now.
Is it normal that this takes so much time?
Yes, it's normal. Depending on the storage space, IOPS, and resources available in ESX(i), committing several snapshots will take some time. Don't disturb the VM while the snapshots are being committed, or you may corrupt the VMDK. Also, as a suggestion: never leave a snapshot in place for a long time, like a week or a month.
Snapshots are not an option for backup; a snapshot is just a point-in-time recovery mechanism to be used for patching or other changes to a server. It should be committed immediately once you feel the server is OK after the modification.
To remove all snapshots, use the "Delete all" option in the Snapshot Manager, which will commit all the latest changes to the parent VMDK.
Refer to the VMware KB article on snapshot best practices below:
http://kb.vmware.com/kb/1025279
Working with snapshots:
http://kb.vmware.com/kb/1009402
Removing all the snapshots took 8 minutes 🙂
wonderful
Hmmm...
I'm unable to mount one of the virtual machine's disks. Mount says "wrong fs type" :-? Is it possible that removing snapshots corrupts a virtual machine? dmesg says: "corrupt root inode".
Take a look at the VMware KB article Recreating a missing virtual machine disk (VMDK) descriptor file
Typically, removing snapshots does not cause problems.
The disk files seem to be there, so I think I don't need to recreate a VMDK descriptor file?
/vmfs/volumes/4f5633fe-0a47cff2-01ed-2c768aafd46c/opsidemo # ls -l *.vmdk
-rw------- 1 root root 34359738368 Apr 2 06:49 Opsi-flat.vmdk
-rw------- 1 root root 523 Apr 2 06:55 Opsi.vmdk
-rw------- 1 root root 21474836480 Apr 2 06:57 opsidemo-000001-flat.vmdk
-rw------- 1 root root 528 Apr 2 06:55 opsidemo-000001.vmdk
-rw------- 1 root root 21117861888 Apr 2 05:40 opsidemo.vmdk
-rw------- 1 root root 17179869184 Dec 6 13:25 opsidemo_2-flat.vmdk
-rw------- 1 root root 474 Apr 2 05:40 opsidemo_2.vmdk
-rw------- 1 root root 25769803776 Apr 2 06:59 opsidemo_3-flat.vmdk
-rw------- 1 root root 529 Apr 2 06:55 opsidemo_3.vmdk
The defective disk is opsidemo_2.
correction: it's opsidemo_3 not opsidemo_2 ...
Post the result of this command:
cat opsidemo_2.vmdk
Oh... it should be opsidemo_3:
/vmfs/volumes/4f5633fe-0a47cff2-01ed-2c768aafd46c/opsidemo # cat opsidemo_3.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=38b5e1b4
parentCID=ffffffff
isNativeSnapshot="no"
createType="monolithicFlat"
# Extent description
RW 50331648 FLAT "opsidemo_3-flat.vmdk" 0
# The Disk Data Base
#DDB
ddb.longContentID = "ef72a0b0166aabdb8e08ce7638b5e1b4"
ddb.toolsVersion = "0"
ddb.virtualHWVersion = "8"
ddb.uuid = "60 00 C2 93 63 cd 3f 6c-32 da 72 e9 47 96 e1 ca"
ddb.geometry.cylinders = "3133"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
ddb.deletable = "true"
/vmfs/volumes/4f5633fe-0a47cff2-01ed-2c768aafd46c/opsidemo #
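As a quick sanity check on the descriptor above: the extent line declares 50331648 sectors at 512 bytes each, which should equal the flat file's size from the earlier `ls -l` output.

```shell
# 50331648 sectors * 512 bytes/sector
echo $((50331648 * 512))   # -> 25769803776, matching opsidemo_3-flat.vmdk
```

So the descriptor and the flat file agree on size, which is consistent with the next reply: the VMDK layer looks healthy.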
Strange... the vmdk descriptor looks normal. Try to power off the VM and post the vmware.log file covering the unsuccessful VM launch.
Also post the .vmx file contents. Let's see the VMDK mapping.
I don't see any error about mounting a vmdk in your vmware.log files (it looks like scsi0:2 "opsidemo_3.vmdk", scsi0:3 "Opsi.vmdk", and scsi0:0 "osidemo_test.vmdk" mounted without problems).
But I didn't see your SCSI 0:1 device (maybe the old name, opsidemo_2.vmdk).
If you renamed the flat vmdk, you should edit the vmdk descriptor; if you renamed the vmdk descriptor, you should remove the old vmdk from the VM inventory (not from disk) and re-add it under the new name.
Hi
It looks like a guest OS problem (not VMware). Try booting in single-user mode and checking your guest filesystem on /dev/sdb1 with fsck; more detail can be found in the article What command do you run to check file system consistency under UNIX or Linux?
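The suggestion above can be sketched like this (assuming the damaged partition is /dev/sdb1, as mentioned in the thread, and that the guest filesystem is an ext variant; adjust to your setup):

```shell
# From single-user mode, make sure the filesystem is not mounted:
umount /dev/sdb1 2>/dev/null || true

# Preview problems without changing anything on disk:
fsck -n /dev/sdb1

# Then run a full forced check and answer the repair prompts:
fsck -f /dev/sdb1
```

Running `fsck -n` first is a cheap way to see the extent of the damage before committing to repairs.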