HeadlessChicken
Contributor

Delete snapshot failed, now VM won't boot, stuck at "disk consolidation is needed".

Hello Community,

Could anyone possibly offer any assistance as to why my "delete all snapshots" failed, leaving my VM unable to boot? (There was only one snapshot.) The vSphere Client now says

"disk consolidation is needed", but if you kick off the consolidation from the client it fails: the client loses its connection to ESX for a few seconds (ESX does not reboot) and then has to reconnect.

ESX is 5.5.0/2403361, the client is 5.5.0/1993072. Both were earlier builds of 5.5.0 (still U2), but I upgraded them this morning to see if it helped (it didn't). It's running on a desktop (i7 3440, 32GB) and has never given me any bother in nearly two years... until now! Snapshot management has never been a problem before.

The VMkernel log shows this (I started the consolidation just after 12:02:02):

2015-07-18T12:01:40.003Z cpu6:36774)World: 14302: VC opID hostd-930e maps to vmkernel opID ecb607a0

2015-07-18T12:01:40.489Z cpu4:36483)World: 14302: VC opID 5B09AC75-00000049 maps to vmkernel opID 127cbd70

2015-07-18T12:01:55.107Z cpu6:36777)World: 14302: VC opID hostd-c030 maps to vmkernel opID 29b7983b

2015-07-18T12:02:20.003Z cpu1:36785)World: 14302: VC opID hostd-f489 maps to vmkernel opID b5bdc239

2015-07-18T12:02:40.003Z cpu2:36471)World: 14302: VC opID hostd-29f5 maps to vmkernel opID c6306bf

2015-07-18T12:02:55.104Z cpu4:36471)World: 14302: VC opID hostd-619b maps to vmkernel opID cbb35ac7

2015-07-18T12:03:00.006Z cpu7:36546)World: 14302: VC opID hostd-c94e maps to vmkernel opID 64df396b

2015-07-18T12:03:06.767Z cpu3:36471 opID=dd43debf)World: 14302: VC opID 5B09AC75-0000004A maps to vmkernel opID dd43debf

2015-07-18T12:03:08.106Z cpu4:36471 opID=dd43debf)User: 2888: wantCoreDump : hostd-worker -enabled : 1

2015-07-18T12:03:08.248Z cpu4:36471 opID=dd43debf)UserDump: 1820: Dumping cartel 36471 (from world 36471) to file /var/core/hostd-worker-zdump.001 ...

2015-07-18T12:03:28.948Z cpu4:36471 opID=dd43debf)UserDump: 1944: Userworld coredump complete.

2015-07-18T12:03:32.290Z cpu4:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.425Z cpu7:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.545Z cpu7:49510)Vol3: 731: Couldn't read volume header from control: Not supported

2015-07-18T12:03:32.545Z cpu7:49510)Vol3: 731: Couldn't read volume header from control: Not supported

2015-07-18T12:03:32.545Z cpu7:49510)FSS: 5091: No FS driver claimed device 'control': Not supported

2015-07-18T12:03:32.629Z cpu7:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.638Z cpu2:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.646Z cpu2:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.653Z cpu2:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.661Z cpu2:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.664Z cpu2:49510)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-18T12:03:32.670Z cpu2:49510)FSS: 5091: No FS driver claimed device 'mpx.vmhba32:C0:T1:L0': Not supported

2015-07-18T12:03:33.018Z cpu2:49510)VC: 2059: Device rescan time 201 msec (total number of devices 8)

2015-07-18T12:03:33.018Z cpu2:49510)VC: 2062: Filesystem probe time 504 msec (devices probed 6 of 8)

2015-07-18T12:03:33.018Z cpu2:49510)VC: 2064: Refresh open volume time 79 msec

2015-07-18T12:03:38.002Z cpu0:33211)BC: 3423: Pool 0: Blocking due to no free buffers. nDirty = 2 nWaiters = 1

2015-07-18T12:03:38.066Z cpu4:49510)Config: 346: "SIOControlFlag2" = 0, Old Value: 0, (Status: 0x0)

2015-07-18T12:03:42.236Z cpu4:33211)BC: 3423: Pool 0: Blocking due to no free buffers. nDirty = 9 nWaiters = 1

2015-07-18T12:03:42.242Z cpu1:49510)Config: 346: "VMOverheadGrowthLimit" = -1, Old Value: -1, (Status: 0x0)

2015-07-18T12:03:45.253Z cpu4:49649)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 208 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:45.968Z cpu4:49575)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 224 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:45.970Z cpu2:49513)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 225 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:45.993Z cpu6:49804)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 248 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:51.047Z cpu4:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 360 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:51.679Z cpu0:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 201 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:52.895Z cpu5:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 243 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:52.905Z cpu3:49571)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 204 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:54.279Z cpu6:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 267 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:54.379Z cpu3:49649)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 340 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:57.019Z cpu1:49805)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 558 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:03:57.545Z cpu0:49540)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 1057 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:00.254Z cpu5:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 514 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:00.261Z cpu0:49512)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 524 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:00.261Z cpu0:49575)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 517 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:00.315Z cpu1:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 558 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.207Z cpu2:49511)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 509 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.688Z cpu1:49513)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 255 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.688Z cpu7:49575)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 206 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.688Z cpu4:49804)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 256 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.688Z cpu3:49805)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 296 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.689Z cpu5:49806)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 257 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.731Z cpu0:49807)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 340 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:04.795Z cpu3:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 407 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:06.255Z cpu0:49802)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 223 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:06.663Z cpu3:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 375 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:06.730Z cpu7:49510)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 391 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:06.789Z cpu0:49649)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 450 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:08.767Z cpu4:49512)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 527 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:08.782Z cpu0:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 585 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:08.847Z cpu2:49807)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 616 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:09.263Z cpu0:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 480 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:09.266Z cpu2:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 425 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:09.272Z cpu1:49649)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 438 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:10.099Z cpu0:49802)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 252 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:10.105Z cpu6:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 257 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:10.391Z cpu3:49571)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 201 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:10.681Z cpu6:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 240 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:10.757Z cpu6:49802)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 258 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:10.759Z cpu6:49571)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 252 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:11.407Z cpu6:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 232 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:11.508Z cpu6:49513)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 333 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:11.883Z cpu2:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 208 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:11.922Z cpu5:49649)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 240 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:12.049Z cpu2:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 323 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:12.723Z cpu5:49510)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 325 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:12.782Z cpu0:49804)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 385 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:12.791Z cpu2:49512)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 392 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:12.832Z cpu4:49575)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 435 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:13.140Z cpu1:49571)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 212 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:13.918Z cpu7:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 225 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:14.150Z cpu2:49803)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 231 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:14.392Z cpu2:49512)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 226 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:14.393Z cpu6:49513)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 225 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:33.549Z cpu7:49540)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 214 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:34.911Z cpu0:49575)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 656 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:40.294Z cpu0:49510)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 292 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:42.824Z cpu0:49649)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 205 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:43.393Z cpu4:49804)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 222 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:43.446Z cpu6:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 269 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:43.677Z cpu7:49807)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 222 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:44.619Z cpu7:49650)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 269 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:44.961Z cpu0:49802)FS3Misc: 1753: Long VMFS rsv time on 'datastore1 4TB' (held for 213 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:46.316Z cpu4:49540)FS3Misc: 1753: Long VMFS rsv time on 'datastore2 2TB' (held for 208 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors

2015-07-18T12:04:49.089Z cpu3:49821)World: 14302: VC opID 5B09AC75-000001F3 maps to vmkernel opID 980b7887

2015-07-18T12:04:49.277Z cpu6:49802)Hardware: 3124: Assuming TPM is not present because trusted boot is not supported.

2015-07-18T12:04:55.307Z cpu4:49510 opID=64bc8457)World: 14302: VC opID hostd-0b58 maps to vmkernel opID 64bc8457

2015-07-18T12:04:55.359Z cpu3:49821)World: 14302: VC opID hostd-303b maps to vmkernel opID ed524f5a

2015-07-18T12:04:55.418Z cpu3:49807 opID=ad7b9210)World: 14302: VC opID hostd-0dec maps to vmkernel opID ad7b9210

2015-07-18T12:04:55.426Z cpu4:49802)World: 14302: VC opID hostd-303b maps to vmkernel opID ed524f5a

2015-07-18T12:04:55.449Z cpu2:49513)World: 14302: VC opID hostd-0b58 maps to vmkernel opID 64bc8457

2015-07-18T12:05:00.005Z cpu7:49802)World: 14302: VC opID hostd-cdb1 maps to vmkernel opID 528b0bf0

2015-07-18T12:05:20.006Z cpu3:49510)World: 14302: VC opID hostd-ed16 maps to vmkernel opID 2dfc5105

2015-07-18T12:05:40.003Z cpu7:49510)World: 14302: VC opID hostd-5947 maps to vmkernel opID 3ebf8511

2015-07-18T12:06:00.003Z cpu6:49821)World: 14302: VC opID hostd-2a9f maps to vmkernel opID 1885c518

2015-07-18T12:06:20.546Z cpu3:49510)World: 14302: VC opID hostd-a9e4 maps to vmkernel opID 34c9edf5

2015-07-18T12:07:00.006Z cpu4:49575)World: 14302: VC opID hostd-11a9 maps to vmkernel opID b4c3a808

2015-07-18T12:07:19.107Z cpu5:49513)World: 14302: VC opID hostd-3ec6 maps to vmkernel opID e60077b4

Thanks.

20 Replies

Yes, it seems this has happened more frequently in the latest releases. Try logging in to the command line and consolidating that way:

VMware KB: Committing snapshots when there are no snapshot entries in the Snapshot Manager

If you found this helpful, please mark it as such.

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
HeadlessChicken
Contributor

Thanks for the reply.

KB looked helpful - I was hopeful!

The .vmx file shows the VM is using -000001.vmdk.

The console shows no snapshot in use though:

~ # vim-cmd vmsvc/snapshot.get 104

Get Snapshot:

~ #

And I am unable to create a snapshot as part of the KB's add/remove process...

~ # vim-cmd vmsvc/snapshot.create 104 testsnapshot desc 0 0

Create snapshot failed

~ # vim-cmd vmsvc/snapshot.create 104 testsnapshot desc 0 1

Create Snapshot:

Create snapshot failed

~ #

The remove-all-snapshots command doesn't return an error, but doesn't appear to do anything either...

~ # vim-cmd vmsvc/snapshot.removeall 104

Remove All Snapshots:

~ #

I also tried to use vCenter Converter Standalone to just duplicate it to a flat image, but when you select that VM and click Next, it fails with "Unable to obtain hardware information for the selected machine".

The datastore browser also shows two files exactly the same, which definitely looks odd (...02.vmdk) - see screenshot.

Nithy07cs055
Hot Shot

When was the snapshot taken? Is it very old?

Check whether the VM needs consolidation; see VMware KB: Consolidating snapshots in vSphere 5.x/6.0

What error do you get when you try the consolidation from the client?

If you are still facing the issue, you can try from the command line:

# vim-cmd vmsvc/getallvms   (make a note of the VMID; make sure you note down the correct VM's VMID)

# vim-cmd vmsvc/snapshot.get [VMID]

# vim-cmd vmsvc/snapshot.create [VmId] [snapshotName] [snapshotDescription] [includeMemory] [quiesced]

#  vim-cmd vmsvc/snapshot.removeall [VMID]

This will create a test snapshot and then remove all snapshots from the virtual machine.

If you want to monitor the snapshot deletion from the CLI, check this: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100756...
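As a rough sketch (my own, not taken from the KB), you can simply watch the size of the snapshot delta file while the commit runs. The helper below uses only busybox-compatible tools you'd find in an ESXi shell; the /tmp path is a made-up stand-in so the example is self-contained:

```shell
# Hypothetical helper: print a file's size in bytes. On the ESXi host you
# would point this at the VM's *-000001-delta.vmdk on the datastore; the
# /tmp file below is only a throwaway demo target.
delta_size() {
    ls -l "$1" | awk '{print $5}'
}

# Demo against a 4-byte scratch file:
printf 'abcd' > /tmp/demo-delta.vmdk
delta_size /tmp/demo-delta.vmdk   # prints 4
```

On the host itself, something like `while true; do delta_size /vmfs/volumes/<datastore>/<vm>/<vm>-000001-delta.vmdk; sleep 10; done` shows whether the delta is actually being committed.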

If that still does not work, the final option:

---------------------------------------------

Log in to the host directly using the vSphere Client (on the host on which the VM resides).

Check whether any process is still showing in the recent tasks.

If so, try restarting the management services on the host, then consolidate again.

Thanks and Regards, Nithyanathan R Please follow my page and Blog for more updates. Blog : https://communities.vmware.com/blogs/Nithyanathan Twitter @Nithy55 Facebook Vmware page : https://www.facebook.com/Virtualizationworld
HeadlessChicken
Contributor

Thanks for the reply...

>> When was the snapshot taken? Is it very old?

The snapshot is a few weeks old.

>> Check whether the VM needs consolidation?

Well it does; it tells me it does. I mention this in the original post.

>> What error do you get when you try the consolidation from the client?

I also mention this in the original post: loss of connection from the vSphere client and errors in the kernel log.

>> And if you are still facing the issue, you can try from the command line

The command line fails, as mentioned in my previous post.

>> If that still does not work --> final option ... try restarting the management services on the host, then consolidate

As mentioned in the original post, I upgraded ESX to the latest version of 5.5 this morning, so the whole box has been restarted.


Do you have VMware support? If you do, those guys are wizards at cleaning up old snapshots. Another option is to try a clone of the machine; I think that will get rid of the snapshots.

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
Himanshu_vmware
Enthusiast

1. Please make some free space on the datastore and then try to consolidate or clone.

2. If the above does not work, try migrating the VM to another host and then try to consolidate or clone.

Nithy07cs055
Hot Shot

In that case I guess these files are completely locked up. Try cloning the machine to a different host and datastore, then consolidate and check if that works.

Kindly update us if you find a fix for this issue :)

HeadlessChicken
Contributor

There's a couple of TB free, so free space isn't the issue...

I tried just now, from vCenter Server 5.5.0.20500/2646489, to both migrate and clone the VM to another host.

Both actions fail with "The virtual disk is either corrupted or not a supported format."

Not a surprise I guess, as Converter Standalone also fails to connect to that VM.

Consolidating from the web client fails just as it does from the command line and the Windows client...

This is a VM that was working absolutely fine until I told it to delete the snapshot, and there was only one snapshot there!!

HeadlessChicken
Contributor

VMware support? I wish! Troubled economic times!

Dee006
Hot Shot

I believe this issue is purely related to the storage end. Since your environment is running on a desktop, I can't easily rule the hardware out. I found a few KB articles related to the error messages posted in this thread; check them out. Are you running ESXi on bare metal or in a hosted environment? If it is a hosted environment, try disabling the anti-virus.

VMware KB: Cannot remount a datastore after an unplanned permanent device loss (PDL)

VMware KB: VMkernel logs report the message: Long VMFS3 rsv time

Kindly share your findings on whether your issue gets fixed.

HeadlessChicken
Contributor

Thanks for the reply, but unfortunately I didn't see anything in the KB to help.

ESX is running on bare metal.

The datastore appears to be mounted fine; there are lots of other VMs on the same datastore which run without issue. I couldn't exactly match the errors in that KB.

The delay thing looks a bit suspect maybe, but increasing the default 200 ms timer to 500 does not resolve it.

The first thing I see when I attempt consolidation is a crash/core dump. It comes before the rsv-time problem.

2015-07-20T12:08:20.502Z cpu1:34220)World: 14302: VC opID F24B0D7F-00000072 maps to vmkernel opID 2eb68fe0

2015-07-20T12:08:25.082Z cpu6:33969 opID=f0b634f)World: 14302: VC opID F24B0D7F-00000073 maps to vmkernel opID f0b634f

2015-07-20T12:08:25.748Z cpu6:33969 opID=f0b634f)User: 2888: wantCoreDump : hostd-worker -enabled : 1

2015-07-20T12:08:25.885Z cpu6:33969 opID=f0b634f)UserDump: 1820: Dumping cartel 33911 (from world 33969) to file /var/core/hostd-worker-zdump.003 ...

2015-07-20T12:08:46.787Z cpu3:33969 opID=f0b634f)UserDump: 1944: Userworld coredump complete.

2015-07-20T12:08:46.788Z cpu6:33911)World: 14302: VC opID F24B0D7F-00000075 maps to vmkernel opID 66bf8e03

2015-07-20T12:08:50.090Z cpu0:39282)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-20T12:08:50.224Z cpu7:39282)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

2015-07-20T12:08:50.333Z cpu5:39282)<3>ata2.01: bad CDB len=16, scsi_op=0x9e, max=12

If I try to start the VM, I get this error:

Failed to start the virtual machine.

Module DiskEarly power on failed.

Cannot open the disk '/vmfs/volumes/4d9b33f3-9e471da0-c678-c86000ddc06c/myvmname_1-000001.vmdk' or one of the snapshot disks it depends on.

The file specified is not a virtual disk

It seems that "delete all snapshots" has corrupted the disk, and now I just don't know if it's possible to fix it...

continuum
Immortal

> The file specified is not a virtual disk

That is one of my favorite problems. If you want, I can have a look via TeamViewer 9.
Ulli


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

You can also try making sure the disks all point to the right descriptor:

VMware KB: Recreating a missing virtual disk (VMDK) descriptor file for delta disks
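For reference, a snapshot (delta) descriptor is just a small text file; the KB above walks through rebuilding one. The layout below follows the standard VMDK descriptor format, but every name and number here is a made-up example, not taken from this VM:

```
# Disk DescriptorFile
version=1
CID=fffffffe
# parentCID must match the CID= line inside the parent's descriptor
parentCID=deadbeef
createType="vmfsSparse"
# Must name the parent disk exactly
parentFileNameHint="examplevm.vmdk"

# Extent description: size in 512-byte sectors, then the delta file name
RW 41943040 VMFSSPARSE "examplevm-000001-delta.vmdk"
```

If the .vmx points at examplevm-000001.vmdk but that descriptor is missing or mangled, this is the kind of file you would be recreating.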

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
sakthivelramali
Enthusiast

Hi

Delete the .lck folders and your VM should start again.

Thanks, Sakthivel R
ThompsG
Virtuoso

Hi there,

Just wondering if you got any resolution with this or are you still battling it?

Kind regards,

Glen

continuum
Immortal

The problem "file is not a virtual disk!" can have various causes.
Each cause has its own fix - trying the wrong approach just adds another new problem.

So if you want help in such a case, create a new thread and post the descriptor file of the disk in question, plus a directory listing created via PuTTY on the command line.
Verify that the referenced delta or flat .vmdk really exists, with the exact name and path.
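As a hedged sketch of that check, the snippet below pulls parentFileNameHint and the extent file name out of a descriptor with sed and reports whether each referenced file exists alongside it. The demo descriptor and file names are invented, purely to make the example self-contained:

```shell
# Hypothetical checker: given a snapshot descriptor, verify that the files
# it references (parent disk and delta extent) exist in the same directory.
check_chain() {
    dir=$(dirname "$1")
    parent=$(sed -n 's/^parentFileNameHint="\(.*\)"/\1/p' "$1")
    extent=$(sed -n 's/^RW .* "\(.*\)"$/\1/p' "$1")
    for f in "$parent" "$extent"; do
        [ -n "$f" ] || continue
        if [ -e "$dir/$f" ]; then
            echo "OK      $f"
        else
            echo "MISSING $f"
        fi
    done
}

# Self-contained demo: fake descriptor whose parent exists but whose
# delta extent is missing.
mkdir -p /tmp/chain-demo
printf '%s\n' 'parentFileNameHint="vm.vmdk"' \
              'RW 2048 VMFSSPARSE "vm-000001-delta.vmdk"' \
    > /tmp/chain-demo/vm-000001.vmdk
touch /tmp/chain-demo/vm.vmdk
check_chain /tmp/chain-demo/vm-000001.vmdk
```

On a real host you would run it against the -000001.vmdk named in the VM's .vmx file; any MISSING line points at the broken link in the snapshot chain.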



ThompsG
Virtuoso

Sorry, I'm not sure whether this reply was meant to be aimed at me? If it was, then I'm not currently looking for any help; I'm just interested in whether the poster's issue was resolved and, if not, in lending a hand. Like you, I've had my fair share of resolving these sorts of issues and would have liked to see the outcome to add to my first-aid box if required :)

Thanks and kind regards.


Perhaps you are running a VADP (snapshot-level) backup and your backup server has locked the file (.lck). Just restart the ESXi host management services and try to consolidate again. FYI: restarting the management services won't affect the running VMs.

  • Log in to SSH or Local console as root.
  • Run these commands:

    /etc/init.d/hostd restart
    /etc/init.d/vpxa restart

SARAVANAN_O
Enthusiast

Please try the KB below:

VMware KB: Committing snapshots when there are no snapshot entries in the Snapshot Manager
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100231...)
