I powered off the VM and put the host into maintenance mode before shutting it down so I could open the case and clean it out; it was running really hot and was full of dust.
When I powered the machine back on and logged into vSphere to start the VM, I noticed its status was "invalid", and then I saw that the disks were visible but no datastores.
When I SSH into the host I can see the VM's directory in place, but copying it to another ESXi host didn't let me open it there either.
Does anyone have any suggestions on how I can recover the VM or the entire datastore so I can start up the VM?
ESXi 6.5.0 build 4564106
RAID 10 with 4 drives
Hi nolak and welcome to the community!
Sorry to be slow, but there seems to be something missing here (apart from your missing datastore): was the VM spread across a couple of datastores, or was it contained in a single datastore?
Could you also run ls -lisa in the VM's directory and post the results here? And a df -h as well.
Kind regards.
Sure, here's what I got.
ls -lisa
total 461098008
302030532 8 drwxr-xr-x 1 root root 7140 Jan 5 21:24 .
4 1024 drwxr-xr-t 1 root root 1680 Jan 5 23:29 ..
696295108 12582912 -rw------- 1 root root 12884901888 Mar 6 2018 NextCloud-Snapshot1.vmem
692100804 6144 -rw------- 1 root root 5473251 Mar 6 2018 NextCloud-Snapshot1.vmsn
469802692 12582912 -rw------- 1 root root 12884901888 Jan 3 00:20 NextCloud-Snapshot10.vmem
465608388 6144 -rw------- 1 root root 5486642 Jan 3 00:20 NextCloud-Snapshot10.vmsn
721460932 12582912 -rw------- 1 root root 12884901888 Mar 6 2018 NextCloud-Snapshot2.vmem
717266628 6144 -rw------- 1 root root 5473251 Mar 6 2018 NextCloud-Snapshot2.vmsn
482385604 12582912 -rw------- 1 root root 12884901888 Jun 13 2018 NextCloud-Snapshot3.vmem
478191300 6144 -rw------- 1 root root 5478786 Jun 13 2018 NextCloud-Snapshot3.vmsn
524328644 12582912 -rw------- 1 root root 12884901888 Aug 5 01:58 NextCloud-Snapshot4.vmem
520134340 6144 -rw------- 1 root root 5473247 Aug 5 01:58 NextCloud-Snapshot4.vmsn
666934980 12582912 -rw------- 1 root root 12884901888 Aug 16 08:11 NextCloud-Snapshot5.vmem
662740676 6144 -rw------- 1 root root 5473247 Aug 16 08:11 NextCloud-Snapshot5.vmsn
37789380 12582912 -rw------- 1 root root 12884901888 Oct 18 07:45 NextCloud-Snapshot6.vmem
33595076 6144 -rw------- 1 root root 5473532 Oct 18 07:45 NextCloud-Snapshot6.vmsn
541105860 12582912 -rw------- 1 root root 12884901888 Oct 22 07:54 NextCloud-Snapshot7.vmem
515940036 2048 -rw------- 1 root root 1272269 Oct 22 07:54 NextCloud-Snapshot7.vmsn
604020420 12582912 -rw------- 1 root root 12884901888 Oct 24 09:35 NextCloud-Snapshot8.vmem
595631812 6144 -rw------- 1 root root 5481713 Oct 24 09:35 NextCloud-Snapshot8.vmsn
742432452 1024 -rw------- 1 root root 8684 Jan 5 21:24 NextCloud.nvram
318807748 8 -rw-r--r-- 1 root root 3410 Jan 3 00:30 NextCloud.vmsd
755015364 8 -rwx------ 1 root root 3138 Jan 5 21:24 NextCloud.vmx
746626756 0 -rw------- 1 root root 150 Jan 3 00:30 NextCloud.vmxf
700489412 509952 -rw------- 1 root root 521940992 Mar 6 2018 NextCloud_0-000001-delta.vmdk
704683716 0 -rw------- 1 root root 327 Mar 6 2018 NextCloud_0-000001.vmdk
725655236 28084224 -rw------- 1 root root 28757995520 Jun 13 2018 NextCloud_0-000002-delta.vmdk
729849540 0 -rw------- 1 root root 334 May 27 2018 NextCloud_0-000002.vmdk
486579908 29640704 -rw------- 1 root root 30351831040 Aug 5 01:55 NextCloud_0-000003-delta.vmdk
490774212 0 -rw------- 1 root root 334 Aug 2 23:38 NextCloud_0-000003.vmdk
528522948 10094592 -rw------- 1 root root 10336612352 Aug 16 08:08 NextCloud_0-000004-delta.vmdk
532717252 0 -rw------- 1 root root 334 Aug 5 01:55 NextCloud_0-000004.vmdk
675323588 26920960 -rw------- 1 root root 27566813184 Oct 18 07:42 NextCloud_0-000005-delta.vmdk
679517892 0 -rw------- 1 root root 334 Oct 18 07:22 NextCloud_0-000005.vmdk
385916612 1230848 -rw------- 1 root root 1260138496 Oct 22 07:52 NextCloud_0-000006-delta.vmdk
390110916 0 -rw------- 1 root root 334 Oct 22 07:44 NextCloud_0-000006.vmdk
545300164 6440960 -rw------- 1 root root 6595293184 Oct 24 09:33 NextCloud_0-000007-delta.vmdk
553688772 0 -rw------- 1 root root 388 Oct 22 08:05 NextCloud_0-000007.vmdk
767598276 2476032 -rw------- 1 root root 2535206912 Jan 5 21:24 NextCloud_0-000008-delta.vmdk
771792580 0 -rw------- 1 root root 388 Jan 5 21:21 NextCloud_0-000008.vmdk
775986884 123242496 -rw------- 1 root root 126200066048 Jan 3 00:14 NextCloud_0-000010-delta.vmdk
784375492 0 -rw------- 1 root root 384 Jan 3 00:14 NextCloud_0-000010.vmdk
310419140 119150592 -rw------- 1 root root 966367641600 Mar 6 2018 NextCloud_0-flat.vmdk
314613444 0 -rw------- 1 root root 557 Mar 4 2018 NextCloud_0.vmdk
658546372 1024 -rw------- 1 root root 337135 Oct 25 00:40 vmware-27.log
817929924 1024 -rw------- 1 root root 189815 Feb 14 2019 vmware-28.log
457219780 1024 -rw------- 1 root root 303086 Jan 3 00:23 vmware-29.log
620797636 1024 -rw------- 1 root root 214008 Jan 3 00:24 vmware-30.log
713072324 1024 -rw------- 1 root root 227718 Jan 3 00:29 vmware-31.log
813735620 1024 -rw------- 1 root root 512249 Jan 5 21:20 vmware-32.log
327196356 1024 -rw------- 1 root root 226271 Jan 5 21:24 vmware.log
df -h
VmFileSystem: SlowRefresh() failed: Unable to get FS Attrs for /vmfs/volumes/59dbe502-a05e6150-4dbd-40167e6387f4
Error when running esxcli, return status was: 1
Errors:
Error getting data for filesystem on '/vmfs/volumes/59dbe502-a05e6150-4dbd-40167e6387f4': Unable to get FS Attrs for /vmfs/volumes/59dbe502-a05e6150-4dbd-40167e6387f4, skipping.
> Error getting data for filesystem on '/vmfs/volumes/59dbe502-a05e6150-4dbd-40167e6387f4':
> Unable to get FS Attrs for /vmfs/volumes/59dbe502-a05e6150-4dbd-40167e6387f4, skipping.
That's not good. If the VMs are important, read:
Create a VMFS-Header-dump using an ESXi-Host in production | VM-Sickbay
With a dump like that I may be able to extract the VMs manually.
Ulli
Hi Ulli.
This is a Nextcloud server, and all the files are synced to the desktops, with the exception of maybe 2 users.
Let me see how important it is to get these back; it depends on the files those users had saved.
If we need you to do a manual recovery how long do you think it would take and what would be the cost?
Thanks
> If we need you to do a manual recovery how long do you think it would take and what would be the cost?
I can answer those questions once I have seen the dumpfile.
> how long do you think it would take
That depends on the size of the vmdk.
If I am able to create a sh-script to extract the vmdk via dd / ddrescue, it usually takes slightly longer than a vmkfstools -i command would.
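For illustration only (this is not Ulli's actual script, and the device path and offsets below are placeholders — the real values would come from analysing the VMFS metadata in the header dump), a dd-based extraction boils down to copying a byte range out of the raw partition. The small demo underneath shows the skip/count mechanics on an ordinary file:

```shell
# Hypothetical extraction -- device name, START_MB and SIZE_MB are placeholders:
#   dd if=/dev/disks/t10.ATA_____WDC...:1 bs=1M skip=START_MB count=SIZE_MB \
#      of=/vmfs/volumes/otherds/NextCloud_0-flat.vmdk conv=noerror

# The same skip/count mechanics on a small file:
printf 'HEADERDATA'  >  /tmp/demo.img   # 10 bytes standing in for metadata
printf 'VMDKPAYLOAD' >> /tmp/demo.img   # 11 bytes standing in for vmdk data
dd if=/tmp/demo.img of=/tmp/demo.out bs=1 skip=10 count=11 2>/dev/null
cat /tmp/demo.out                       # prints VMDKPAYLOAD
```

The skip/count pair is what turns dd into a range extractor: skip positions the read at the start of the region, count stops it at the end.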
Ok, thanks for the quick reply.
So trying this out, I'm stuck on step 3. I put in the device listed after "Device:" and I'm getting the error "No such file or directory". Here's the command as I typed it; I don't know if I'm doing something wrong.
dd if=/dev/disks/Device:t10.ATA_____WDC_WD10EFRX2D68PJCN0_________________________WD2DWMC4J0176987:1 bs=1M count=1536 of=/tmp/Casename.1536
You need to remove Device:
dd if=/dev/disks/Device:t10.ATA_____WDC_WD10EFRX2D68PJCN0_________________________WD2DWMC4J0176987:1 bs=1M count=1536 of=/tmp/Casename.1536
should be
dd if=/dev/disks/t10.ATA_____WDC_WD10EFRX2D68PJCN0_________________________WD2DWMC4J0176987:1 bs=1M count=1536 of=/tmp/nolasco.1536
Please don't use Casename - I have more Casename.1536 files than I will ever need 🙂
Hi, thanks for the reply. I got busy and only now got a chance to take care of this, but I keep getting an Input/output error. I figured it's because there isn't enough space in the /tmp folder, so I tried gzip, but got the same error.
There are no other datastores on the device to output this to, and I get the same error if I try any other directory.
Any advice?
If it's easier we can communicate over some chat like hangouts or skype.
Thanks
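If the error really were just a space problem (Ulli's reply suggests it may instead be disk-level I/O errors), one common workaround is to stream the dump off-host instead of writing it locally. This assumes the host can reach another machine over the network and that an ssh client is usable on this build — an assumption, not something confirmed in the thread. Host name, user and paths below are placeholders:

```shell
# Hypothetical: stream the header dump to another machine instead of /tmp:
#   dd if=/dev/disks/t10.ATA_____WDC...:1 bs=1M count=1536 | gzip -c | \
#       ssh user@workstation 'cat > /some/path/nolasco.1536.gz'

# The dd | gzip pipeline itself, demonstrated on a small file:
printf 'vmfs-header-bytes' > /tmp/src.bin
dd if=/tmp/src.bin bs=1M 2>/dev/null | gzip -c > /tmp/src.bin.gz
gzip -dc /tmp/src.bin.gz                # decompresses back to the original bytes
```

Because everything stays in the pipe, nothing larger than the gzip buffers ever touches local storage.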
Please show me the full output of the dd command.
I often see an I/O error after about 21 MB.
I am available via skype
Ulli
We found several further I/O errors so we had to switch to Linux.
Now we have the VMFS mounted and are busy copying out the files via ddrescue ....
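For reference, a ddrescue copy-out of this kind typically takes the form below; the source/destination paths are placeholders, and the map file is what lets an interrupted run resume and records which sectors failed. The runnable part shows dd's closest built-in equivalent:

```shell
# Hypothetical ddrescue run -- paths are placeholders:
#   ddrescue -v /mnt/vmfs/NextCloud/NextCloud_0-flat.vmdk \
#            /mnt/newds/NextCloud_0-flat.vmdk /root/nextcloud.map

# dd's rough equivalent (no retry map; unreadable blocks are zero-padded):
printf 'flat-vmdk-data' > /tmp/flat.src
dd if=/tmp/flat.src of=/tmp/flat.dst conv=noerror,sync bs=512 2>/dev/null
head -c 14 /tmp/flat.dst                # the copied data, without the padding
```

The advantage of ddrescue over plain dd on a failing disk is that it reads the easy regions first and only then retries the bad sectors, so a drive that is deteriorating gives up the maximum amount of data before it dies.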
Well, I don't really have much to add. I added a second SATA drive to the host and created a new datastore on it. Ulli used a Linux VM on the same host and, using ddrescue as he mentioned, was able to recover the VM I needed from the original datastore to the new one.
I now have access to the VM and all my files.
Thanks Ulli.