VMware Cloud Community
plexustech
Contributor

Datastore missing after RAID rebuild on ESXi 6.7

The partition shows up on the array that appears in devices, but I don't see anything in the datastores. I'm seeing the following entries in vmkernel.log that appear pertinent to me.

2018-10-03T03:12:50.027Z cpu11:2097777)ScsiDeviceIO: 3015: Cmd(0x459a40bb4700) 0x1a, CmdSN 0x596 from world 0 to dev "naa.600605b008f54cc01f26fd89641704b7" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-10-03T03:10:54.920Z cpu11:2097370)WARNING: Vol3: 3102: datastore1/5b91d3af-1dcf06c8-e37e-ac220b8c42e2: Invalid physDiskBlockSize 512

2018-10-03T03:10:54.925Z cpu11:2097370)FSS: 6092: No FS driver claimed device '5b91d3ae-fe7ddcfc-31ba-ac220b8c42e2': No filesystem on the device

Is this thing really toast? It almost seems like the UUID changed. Any advice or direction anyone might be able to give would be very much welcome.
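In case it helps, the device and partition can be inspected from the ESXi shell with something along these lines (the naa name below is the one from the log; output trimmed):

# confirm the device and its partition table are still visible
ls /vmfs/devices/disks/ | grep naa.600605b008f54cc01f26fd89641704b7
partedUtil getptbl /vmfs/devices/disks/naa.600605b008f54cc01f26fd89641704b7

# list the filesystems the host currently has mounted
esxcli storage filesystem list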

7 Replies
A13x
Hot Shot

Did you have a storage failure, and have you done anything with the ESXi host since it has been rebuilt, such as a rescan or a reboot? Was the datastore disconnected or in a failed state previously?

SupreetK
Commander

1) Was this datastore initially created on a previous version of ESXi, or created on ESXi 6.7 itself?

2) Is this an NVMe device/disk?

3) Run the command <esxcli storage vmfs snapshot list> and see if the volume is getting detected as a snapshot.
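If the volume does show up in that list, it can usually be brought back either by mounting it with its existing signature or by resignaturing it (which assigns a brand-new UUID). Something along these lines, with <datastore-label> being the volume label reported by the list command:

esxcli storage vmfs snapshot list

# mount the snapshot volume, keeping the existing signature/UUID
esxcli storage vmfs snapshot mount -l <datastore-label>

# or write a new signature (new UUID) and then mount it
esxcli storage vmfs snapshot resignature -l <datastore-label>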

Cheers,

Supreet

SureshKumarMuth
Commander

I think it got remounted as a new datastore with a new UUID. Was it a force mount?
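One quick way to check is to compare what the host sees now against the UUIDs in your vmkernel.log lines, for example:

# mounted VMFS volumes with their UUIDs
esxcli storage filesystem list

# datastore names are just symlinks to their UUID directories
ls -l /vmfs/volumes/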

continuum (Ulli) can help you; he is an expert in data recovery.

Regards,
Suresh
https://vconnectit.wordpress.com/
continuum
Immortal

Hi Cory
Please create a VMFS header dump for the device /dev/disks/naa.*****704b7:10.
See "Create a VMFS-Header-dump using an ESXi-Host in production" on VM-Sickbay.
Please compress the dump and provide a download link.
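In case the write-up in the link is unclear: the dump is essentially a dd of the first part of that VMFS partition onto another datastore that has free space. Something like the following (the 1536 MB count here is only a ballpark, use the size the article specifies, and replace <other-datastore> with a volume that has room):

dd if=/dev/disks/naa.600605b008f54cc01f26fd89641704b7:10 of=/vmfs/volumes/<other-datastore>/vmfs-header-dump.bin bs=1M count=1536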
If you are lucky it contains enough info to manually extract your vmdk-files.
Ulli


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

plexustech
Contributor

Thank you everyone for the feedback.

The datastore was originally created upon installation of 6.7.

It is not an NVMe disk.

I should have indicated that <esxcli storage vmfs snapshot list> does not yield anything.

I should have also specified that when one of the disks failed from the RAID 10 array, everything continued to hum along just fine, as one would expect. It wasn't until I installed the replacement drive that the datastore went missing.

I did run the trial of DiskInternals VMFS Recovery software, and it looks like it can read the partition. Unfortunately, a license costs about $699 to pull the data off. I haven't had a chance yet to boot into a Linux ISO and attempt to copy the data off that way.

Here is the header dump: https://drive.google.com/open?id=17eyl2fMMnhmNE6EgFOfLNbw4VJlNo5Ij

Hopefully continuum can work some voodoo. :)

continuum
Immortal

> I haven't had a chance yet to try to boot into a linux iso and attempt to copy the data off that way.
You cannot read VMFS 6 with Linux.
I checked your dump and it looks like 4 VMs may be recoverable and 4 VMs are not readable.
Unfortunately all your VMs are thin provisioned.
> I did run the trial of Diskinternals vmfs recovery software and it looks like it can read the partition.
Can you see all your VMs with DiskInternals?


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

aughsydney
Contributor

Hey plexustech,

I'm having a similar problem in 6.7. In my case the problem occurs when I add new storage on the HPE MSA 2040 storage array (storage is added to the pool that hosts the existing datastores).

I get the same symptoms as you see in vSphere, and I get the same error message in the vmkernel.log.

In my case there are two ways to solve the problem (either of which allows me to mount the datastore):

1. Remove the new storage I added

2. Downgrade to ESXi 6.5

Neither option is satisfactory.

This suggests that the problem is ESXi 6.7 and not the MSA. Having said that, according to the compatibility matrix, ESXi 6.7 and an MSA 2040 are not compatible. But I've worked in such environments (incompatibilities based simply on the fact that two companies have not tested their products against each other in a lab) for years and did not have a problem.

regards
