taspence
Contributor

Datastore not mounting after RAID rebuild from failed drive

VMware ESXi 5.5.0 - I have an HP server with a RAID 5 configuration. After replacing one (1) failed drive and re-initializing the RAID, my datastore is no longer mounting and two VMs are missing. In the storage configuration I should have two datastores, but only one is showing under datastores. Under devices I see two devices; naa.600508b1001c698c8f8380587e5cc212 is the one I am after. If I rescan, it does not mount. If I go to add storage and choose that LUN, then regardless of whether I choose VMFS-5 or VMFS-3, I get prompted with "This configuration will delete the current disk layout. All file systems and data will be permanently lost."
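For reference, the equivalent rescan from the ESXi shell would be something like this (a sketch only; both are standard ESXi 5.5 commands):

esxcli storage core adapter rescan --all   # rescan all HBAs for new devices/paths
vmkfstools -V                              # force a re-read of VMFS volumes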

Running partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c698c8f8380587e5cc212

I get

gpt

145875 255 63 2343487580

1 2048 2343481874 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Running partedUtil getUsableSectors /vmfs/devices/disks/naa.600508b1001c698c8f8380587e5cc212

I get

34 2343487546

Running esxcli storage vmfs snapshot list brings me straight back to the command prompt with no output, so I'm guessing it's not viewing it as a snapshot.
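For what it's worth, if the volume ever did show up in that list, mounting it would look something like this (a sketch; <volume-uuid> is a placeholder for whatever UUID the list command reports):

esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -u <volume-uuid>        # mount, keeping the existing signature
esxcli storage vmfs snapshot resignature -u <volume-uuid>  # alternative: write a new signature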

The partition table still shows a VMFS partition whose bounds (2048 to 2343481874) fall inside the usable range (34 to 2343487546), so the layout itself looks intact. I have a hard time believing that the RAID wiped my data with the rebuild of just one drive in a RAID 5.

Any guidance would be appreciated. Thank you.

SupreetK
Commander

During the rescan, are you seeing any errors for the LUN <naa.600508b1001c698c8f8380587e5cc212> in vmkernel.log? Can you run the command below and share the output starting at offset <00200000> up to the start of the next offset?

hexdump -C /vmfs/devices/disks/naa.600508b1001c698c8f8380587e5cc212 | less
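If scrolling through less is tedious, you can also jump straight to that region (a sketch, assuming the BusyBox hexdump on your build supports the -s/-n options):

hexdump -C -s 0x200000 -n 1024 /vmfs/devices/disks/naa.600508b1001c698c8f8380587e5cc212   # skip 2 MB in, dump 1 KB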

Cheers,

Supreet

a_p_
Leadership

I have an HP server with a RAID 5 configuration ... after replacing one (1) failed drive and re-initializing the RAID

Can you please provide some more details?

  • What type/model of RAID controller do you use?
  • How did you replace the disk (Hot-Swap, or with the host powered off)?
  • In case of Hot-Swap, how long did you wait between removing the failed disk and inserting the new one?
  • What exactly do you mean with "re initializing the RAID"?

Anyway, since the partition still shows up, I'd suggest you try to contact community user continuum prior to trying anything that may make things worse.

André

continuum
Immortal

If you create a VMFS header dump and send it my way, I can probably assist you.
See Create a VMFS-Header-dump using an ESXi-Host in production | VM-Sickbay
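The short version is a plain dd of the start of the device to a healthy datastore, roughly like this (a sketch only; the article has the authoritative size and options, and the output path is an example that needs enough free space):

dd if=/vmfs/devices/disks/naa.600508b1001c698c8f8380587e5cc212 bs=1M count=1536 of=/vmfs/volumes/<healthy-datastore>/vmfs-header-dump.bin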


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...

eliren
Contributor

Hi, I have a similar problem, but when I try to run the esxcfg-scsidevs -m command I get:

[2018-09-24 15:23:42 'StorageInfo' warning] Skipping dir: /vmfs/volumes/4ab1034b-5ff84f48-f645-00237d9e2ae6. Cannot open volume: /vmfs/volumes/4ab1034b-5ff84f48-f645-00237d9e2ae6

ESXi 4.0

HP DL360 G5, RAID 5

Thanks for your attention. Sorry for my English.

continuum
Immortal

That does not look good.
Did you try to create a dump of the VMFS partition?
If this datastore has valuable VMs, I would suggest that you call me via Skype.
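On ESXi 4.x you can also check whether the host is holding the volume back as a snapshot; the 4.x counterpart of the newer esxcli snapshot commands is esxcfg-volume (a sketch, using the UUID from your error message):

esxcfg-volume -l                                       # list volumes detected as snapshots
esxcfg-volume -m 4ab1034b-5ff84f48-f645-00237d9e2ae6   # mount it non-persistently if it is listed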


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...
