Need more information. What version of ESXi? What "new disks"? On what hardware? How did you install them and what procedure did you take, if any, to make use of them in ESXi? What is the exact output of VOMA using the procedure documented here?
Thanks. ESXi 5.5, HP DL380 G8, controller P822. I just inserted the disks without shutting down the server and then pulled them out; no commands were entered. VOMA output attached. Before running the voma command I followed these guidelines: https://vmwaremine.com/2014/06/23/use-partedutil-recover-damaged-vmfs5-gpt-partition/
What exactly did you do regarding the link you've mentioned?
What's the output for the following two commands:
partedUtil getptbl /vmfs/devices/disks/naa.600...c21b
partedUtil getUsableSectors /vmfs/devices/disks/naa.600...c21b
Please paste the text output into a reply post, rather than posting a picture.
André
partedUtil getptbl /vmfs/devices/disks/naa.600...c21b
~ # partedUtil getptbl /dev/disks/naa.600508b1001c3e74c618f544a5d5c21b
gpt
214142 255 63 3440198448
1 2048 3440191230 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
partedUtil getUsableSectors /vmfs/devices/disks/naa.600...c21b
~ # partedUtil getUsableSectors /vmfs/devices/disks/naa.600508b1001c3e74c618f544a5d5c21b
34 3440198414
What exactly did you do regarding the link you've mentioned?
partedUtil setptbl /dev/disks/naa.600508b1001c3e74c618f544a5d5c21b gpt "1 2048 3440191230 AA31E02A400F11DB9590000C2911D1B8 0"
and vmkernel.log:
2019-03-10T15:53:29.761Z cpu8:34013 opID=57393e55)Vol3: 714: Couldn't read volume header from control: Not supported
2019-03-10T15:53:29.761Z cpu8:34013 opID=57393e55)Vol3: 714: Couldn't read volume header from control: Not supported
2019-03-10T15:53:29.761Z cpu8:34013 opID=57393e55)FSS: 5092: No FS driver claimed device 'control': Not supported
partedUtil setptbl /dev/disks/naa.600508b1001c3e74c618f544a5d5c21b gpt "1 2048 3440191230 AA31E02A400F11DB9590000C2911D1B8 0"
From where did you get this value?
According to the "getUsableSectors" command it should be:
partedUtil setptbl /dev/disks/naa.600508b1001c3e74c618f544a5d5c21b gpt "1 2048 3440198414 AA31E02A400F11DB9590000C2911D1B8 0"
I can't promise that this will fix the issue, but it shouldn't hurt either. Remember to do a rescan after that, e.g. from the GUI or via the command line (vmkfstools -V).
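As a sanity check (not an official procedure), the end sector suggested here can be cross-checked with plain shell arithmetic; the numbers come from the getptbl/getUsableSectors outputs above, and the variable names are just for illustration:

```shell
# Sketch: compute the partition size implied by the suggested end sector.
START=2048              # partition start sector from "partedUtil getptbl"
LAST_USABLE=3440198414  # last usable sector from "partedUtil getUsableSectors"
SIZE=$((LAST_USABLE - START + 1))
echo "partition size in sectors: $SIZE"
# prints "partition size in sectors: 3440196367"
```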
If this doesn't help, then please run the "offset ....;done" command from step 1 in https://kb.vmware.com/s/article/2046610 and post the result.
André
From where did you get this value?
When I type 3440198414, I get a vmkernel error:
2019-03-10T16:49:06.524Z cpu0:38773)LVM: 2907: [naa.600508b1001c3e74c618f544a5d5c21b:1] Device expanded (actual size 3440196367 blocks, stored size 3440195584 blocks)
Then I calculated 3440195584 + 2047 = 3440197631 and typed:
partedUtil setptbl /dev/disks/naa.600508b1001c3e74c618f544a5d5c21b gpt "1 2048 3440197631 AA31E02A400F11DB9590000C2911D1B8 0"
but it doesn't help.
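For reference, the calculation above can be sketched in shell (the stored size comes from the vmkernel.log line quoted above; variable names are just for illustration):

```shell
# Sketch: the vmkernel log reports a stored LVM size of 3440195584 blocks,
# and the partition starts at sector 2048, so the matching end sector is
# start + stored_size - 1.
START=2048
STORED_SIZE=3440195584
echo "end sector: $((START + STORED_SIZE - 1))"
# prints "end sector: 3440197631"
```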
If this doesn't help, then please run the "offset ....;done" command from step 1 in https://kb.vmware.com/s/article/2046610 and post the result.
/vmfs/devices/disks/naa.600508b1001c3e74c618f544a5d5c21b
gpt
214142 255 63 3440198448
1 2048 3440198414 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Checking offset found at 2048:
0200000 d00d c001
0200004
*
1400000
0140001d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
---------------------
/vmfs/devices/disks/naa.600508b1001c592f9922d71eb8b3fb22
gpt
53535 255 63 860050224
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 15472640 860049407 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
---------------------
~ #
That doesn't really look promising. Maybe continuum has an idea!?
André
Please read Create a VMFS-Header-dump using an ESXi-Host in production | VM-Sickbay
If you send the output of
dd if=/dev/disks/naa.600508b1001c3e74c618f544a5d5c21b bs=1M count=1536 of=/replace-with-a-path-with-enough-free-space/xamza.1536
I can have a closer look.
Ulli
Thanks. Restored from backup. Every minute counted.