<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: (Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost. in ESXi Discussions</title>
    <link>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2966611#M287935</link>
    <description>&lt;P&gt;I am seeing a segmentation fault when running voma.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;[root@esxi:~] voma -m vmfs avfix -d /dev/disks/naa.600508b1001c1bf8a053590733375ffb&lt;BR /&gt;Running VMFS Checker version 2.1 in default mode&lt;BR /&gt;Initializing LVM metadata, Basic Checks will be done&lt;BR /&gt;Detected valid GPT signatures&lt;BR /&gt;Number Start End Type&lt;BR /&gt;1 2048 10548652032 vmfs&lt;BR /&gt;Initializing LVM metadata..\Segmentation fault&lt;/PRE&gt;</description>
    <pubDate>Wed, 03 May 2023 13:34:18 GMT</pubDate>
    <dc:creator>barrelscrapings</dc:creator>
    <dc:date>2023-05-03T13:34:18Z</dc:date>
    <item>
      <title>(Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost.</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2966515#M287923</link>
      <description>&lt;P&gt;Just to preface: I don't want to intrude. My data is not valuable; this is purely a learning experience.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I work in IT and have &lt;EM&gt;SOME&lt;/EM&gt; experience, but I am very much at the beginning of my VMware journey. As I start to roll out VM deployments, I like to break my labs and restore them so I can fix issues for my clients. It's the only way I know how to learn &lt;img class="lia-deferred-image lia-image-emoji" src="https://communities.vmware.com/html/@5B889176627CE5032067BFA65F9ADF33/emoticons/1f601.png" alt=":beaming_face_with_smiling_eyes:" title=":beaming_face_with_smiling_eyes:" /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've run into an issue which I am unable to solve:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a ProLiant DL380 Gen7 running RAID 5 on a P410i RAID controller. The disks are all 900GB HP 2.5-inch SAS drives (not that it matters).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I decided to remove these drives and replace them with SSDs in order to pass them through the controller; apparently this is possible only if I wipe the drives (HBA mode is not available). When I went to return the SAS drives, I found I had forgotten their order.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I did not think much of that, and when prompted, I launched the array utility ISO and somewhat managed to get the original RAID 5 to be shown.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was not prompted to rebuild the array; however, it did tell me that I may incur some data loss as the drives were not in the exact order? I can't remember the exact error, to be honest.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Booting into ESXi 7, I found the datastores were not seen, and I saw a new error about the scratch folder, something about it not being configured (I wish I could remember).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So I thought it would be best to reinstall: I wiped the SD card and upgraded to ESXi 8.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have found the VMFS partition, and there is a reference to the "Main Volume" when running:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;[root@localhost:/dev/disks] offset="128 2048"; for dev in `esxcfg-scsidevs -l | grep "Console Device:" | awk {'print $3'}`; do disk=$dev; echo $disk; partedUtil getptbl $disk; { for i in `echo $offset`; do echo "Checking offset found at $i:"; hexdump -n4 -s $((0x100000+(512*$i))) $disk; hexdump -n4 -s $((0x1300000+(512*$i))) $disk; hexdump -C -n 128 -s $((0x130001d + (512*$i))) $disk; done; } | grep -B 1 -A 5 d00d; echo "---------------------"; done&lt;BR /&gt;/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0&lt;BR /&gt;gpt&lt;BR /&gt;3740 255 63 60088320&lt;BR /&gt;1 64 204863 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128&lt;BR /&gt;5 208896 2306047 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0&lt;BR /&gt;6 2308096 4405247 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0&lt;BR /&gt;7 4407296 60088286 4EB2EA3978554790A79EFAE495E21F8D vmfsl 0&lt;BR /&gt;---------------------&lt;BR /&gt;/vmfs/devices/disks/naa.600508b1001c1bf8a053590733375ffb&lt;BR /&gt;gpt&lt;BR /&gt;656623 255 63 10548655152&lt;BR /&gt;1 2048 10548652032 AA31E02A400F11DB9590000C2911D1B8 vmfs 0&lt;BR /&gt;Checking offset found at 2048:&lt;BR /&gt;0200000 d00d c001&lt;BR /&gt;0200004&lt;BR /&gt;1400000 f15e 2fab&lt;BR /&gt;1400004&lt;BR /&gt;0140001d 4d 61 69 6e 20 56 6f 6c 75 6d 65 00 00 00 00 00 |Main Volume.....|&lt;BR /&gt;0140002d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
|................|&lt;BR /&gt;---------------------&lt;BR /&gt;/vmfs/devices/disks/naa.600508b1001c2863757871ad9c529fbf&lt;BR /&gt;unknown&lt;BR /&gt;486397 255 63 7813971632&lt;BR /&gt;---------------------&lt;/PRE&gt;&lt;P&gt;My understanding is the partition looks healthy? Though I am still unable to find the datastores.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Looking at the raw data, it seems to be intact; at the very least I can parse bits and pieces of information like this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry for the spam, but it gives you an idea of what I mean:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;[... raw binary data ...]&lt;BR /&gt;| - .sdd.sf&lt;BR /&gt;| - Server 2022&lt;BR /&gt;| - ISO&lt;BR /&gt;| - .locker&lt;BR /&gt;| | - cache&lt;BR /&gt;| | | - loadESX&lt;BR /&gt;| | - tmp&lt;BR /&gt;| | - core&lt;BR /&gt;| | - var&lt;BR /&gt;| | | - tmp&lt;BR /&gt;| | - vmware&lt;BR /&gt;| | | - lifecycle&lt;BR /&gt;| | - downloads&lt;BR /&gt;| | - log&lt;BR /&gt;| | - store&lt;BR /&gt;| | - locker&lt;BR /&gt;| | | - packages&lt;BR /&gt;| | | | - var&lt;BR /&gt;| | | | | - db&lt;BR /&gt;| | | | | | - locker&lt;BR /&gt;| | | | | | | - addons&lt;BR /&gt;| | | | | | | - vibs&lt;BR /&gt;| | | | | | | - reservedVibs&lt;BR /&gt;| | | | | | | - bulletins&lt;BR /&gt;| | | | | | | - solutions&lt;BR /&gt;| | | | | | | - baseimages&lt;BR /&gt;| | | | | | | - manifests&lt;BR /&gt;| | | | | | | - reservedComponents&lt;BR /&gt;| | | | | | | - profiles&lt;BR /&gt;| | - vdtc&lt;BR /&gt;| | - healthd&lt;BR /&gt;| - vmkdump&lt;BR /&gt;| - *redacted* 2.0&lt;BR /&gt;| - VPN Server&lt;BR /&gt;| - ezpz&lt;BR /&gt;| - Docker Experiments&lt;BR /&gt;| - macback&lt;BR /&gt;| - pfSense&lt;BR /&gt;| - testing&lt;BR /&gt;| - Backup's&lt;BR /&gt;| - webserver&lt;BR /&gt;| - RC test&lt;BR /&gt;| - mac&lt;BR /&gt;| - Proxmox Backup&lt;BR /&gt;| - mac2&lt;BR /&gt;| - macOS&lt;BR /&gt;| - DNS&lt;BR /&gt;| - PROXMOX BACKUP SERVER&lt;BR /&gt;| - Windows 10 Enterprise&lt;BR /&gt;| - VPN client&lt;BR /&gt;| - omv&lt;BR /&gt;[... raw binary data ...]&lt;/PRE&gt;&lt;P&gt;I hope I'm missing something really dumb, like a command to force a rescan for datastores.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Things I've tried:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Mounting via vmfs6-fuse in a live Debian 11 environment; it complains about a bad magic number, which is far above my pay grade.&lt;/LI&gt;&lt;LI&gt;Ran testdisk, got to 1% after an hour, and decided my time is better spent posting this &lt;img class="lia-deferred-image lia-image-emoji" src="https://communities.vmware.com/html/@F39A924BD6342F6112FBAC6AD391E474/emoticons/1f923.png" alt=":rolling_on_the_floor_laughing:" title=":rolling_on_the_floor_laughing:" /&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;More than happy to try anything; keen to document my findings even if I fail, for myself and those learning.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please ignore if people actually need help; this is not urgent at all.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Cheers, goodnight -&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Nathan&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2023 22:56:22 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2966515#M287923</guid>
      <dc:creator>barrelscrapings</dc:creator>
      <dc:date>2023-05-02T22:56:22Z</dc:date>
    </item>
    <item>
      <title>Re: (Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost.</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2966611#M287935</link>
      <description>&lt;P&gt;I am seeing a segmentation fault when running voma.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;[root@esxi:~] voma -m vmfs avfix -d /dev/disks/naa.600508b1001c1bf8a053590733375ffb&lt;BR /&gt;Running VMFS Checker version 2.1 in default mode&lt;BR /&gt;Initializing LVM metadata, Basic Checks will be done&lt;BR /&gt;Detected valid GPT signatures&lt;BR /&gt;Number Start End Type&lt;BR /&gt;1 2048 10548652032 vmfs&lt;BR /&gt;Initializing LVM metadata..\Segmentation fault&lt;/PRE&gt;</description>
      <pubDate>Wed, 03 May 2023 13:34:18 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2966611#M287935</guid>
      <dc:creator>barrelscrapings</dc:creator>
      <dc:date>2023-05-03T13:34:18Z</dc:date>
    </item>
    <item>
      <title>Re: (Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost.</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2967402#M288015</link>
      <description>&lt;P&gt;5 days later, any ideas?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I don't hear back by tomorrow night, I'll have to go ahead and wipe.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 08 May 2023 20:08:57 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/Not-ugrent-just-a-home-lab-Raid-5-rebuild-VMFS-datastores-lost/m-p/2967402#M288015</guid>
      <dc:creator>barrelscrapings</dc:creator>
      <dc:date>2023-05-08T20:08:57Z</dc:date>
    </item>
  </channel>
</rss>

