Hello,
I have a datastore called "Secondary" that was apparently extended across two different drives. Another person thought it would be a good idea to swap out the RAID controller; when nothing worked afterward, they reconnected the old RAID controller.
Anyway, here we are. We have the DiskInternals VMFS Recovery software, but it seems to crash at around 1 TB transferred. Is there a way to rebuild the datastore without losing data, so that ESXi sees the datastore as it did before? Some information follows.
[root@vmserver01:~] esxcfg-scsidevs -m
GetUnmountedVmfsFileSystemsInt: fsUUID is null, skipping naa.6a4badb00fbbf00022583f81158e1c2b:1
GetUnmountedVmfsFileSystemsInt: fsUUID is null, skipping naa.6a4badb00fbbf00022583f81158e1c2b:1
GetUnmountedVmfsFileSystemsInt: fsUUID is null, skipping naa.6a4badb00fbbf00022583f81158e1c2b:1
[root@vmserver01:~] esxcfg-volume -l
VMFS UUID/label: 59d18cea-83868190-73f8-782bcb39f071/Secondary
Can mount: No (some extents missing)
Can resignature: No (some extents missing)
Extent name: naa.6a4badb00fbbf00026435d4d2207a1c1:1 range: 0 - 3814143 (MB)
[root@vmserver01:~] esxcli storage vmfs snapshot list
59d18cea-83868190-73f8-782bcb39f071
Volume Name: Secondary
VMFS UUID: 59d18cea-83868190-73f8-782bcb39f071
Can mount: false
Reason for un-mountability: some extents missing
Can resignature: false
Reason for non-resignaturability: some extents missing
Unresolved Extent Count: 1
[root@vmserver01:~] ls -lah /dev/disks
total 13795133312
drwxr-xr-x 2 root root 512 May 6 15:37 .
drwxr-xr-x 16 root root 512 May 6 15:37 ..
-rw------- 1 root root 2.7T May 6 15:37 naa.6a4badb00fbbf00022583f81158e1c2b
-rw------- 1 root root 2.7T May 6 15:37 naa.6a4badb00fbbf00022583f81158e1c2b:1
-rw------- 1 root root 3.6T May 6 15:37 naa.6a4badb00fbbf00026435d4d2207a1c1
-rw------- 1 root root 3.6T May 6 15:37 naa.6a4badb00fbbf00026435d4d2207a1c1:1
-rw------- 1 root root 116.7G May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080
-rw------- 1 root root 4.0M May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080:1
-rw------- 1 root root 250.0M May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080:5
-rw------- 1 root root 250.0M May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080:6
-rw------- 1 root root 110.0M May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080:7
-rw------- 1 root root 286.0M May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080:8
-rw------- 1 root root 2.5G May 6 15:37 t10.SanDisk00Ultra00000000000000000000004C530001040413105080:9
lrwxrwxrwx 1 root root 60 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080
lrwxrwxrwx 1 root root 62 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120:1 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080:1
lrwxrwxrwx 1 root root 62 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120:5 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080:5
lrwxrwxrwx 1 root root 62 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120:6 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080:6
lrwxrwxrwx 1 root root 62 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120:7 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080:7
lrwxrwxrwx 1 root root 62 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120:8 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080:8
lrwxrwxrwx 1 root root 62 May 6 15:37 vml.01000000003443353330303031303430343133313035303830556c74726120:9 -> t10.SanDisk00Ultra00000000000000000000004C530001040413105080:9
lrwxrwxrwx 1 root root 36 May 6 15:37 vml.02000000006a4badb00fbbf00022583f81158e1c2b504552432036 -> naa.6a4badb00fbbf00022583f81158e1c2b
lrwxrwxrwx 1 root root 38 May 6 15:37 vml.02000000006a4badb00fbbf00022583f81158e1c2b504552432036:1 -> naa.6a4badb00fbbf00022583f81158e1c2b:1
lrwxrwxrwx 1 root root 36 May 6 15:37 vml.02000000006a4badb00fbbf00026435d4d2207a1c1504552432036 -> naa.6a4badb00fbbf00026435d4d2207a1c1
lrwxrwxrwx 1 root root 38 May 6 15:37 vml.02000000006a4badb00fbbf00026435d4d2207a1c1504552432036:1 -> naa.6a4badb00fbbf00026435d4d2207a1c1:1
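For what it's worth, the numbers above line up: the only extent ESXi can still resolve spans 0 - 3814143 MB, which matches the 3.6T device (naa.6a4badb00fbbf00026435d4d2207a1c1), so the unresolved extent should be the 2.7T device (naa.6a4badb00fbbf00022583f81158e1c2b) whose fsUUID reads as null. A quick sanity check on those values (just arithmetic on the output above, nothing authoritative):

```python
# Values taken from the `esxcfg-volume -l` and `ls -lah /dev/disks` output above.
extent_range_mb = 3814143  # resolvable extent: range 0 - 3814143 (MB)

# Convert MB -> TB using binary units, as `ls -lah` does.
extent_tb = extent_range_mb / 1024 / 1024

# The resolvable extent is ~3.6T, i.e. the naa....a1c1 device;
# the 2.7T naa....1c2b device must be the missing extent.
print(f"{extent_tb:.1f}T")
```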
Any results?
My apologies.
I just finished running the commands with the patched binaries.
The datastore now appears in the ESXi web client. The VMs have not been tested for operability yet, but I suspect they will be fine.
I am currently transferring the datastore to a cloud backup provider.
Thank you.
The patches were deleted before I could examine them. I have a similar problem: the hardware failed, and I have mounted the disks on new hardware. Since the UUIDs have changed, the extended datastore does not mount.
I have modified the references on the parent disk, but it still does not work.
Could you please help?