So I did a stupid thing in my lab: I rebooted my hosts too many times and got VSAN out of whack. I'm part of the way to a fix, and it doesn't really matter anyway, but I'm curious now how to recover from all this.
Somehow I managed to get my 3-node cluster into a state where two nodes were in VSAN evacuation mode, so everything was unavailable because the objects on 2/3 of the drives were "ABSENT". I recovered the VSAN by running "vsan.host_exit_evacuation" in RVC, and suddenly 2 of my 4 VMs were back! Yay!
But the other two are still "Unassociated", and I'm curious what to do next.
As you can see below, I've got a bunch of Unassociated objects, but most (10 of 17) are OK.
> vsan.obj_status_report ~cluster -t -u
2015-07-08 22:05:55 +0000: Querying all VMs on VSAN ...
2015-07-08 22:05:55 +0000: Querying all objects in the system from 192.168.2.68 ...
2015-07-08 22:05:55 +0000: Querying all disks in the system from 192.168.2.68 ...
2015-07-08 22:05:56 +0000: Querying all components in the system from 192.168.2.68 ...
2015-07-08 22:05:56 +0000: Querying all object versions in the system ...
2015-07-08 22:05:57 +0000: Got all the info, computing table ...
Histogram of component health for non-orphaned objects
+-------------------------------------+------------------------------+
| Num Healthy Comps / Total Num Comps | Num objects with such status |
+-------------------------------------+------------------------------+
| 1/3 (Unavailable) | 7 |
| 3/3 (OK) | 10 |
+-------------------------------------+------------------------------+
Total non-orphans: 17
Histogram of component health for possibly orphaned objects
+-------------------------------------+------------------------------+
| Num Healthy Comps / Total Num Comps | Num objects with such status |
+-------------------------------------+------------------------------+
+-------------------------------------+------------------------------+
Total orphans: 0
Total v1 objects: 0
Total v2 objects: 17
+-----------------------------------------+---------+---------------------------+
| VM/Object | objects | num healthy / total comps |
+-----------------------------------------+---------+---------------------------+
| Unassociated objects | | |
| 4d3b6f55-81ca-fb04-a006-c03fd56d78b2 | | 1/3 |
| 9afe6e55-3041-592b-0b38-b8aeed70e3ed | | 1/3 |
| 98fe6e55-64c8-1d42-5518-b8aeed70e3ed | | 3/3 |
| 96fe6e55-c0a8-a847-04dc-b8aeed70e3ed | | 3/3 |
| 94fe6e55-e0b5-274f-ef68-b8aeed70e3ed | | 3/3 |
| 92fe6e55-cc40-9e66-b3af-b8aeed70e3ed | | 3/3 |
| 69416f55-4c9a-1a6c-1e3c-c03fd56d78b2 | | 3/3 |
| bd6a6f55-26b5-ca90-f924-c03fd56d782f | | 3/3 |
| 553b6f55-4760-7199-ff5b-c03fd56d78b2 | | 3/3 |
| 90fe6e55-fc89-99a7-a331-b8aeed70e3ed | | 1/3 |
| 2a6d6f55-30fc-a9a9-8fea-b8aeed70e3ed | | 3/3 |
| 95fe6e55-7c08-abb8-51c8-b8aeed70e3ed | | 1/3 |
| 99fe6e55-d41b-39bf-98c4-b8aeed70e3ed | | 3/3 |
| 91fe6e55-501e-f9c2-7e30-b8aeed70e3ed | | 1/3 |
| 97fe6e55-7cb3-00c5-1162-b8aeed70e3ed | | 3/3 |
| 88fe6e55-e807-89c5-27b4-b8aeed70e3ed | | 1/3 |
| 93fe6e55-38f9-3edc-4ed9-b8aeed70e3ed | | 1/3 |
+-----------------------------------------+---------+---------------------------+
+------------------------------------------------------------------+
| Legend: * = all unhealthy comps were deleted (disks present) |
| - = some unhealthy comps deleted, some not or can't tell |
| no symbol = We cannot conclude any comps were deleted |
+------------------------------------------------------------------+
I tried refreshing the state to make them accessible:
> vsan.check_state -r -e ~cluster
2015-07-08 22:06:54 +0000: Step 1: Check for inaccessible VSAN objects
Detected 4d3b6f55-81ca-fb04-a006-c03fd56d78b2 to be inaccessible, refreshing state
Detected 9afe6e55-3041-592b-0b38-b8aeed70e3ed to be inaccessible, refreshing state
Detected 90fe6e55-fc89-99a7-a331-b8aeed70e3ed to be inaccessible, refreshing state
Detected 95fe6e55-7c08-abb8-51c8-b8aeed70e3ed to be inaccessible, refreshing state
Detected 91fe6e55-501e-f9c2-7e30-b8aeed70e3ed to be inaccessible, refreshing state
Detected 88fe6e55-e807-89c5-27b4-b8aeed70e3ed to be inaccessible, refreshing state
Detected 93fe6e55-38f9-3edc-4ed9-b8aeed70e3ed to be inaccessible, refreshing state
2015-07-08 22:07:00 +0000: Step 1b: Check for inaccessible VSAN objects, again
Detected 4d3b6f55-81ca-fb04-a006-c03fd56d78b2 is still inaccessible
Detected 9afe6e55-3041-592b-0b38-b8aeed70e3ed is still inaccessible
Detected 90fe6e55-fc89-99a7-a331-b8aeed70e3ed is still inaccessible
Detected 95fe6e55-7c08-abb8-51c8-b8aeed70e3ed is still inaccessible
Detected 91fe6e55-501e-f9c2-7e30-b8aeed70e3ed is still inaccessible
Detected 88fe6e55-e807-89c5-27b4-b8aeed70e3ed is still inaccessible
Detected 93fe6e55-38f9-3edc-4ed9-b8aeed70e3ed is still inaccessible
2015-07-08 22:07:01 +0000: Step 2: Check for invalid/inaccessible VMs
2015-07-08 22:07:01 +0000: Step 2b: Check for invalid/inaccessible VMs again
2015-07-08 22:07:01 +0000: Step 3: Check for VMs for which VC/hostd/vmx are out of sync
Did not find VMs for which VC/hostd/vmx are out of sync
But no, that didn't work. Here's an example of an object I'd like to try to recover: the VMDK file for a pfSense router.
> vsan.object_info ~cluster 553b6f55-4760-7199-ff5b-c03fd56d78b2
DOM Object: 553b6f55-4760-7199-ff5b-c03fd56d78b2 (v2, owner: 192.168.2.68, policy: forceProvisioning = 0, hostFailuresToTolerate = 1, spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad, proportionalCapacity = 0, spbmProfileGenerationNumber = 0, cacheReservation = 0, stripeWidth = 1)
RAID_1
Component: 553b6f55-5ef5-169b-19e7-c03fd56d78b2 (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 1.4 GB)
Component: 553b6f55-f269-179b-831b-c03fd56d78b2 (state: ACTIVE (5), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 1.4 GB)
Witness: 6c419d55-4447-dfbe-5673-c03fd56d782f (state: ACTIVE (5), host: 192.168.2.69, md: t10.ATA_____ST9750420AS_________________________________________5WS0GZWP, ssd: t10.ATA_____Samsung_SSD_850_EVO_mSATA_250GB_________S248NWAG305036B_____,
votes: 1, usage: 0.0 GB)
Extended attributes:
Address space: 8589934592B (8.00 GB)
Object class: vdisk
Object path: /vmfs/volumes/vsan:52c17a3c84a7c56c-07f6ae1b09d46883/4d3b6f55-81ca-fb04-a006-c03fd56d78b2/pfSense.vmdk
Everything looks OK but it remains inaccessible. How can you recover an inaccessible VSAN object?
Is this vSAN on vSphere 5.5 or vSphere 6? That will determine your troubleshooting options. Thank you, Zach.
I apologize! This is VSAN with the v2 on-disk format, under vSphere 6.
I've still gotten nowhere in recovering these "unassociated" objects. They're there, I can sort of see them, but I can't get at them.
Can anyone help? Or should I just take off and nuke the site from orbit?
The object you quote, i.e. 553b6f55-4760-7199-ff5b-c03fd56d78b2, is actually OK. It has 3/3 components ACTIVE, so it is available and healthy. We need to look at the objects which are 1/3. In fact, you can see its descriptor path is in a directory called 4d3b6f55-81ca-fb04-a006-c03fd56d78b2, which is 1/3 and therefore not available. That's why you can't get at the VMDK: the data is healthy, but the directory holding the descriptor file is not.
- Run vsan.object_info on 4d3b6f55-81ca-fb04-a006-c03fd56d78b2 (and maybe some of the other 1/3 objects) and post the results here. I'm interested in the state of the unhealthy components. Are they on healthy physical disks? Are they DEGRADED or ABSENT?
- Install the VSAN Health plugin and see if anything else in the cluster looks suspicious.
- Run vsan.cluster_info and vsan.disks_stats and post the results here. This will also show the overall state and may help us root-cause why some objects are at 1/3.
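For context on why 1/3 means inaccessible: as I understand it, a vSAN object is available only while a strict majority of its component votes sit on ACTIVE components. With hostFailuresToTolerate = 1 that's two data mirrors plus a witness, one vote each, so losing two components leaves 1 of 3 votes and the object drops below quorum. A rough Python sketch of that rule (the function and tuples are mine, not a vSAN API):

```python
# Availability rule sketch: an object needs a strict majority of votes
# on ACTIVE components. Each component is a (state, votes) tuple.
def object_accessible(components):
    total = sum(votes for _state, votes in components)
    active = sum(votes for state, votes in components if state == "ACTIVE")
    return active * 2 > total  # strict majority of votes

# The healthy pfSense VMDK object: two ACTIVE mirrors plus an ACTIVE witness.
healthy = [("ACTIVE", 1), ("ACTIVE", 1), ("ACTIVE", 1)]
# A 1/3 object: two ABSENT components, only the witness still ACTIVE.
broken = [("ABSENT", 1), ("ABSENT", 1), ("ACTIVE", 1)]

print(object_accessible(healthy))  # True
print(object_accessible(broken))   # False
```

Note that even one ACTIVE data mirror plus the witness (2 of 3 votes) would be enough; your 1/3 objects have only the witness left.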
Sorry, I didn't see your previous reply. And if I understand the movie reference: we're in the pipe, five by five.
The next step would be to identify the VMs that own the inaccessible VSAN objects.
It seems very tedious, but it also seems like it would work. Thank you, Zach.
Thanks for the help! Here's the vsan.object_info results for every 1/3 object:
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 4d3b6f55-81ca-fb04-a006-c03fd56d78b2
2015-07-24 19:59:02 +0000: Fetching VSAN disk info from 192.168.2.68 (may take a moment) ...
2015-07-24 19:59:02 +0000: Fetching VSAN disk info from 192.168.2.69 (may take a moment) ...
2015-07-24 19:59:02 +0000: Fetching VSAN disk info from 192.168.2.67 (may take a moment) ...
2015-07-24 19:59:04 +0000: Done fetching VSAN disk infos
DOM Object: 4d3b6f55-81ca-fb04-a006-c03fd56d78b2 (v2, owner: 192.168.2.68, policy: No POLICY entry found in CMMDS)
RAID_1
Component: 4d3b6f55-cff3-d019-0b94-c03fd56d78b2 (state: ABSENT (6), csn: STALE (29!=34), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 0.4 GB)
Component: 4d3b6f55-fb55-d219-748a-c03fd56d78b2 (state: ABSENT (6), csn: STALE (0!=34), host: Unknown, md: 52a5ee9b-8467-d344-c411-9cbd43ff62d1, ssd: Unknown, note: LSOM object not found,
votes: 1)
Witness: 4d3b6f55-de1c-d319-04ec-c03fd56d78b2 (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 0.0 GB)
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 9afe6e55-3041-592b-0b38-b8aeed70e3ed
DOM Object: 9afe6e55-3041-592b-0b38-b8aeed70e3ed (v2, owner: 192.168.2.68, policy: No POLICY entry found in CMMDS)
RAID_1
Component: 9bfe6e55-10a3-02a2-896b-b8aeed70e3ed (state: ABSENT (6), csn: STALE (27!=33), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 0.7 GB)
Component: 9bfe6e55-a4b8-03a2-c82f-b8aeed70e3ed (state: ABSENT (6), host: Unknown, md: 52a5ee9b-8467-d344-c411-9cbd43ff62d1, ssd: Unknown, note: LSOM object not found,
votes: 1)
Witness: 9bfe6e55-5c4d-04a2-0e23-b8aeed70e3ed (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 0.0 GB)
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 553b6f55-4760-7199-ff5b-c03fd56d78b2
DOM Object: 553b6f55-4760-7199-ff5b-c03fd56d78b2 (v2, owner: 192.168.2.67, policy: forceProvisioning = 0, hostFailuresToTolerate = 1, spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad, proportionalCapacity = 0, spbmProfileGenerationNumber = 0, cacheReservation = 0, stripeWidth = 1)
RAID_1
Component: 553b6f55-5ef5-169b-19e7-c03fd56d78b2 (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 1.4 GB)
Component: 553b6f55-f269-179b-831b-c03fd56d78b2 (state: ACTIVE (5), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 1.4 GB)
Witness: 6c419d55-4447-dfbe-5673-c03fd56d782f (state: ACTIVE (5), host: 192.168.2.69, md: t10.ATA_____ST9750420AS_________________________________________5WS0GZWP, ssd: t10.ATA_____Samsung_SSD_850_EVO_mSATA_250GB_________S248NWAG305036B_____,
votes: 1, usage: 0.0 GB)
Extended attributes:
Address space: 8589934592B (8.00 GB)
Object class: vdisk
Object path: /vmfs/volumes/vsan:52c17a3c84a7c56c-07f6ae1b09d46883/4d3b6f55-81ca-fb04-a006-c03fd56d78b2/pfSense.vmdk
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 95fe6e55-7c08-abb8-51c8-b8aeed70e3ed
DOM Object: 95fe6e55-7c08-abb8-51c8-b8aeed70e3ed (v2, owner: 192.168.2.68, policy: No POLICY entry found in CMMDS)
RAID_1
Component: 95fe6e55-50bf-7830-7c63-b8aeed70e3ed (state: ABSENT (6), host: Unknown, md: 52a5ee9b-8467-d344-c411-9cbd43ff62d1, ssd: Unknown, note: LSOM object not found,
votes: 1)
Component: 95fe6e55-80ea-7930-427f-b8aeed70e3ed (state: ABSENT (6), csn: STALE (28!=34), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 0.7 GB)
Witness: 95fe6e55-68a3-7a30-87e1-b8aeed70e3ed (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 0.0 GB)
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 91fe6e55-501e-f9c2-7e30-b8aeed70e3ed
DOM Object: 91fe6e55-501e-f9c2-7e30-b8aeed70e3ed (v2, owner: 192.168.2.68, policy: No POLICY entry found in CMMDS)
RAID_1
Component: 91fe6e55-84b6-123c-c506-b8aeed70e3ed (state: ABSENT (6), csn: STALE (28!=34), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 1.3 GB)
Component: 91fe6e55-fc78-143c-9874-b8aeed70e3ed (state: ABSENT (6), host: Unknown, md: 52a5ee9b-8467-d344-c411-9cbd43ff62d1, ssd: Unknown, note: LSOM object not found,
votes: 1)
Witness: 91fe6e55-989b-153c-75b8-b8aeed70e3ed (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 0.0 GB)
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 88fe6e55-e807-89c5-27b4-b8aeed70e3ed
DOM Object: 88fe6e55-e807-89c5-27b4-b8aeed70e3ed (v2, owner: 192.168.2.68, policy: No POLICY entry found in CMMDS)
RAID_1
Component: 88fe6e55-647c-7cd6-65cb-b8aeed70e3ed (state: ABSENT (6), host: Unknown, md: 52a5ee9b-8467-d344-c411-9cbd43ff62d1, ssd: Unknown, note: LSOM object not found,
votes: 1)
Component: 88fe6e55-6800-7ed6-b565-b8aeed70e3ed (state: ABSENT (6), csn: STALE (29!=36), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 0.4 GB)
Witness: 88fe6e55-7016-7fd6-2a7e-b8aeed70e3ed (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 0.0 GB)
> vsan.object_info /localhost/Basement/computers/NUC\ Cluster 93fe6e55-38f9-3edc-4ed9-b8aeed70e3ed
DOM Object: 93fe6e55-38f9-3edc-4ed9-b8aeed70e3ed (v2, owner: 192.168.2.68, policy: No POLICY entry found in CMMDS)
RAID_1
Component: 93fe6e55-a859-4a36-eadf-b8aeed70e3ed (state: ABSENT (6), csn: STALE (28!=34), host: 192.168.2.67, md: t10.ATA_____ST9750420AS_________________________________________5WS0H999, ssd: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____,
votes: 1, usage: 2.5 GB)
Component: 93fe6e55-1453-4b36-a990-b8aeed70e3ed (state: ABSENT (6), host: Unknown, md: 52a5ee9b-8467-d344-c411-9cbd43ff62d1, ssd: Unknown, note: LSOM object not found,
votes: 1)
Witness: 93fe6e55-00d3-4b36-3b7e-b8aeed70e3ed (state: ACTIVE (5), host: 192.168.2.68, md: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S, ssd: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____,
votes: 1, usage: 0.0 GB)
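Reading these outputs: each 1/3 object has one ABSENT component whose disk UUID (52a5ee9b-...) no longer maps to any known disk ("LSOM object not found"), and one ABSENT component on 192.168.2.67 whose configuration sequence number lags the object's, e.g. "csn: STALE (29!=34)". A small sketch of extracting those stale-CSN annotations from saved vsan.object_info text (the regex reflects my reading of the output format, nothing official):

```python
import re

# Match component lines carrying a stale configuration sequence number,
# e.g. "Component: <uuid> (state: ABSENT (6), csn: STALE (29!=34), ..."
STALE = re.compile(
    r"Component: ([0-9a-f-]{36}) \(state: (\w+).*?csn: STALE \((\d+)!=(\d+)\)"
)

def stale_components(text):
    """Return (uuid, state, lag) for each component with a stale CSN."""
    out = []
    for m in STALE.finditer(text):
        uuid, state = m.group(1), m.group(2)
        have, want = int(m.group(3)), int(m.group(4))
        out.append((uuid, state, want - have))  # lag in config generations
    return out

sample = ("Component: 4d3b6f55-cff3-d019-0b94-c03fd56d78b2 "
          "(state: ABSENT (6), csn: STALE (29!=34), host: 192.168.2.67)")
print(stale_components(sample))
```

A stale component holds data from an older configuration generation, which is presumably why vSAN refuses to bring the object back without a quorum of up-to-date components.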
Here's vsan.cluster_info:
> vsan.cluster_info /localhost/Basement/computers/NUC\ Cluster
2015-07-24 20:01:49 +0000: Fetching host info from 192.168.2.68 (may take a moment) ...
2015-07-24 20:01:49 +0000: Fetching host info from 192.168.2.69 (may take a moment) ...
2015-07-24 20:01:49 +0000: Fetching host info from 192.168.2.67 (may take a moment) ...
Host: 192.168.2.68
Product: VMware ESXi 6.0.0 build-2494585
VSAN enabled: yes
Cluster info:
Cluster role: agent
Cluster UUID: 52c17a3c-84a7-c56c-07f6-ae1b09d46883
Node UUID: 554c2a56-ddac-c10f-34d4-c03fd56d782f
Member UUIDs: ["556675a2-b37f-a645-be92-b8aeed70e3ed", "554c969f-ac96-a1fd-e96b-c03fd56d78b2", "554c2a56-ddac-c10f-34d4-c03fd56d782f"] (3)
Node evacuated: no
Storage info:
Auto claim: yes
Checksum enforced: no
Disk Mappings:
SSD: t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____ - 232 GB, v2
MD: t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S - 931 GB, v2
FaultDomainInfo:
Not configured
NetworkInfo:
Adapter: vmk1 (192.168.100.68)
Adapter: vmk2 (192.168.100.168)
Host: 192.168.2.69
Product: VMware ESXi 6.0.0 build-2494585
VSAN enabled: yes
Cluster info:
Cluster role: master
Cluster UUID: 52c17a3c-84a7-c56c-07f6-ae1b09d46883
Node UUID: 554c969f-ac96-a1fd-e96b-c03fd56d78b2
Member UUIDs: ["556675a2-b37f-a645-be92-b8aeed70e3ed", "554c969f-ac96-a1fd-e96b-c03fd56d78b2", "554c2a56-ddac-c10f-34d4-c03fd56d782f"] (3)
Node evacuated: no
Storage info:
Auto claim: yes
Checksum enforced: no
Disk Mappings:
SSD: t10.ATA_____Samsung_SSD_850_EVO_mSATA_250GB_________S248NWAG305036B_____ - 232 GB, v2
MD: Local ATA Disk (t10.ATA_____ST9750420AS_________________________________________5WS0GZWP) - 698 GB, v2
FaultDomainInfo:
Not configured
NetworkInfo:
Adapter: vmk2 (192.168.100.169)
Adapter: vmk1 (192.168.100.69)
Host: 192.168.2.67
Product: VMware ESXi 6.0.0 build-2494585
VSAN enabled: yes
Cluster info:
Cluster role: backup
Cluster UUID: 52c17a3c-84a7-c56c-07f6-ae1b09d46883
Node UUID: 556675a2-b37f-a645-be92-b8aeed70e3ed
Member UUIDs: ["556675a2-b37f-a645-be92-b8aeed70e3ed", "554c969f-ac96-a1fd-e96b-c03fd56d78b2", "554c2a56-ddac-c10f-34d4-c03fd56d782f"] (3)
Node evacuated: no
Storage info:
Auto claim: yes
Checksum enforced: no
Disk Mappings:
SSD: t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____ - 232 GB, v2
MD: t10.ATA_____ST9750420AS_________________________________________5WS0H999 - 698 GB, v2
FaultDomainInfo:
Not configured
NetworkInfo:
Adapter: vmk1 (192.168.100.67)
No Fault Domains configured in this cluster
And here's vsan.disks_stats:
> vsan.disks_stats /localhost/Basement/computers/NUC\ Cluster
+--------------------------------------------------------------------------+--------------+-------+------+-----------+------+----------+---------+
| | | | Num | Capacity | | | Status |
| DisplayName | Host | isSSD | Comp | Total | Used | Reserved | Health |
+--------------------------------------------------------------------------+--------------+-------+------+-----------+------+----------+---------+
| t10.ATA_____Samsung_SSD_850_EVO_M.2_250GB___________S24BNWAG305486L_____ | 192.168.2.67 | SSD | 0 | 232.88 GB | 0 % | 0 % | OK (v2) |
| t10.ATA_____ST9750420AS_________________________________________5WS0H999 | 192.168.2.67 | MD | 17 | 691.64 GB | 6 % | 2 % | OK (v2) |
+--------------------------------------------------------------------------+--------------+-------+------+-----------+------+----------+---------+
| t10.ATA_____Samsung_SSD_840_EVO_250GB_mSATA_________S1KPNSAFB04921F_____ | 192.168.2.68 | SSD | 0 | 232.88 GB | 0 % | 0 % | OK (v2) |
| t10.ATA_____TOSHIBA_MQ01ABD100_________________________________43FGF304S | 192.168.2.68 | MD | 17 | 922.19 GB | 3 % | 2 % | OK (v2) |
+--------------------------------------------------------------------------+--------------+-------+------+-----------+------+----------+---------+
| t10.ATA_____Samsung_SSD_850_EVO_mSATA_250GB_________S248NWAG305036B_____ | 192.168.2.69 | SSD | 0 | 232.88 GB | 0 % | 0 % | OK (v2) |
| t10.ATA_____ST9750420AS_________________________________________5WS0GZWP | 192.168.2.69 | MD | 10 | 691.64 GB | 0 % | 0 % | OK (v2) |
+--------------------------------------------------------------------------+--------------+-------+------+-----------+------+----------+---------+
Sorry, my friend, you've reached the end of my expertise. Thank you, Zach.
Hi.
Did you ever manage to fix the Unassociated objects?
I have the same problem: https://communities.vmware.com/thread/564609?q=i%20have%20a%20challenge%20for%20vsan%20ex
+-----------------------------------------+---------+---------------------------+
| VM/Object | objects | num healthy / total comps |
+-----------------------------------------+---------+---------------------------+
| Unassociated objects | | |
| 4a59ad58-8859-bd11-31cc-00249b1aced2 | | 3/3 |
| 41d7ac58-5aa6-0413-c7fb-00249b1aced2 | | 3/3 |
| 29ae1959-9ae3-3e17-05f4-b8aeedec5878 | | 1/1 |
| 4696a557-0611-0028-fd8a-b8aeedec579d | | 1/1 |
| 8dcac257-fcf9-5429-6bc5-b8aeedec5878 | | 3/3 |
| fc111759-04cc-9c61-ba99-00249b1aced1 | | 3/3 |
| bdfd5758-f60c-3c7e-3ddf-b8aeedec5878 | | 3/3 |
| 218b6658-52bb-1f9d-4b2e-b8aeedec43bf | | 3/3 |
| fd111759-a22e-43a9-e637-00249b1aced1 | | 3/3 |
| f0706b58-3898-8ac9-0130-00249b1aced0 | | 3/3 |
| e4611859-e2de-38cd-8c58-00249b1aced2 | | 3/3 |
| 73927658-f45f-f7d6-de14-00249b1aced0 | | 3/3 |
+-----------------------------------------+---------+---------------------------+