Hello all, looking for your expert advice on the best way to proceed with a really ugly failure scenario. We recently physically moved our lab to a new facility, and as we were bringing things back online we found that we had lost a flash cache disk on two of our five servers. To complicate matters, vCenter turns out to be sitting on a local disk outside of vSAN, and that disk appears to have suffered corruption: vCenter fails to start with an "Unable to enumerate all disks" error. Digging further, five of the twelve VMDKs return an I/O error when read, and their -flat files are nowhere to be found. I thought we might be able to rebuild the VMDK descriptors, but it seems that without the flat files we are out of luck (is that right?). Unfortunately there is no backup.
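For context on what I've tried: a descriptor .vmdk can normally be regenerated only when the matching -flat.vmdk data file still exists; the standard approach is to create a temporary disk of the same size and reuse its descriptor. All paths, names, and sizes below are placeholders, and this is only a sketch of that approach, run over SSH on the host:

```shell
# 1. Check whether the data (-flat) file actually survived, and if so
#    note its exact size in bytes:
ls -l /vmfs/volumes/datastore1/vCenter/

# 2. Create a temporary thin disk of the SAME byte size to obtain a
#    fresh descriptor (size below is an example -- use the byte count
#    from step 1, suffixed with 'b'):
vmkfstools -c 42949672960b -d thin /vmfs/volumes/datastore1/vCenter/temp.vmdk

# 3. Keep the descriptor, discard the temp data file, and rename the
#    descriptor to match the original disk name:
rm /vmfs/volumes/datastore1/vCenter/temp-flat.vmdk
mv /vmfs/volumes/datastore1/vCenter/temp.vmdk /vmfs/volumes/datastore1/vCenter/vCenter_1.vmdk

# 4. Edit the new descriptor so its extent line references the surviving
#    flat file (vCenter_1-flat.vmdk), then verify chain consistency:
vmkfstools -e /vmfs/volumes/datastore1/vCenter/vCenter_1.vmdk
```

As I understand it, if the -flat file itself is gone (not just the descriptor), this does nothing; the data would have to come from disk-level recovery of the corrupted local disk, which is why I'm asking whether we're out of luck.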
With regard to vSAN, my understanding is that when the cache disk fails, the entire disk group is taken out of service, and that appears to be the case here. The failed flash disks have been replaced on both servers, and each server also has a spare flash disk that could be used. However, the disk groups are still out of service: in the local ESXi console all of the disks are reported as not operational (see attached picture). Many of our VMs also show up in the local console as Invalid; I suspect that is because the remaining three servers do not have enough capacity to hold all of the VMs' storage.

What is the best way to recover from this multiple-failure scenario while preserving our data? I am thinking of creating a new vCenter, putting all hosts in maintenance mode, adding them to the new vCenter, and then replacing the failed cache disk on each server from there. Would that work, or is there a better/safer strategy? Also, what is the procedure for replacing failed cache disks in vSAN so the disk groups come back into service without losing data?
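In case it helps frame the question, this is roughly what I expect the per-host procedure to look like based on the esxcli vsan namespace; the device IDs and UUID below are placeholders, and I'd appreciate confirmation before running the destructive step:

```shell
# Run on each affected ESXi host over SSH. Device IDs are placeholders.
# 1. See how vSAN currently views this host's disks and disk groups:
esxcli vsan storage list

# 2. Confirm the host's cluster membership and state:
esxcli vsan cluster get

# 3. Remove the dead disk group, identified by the failed cache device's
#    vSAN UUID. WARNING: this discards whatever components remain on the
#    group's capacity disks -- presumably only safe once it's confirmed
#    that no surviving object copy depends on them.
esxcli vsan storage remove -u <old-cache-disk-vsan-uuid>

# 4. Re-create the disk group using the replacement cache device (-s)
#    and the existing capacity disks (-d, repeatable):
esxcli vsan storage add -s naa.5000000000000001 \
    -d naa.5000000000000002 -d naa.5000000000000003
```

My main worry is step 3: with two disk groups down at once and (I assume) FTT=1, some objects may have no intact replica left, so I'd like to know how to verify which objects are still recoverable before removing anything.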
All hosts and vCenter are running version 6.5.
disks_with_issues.jpg 565.8 K