vSAN is in general better than out-of-band management tools at identifying disk issues that won't be noticed at higher levels (e.g. disks showing early signs of failure such as an increased frequency and duration of latency events, failed reads, and so on). The fact that it triggered a proactive evacuation, as opposed to just failing the devices, indicates increasingly poor responsiveness from those devices; it is also possible that the additional write load from evacuating the first device exposed latent issues on the second.

To get further information as to specifically why a device is being considered non-viable, review the stated cause of the proactive evacuation and/or the vmkernel and vobd logs.

As for the disk that has been evacuating for multiple days: either reads from it are failing because it is in a poor state, or there is insufficient capacity available in the appropriate Fault Domains to complete the evacuation. Either way, I would advise you to contact support to determine a) why the disks are being proactively removed and b) why the evacuation is failing to complete.
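As a rough starting point before opening a support case, a sketch like the following can pull the vSAN-related events out of the two logs mentioned above. The log paths are the standard ESXi locations, but message formats vary by vSAN version, so treat the grep patterns as assumptions rather than an exhaustive filter:

```shell
#!/bin/sh
# Rough log triage for a proactively evacuating vSAN disk.
# Run on the affected ESXi host (e.g. via SSH). Log paths below are the
# standard ESXi locations; adjust if your logs are redirected elsewhere.

VOBD_LOG=/var/log/vobd.log      # vSAN/VOB events, incl. evacuation reasons
VMK_LOG=/var/log/vmkernel.log   # device-level errors and latency warnings

for log in "$VOBD_LOG" "$VMK_LOG"; do
    if [ -f "$log" ]; then
        echo "=== recent vSAN-related entries in $log ==="
        # Case-insensitive match; show only the tail to keep output readable
        grep -i "vsan" "$log" | tail -n 20
    else
        echo "skipping $log (not found; is this an ESXi host?)"
    fi
done
```

The esxcli vsan namespace (e.g. `esxcli vsan storage list` for per-disk health state) is also worth checking, but the exact subcommands available depend on your ESXi/vSAN release.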