VMware {code} Community
one_taste
Contributor

RAID 1+0 Failed Disk Replacement on VMware ESXi 6.5

[Question / Help]

I am using a Lenovo TD350 server running VMware ESXi 6.5.

The storage is 8x 800 GB SSDs (hot-plug / hot-swap) in a RAID 1+0 configuration. One disk has failed and become disabled. The server is still online, but performance has dropped slightly because IOPS is reduced with one disk in a failed state.

When I replaced the failed disk with a new one, the new disk was not automatically rebuilt/synchronized into the RAID 1+0 array; instead it was mounted as an independent disk, as shown in the screenshot.

My question: could the auto-claim driver mode I enforce on VMware ESXi be causing this, i.e. auto-claiming the new disk so that it ends up mounted as an independent disk? I have tried restarting the machine so that the RAID adapter can recognize/initialize/synchronize the disk from server start-up, but the result stays the same.

From what I have read, a hot-plugged / hot-swap drive should be recognized and synchronized automatically, even while the machine is online.

Full-res image: https://prnt.sc/jd1j37

1.png

Thank you

1 Reply
a_p_
Leadership

This is not a VMware issue, but an issue with how the RAID controller handles the new/replaced SSD.

Can you confirm that the new SSD did not have any data on it, i.e. that it was blank/initialized when you added it to the system? If it still contains an old RAID configuration, the controller may flag it as a foreign disk instead of starting a rebuild into the degraded array.
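If that turns out to be the case, a minimal way to check and clean the disk is with StorCLI from the ESXi shell. This is only a sketch, assuming the TD350 uses an LSI/Avago MegaRAID-based controller (e.g. a ServeRAID adapter) and the StorCLI VIB is installed on the host; the binary path, controller index (/c0), and enclosure/slot numbers (e252/s3) are placeholders you would replace with your own values:

  /opt/lsi/storcli/storcli /c0 show                  # controller summary; check for "Foreign configurations"
  /opt/lsi/storcli/storcli /c0/fall show             # list any foreign configuration carried by the new SSD
  /opt/lsi/storcli/storcli /c0/fall delete           # discard the foreign config (wipes old RAID metadata on that disk)
  /opt/lsi/storcli/storcli /c0/e252/s3 set good      # mark the drive Unconfigured Good if it shows as Unconfigured Bad
  /opt/lsi/storcli/storcli /c0/e252/s3 show rebuild  # check rebuild progress once the controller picks the drive up

Once the drive shows as Unconfigured Good, the controller should typically start rebuilding it into the degraded RAID 1+0 span (or you can add it back manually); ESXi itself only ever sees the single logical volume the controller presents, so no auto-claim setting on the ESXi side changes this behavior.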

André
