I have a serious issue: the RAID10 volume on my local disks (x16) corrupts whenever I do a vMotion. The manufacturer has been looking at it for months and is now passing it back to me, saying it could be the ESXi software, which I agree is a possibility. I call this serious because we have had the server for three months and it hasn't worked from the start, which means I'm running into resource issues because I haven't been able to put it into production. The second server, which is identical, has already gone into production.
The server is a new PE730 (up to date on Lifecycle Controller firmware updates) with 16 local disks running the ESXi 5.5 Update 2 Dell customised image. The RAID controller is an embedded PERC H730 Mini running RAID10 across 14 disks (1MB block size) with 2 hot spares.
If I vMotion a VM from a similarly configured PE730, the vMotion fails on the server in question, and when I go into the RAID BIOS many of the disks are in a missing, foreign, or failed state. This does not appear to be a fault with the disks themselves, because I can recreate the volume by clearing the config, clearing any foreign config, rebooting, and making the volume again. Dell has replaced the RAID controller, but the problem is still not fixed.
The manufacturer is saying it could be the ESXi PERC driver within the customised image, but that same image works fine on the other PE730.
PERC firmware is 25.3.0.0016 and the driver is 6.901.55.00.1vmw according to the iDRAC.
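Since the driver version above was read from the iDRAC, it may be worth confirming what the host itself is actually loading. A rough sketch of the checks I'd run from an SSH session on each ESXi host, assuming the H730 is bound to the megaraid_sas driver (the 6.901.x series) and that the VIB name contains "megaraid"; adjust the names to whatever the adapter list actually reports:

```shell
# Run from an SSH session on the ESXi host itself.

# List storage adapters to see which driver each vmhba is bound to
# (an H730 typically shows megaraid_sas or lsi_mr3).
esxcli storage core adapter list

# Show the installed driver VIB and its exact version
# (VIB name assumed; filter on whatever the adapter list reported).
esxcli software vib list | grep -i megaraid

# Query the loaded kernel module's version directly.
vmkload_mod -s megaraid_sas | grep -i -i version
```

Comparing this output side by side between the working PE730 and the failing one would quickly confirm or rule out the "different driver version" theory before pushing it back to Dell.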