Server: Dell PowerEdge R710 with a PERC 6/E RAID controller managing two Dell PowerVault MD1000s (all at the latest firmware versions).
ESXi Version: 5.5
Background: ESXi is installed on the R710 (Datastore1). The two MD1000s are each configured as RAID-6; Datastore2 contains my VMs, and Datastore3 provides extended storage.
Scenario: The original R710 was removed due to a chassis failure, and a new R710 has been put in its place.
Issue: When the new server is powered on and connected to via the vSphere Client, Datastore2 and Datastore3 do not appear in the datastores list, although they do appear in the devices list. Adding the two datastores back while keeping the existing signatures (otherwise the Linux VMs' network adapters lose their configuration), they both appear in the datastores list again; the VMs can be added to inventory and the system is back operational. However, when the server is rebooted, Datastore2 and Datastore3 disappear again, and the whole process of adding them back has to be repeated. The issue appears to be linked to the VMFS signatures of the datastores, since the mounts are retained by the server if new signatures are created when adding them. That option is not preferred because it forces reconfiguration of the VMs themselves.
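One approach worth trying from the ESXi shell (a sketch, assuming the standard esxcli namespace on ESXi 5.5; the datastore labels below are taken from this setup and should be adjusted to match what the host actually reports): `esxcli storage vmfs snapshot mount` without the `-n` flag performs a persistent mount that keeps the existing signature, whereas a non-persistent mount is lost on reboot, which matches the symptom described.

```shell
# List VMFS volumes the host detects as snapshots/replicas
esxcli storage vmfs snapshot list

# Persistently mount the snapshot volumes, keeping their existing signatures
# (labels are placeholders; use the labels reported by the list command)
esxcli storage vmfs snapshot mount -l "Datastore2"
esxcli storage vmfs snapshot mount -l "Datastore3"
```

Note that adding `-n` to the mount command would make the mount non-persistent across reboots, so if the volumes were previously force-mounted that way, remounting them without it may be all that is needed.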
Questions: Is there a way to resolve this problem? If so, how?
Try re-signaturing the volumes and assigning new UUIDs; re-signaturing will not cause any data loss. Also, what do the VMkernel logs say about the volumes being unmounted?
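For reference, a sketch of the two steps suggested above from the ESXi shell (the label is a placeholder; after resignaturing, the datastore is remounted under a new "snap-" prefixed name and VMs must be re-registered):

```shell
# Resignature a snapshot volume, assigning it a new UUID
esxcli storage vmfs snapshot resignature -l "Datastore2"

# Check the VMkernel log for snapshot/mount-related messages
grep -i "snapshot" /var/log/vmkernel.log
```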
Recreating the signatures does work, but it affects the VMs and trashes their network settings. If possible, I want to avoid this.