Hi Guys,
New to the forum but here goes:
We have a new ESXi 6.5u1 install that was running reliably (albeit slow) on a non-cached RAID configuration.
Replaced RAID card with cached unit, booted server, updated RAID config to the new controller and booted into ESXi without an issue.
However, the Datastore vanished and the VMs are in limbo without it.
The only work-around I could find suggested using the vSphere Client to connect in and re-add the Datastore (without formatting it), but 6.5-U1 removes that option.
Any other gurus out there able to offer a solution for 6.5u1?
Any assistance would be appreciated.
Mike
If you replace a RAID adapter with another of the same model, the existing RAID configuration should be preserved. But the existing VMFS datastore will be detected as a snapshot.
You have two options in this case:
1. Mount it keeping the existing signature, with the persistent mount flag
2. Mount it with resignaturing. You will then have to re-add the VMs to the ESXi inventory.
See https://kb.vmware.com/s/article/1011387 for more details and commands.
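The two options above map to esxcli commands along these lines; this is a sketch, and the volume label "datastore1" is just a placeholder for whatever the snapshot list actually reports on your host:

```shell
# List VMFS volumes the host has detected as snapshots/replicas
esxcli storage vmfs snapshot list

# Option 1: mount keeping the existing signature (persists across reboots)
esxcli storage vmfs snapshot mount -l "datastore1"

# Option 2: write a new signature to the volume; any VMs on it must be
# re-registered with the host afterwards
esxcli storage vmfs snapshot resignature -l "datastore1"
```

Run the snapshot list first and use the label (or UUID with -u) it shows you.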
Seriously? You replaced a RAID card thinking it would all just re-appear the next time?
But thank heavens you made a backup first and set up a new datastore for an easy restore.
Normally you should be able to access it. I am guessing it has been recognized as a snapshotted volume, and that is why it is not showing up. You probably need to resignature it. Go to the command line and run:
esxcli storage filesystem list
That should give you all volumes, mounted and unmounted, that are attached to the host. If you see it listed with "snap" in the name, it is probably easiest to resignature it. No point in copy/pasting here; you can find how to do that from the command line at: https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vcli.examples.doc%2Fcli_manage_file...
Thanks depping & Finikiez
The CLI listed the filesystems without an issue, so the RAID card did recover the array.
Mount Point                                        Volume Name  UUID                                 Mounted  Type  Size        Free
-------------------------------------------------  -----------  -----------------------------------  -------  ----  ----------  ----------
/vmfs/volumes/55b4c773-25fab634-1dde-25ae04da43a1               55b4c773-25fab634-1dde-25ae04da43a1  true     vfat  261853184   261844992
/vmfs/volumes/5a22f7da-be4a6b70-9ab4-7cd30adf3630               5a22f7da-be4a6b70-9ab4-7cd30adf3630  true     vfat  299712512   83836928
/vmfs/volumes/8b0ecf7a-08246bfc-2750-7d3ceaabb7c2               8b0ecf7a-08246bfc-2750-7d3ceaabb7c2  true     vfat  261853184   100679680
/vmfs/volumes/5a22f7ed-0f922508-1cb8-7cd30adf3630               5a22f7ed-0f922508-1cb8-7cd30adf3630  true     vfat  4293591040  4270850048
I followed Finikiez's advice on step two: resignatured the volume, renamed it within the console, and re-added the VMs to the inventory.
Volume Name: datastore1
VMFS UUID: 5a22f7e7-6d458c04-7dd9-7cd30adf3630
Can mount: true
Reason for un-mountability:
Can resignature: true
Reason for non-resignaturability:
Unresolved Extent Count: 1
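For anyone following along, re-adding the VMs from the ESXi shell can be done with vim-cmd; the datastore and VM paths below are placeholders, not the ones from this environment:

```shell
# Register a VM with the host inventory by pointing at its .vmx file
# (substitute your own datastore and VM directory names)
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx
```

Repeat once per VM; each call prints the new inventory ID on success.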
Thank you both for the assistance, it is sincerely appreciated!
I got the environment online the following day without much additional hassle and have only been able to get back to the forum today to update.
I'll dig a little deeper to figure out how to remove the old invalid VM entry from Inventory.
You can remove invalid VMs from the host's inventory by right-clicking on them and choosing 'Remove from inventory'.
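If the UI option is greyed out for a stale entry, the same thing can be done from the shell; the ID 42 below is just an example value taken from the listing:

```shell
# List all registered VMs with their inventory IDs
vim-cmd vmsvc/getallvms

# Unregister the stale entry by its ID (e.g. 42) -- this only removes it
# from inventory, it does not delete any files on the datastore
vim-cmd vmsvc/unregister 42
```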