I have a new DL380 Gen10 with two PCIe NVMe drives. As a test I set up a Windows Server 2016 VM, gave it an equal-sized virtual disk from each NVMe datastore, converted the two disks to Dynamic in Windows, and then set up disk mirroring in Windows across all partitions on disks 0 and 1. I see two boot options when I start the VM - 1. Windows Server 2016 and 2. Windows Server 2016 - secondary plex - so I think the Windows mirroring is working.
My goal is that if one NVMe drive fails, I can easily recover and run this VM (and others to come) from the remaining good drive.
I've found the VMware document "Set Up Dynamic Disk Mirroring", which covers SAN LUNs - a slightly different scenario from mine - and I see that it says to add a couple of advanced options pertaining to the SCSI controller: returnNoConnectDuringAPD and returnBusyOnNoConnectStatus.
In my case, do I need to set these on the NVMe controller I added to this VM's settings?
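For reference, my understanding is that the doc has you add entries like these to the VM's .vmx file (scsi0 follows the doc's SCSI-controller example; I'm not sure whether an NVMe controller would use an nvme0 prefix instead, or whether these settings apply to it at all - that's part of my question):

```
scsi0.returnNoConnectDuringAPD = "TRUE"
scsi0.returnBusyOnNoConnectStatus = "FALSE"
```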
I also browsed the datastores NVMe1 and NVMe2 (the not-so-colorful names I gave the two NVMe drives). The ESXi metadata files - the .vswp, .nvram, .vmx, logs, etc. - live only in the VM's folder on NVMe1. The VM's folder on NVMe2 contains only the .vmdk.
If NVMe1 were to fail, can I recover with just the .vmdk on NVMe2? If not, which files besides the .vmdk are critical, and is there a way to keep them in sync between NVMe1 and NVMe2, or would an occasional static copy do?
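In case it helps frame the question: for the "occasional static copy" option, I assume I could script something like this from the ESXi shell (the VM folder name Win2016Test is just my example; the datastore names are mine):

```shell
# Run in the ESXi shell over SSH. Copies the small metadata files
# from the primary VM folder to the matching folder on the second datastore.
# The .vswp is skipped deliberately - it's recreated at power-on.
SRC=/vmfs/volumes/NVMe1/Win2016Test
DST=/vmfs/volumes/NVMe2/Win2016Test
for f in "$SRC"/*.vmx "$SRC"/*.vmxf "$SRC"/*.nvram; do
  cp "$f" "$DST/"
done
```

But I don't know whether a stale copy of these files is good enough to boot from, which is really what I'm asking.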
The team who will eventually use this server doesn't really care; they say they can easily rebuild VMs if a drive fails, and they might actually prefer to have the second NVMe drive available for more VMs.
But it offends my IT sensibilities not to at least try to set up some sort of RAID here and be able to recover more quickly should a drive fail.
I haven't received any replies, so I tried pulling each drive one at a time to see whether my Windows VM would still start. No luck either way: not with NVMe1 (which holds all the ancillary VM files) removed, and not with NVMe2 (which holds only the mirrored VMDK) removed.
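In case it's relevant to any reply: what I expected to be able to do after a failure of NVMe1 was roughly the following from the ESXi shell - copy a saved .vmx next to the surviving .vmdk, fix its disk path, then register and power on the VM (the VM folder name Win2016Test is my example; I'm not certain these are the right steps, which is partly why I'm asking):

```shell
# Register the VM from the surviving datastore and power it on.
vim-cmd solo/registervm /vmfs/volumes/NVMe2/Win2016Test/Win2016Test.vmx
vim-cmd vmsvc/getallvms        # note the new VM's ID in the output
vim-cmd vmsvc/power.on <vmid>  # substitute the ID from the previous step
```

If the right approach is something else entirely, I'd be glad to hear it.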
I guess it's time to consult VMware Support.