Hi all
Upgraded from 5.5U2 to 6.0
Had a NAS VM running that used two disks in (virtual) RDM mode to get direct disk access and SMART status - the mappings were created with the -z flag.
After the upgrade to 6.0, ESXi now mounts the drives - which form a software RAID 1 holding a UFS filesystem - at /vmfs/volumes/sink. This only happened after the second boot, but now there's no way back for me.
On the first boot after the upgrade I was still able to assign the disks to the VM and boot it up. From the next reboot on, the problem above occurred.
Maybe worth noting that the volume is not mounted as a VMFS datastore (at least it appears so) but directly in the /vmfs/volumes folder - and it's not a symlink like the usual datastores are.
Already tried:
/ recreating the RDM mapping files with vmkfstools
/ unmounting the volume at /vmfs/volumes/sink via esxcli storage filesystem unmount (using all of the -u/-n/-p options, but ESXi says it "can't find the nas disk", which doesn't really make sense)
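For reference, the commands I used looked roughly like this (the device ID, datastore, and VM folder below are placeholders, not my actual values):

$ # recreate the RDM mapping file pointing at the raw device
$ vmkfstools -z /vmfs/devices/disks/&lt;device-id&gt; /vmfs/volumes/&lt;datastore&gt;/&lt;vm&gt;/nas-rdm.vmdk
$ # try to unmount the auto-mounted volume by path (also tried -u with the UUID)
$ esxcli storage filesystem unmount -p /vmfs/volumes/sink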
There is a thread on Super User (not mine) with the same issue and similar details:
vmware - ESXi 6.0 mounting RDM as local volume - Super User
For the moment, I have downgraded to 5.5U2 again to get the VM working.
Any help is appreciated.
Cheers
Iam
It's possible to blacklist the ufs module with the following esxcli command, so that UFS volumes are no longer auto-mounted. A host reboot is required:
$ esxcli system module set -m ufs -e false
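To check whether the module is actually disabled - and to undo the change later - something like this should work (reboot the host afterwards in both cases):

$ # show the module's enabled/loaded state
$ esxcli system module list | grep ufs
$ # re-enable auto-mounting of UFS volumes
$ esxcli system module set -m ufs -e true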