We have a few NFS-based datastores (Server for NFS on Windows) that become inactive whenever we have to reboot the hosting server. The only ways I've found to get the datastores active again are 1) restart the ESXi host, 2) unmount and remount the datastore in ESXi or vCenter, or 3) wait, and it will apparently reactivate at some point on its own. All of these have drawbacks. #2 is the fastest I've found, but whenever I do that, I lose the vCenter permissions on the datastore, so I have to spend time reassigning permissions to keep users from accidentally creating VMs there. Is there a different way to explicitly reactivate inactive NFS datastores without unmounting? Or, alternatively, is there a way to force #3 to happen more quickly?
Thanks
Mike
Have you tried this command?
esxcfg-nas -r ----> This will restore all NAS mounts from the config file.
~ # esxcfg-nas
esxcfg-nas <options> [<label>]
-a|--add Add a new NAS filesystem to /vmfs volumes.
Requires --host and --share options.
Use --readonly option only for readonly access.
-o|--host <host> Set the host name or ip address for a NAS mount.
-s|--share <share> Set the name of the NAS share on the remote system.
-y|--readonly Add the new NAS filesystem with readonly access.
-d|--delete Unmount and delete a filesystem.
-l|--list List the currently mounted NAS file systems.
-r|--restore Restore all NAS mounts from the configuration file.
(FOR INTERNAL USE ONLY).
-h|--help Show this message.
Just tried that now; it appears not to have worked. I did check the NFS host to make sure Server for NFS is running, and I even restarted it.
[root@seicvsphere:~] esxcfg-nas -l
vmbackup is /vmbackup from seicfs2.corp.leidos.com mounted unavailable
installs is /installs from seicfileshare.corp.leidos.com mounted available
[root@seicvsphere:~] esxcfg-nas -r
[root@seicvsphere:~] esxcfg-nas -l
vmbackup is /vmbackup from seicfs2.corp.leidos.com mounted unavailable
installs is /installs from seicfileshare.corp.leidos.com mounted available
[root@seicvsphere:~] esxcfg-nas -r
[root@seicvsphere:~] esxcfg-nas -l
vmbackup is /vmbackup from seicfs2.corp.leidos.com mounted unavailable
installs is /installs from seicfileshare.corp.leidos.com mounted available
[root@seicvsphere:~]
It appears that the esxcfg-nas command does work with a different set of options, and it retains the datastore permissions in vCenter. Using the info in How To Fix An Unavailable vSphere NAS Mount using SSH - Wahl Network, I was able to make the datastore available again with the following commands:
esxcfg-nas -d vmbackup
esxcfg-nas -a -o seicfs2.corp.leidos.com -s /vmbackup vmbackup
I had been using the esxcli storage nfs commands previously, and those were more destructive to the datastore settings.
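For anyone who needs to do this for more than one datastore, the delete/re-add steps above can be scripted. Below is a minimal sketch that parses the `esxcfg-nas -l` output format shown earlier in this thread and prints (rather than runs) the remount commands for any mount reported "unavailable", so you can review them first. The function name and the dry-run approach are my own; the demo input is just the output captured above.

```shell
#!/bin/sh
# Emit delete/re-add commands for every NFS datastore that esxcfg-nas
# reports as "unavailable". Lines are expected to look like:
#   vmbackup is /vmbackup from seicfs2.corp.leidos.com mounted unavailable
emit_remount_cmds() {
    while read -r label _ share _ host _ state; do
        if [ "$state" = "unavailable" ]; then
            printf 'esxcfg-nas -d %s\n' "$label"
            printf 'esxcfg-nas -a -o %s -s %s %s\n' "$host" "$share" "$label"
        fi
    done
}

# On a live host you would pipe the real output in:
#   esxcfg-nas -l | emit_remount_cmds
# Demo using the output captured in this thread:
emit_remount_cmds <<'EOF'
vmbackup is /vmbackup from seicfs2.corp.leidos.com mounted unavailable
installs is /installs from seicfileshare.corp.leidos.com mounted available
EOF
```

Once you have eyeballed the printed commands, you can pipe them to `sh` to execute, or replace the printf calls with the real commands.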