You don't need to shut down anything; you can disconnect and reconnect your storage this way (from the console):
esxcfg-nas --del nfsname
esxcfg-nas --add nfsname --host nfsserver --share nfsexportname
You can also do this from the VirtualCenter interface: just remove the NFS datastore and then reconnect it.
Anyway, ESX 3 should reconnect NFS storage automagically when it's available again, so this is a bit strange.
Anyway, hope this helps.
What do you do when the NFS mount has gone inactive and there was a VM up and running using that storage space? I get this...
# esxcfg-nas -d
Error performing operation: NFS Error: Unable to Unmount filesystem: Busy
In the VI client it's marked inactive and I can't get a console to the VM as the config file does not exist!
Also, I successfully removed and re-added another NFS mount from the console, but in the GUI it's still marked as inactive for some reason.
Perhaps your firewall settings need to be tweaked? Maybe the reboot set them back to a state where they are blocking your NFS mounts?
At the command line as root, type esxcfg-nas -r. That'll reconnect to the NFS store. Then type esxcfg-nas -l, which will show you if the NFS volume is remounted properly.
You may also need to perform a service mgmt-vmware restart to get VC to see things properly, as well.
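Putting those steps together, a minimal recovery sequence might look like the following. This is a sketch: these commands only run from the service console of an ESX 3.x host itself, and the exact output format of `esxcfg-nas -l` varies by release.

```shell
# Ask the host to retry all configured NFS mounts that are currently down.
esxcfg-nas -r
# List the NFS datastores and confirm the volume now shows as mounted.
esxcfg-nas -l
# If the VI client / VirtualCenter still shows the datastore as inactive,
# restart the management agents so the host state is re-read.
service mgmt-vmware restart
```

Note that restarting mgmt-vmware only affects the management agents, not running VMs, but it will briefly disconnect the host from VirtualCenter.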
Also, verify that the NFS server is working properly. When you disconnect a mount, the host tries to communicate with the NFS server; if it can't reach the server at all, the disconnect should still succeed, but if the communication is garbled it may fail.
Verify what you are receiving on your NFS Server.
A NAS/NFS datastore uses the VMkernel port group and does not require changes to the Service Console firewall.
I am aware that you have to change the firewall to allow iSCSI, but not NFS?
It was my VC VM running on the NFS share, therefore I could not shutdown the VM. Also logging into the ESX host directly I could not shutdown the VM because the .vmx file could not be accessed as it was on the shared NFS storage that could not be accessed!
So I had to SSH into the ESX host and kill the vmware process, which is not good because the VM could end up corrupted... but this was the only way I could remove and reconnect the NFS share without it complaining 'already in use'.
My NFS server was definitely working fine because another client was connecting to the exports ok.
I see your issue. It sounds like there was an outstanding NFS lock on the share due to the running VM; the lock was most likely on the VMDK. Any outstanding lock would keep the share from being remounted. I think you did the only thing you could do in this situation.
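If you can reach the export from another NFS client, you can look for the lock files directly, since ESX implements its locking on NFS datastores with `.lck-*` files in the VM's directory. The server name, export path, and VM directory below are placeholders, not values from this thread:

```shell
# From any machine that can mount the export (all paths hypothetical):
mount nfsserver:/vol/vmstore /mnt/check
ls -la /mnt/check/myvm/.lck-*
# A stale .lck-* file left behind by a crashed or killed host can block
# the VM or the remount. Remove it only if you are certain no ESX host
# is still actually using the VM, otherwise you risk corruption.
```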
What process did you kill? I am stuck in the same situation.
I killed the process running my VM - you can find out which one using ps -ax and grepping for your VM name...
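To make that concrete, here is a sketch of pulling the PID out of the ps output. The VM name `myvm` and the sample ps line are made up for illustration; on a live ESX host you would pipe real `ps auxwww` output instead of this captured sample:

```shell
# One captured ps line for a hypothetical VM 'myvm' (fields: user, pid, ...):
SAMPLE='root  1893  0.5  2.1 vmware-vmx -@ pipe=/tmp/vmx58 /vmfs/volumes/nfsstore/myvm/myvm.vmx'
# Filter by VM name and extract the PID (second column of ps output).
PID=$(printf '%s\n' "$SAMPLE" | grep -i 'myvm' | awk '{print $2}')
echo "$PID"
# On a live host, as a last resort: kill -9 "$PID"
# (this risks leaving the guest filesystem dirty, as noted above).
```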
Sorry about the late reply. I was looking for something else and stumbled on this thread. Did you ever try:
I'm having the same problem. I have two Openfiler NAS boxes using DRBD and Heartbeat for HA that serve NFS shares to two ESX 3.5 servers.
When I do a failover between the NAS boxes it works correctly, but the ESX servers lose their connections to the NFS datastores, and
I can't delete them using esxcfg-nas -d because the VMs stored in the shares are running.
Is there any way to force the deletion of an NFS datastore from the ESX console?
Looks like a new patch was released for 3.5 that fixes an inactive NFS mount after a reboot, especially if the NFS mount is referenced via DNS hostname:
Specific info from KB:
After restarting an ESX host, some NFS datastores might appear inaccessible through the user interface although they are mounted and accessible through the command line interface. This issue is more likely to occur if an NFS datastore is configured using a host name or a fully qualified domain name (FQDN) that requires DNS name resolution.
I have a setup with a Windows 2012 server configured as a NAS for my VMware servers (it's all free equipment, so I do what I can with it). If I'm ever doing maintenance where I have to take down the Windows NAS and the ESX boxes together, the NAS never reconnects automatically. I've left it for hours and it just won't connect.
Diztorted's suggestion worked perfectly for me. The NAS reconnected immediately and was accessible as it should be. I'm not entirely sure why it isn't reconnecting by itself, but I'm completely happy with this workaround, especially since I don't restart the system often.