I mount a number of NFS datastores on my ESXi 5.5 hosts, but after every reboot the stores are inactive. I then have to unmount and add the stores again, but after the next reboot the stores are inactive again. I have also tried mounting the stores from ESXCLI, but the stores are still inactive after a reboot. Do I have to set the stores as persistent in some way when mounting them? Any hints on this?
The NFS server in use is a NetApp filer.
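For what it's worth, this is a sketch of how I add the mounts from the ESXi shell (the hostname and export path below are placeholders, not my real ones). As far as I know, NFS mounts added this way are supposed to persist across reboots without any extra flag:

```shell
# Add an NFS datastore from the ESXi shell (run on the host itself).
# "netapp01" and "/vol/datastore1" are placeholders for the real filer
# hostname and export path.
esxcli storage nfs add --host=netapp01 --share=/vol/datastore1 --volume-name=ds1

# Confirm the mount and its current state:
esxcli storage nfs list
```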
Please go through this KB; it should help you get past this issue when using a NetApp filer.
Hi and thanks for your answer!
I don't think this KB describes exactly my problem. The datastores never become unavailable while the ESXi hosts are running, but the stores are never mounted after a reboot. After a reboot the stores are marked as unmounted/inactive. I have tried adjusting the NFS.MaxQueueDepth parameter, but the problem is still the same.
Research pointed me toward the Advanced Software settings in vSphere. Under NFS I changed three settings. Every "NFS.HeartbeatFrequency" seconds (12 by default) the ESXi server checks that the NFS datastore is reachable. It appeared that ESXi stopped trying to connect to the NFS share before the physical connection was even available. Raising the low defaults could correct the issue: the datastores should then be available after a reboot and stay available.
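For reference, the heartbeat setting can also be changed from the ESXi shell. A sketch, with an illustrative value only (tune it to your own environment):

```shell
# Raise the NFS heartbeat frequency from its 12-second default
# (the value 20 is only an example, not a recommendation).
esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 20

# Check the value that is now in effect:
esxcli system settings advanced list -o /NFS/HeartbeatFrequency
```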
We're having the same problem. After an ESXi 5.5 host reboot all NFS mounts are inactive. When we run "esxcfg-nas -r" everything is fine again. We tried different NFS timeout settings, but that doesn't solve the issue. We're only seeing it on one host; the other two hosts in the same cluster are fine.
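In case it helps others, these are the commands we run in the ESXi shell to check and recover the mounts:

```shell
# Show the configured NFS mounts and whether they are currently mounted:
esxcfg-nas -l

# Re-attempt mounting every configured NFS datastore:
esxcfg-nas -r
```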
Thanks for the tip on "esxcfg-nas -r". When I did that, I saw it was trying to mount a datastore related to Veeam by the hostname and since the VM running DNS hadn't been started yet it failed. Apparently it doesn't try to mount the rest of the NFS shares if it fails on one. Once I got DNS up and working, I re-ran the command and all the shares mounted without a problem. My hosts are 5.0 and 5.1 with a 5.5 vCenter by the way.
We have sort of the same issue: when we reboot the server the datastore does not mount, but if we choose "keep the existing signature" it mounts right up. We have several others where the datastore comes up with no issues after a reboot. I read on another forum that someone hit the "assign new signature" button instead and the datastore mounting problem went away. So now to my question: can I copy my existing datastore to an external flash drive, just in case renaming my datastore does not solve my issue, or in case my datastore gets corrupted somehow when renaming it?
Ran into the same issue. ESXi 5.5U2 with NFS datastores on NetApp cDOT v8.2 on a two-node FAS3250 cluster. Establishing a connection is no problem, but rebooting the ESXi host causes the NFS datastores to not mount. They are shown as "unmounted/inaccessible" in the vSphere client, both native and web. What is very odd, though, is that if I go to browse them ... I can! So ESXi says they are not mounted, and an ls of /vmfs/volumes confirms this: it shows none of the NFS datastores, neither the UUID nor the symbolic link from the datastore name to the UUID. But I can browse the datastores from the clients. Very odd.
So I did a 'grep -i nfs *.log' in the log directory to see if any clue might show up - and indeed there is a clue:
syslog.log:2014-10-03T21:18:12Z jumpstart: unhandled exception whilst processing restore-nfs-volumes: Unable to resolve hostname 'cluster-name.domain.uri'
So I unmounted the offending NFS datastore. Please note that all my other NFS datastores are defined with the IP address and NOT the DNS hostname; only this ONE datastore uses the DNS hostname.
Then I rebooted. Bingo! All the datastores came up no problem.
Analysis: the datastore mounts occur before the network path to the DNS server comes up. Thus the dependency on name resolution is not satisfied and the DNS-named datastore is not mounted. But sadly, and very badly, NONE of the IP-addressed NFS datastores come up either. Very much an all-or-nothing situation.
Workaround/Fix #1: do not use the DNS hostname; use the IP address for your NFS datastores instead. At least until VMware can figure out what is really going on and fix it.
Workaround/Fix #2: add the IP address / hostname pair to the ESXi host's /etc/hosts file.
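A sketch of workaround #2 from the ESXi shell; the IP address and hostname below are placeholders for your filer's real address and the name used in the datastore definition:

```shell
# Map the filer's hostname to its IP locally, so the mount at boot does not
# depend on the DNS server (which may still be down at that point).
echo "192.0.2.10  cluster-name.domain.uri" >> /etc/hosts
```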
Systems Administrator II
Kelowna, BC, Canada
I have the same issue with ESXi 5.5U2. The NFS server is a generic RHEL 6 box.
IP NFS mounts come up fine.
DNS NFS mounts fail at reboot.
We need DNS NFS mounts to work for NFS server redundancy (failover with rsync on that end).
How do we file a ticket? 🙂