JR_G
Contributor

BUG: VMs Invalid After Reboot (NFS) / ESXi 6.7

I am using NFS for a datastore. Whenever I reboot the host, my VMs become invalid. A way around it is to SSH into ESXi (6.7) and run:

# vim-cmd /vmsvc/unregister ##

(where ## is the ID of the now-invalid VM)
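For context, the full recovery looks roughly like this (the VM ID and paths are just examples from my setup). vmsvc/getallvms flags the broken VMs with "Skipping invalid VM '##'"; I actually re-add the VM through the web UI, but solo/registervm should do the same thing:

# vim-cmd vmsvc/getallvms
# vim-cmd vmsvc/unregister 12
# vim-cmd solo/registervm /vmfs/volumes/MyNFS/MyVM/MyVM.vmx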

Am I missing something here? This is a home lab used for testing. I'm currently using an unmanaged gigabit switch, and I have seen some people have luck with a managed switch with PortFast enabled. Is that the only way to fix this, or is this a bug? It almost seems as if ESXi reads its VM inventory before mounting the NFS datastore. I've been digging through the logs but can't find anything useful that points to where the problem is. Any help would be appreciated.
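If the cause really is boot ordering, one workaround I've been thinking about (untested sketch; "MyNFS" stands in for my datastore name) is to have /etc/rc.local.d/local.sh wait for the mount and then reload whatever came up invalid:

count=0
# Poll every 10 seconds, up to ~5 minutes, until the NFS datastore is mounted.
while [ ! -d /vmfs/volumes/MyNFS ] && [ "$count" -lt 30 ]; do
    sleep 10
    count=$((count + 1))
done
# getallvms reports broken registrations as "Skipping invalid VM 'N'";
# vmsvc/reload re-reads each VM's .vmx without unregistering it.
for id in $(vim-cmd vmsvc/getallvms 2>&1 | grep "Skipping invalid VM" | sed 's/[^0-9]//g'); do
    vim-cmd vmsvc/reload "$id"
done

I have no idea whether local.sh runs late enough in the boot for hostd (and therefore vim-cmd) to be available, so treat that as a sketch, not a fix.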

The NFS datastore is already active by the time I get into the web UI. When I manually unregister the VM and re-register it by adding it back to the inventory, I see the following in the logs:

2019-08-24T03:22:43.384Z cpu0:2100450)WARNING: NFS41: NFS41FileDoCloseFile:3030: file handle close on obj 0x430611939fe0 failed: Stale file handle

2019-08-24T03:22:43.384Z cpu0:2100450)WARNING: NFS41: NFS41FileOpCloseFile:3493: NFS41FileCloseFile failed: Stale file handle

These warnings begin when the VM is started but stop once it is up and running.
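For reference, I pulled those lines out of the vmkernel log with:

# grep NFS41 /var/log/vmkernel.log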
