VMware Cloud Community
netlib
Enthusiast

Able to mount NFS datastore on one ESXi 6.5 host but not the other

This is a long shot, but I am hoping that someone can suggest something to look at. I have two ESXi 6.5 hosts and one Windows 2016 Server.  All three machines are behind the same router.  I created an NFS share on the Windows server. I was able to mount it on one of the ESXi servers, but not the other. On the second one I received the popup message:

Failed to mount NFS datastore SWLIBNFS - NFS mount 192.168.123.20:SWLIBNFS failed: The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it.

Here are the details:

Two ESXi 6.5 Hosts:

  • 192.168.123.50
  • 192.168.123.141

On the Windows 2016 server (192.168.123.20) I did this:

  • Added role: Server for NFS and rebooted
  • Added NFS Share to D:\SWLIB named SWLIBNFS
  • Kerberos boxes: unchecked
  • No server authentication: checked
  • Enabled unmapped user access: checked
  • Allow unmapped user access by UID/GID: selected
  • Added permissions for both ESX servers:
    • Permission: Read/Write
    • Root Access: Allowed
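For anyone who wants to reproduce the same setup without clicking through the wizard: the share and permissions above can also be configured from PowerShell on the Windows server. This is only a sketch, assuming the share name SWLIBNFS and path D:\SWLIB from above and the NFS cmdlets that come with the Server for NFS role on Windows Server 2016:

```
# Create the NFS share with no Kerberos (AUTH_SYS only) and unmapped user access
New-NfsShare -Name "SWLIBNFS" -Path "D:\SWLIB" -Authentication sys -EnableUnmappedAccess $true

# Grant read/write plus root access to each ESXi host explicitly
Grant-NfsSharePermission -Name "SWLIBNFS" -ClientName "192.168.123.50" -ClientType host -Permission readwrite -AllowRootAccess $true
Grant-NfsSharePermission -Name "SWLIBNFS" -ClientName "192.168.123.141" -ClientType host -Permission readwrite -AllowRootAccess $true
```

The advantage of doing it this way is that the resulting permission entries are easy to compare between hosts with Get-NfsSharePermission.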

On ESXi host .50 I did this:

  • Selected datastores
  • New Datastore
  • Mount NFS datastore
  • Name: SWLIBNFS
  • NFS Server: 192.168.123.20
  • NFS share: SWLIBNFS
  • NFS 3
  • Result: mounted successfully

Next, on .141 I did exactly the same thing, but got the error message shown above. Is there some limitation that allows an NFS datastore to be mounted on only one host?
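In case it helps with troubleshooting: the same mount can be attempted from the ESXi shell, which sometimes gives a more specific error than the UI. A sketch using the standard esxcli NFS namespace, with the server IP and share name taken from above:

```
# Attempt the NFS v3 mount from the failing host's shell
esxcli storage nfs add --host=192.168.123.20 --share=SWLIBNFS --volume-name=SWLIBNFS

# List the NFS datastores the host currently has mounted
esxcli storage nfs list

# The NFS client logs the server's reply to mount requests here
tail /var/log/vmkernel.log
```

Comparing the vmkernel.log entries on the working and the failing host should show whether the server is rejecting the mount request itself or whether the request never reaches it.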

Thanks for any guidance.

4 Replies
a_p_
Leadership

Although I assume that you've already done this, please double-check the permitted hosts and their permissions on the NFS server side.

If one ESXi host can connect but not the other, then it looks like a configuration issue on the Windows server to me.

André

netlib
Enthusiast

You were right. In the share permissions there was an entry for "All Machines" which was initially set to No Access. When I set it to Allowed, I was able to add the datastore to the second host. Why that was needed for one host and not the other is beyond me, though.
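For anyone hitting the same thing: that "All Machines" entry can also be inspected and changed from PowerShell on the Windows server. A hedged sketch, assuming the Server for NFS cmdlets and that "All Machines" shows up as the built-in client type (as it does on my server):

```
# Show all permission entries for the share, including the "All Machines" default
Get-NfsSharePermission -Name "SWLIBNFS"

# Change the built-in "All Machines" entry from no-access to read/write
Grant-NfsSharePermission -Name "SWLIBNFS" -ClientName "All Machines" -ClientType builtin -Permission readwrite
```

That said, leaving "All Machines" on No Access and granting each ESXi host an explicit entry would be the tighter configuration, if it worked consistently.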

a_p_
Leadership

Glad to read that it's working now.

Unfortunately, I can't tell you for sure what exactly caused this behavior without seeing the settings. However, I assume that "No Access" is the general setting for initiators that are not explicitly configured, and that initially only one of them had been configured.

André

casaub
Enthusiast

Would you mind sharing your cabling setup, please? Did you connect the NFS server directly to the ESXi host (NFS <--> ESXi) or through a switch (NFS <--> switch <--> ESXi)?

How did you specify the host/IP/network on the NFS server side - as a specific IP address, a network or just using an asterisk (*)?

Thanks
