MJDiAmore87
Contributor

NFS Share (w/ functioning connection to Default Networking Stack) Fails to Connect on Custom TCP/IP Stack

In my lab environment I am attempting to transition my network shares from the main VLAN (running over a 1G backhaul) onto a dedicated subnet with a 10G connection.

My NFS server currently exports 4 shares.  The first 3 are exported to the management network, which the default TCP/IP stack on each of my 3 ESXi (6.7 U1b) hosts reaches over vmk0.
These shares are working as intended and are currently connected.  I have modified /etc/exports and run exportfs -ra to move the 4th share to the new subnet, which a custom (4th) TCP/IP stack on each of the 3 ESXi hosts reaches over vmk1.
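For illustration, the exports now look roughly like this (paths and subnets below are placeholders, not my actual values):

    /srv/nfs/share1  192.168.1.0/24(rw,sync,no_root_squash)
    /srv/nfs/share2  192.168.1.0/24(rw,sync,no_root_squash)
    /srv/nfs/share3  192.168.1.0/24(rw,sync,no_root_squash)
    /srv/nfs/share4  10.10.10.0/24(rw,sync,no_root_squash)

after which I ran exportfs -ra on the server (exportfs -v can be used to confirm the active export list).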

All ESXi hosts can vmkping the NFS server on its second IP over the 10G backhaul, and the NFS server can ping back to all 3 ESXi hosts from its second interface.  However, if I try to mount the 4th share/datastore, the connection fails.   nc -z <NFS_2nd_IP> 2049 also hangs with no response (the same check against the 1st IP returns a successful connection message).
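The per-host tests look roughly like this (the custom stack name is a placeholder for my actual one; vmkping's -S flag selects the netstack):

    vmkping -I vmk1 -S <customStackName> <NFS_2nd_IP>    (succeeds)
    nc -z <NFS_1st_IP> 2049                              (success message)
    nc -z <NFS_2nd_IP> 2049                              (hangs, no response)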

Is there some limitation that prevents the same NFS server from exposing shares on multiple interfaces?  Or do I need to use one of the 2 pre-configured custom stacks so that the services required for NFS are available?
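For context, this is roughly how the netstack/vmk bindings can be verified and how an equivalent CLI mount attempt would look (share path and datastore name below are placeholders):

    esxcli network ip netstack list       (shows defaultTcpipStack plus the custom stack)
    esxcli network ip interface list      (shows vmk0 on defaultTcpipStack, vmk1 on the custom stack)
    esxcli storage nfs add -H <NFS_2nd_IP> -s /srv/nfs/share4 -v share4_10g    (this is the mount that fails)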
