VMware Cloud Community
jbaird
Contributor

vSphere 6.5/NFSv4.1 configuration and design

Hi all,

Most of the documentation, blog posts, etc. I have found discusses pre-6.0 NFSv3 implementations and design. Would someone mind commenting on my design proposal to see if it makes sense and follows best practices?

This is a (3) node deployment. Each host has (4) 1Gbps interfaces. My intention is to create two distributed virtual switches:

* dvSwitch1 (2x1Gbps) - management, FT, vMotion, NFS v4.1 for datastores (each on dedicated VLANs)

* dvSwitch2 (2x1Gbps) - VM traffic

As I understand it, NFSv4.1 introduces 'session trunking' support, which will allow me to specify multiple IPs for my NFS server(s). This enables multipathing without requiring multiple subnets/VLANs to distribute NFS traffic across the physical interfaces, as was previously required with NFSv3.
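For example, my understanding is that I'd mount the datastore once and pass both server IPs in the same command (addresses, export path, and datastore name below are placeholders for my environment):

```shell
# Mount an NFS 4.1 datastore with two server IPs so ESXi can use
# session trunking (multipathing) across both addresses.
esxcli storage nfs41 add \
    --hosts 192.168.20.10,192.168.20.11 \
    --share /export/ds1 \
    --volume-name nfs41-ds1

# Verify the mount and both server addresses
esxcli storage nfs41 list
```

Please correct me if that's not the right way to enable trunking.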

My questions:

* Should I be concerned with how I handle IP addressing on my NFS server? If I simply create multiple virtual interfaces on the NFS server, is that enough to take advantage of session trunking? Do I need to worry about using IPs with unique least significant bits [1]?

* I'm still a bit confused about how to handle physical link aggregation. Would it be best to use LACP on each pair of physical links (one LAG for dvSwitch1, one for dvSwitch2)? I would prefer LACP over LBT due to ease of switch configuration. However, would using LACP instead of LBT affect the ability of session trunking to distribute load across both physical interfaces in the LACP bundle?
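On the server-side addressing question above, my rough plan on Linux is simply to add a second address on the storage-VLAN interface and export as usual (addresses, interface name, and export path are placeholders):

```shell
# Two addresses on the NFS server's storage-VLAN interface,
# so ESXi can trunk sessions across both.
ip addr add 192.168.20.10/24 dev ens192
ip addr add 192.168.20.11/24 dev ens192

# /etc/exports entry -- the same export is reachable via both IPs:
#   /export/ds1  192.168.20.0/24(rw,no_root_squash,sync)
exportfs -ra
```

Is that sufficient, or does the addressing scheme itself matter for trunking?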

Feel free to point me to any updated white paper that covers NFSv4.1 and vSphere 6.5 if you know of one!

[1] NFS on vSphere Part 1 - A Few Misconceptions - Wahl Network

Thank you!

3 Replies
SmokinJoe59
Enthusiast

Not sure what you're going to get. Dell purchased EMC (and with it a majority stake in VMware), and neither has a strong NFS product. NetApp does, and that's a competitor... FreeBSD and Linux could do it, but I have found that some people are having issues where the mount point goes read-only. I recall it was a bug in the NFS 4.1 code that VMware is using, and as far as I know they have not fixed it.

The only thing I can suggest is to buy a unit from iXsystems and, in the quote, ask for a setup guarantee covering NFS 4.1 with VMware ESXi 6.5.

jbaird
Contributor

I'm planning on using Linux in this case, but I don't think the particular NFS server I use is very relevant here.

Do you have any references/links for the bugs that you mention?

SmokinJoe59
Enthusiast

Here is a discussion:

NFS 4.1 on ESXi 6.5: unable to browse datastore

Here is the bug report for RECLAIM_COMPLETE:

Re: NFS 4.1 export mounts as read-only (RECLAIM_COMPLETE FS failed)

I think it only happens if you're not using Kerberos.
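If you hit it, one quick way to check is to list the NFS 4.1 mounts on the host and look at the Read-Only column (run on the ESXi host; datastore name will vary):

```shell
# List NFS 4.1 mounts on the ESXi host; the Read-Only column shows
# whether the datastore has fallen back to read-only.
esxcli storage nfs41 list
```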
