VMware Cloud Community
Balteck71
Enthusiast

NFS multipathing with vDS

Hello,

Can anyone help me find a suitable reference architecture for adding an NFS v4.1 datastore with multipathing on a vSphere Distributed Switch?

Current configuration:

3 ESXi 6.5 hosts with 2 vDS connected to 4 physical switches.

One vDS is connected with 2 x 10 GbE uplinks (all active NICs) to two 10 GbE switches (10GBE VDS).

One vDS is connected with 4 x 1 GbE uplinks (all active NICs) to two 1 GbE switches (1GBE VDS).

The 10GBE VDS is configured with NIOC and LBT, with the following port groups:

- fast VMs

- vMotion, with vmk2 from each host

- vSAN, with vmk1 from each host

The 1GBE VDS is configured with NIOC and LBT, with the following port groups:

- normal VMs

- Management, with vmk0 from each host
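For context, the current vmk layout per host can be dumped with a rough pyVmomi sketch like the one below (untested; the vCenter address and credentials are placeholders):

```python
# Sketch only: list every host's VMkernel interfaces with IP and port group,
# to document the current layout before adding an NFS vmk.
# vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    for vnic in host.config.network.vnic:  # vmk0, vmk1, vmk2, ...
        dvport = vnic.spec.distributedVirtualPort
        # dvPortgroup key for vDS-backed vmks, standard port group name otherwise
        where = dvport.portgroupKey if dvport else vnic.portgroup
        print("  %s  %s/%s  on %s" % (vnic.device, vnic.spec.ip.ipAddress,
                                      vnic.spec.ip.subnetMask, where))
Disconnect(si)
```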

Now my customer wants to add NFS storage for backups, scratch disks, and a datastore for test VMs.

The NAS has 4 x 1 GbE NICs and supports NFS 4.1 multipathing, and I'm not sure which configuration is best practice for performance and reliability.

First case:

create a new NFS port group on the 10GBE VDS with one new VMkernel port (vmk3), and assign 4 IPs on the same subnet to the NAS

Second case:

create a new NFS port group on the 10GBE VDS with 4 new VMkernel ports (vmk3, vmk4, vmk5, vmk6), and assign 4 IPs on different subnets to the NAS (PVLAN here?)

Third case:

create 4 new NFS port groups on the 10GBE VDS with one new VMkernel port each (vmk3, vmk4, vmk5, vmk6), and assign 4 IPs on different subnets to the NAS (normal VLANs here?)

Or maybe create vmk ports across both vDS (10GBE and 1GBE)? (The mount step itself would look like the sketch below in any of these cases.)
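A rough pyVmomi sketch of that mount step, passing all four NAS addresses in one spec so the NFS 4.1 client can trunk sessions across them (untested; the IPs, share path, datastore name and credentials are placeholders):

```python
# Sketch only: mount one NFS 4.1 datastore on every host, passing all four
# NAS addresses so the NFS client can trunk sessions across them.
# IPs, share path, datastore name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

spec = vim.host.NasVolume.Specification()
spec.remoteHostNames = ["10.10.50.11", "10.10.50.12", "10.10.50.13", "10.10.50.14"]
spec.remoteHost = spec.remoteHostNames[0]   # required single-address field
spec.remotePath = "/export/backup"
spec.localPath = "NFS41-backup"             # datastore name shown in vCenter
spec.type = "NFS41"
spec.accessMode = "readWrite"
spec.securityType = "AUTH_SYS"

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```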

Thank you very much for any help

2 Replies
daphnissov
Immortal

I'd recommend reviewing this document first and fully understanding how NFS 4.1 works and what vSphere does and does not support with it. However, understand that this support does not extend to pNFS, which is commonly forgotten. Cormac has a couple of good articles on this, starting here.

Balteck71
Enthusiast

Thank you very much.

I've already read that document; that's why I have some doubts about the best configuration.

Every guide refers to a few 1 GbE NICs on the ESXi side and a few 1 GbE NICs on the NAS side.

In fact, for example, I have another customer with an ESXi host that has two dedicated vSwitches, each with one VMkernel port and one 1 GbE NIC, connected to two 1 GbE NICs on the NAS box, and with NFS v4.1 I got around a 2 gigabit transfer rate.

So multipathing and load balancing work well.
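(A quick way to verify that is to read back which server addresses each NFS mount was actually configured with; a rough pyVmomi sketch, untested, with placeholder connection details:)

```python
# Sketch only: print the server addresses behind each mounted NFS datastore,
# to confirm that multiple NAS addresses were accepted for the NFS 4.1 volume.
# vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for mount in host.config.fileSystemVolume.mountInfo:
        vol = mount.volume
        if isinstance(vol, vim.host.NasVolume):
            servers = vol.remoteHostNames or [vol.remoteHost]
            print("%s  %s  %s  servers=%s" % (host.name, vol.name, vol.type, servers))
Disconnect(si)
```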

But here I have dual 10 GbE NICs on one vDS, or quad 1 GbE NICs on the other vDS (instead of a standard vSwitch), with NIOC and LBT enabled.

So which of these cases is the best one to adopt?
