msiem
Contributor

VMkernel & custom TCP/IP stack & NFS - are they really necessary?

Greetings,

I've been reading about NFS and VMware best practices, but I still find some aspects confusing, especially the network configuration. We use iSCSI on a daily basis, but there's an opportunity to back up old, deprecated VMs (plenty of them) to a Synology NAS via the NFS protocol.

I have already read the following thread here and the article on why to use a VMkernel port with NFS, but it still seems as if that is not mandatory and is only about shortening the path and avoiding pushing the traffic through a router.


1. We use separate vmks for Mgmt and vMotion (vSwitch0) and for two iSCSI fault domains (vSwitch1 & vSwitch2), and they all share the same default TCP/IP stack. The default stack has a VMkernel default gateway set, along with DNS servers. Given that:

Can I assign the default TCP/IP stack to a new vmk on a new, NAS-dedicated network without setting a default gateway for it? If the default stack is configured with external DNS servers, then theoretically I should be able to reach the external NAS from the new vmk (NAS) the same way vmk0 (Mgmt) communicates with the DNS servers (different subnets), even if the default stack has no default gateway in the same subnet as the NAS.
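
To make the question concrete, here is roughly what I have in mind - a minimal sketch, where vmk5, the "NAS-PG" port group, and all addresses are made-up examples:

    # create a new vmk on the default TCP/IP stack (no stack specified = default)
    esxcli network ip interface add --interface-name=vmk5 --portgroup-name=NAS-PG
    esxcli network ip interface ipv4 set -i vmk5 -t static -I 10.0.50.11 -N 255.255.255.0

    # the default stack has a single routing table shared by all of its vmks,
    # so traffic to any non-local subnet would leave via its one default gateway
    esxcli network ip route ipv4 list

    # test reachability from the new vmk specifically (hypothetical NAS address)
    vmkping -I vmk5 10.0.60.20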


2. Is creating a new (custom) TCP/IP stack and a new vmk for the NAS the only right way to attach NAS storage to an ESXi host?
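
For completeness, this is how I understand a custom stack would be set up, in case that is the answer - again only a sketch, where the stack name "NASstack", vmk6, and all addresses are invented, and I'm not even sure NFS traffic on 6.7 can be pinned to a custom stack (that's part of the question):

    # create a custom TCP/IP stack and bind a new vmk to it
    esxcli network ip netstack add --netstack=NASstack
    esxcli network ip interface add --interface-name=vmk6 --portgroup-name=NAS-PG --netstack=NASstack
    esxcli network ip interface ipv4 set -i vmk6 -t static -I 10.0.60.11 -N 255.255.255.0

    # a custom stack has its own routing table, so it can carry its own
    # default gateway independent of the default stack's gateway
    esxcli network ip route ipv4 add --gateway=10.0.60.1 --network=default --netstack=NASstack

    # ping through the custom stack from vmk6
    vmkping -S NASstack -I vmk6 10.0.60.20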


3. (An abstract example:) If server A can communicate with server B while they are in different subnets (provided the routing is set up), can I theoretically communicate with the NAS server using vmk0 (Mgmt) only? It's essentially the same situation, provided routing between the Mgmt VLAN and the NAS VLAN is in place, and let's say I don't care about congestion on the routed path.
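
In other words, if the router does its job, I would expect something like the following to work end-to-end (the NAS address and export path below are made up):

    # confirm the default stack has a gateway/route toward the NAS VLAN
    esxcli network ip route ipv4 list

    # test the routed path explicitly from the Mgmt vmk
    vmkping -I vmk0 10.0.60.20

    # if that answers, mounting the export over the routed path should follow
    esxcli storage nfs add --host=10.0.60.20 --share=/volume1/vmbackup --volume-name=SynologyNFS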

Regards,


The VMware cluster environment comprises ESXi 6.7.0 build 16075168 hosts, vCenter Server 6.7.0 build 16046713, and vSphere Client version 6.7.0.44000.