VMware Cloud Community
hennish
Hot Shot

Is Load Based Teaming supported for NFS vmkernel ports?

(reposting here, since the Communities 'Move' feature doesn't seem to work)

Hi. My customer has 2 x 10 GbE uplinks per host, which we use in a dvSwitch that handles all types of traffic (mgmt, vMotion, NFS, VM-traffic).

Since they don't have LACP/Etherchannel on their switches, and all NFS exports (on NetApp) are on one single subnet/VLAN, I have two questions:

1. Is it at all supported (by VMware and/or NetApp) to run everything over one single vmkernel port on one single subnet? It's not listed as an alternative in http://www.netapp.com/us/library/technical-reports/tr-3749.html as far as I can see.

2. Is it supported to use Load Based Teaming on the port group of the above mentioned NFS vmkernel port?

I don't think bandwidth will be an issue on these 10 GbE links, but I would like to run LBT if possible. If it's not supported for NFS traffic, I can at least run it on the VM port groups, which I assume will move VMs to another uplink if an uplink's traffic stays above 75 % for more than 30 seconds.
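For clarity, the LBT trigger mentioned above can be sketched roughly like this. This is only an illustrative model of the documented behavior (evaluation every 30 seconds, move when mean utilization exceeds 75 %); the function name and structure are my own, not VMware's implementation.

```python
# Hypothetical sketch of the Load Based Teaming evaluation described above.
# LBT checks mean uplink utilization over a 30-second window and moves a
# virtual port to another uplink when its current uplink exceeds ~75 %.

def pick_uplink(utilization, current, threshold=0.75):
    """Return the uplink index a port should use after an LBT evaluation.

    utilization: mean load per uplink over the last 30 s, as a fraction
    of link capacity. The port stays put unless its current uplink is
    over the threshold and a less-loaded uplink exists.
    """
    if utilization[current] <= threshold:
        return current
    least = min(range(len(utilization)), key=lambda i: utilization[i])
    return least if utilization[least] < utilization[current] else current

# Uplink 0 at 80 % load, uplink 1 at 20 %: the port moves to uplink 1.
print(pick_uplink([0.80, 0.20], current=0))  # -> 1
# Below the threshold, nothing moves:
print(pick_uplink([0.50, 0.20], current=0))  # -> 0
```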

If you have an answer to questions 1 and/or 2, please include a reference of some kind. I know LBT will probably "work just fine", but what I need is watertight supportability. :)

Thanks in advance!

3 Replies
chriswahl
Virtuoso

1. Is it at all supported (by VMware and/or NetApp) to run everything over one single vmkernel port on one single subnet? It's not listed as an alternative in http://www.netapp.com/us/library/technical-reports/tr-3749.html as far as I can see.

I assume you mean a single vmkernel port for NFS traffic? This is a common practice and is included in many vendor build guides (Vblock comes to mind as one). I'm curious what your use case is for additional NFS vmkernel ports on a single subnet, as they would sit idle.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
hennish
Hot Shot

Hi. Well, sort of. The four options that I have are:

1. Single vmk, single VLAN/subnet, load balancing using "IP Hash" and appropriate "teaming" (Etherchannel/Static LACP..) in the physical switches.

2. Dual vmk, dual VLANs/subnets, load balancing using "Port ID" and no "teaming" in the physical switches.

3. Single vmk, single VLAN/subnet, load balancing using "Port ID" and no "teaming" in the physical switches.

4. Single vmk, single VLAN/subnet, load balancing using "LBT" and no "teaming" in the physical switches.

Of the above options, only 1 and 2 are mentioned in NetApp's white paper on configuring and using vSphere with NetApp (http://media.netapp.com/documents/tr-3749.pdf, sections 3.6 and 3.7), which made me wonder whether options 3 and 4 are supported/recommended or not.
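To make the difference between the options concrete, here is an illustrative contrast between "IP Hash" (option 1) and "Port ID" (options 2-3) uplink selection. The exact hash is an assumption on my part (XOR of the low-order bytes of source and destination IP, modulo the number of uplinks); Port ID simply pins each virtual port to one uplink.

```python
# Illustrative sketch only: the hash below approximates IP-hash behavior
# and is not VMware's exact algorithm.

def ip_hash_uplink(src_ip, dst_ip, n_uplinks):
    # One vmkernel port with IP hash can spread across uplinks per
    # destination IP, which is why this policy is paired with
    # Etherchannel/static LACP on the physical switches.
    return (int(src_ip.split(".")[-1]) ^ int(dst_ip.split(".")[-1])) % n_uplinks

def port_id_uplink(virtual_port_id, n_uplinks):
    # Port ID pins each virtual port to one uplink, so a single NFS
    # vmkernel port always uses the same physical NIC.
    return virtual_port_id % n_uplinks

# A single vmk talking to two NetApp target IPs can use both uplinks
# under IP hash, but only one under Port ID:
print(ip_hash_uplink("10.0.0.21", "10.0.0.11", 2))  # -> 0
print(ip_hash_uplink("10.0.0.21", "10.0.0.12", 2))  # -> 1
print(port_id_uplink(7, 2))  # -> 1 (always the same uplink)
```

This is why option 3 (single vmk, Port ID) keeps all NFS traffic on one uplink, while option 1 can balance it per NetApp IP, at the cost of requiring switch-side teaming.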

rickardnobel
Champion

Anders Olsson wrote:

3. Single vmk, single VLAN/subnet, load balancing using "Port ID" and no "teaming" in the physical switches.

4. Single vmk, single VLAN/subnet, load balancing using "LBT" and no "teaming" in the physical switches.

Of the above options, only 1 and 2 are mentioned in NetApp's white paper on configuring and using vSphere with NetApp, which made me wonder whether options 3 and 4 are supported/recommended or not.

For option 3, this would mean that your NFS traffic will always use a single 10 Gbit adapter and keep the other passive as redundancy. I did not quite understand whether you are also using the same physical NICs for VM traffic, vMotion, and other traffic. If so, there is a certain risk that, by bad luck, all of your most network-intensive VMkernel ports / VMs end up assigned to the same vmnic by Port ID's fairly static distribution.

For option 4 it would be the same in that the NFS VMkernel port still uses only one link at a time, but you would get some guarantee that the overall network load is fairly evenly distributed over your two 10 Gbit vmnics.

My VMware blog: www.rickardnobel.se