powermc
Contributor

Cannot get NetApp NFS to attach to ESXi 4.1

I am having an issue getting an NFS datastore configured on the latest release of ESXi 4.1. I have a blade with two physical NICs plus an HBA card, with datastore1 holding the VM server and swap files. My plan was to use NIC0 for VM traffic and management, and NIC1 for NFS storage (datastore2), which will house the virtual servers. The two NICs have IPs on separate subnets, but they can reach each other since they sit on the same layer 3 switch.

Basically, I cannot attach the NFS datastore2 unless I allow the management network on the NetApp, and when it does attach, I monitor the traffic and everything goes through NIC0; nothing even touches NIC1. I have tried putting both NICs on the same vSwitch0, and I have also tried adding another vSwitch1, splitting the NICs and putting the NFS VMkernel port on that vSwitch, and it does the same thing. I can ping across either way. Am I missing something here? I never had this issue with other servers running ESX.

[Attachment: Capture1.PNG]

I think I found a post out there from someone complaining about the same issue, but I cannot track it down.

Also, does this config look OK if you only have two NICs? My plan is the following configuration (a rough console sketch follows the list):

HBA FC Datastore1: VM Server and Swap Files

NFS Datastore2: VM Images

NIC0: Management Traffic and VM Traffic

NIC1: NFS Traffic
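
In case it helps, here is roughly what that looks like in esxcfg terms from the Tech Support Mode console. This is only a sketch: the vmnic names, the NFS portgroup label, and the IP addresses are illustrative, not my actual values.

# Sketch only -- vmnic names, portgroup label, and IPs are illustrative.
# vSwitch0 (vmnic0) carries management/VM traffic; vSwitch1 (vmnic1) is for NFS.
esxcfg-vswitch -a vSwitch1                    # create the storage vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1             # uplink it with the second pNIC
esxcfg-vswitch -A NFS vSwitch1                # portgroup for the VMkernel port
esxcfg-vmknic -a -i 10.101.10.5 -n 255.255.0.0 NFS          # VMkernel port for NFS
esxcfg-nas -a -o 10.101.10.10 -s /vol/vmimages datastore2   # mount the export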

Thanks in advance

4 Replies
AWo
Immortal

Welcome to the forums!

Which load balancing policy do you use on the vSwitch? If you use the default one, it is normal that the traffic only uses one NIC: the VMkernel uses only one port on the vSwitch, and the default load balancing is port based. In fact, you do not get load balancing with the other policies either, because only two communication partners (the VMkernel and the NFS server) are involved. To get load balancing with NFS you need more than one NFS server and the "Route based on IP hash" policy (check page 7 of the document below).
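
A toy illustration of why that is (this is not VMware's exact hash algorithm, just the idea that the uplink choice is a fixed function of the source/destination pair):

# Toy model only: with "Route based on IP hash", the chosen uplink is a pure
# function of the (source IP, destination IP) pair, so one VMkernel port
# talking to one NFS server always lands on the same pNIC.
SRC=5        # hypothetical last octet of the VMkernel IP
DST=10       # hypothetical last octet of the NFS server IP
UPLINKS=2
echo $(( (SRC ^ DST) % UPLINKS ))   # prints the same uplink index every time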

ESXi sends NFS traffic out the VMkernel port whose subnet matches the NFS server; if no VMkernel port matches, it falls back to the default route, which is on the management network.

Check this document: http://vmware.com/files/pdf/techpaper/VMware-NFS-BestPractices-WP-EN.pdf

AWo

vExpert 2009/10/11 [:o]===[o:] [: ]o=o[ :] = Save forests! rent firewood! =
powermc
Contributor

Thanks for the reply, AWo

So I am slightly confused by the response; I am not sure it fully answers my question. If I have it configured with two vSwitches (see attachment), why would a load balancing policy come into play? Each vSwitch has only one NIC, and none of the load balancing options are selected. Basically, what I am seeing is that all my traffic goes out vmnic1, which carries the management network, and I cannot connect any storage unless I allow traffic from the management network.

One thing I just realized is that 10.101.1.x and 10.101.10.x are on the same network, since I am using a /16 subnet mask. So I assume ESXi treats the two VMkernel ports as equivalent and only uses vmnic1. Is there a way to set up a route so that any traffic destined for a specific IP goes out one vSwitch/vmnic?
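
For anyone following along, the mask arithmetic is the crux (the octet values below come from my addresses; the AND-with-mask logic is standard):

# Network address = IP AND mask, octet by octet. With a /16 (255.255.0.0)
# the third octet is masked away, so 10.101.1.x and 10.101.10.x both
# collapse into 10.101.0.0 -- one network as far as ESXi is concerned.
echo $(( 1 & 0 ))     # /16: third octet 1  -> 0   (network 10.101.0.0)
echo $(( 10 & 0 ))    # /16: third octet 10 -> 0   (same network 10.101.0.0)
echo $(( 1 & 255 ))   # /24: third octet 1  -> 1   (network 10.101.1.0)
echo $(( 10 & 255 ))  # /24: third octet 10 -> 10  (network 10.101.10.0)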

Walfordr
Expert

powermc wrote:

Thanks for the reply, AWo

So I am slightly confused by the response; I am not sure it fully answers my question. If I have it configured with two vSwitches (see attachment), why would a load balancing policy come into play? Each vSwitch has only one NIC, and none of the load balancing options are selected. Basically, what I am seeing is that all my traffic goes out vmnic1, which carries the management network, and I cannot connect any storage unless I allow traffic from the management network. One thing I just realized is that 10.101.1.x and 10.101.10.x are on the same network, since I am using a /16 subnet mask. So I assume ESXi treats the two VMkernel ports as equivalent and only uses vmnic1. Is there a way to set up a route so that any traffic destined for a specific IP goes out one vSwitch/vmnic?

I have seen setups like this where the traffic goes through the first VMkernel port when the addresses are on the same network. I have not tested this across multiple vSwitches.

Is this production? Can you change the mask on the SAN 10.101.10.x interface to /24 and use /24 on the NFS VMkernel port? If the NetApp and the ESXi host are plugged into the same physical switch, you would not need any routing to test this.
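
From the console, the change would look something like this (the NFS portgroup label and the IPs are illustrative; deleting and re-adding the VMkernel port briefly drops NFS connectivity):

# Sketch only -- portgroup label and IPs are illustrative.
# Re-create the NFS VMkernel port with a /24 mask so it sits on its own network.
esxcfg-vmknic -d NFS                                    # remove the old vmk port
esxcfg-vmknic -a -i 10.101.10.5 -n 255.255.255.0 NFS    # re-add it with /24
esxcfg-vmknic -l                                        # confirm the IP and mask
vmkping 10.101.10.10                                    # test the path to the filer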

Robert -- BSIT, VCP3/VCP4, A+, MCP (Wow I haven't updated my profile since 4.1 days) -- Please consider awarding points for "helpful" and/or "correct" answers.
AWo
Immortal

powermc wrote:

Thanks for the reply, AWo

So I am slightly confused by the response; I am not sure it fully answers my question. If I have it configured with two vSwitches (see attachment), why would a load balancing policy come into play? Each vSwitch has only one NIC, and none of the load balancing options are selected. Basically, what I am seeing is that all my traffic goes out vmnic1, which carries the management network, and I cannot connect any storage unless I allow traffic from the management network. One thing I just realized is that 10.101.1.x and 10.101.10.x are on the same network, since I am using a /16 subnet mask. So I assume ESXi treats the two VMkernel ports as equivalent and only uses vmnic1. Is there a way to set up a route so that any traffic destined for a specific IP goes out one vSwitch/vmnic?

I was referring to your original graphic, which showed one vSwitch with two NICs, and you stated that you had tried both. Of course, there is no load balancing between two vSwitches. But there is also none with one vSwitch and two pNICs when there is only one NFS target.

You need an IP subnet different from the management network and more than one NFS target if you want to split the traffic across more than one pNIC and get load balancing. That is what the document I linked to states as well, on page 7.

Check KB 1006795 and KB 1007371; they state that there is no load balancing with one source (ESXi) and one target (NFS).

AWo

vExpert 2009/10/11 [:o]===[o:] [: ]o=o[ :] = Save forests! rent firewood! =