12 Replies. Latest reply on Oct 17, 2010 2:41 AM by staannoe

    NFS load balancing in LACP trunk?

    TBKDan Novice

      I am setting up three new Dell R710 servers running ESXi 4.0 U1 embedded to connect to our new EMC NS-120 NAS via NFS. The R710s have six gigabit NICs, which I have set up as three two-port LACP link groups: one for virtual machine traffic, one for NFS/minimal iSCSI traffic, and one for VMotion traffic. While doing some research and configuration, I see that NFS will only go out via one port/IP by default, which means it will only go over one of the two gigabit links, essentially making the other link a pure standby.

      I want to be able to load balance the NFS traffic across these two gigabit links (active/active) to utilize the most bandwidth possible. Is there any way to tell vSphere to utilize two or more TCP connections to the EMC NFS export? Do I have to create two or more IPs on the EMC connected to the same NFS export and then split the VMs across them (i.e., one NFS datastore accessed via two or more IPs, so it will create two or more connections and therefore be load balanced across the two gigabit NICs)? Is this even possible? Any insight would be appreciated!

        • 1. Re: NFS load balancing in LACP trunk?
          vmroyale Guru
          vExpert, User Moderators

          Hello and welcome to the forums.

           

          Do I have to create two or more IPs on the EMC connected to the same NFS export and then split the VMs across them (i.e., one NFS datastore accessed via two or more IPs, so it will create two or more connections and therefore be load balanced across the two gigabit NICs)? Is this even possible? Any insight would be appreciated!

           

          Yes, this is exactly what you want to do. The links won't be truly load-balanced, but it will be far better than using a single link with a standby. 

           

          Good Luck!

          • 2. Re: NFS load balancing in LACP trunk?
            TBKDan Novice

            Thank you for the input and the welcome. Will I have any issues with accessing the same actual NFS store using different IPs from the same vSphere host?

            • 3. Re: NFS load balancing in LACP trunk?
              kjb007 Guru

              I'm not sure this will work: accessing the same export via two different NFS server addresses and mounting it under two different names. At best, you can mount two different exports, using a different IP for each, and thereby use both paths in your aggregated link.

               
              -KjB

              • 4. Re: NFS load balancing in LACP trunk?
                vmroyale Guru
                User Moderators, vExpert

                Agreed, "two different exports" with unique IPs - I read that earlier post incorrectly.

                • 5. Re: NFS load balancing in LACP trunk?
                  TBKDan Novice

                  Why does it need to be two separate NFS exports? (Just trying to consolidate as best I can.)

                  • 6. Re: NFS load balancing in LACP trunk?
                    kjb007 Guru

                    I do not believe it is possible to mount the same export using two different IPs. That's why you would need two exports: so you can bind one export to one IP and the second export to the second IP. Again, you should try it out, and post if you find differently.

                     
                    -KjB

                    • 7. Re: NFS load balancing in LACP trunk?
                      TBKDan Novice

                      I'll give it a shot once the EMC is up and running (should be in a few hours).

                      • 8. Re: NFS load balancing in LACP trunk?
                        LarryBlanco2 Hot Shot

                        Static EtherChannel.

                         

                        My setup is as follows:

                         

                        ESXi 4.0 U1, Cisco 3750 Switches, and NetApp NFS on the storage side.

                         

                        I have a total of 8 NICs. I divided the NICs into 3 groups:

                         

                        2 NICs on vSwitch0 for Mgmt and vMotion
                        3 NICs on vSwitch1 for VMs (multiple port groups, 3 VLANs)
                        3 NICs on vSwitch2 for IP Storage (mostly NFS, a little iSCSI)
                          (On vSwitch3, I also have a VM port group for iSCSI access from within the VMs)

                         

                        Since I have 3 NICs on my IP Storage port group, I needed a way to utilize all three NICs and not have the server just use one for ingress and egress traffic. This was done by:

                         

                        Setting up a static EtherChannel (port channel) on the Cisco switch.
                        Configuring the Cisco switch to load balance on IP hash.
                        Configuring the vSwitch to "Route based on IP Hash" as well (see the sketch below).
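                        To make it clearer why that matters: "Route based on IP Hash" picks an uplink from a hash of the source and destination IP addresses, so a given address pair always lands on the same physical NIC. Below is a rough Python sketch of that behaviour. The exact arithmetic (XOR of the two addresses modulo the number of active uplinks) and the 192.168.1.x addresses are assumptions for illustration only, not VMware's actual code.

import ipaddress

def select_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    # Illustrative IP-hash: XOR the two 32-bit addresses, then take the
    # result modulo the number of active uplinks in the team (assumed formula).
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % n_uplinks

# Hypothetical VMkernel and NFS server addresses, three uplinks in the team.
vmk_ip = "192.168.1.16"
nfs_ip = "192.168.1.50"
for _ in range(3):
    # The hash inputs never change, so the chosen uplink never changes:
    # all NFS traffic for this one target address stays on one physical NIC.
    print(select_uplink(vmk_ip, nfs_ip, n_uplinks=3))

                        That is why the extra target addresses described below matter: only different source/destination pairs can hash to different uplinks.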

                         

                        The next part is to create multiple datastores on the NFS device. Each of my NFS datastores is about 500 GB in size; the reason for this is that my larger LUNs are iSCSI and are accessed directly from the VM using the MS iSCSI initiator in the VM itself.

                        My NetApp NAS has an address of, let's say, 192.168.1.50, so all my datastores are accessible by using the address "192.168.1.50\NFS-Store#". This is not very useful, because the ESX box and the Cisco switch will always use the same NIC/port to access the NAS device: the IP hash algorithm that decides which link the traffic goes over always sees the same source/destination pair. So to resolve the issue, I added IP aliases on the NFS box. NetApp allows you to have multiple IP addresses pointing to the same NFS export, and I suspect EMC does the same. So I added 2 aliases, .51 and .52. Now my NFS datastores are accessible by using IP addresses 192.168.1.50, .51, and .52.

                         

                        So I went ahead and added the datastores to the ESX box using the multiple IP addresses:

                         

                        Datastore1 = 192.168.1.50\NFS-Store1
                        Datastore2 = 192.168.1.51\NFS-Store2
                        Datastore3 = 192.168.1.52\NFS-Store3

                         

                        If you have more datastores it'll just repeat: Datastore4 = 192.168.1.50\NFS-Store4, and so on...
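                        Running the target addresses above through the same assumed hash shows why giving the filer several alias addresses is what spreads the datastores across the NICs. Again, this is only a sketch: the formula and the 192.168.1.16 VMkernel address are made-up examples, not the exact ESX implementation.

import ipaddress

def select_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    # Same illustrative IP-hash as in the earlier sketch (assumed formula).
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % n_uplinks

vmk_ip = "192.168.1.16"          # hypothetical ESX VMkernel address
datastores = {
    "NFS-Store1": "192.168.1.50",
    "NFS-Store2": "192.168.1.51",
    "NFS-Store3": "192.168.1.52",
}
for name, target_ip in datastores.items():
    # Each datastore is a different source/destination pair, so each one
    # can land on a different uplink in the three-NIC team.
    uplink = select_uplink(vmk_ip, target_ip, n_uplinks=3)
    print(f"{name} via {target_ip} -> uplink {uplink}")

                        How evenly the flows spread depends on the actual addresses involved; the hash only places whole flows, it never splits one datastore's traffic across links.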

                         

                        With multiple datastores and a separate address for each, the 3 NICs on the ESX box dedicated to IP Storage all get utilized. It does not aggregate the bandwidth, but it does use all three to send and receive packets. So the fastest speed you will get for any one flow is, theoretically, 1 Gbit each way, but it is better than trying to cram all the traffic over 1 NIC.

                         

                        I also enabled Jumbo Frames on the vSwitch as well as the vmnic for IP Storage (need the best performance!).

                        I should mention that your NFS storage device should have an EtherChannel set up on it as well. Otherwise, you'll be in the same boat, just on the other end of it.

                         

                        Hope it helps!

                         

                        Larry B.

                         

                        I should mention that you should not use different addresses to access the same NFS share (datastore).  It is not supported and may cause you issues.

                        • 9. Re: NFS load balancing in LACP trunk?
                          TBKDan Novice
                          LarryBlanco2 wrote:

                          I should mention that you should not use different addresses to access the same NFS share (datastore).  It is not supported and may cause you issues.

                           

                          It is not supported by who? NetApp? EMC? VMWare?

                          • 10. Re: NFS load balancing in LACP trunk?
                            LarryBlanco2 Hot Shot

                            It is actually in the NetApp documentation for VMware:

                            "Regarding NFS datastores, each datastore should be connected only once from each ESX/ESXi server, and using the same NetApp target IP address on each ESX/ESXi server."

                            This may only apply to NetApp, although I can see why and how it could cause issues on NetApp as well as any other storage vendor's appliance. Plus, having it set up as stated above makes it look nice and clean, and it's easier to document as well.

                             

                            Larry B.

                            • 11. Re: NFS load balancing in LACP trunk?
                              TBKDan Novice

                              Well, I created one filesystem and exported it via NFS using three IP addresses, all on the same subnet. I then added it to VMware. So far, VMware has not complained and it looks like it's working. This is a greenfield environment, so I'll be doing plenty of testing with HA/VMotion to make sure everything works fine... but so far, I think it might be OK.

                              • 12. Re: NFS load balancing in LACP trunk?
                                staannoe Lurker

                                What were your results on the testing?