I have four 1 Gb uplinks available from my ESXi 5.1 host to use for NFS. On the array side, I'm using only one IP for NFS. There is a 10 Gb EtherChannel link between the array and the physical switches. However, no EtherChannel or LACP can be used on the links between the ESXi hosts and the physical switches.
Enterprise Plus / load-based teaming is available.
On the ESXi host:
How many vmkernel ports should I use?
Which failover policy should I use (active / active)?
Which load balancing policy should I use?
Thanks
Without LACP/EtherChannel, set up a virtual switch with two physical NICs configured as an active/passive failover pair, and keep the network teaming set to Route Based on Originating Virtual Port ID.
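The setup above can be sketched with esxcli. This is a minimal example only; the switch name (vSwitch1) and uplink names (vmnic2, vmnic3) are assumptions and will differ on your host.

```shell
# Create a standard vSwitch and attach two physical uplinks
# (vSwitch1, vmnic2, and vmnic3 are assumed names - adjust to your host)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# Active/standby failover with Route Based on Originating Virtual Port ID ("portid")
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic2 --standby-uplinks=vmnic3 --load-balancing=portid
```

You can confirm the resulting policy with `esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1`.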
Actually, after the previous post, it looks like we now do have the option to use link aggregation if we so choose. What would the best design be with link aggregation as an option?
Actually, with load-based teaming I wouldn't need EtherChannel, and I could connect my uplinks to different switches for redundancy:
http://frankdenneman.nl/2011/02/24/ip-hash-versus-lbt/
Would this be a better solution?
Any luck with that, VMinator? I am testing something similar with a FreeNAS 8.3 box and a couple of ESXi 5.1 hosts. I have two switches and multiple NICs for everything, but only basic switches that don't support LACP, trunking, etc. It seemed like I had it working for a minute, then it dropped out on me. I was also trying to put two NICs in a LAGG interface on FreeNAS that simply does failover (no bonding/round-robin), just to have a single IP to map to from ESXi.
TheVMinator,
I would personally stay away from LACP/EtherChannel if at all possible, to keep things simple. LBT is definitely the way to go. Here is a great article on NFS and LBT; seeing as I have not written one on my blog, I will give Chris Wahl some cred here.
http://wahlnetwork.com/2012/04/30/nfs-on-vsphere-technical-deep-dive-on-load-based-teaming/
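Since the array presents a single NFS IP, one vmkernel port per host is typically enough for this design. Here is a hedged esxcli sketch of creating it; the port group name (NFS), vmkernel interface name (vmk1), and addressing are assumptions, not values from this thread.

```shell
# Create a port group for NFS traffic on the vSwitch (names assumed)
esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch1

# Create a vmkernel interface on that port group and give it a static IP
# (the interface name and IP/netmask below are placeholders)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.50.10 --netmask=255.255.255.0 --type=static
```

Note that LBT itself is only available on a vSphere Distributed Switch (Enterprise Plus), where the teaming policy is set to "Route based on physical NIC load" on the port group; the standard-switch commands above cover the vmkernel side either way.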