Hello - this is my first post here so please be gentle.
I have been trying to find information on setting up LACP within vSphere ESXi 5. Let me give a brief rundown of our environment and the goals we are trying to reach. We use 1 Gb NICs on Brocade VDX 6710 fabric switches.
We have a small ESXi 5 vSphere cluster running Standard edition. We have 6 physical NICs in each host - 2 are teamed for the VM network, 2 are teamed for vMotion/management, and the last two are VMkernel ports on two separate iSCSI subnets connected using MPIO to our Compellent SAN. This has been our test bed, and we have been using the NIC teaming within ESXi 5 running "Route based on the originating virtual port ID" with no specific switch-side config. We have other standalone servers using LACP (802.3ad) configured on the host and the switch that work great - it gives us better failure protection, as we use two switches and plug one link into each. We would like to do the same with the ESXi hosts.
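For reference, the "Route based on the originating virtual port ID" policy can be set from the command line as well as the vSphere Client. A minimal sketch, assuming a standard vSwitch named vSwitch0 (the name is a placeholder):

```shell
# Set the load-balancing policy "Route based on originating virtual
# port ID" on a standard vSwitch (vSwitch0 is an assumed name).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=portid

# Verify the active teaming/failover policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```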
Our new project is coming up to virtualize a much larger number of systems than what we are currently serving. What we are looking to do is expand our VM use to include a large number (30 - big for us) of SQL servers. The basic functions of these systems require a decent amount of backend SAN I/O. The physical servers we would be virtualizing would pack a density of around 4:1 or up to 8:1 with this conversion. We are worried that having just the 2 MPIO iSCSI nic paths won't be enough to support the increased I/O load.
We would like to know whether using LACP on both iSCSI subnet connections - joining 2+ NICs for each connection - is viable in ESXi 5 with iSCSI, and what setup parameters we should configure to do this.
Also, this project would be using VMware vSphere 5 Enterprise edition - does DRS or distributed switching introduce any further complications or benefits for this setup?
Thanks for any helpful input or direction to already published documentation.
Scott
I find using LACP / etherchannel to be rarely effective or useful in VMware environments.
For iSCSI storage, my standard configuration is to use 2 uplinks with iSCSI port binding. Here are screenshots of the config.
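For anyone following along, the port-binding step can also be done with esxcli. This is only a sketch - the software iSCSI adapter name (vmhba33) and the VMkernel ports (vmk1, vmk2) are assumptions and vary per host, and each VMkernel port must be backed by exactly one active uplink before it can be bound:

```shell
# Bind two iSCSI VMkernel ports to the software iSCSI adapter.
# vmhba33, vmk1 and vmk2 are placeholder names - check yours with:
#   esxcli iscsi adapter list
#   esxcli network ip interface list
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Confirm both ports show up as bound
esxcli iscsi networkportal list --adapter=vmhba33
```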
Usually iSCSI does not use LACP or NIC teaming. It simply uses multipathing.
Check the Compellent documentation on how to configure your vSphere environment.
Is there a recommended way to increase the available bandwidth to the iSCSI subnets? Would I just add additional VMkernel ports on new IPs and then add them to the MPIO config on each VM?
I guess the question is - how do I add more bandwidth to my iSCSI connections on ESXi 5? Do I add more MPIO ports, use LACP, or am I forced to jump to 10 Gb?
Also, is it best to do this configuration at the VMware host level or at the VM OS level? Ease of management aside, I'm just looking for a solution that will work and perform - not worried about how long it takes to set up or how complicated the process is.
You can certainly create multiple VMkernel interfaces to use with iSCSI and bind them to physical NICs. With round robin, all interfaces can be used, effectively scaling your bandwidth with the number of interfaces in the server. You can add 1 Gb connections as needed to grow your available bandwidth without having to jump to 10 Gb.
LACP hashes on src/dst pairs, so it typically does not help when you have a low number of iSCSI targets.
iSCSI is typically easier to manage at the host level, which avoids duplicating the configuration in every VM.
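A sketch of the round-robin side of this, with placeholder values throughout - the naa. device ID is made up, and the IOPS limit shown is an example rather than a recommendation (Compellent publishes its own best-practice values):

```shell
# Set the path selection policy to Round Robin on one LUN so I/O is
# spread across all bound VMkernel ports (device ID is a placeholder).
esxcli storage nmp device set \
    --device=naa.60000000000000000000000000000001 --psp=VMW_PSP_RR

# Optionally rotate paths more aggressively than the default of
# 1000 I/Os per path before switching.
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.60000000000000000000000000000001 --type=iops --iops=3

# Check the result
esxcli storage nmp device list --device=naa.60000000000000000000000000000001
```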
-KjB
Thanks for the answers everyone.
Multiple VMkernel ports in an iSCSI binding sounds like the way to go for the datastores, plus the VMs can use their local MPIO setup to connect the various paths on top of that for the direct iSCSI SAN LUN mappings.
iSCSI uses its own networking when done properly (vmknic binding) as you have already discovered. LACP is of no use at all.