Serverman76
Contributor

vSphere 5 and iSCSI LACP

Hello - this is my first post here so please be gentle.

I have been trying to find information on setting up LACP with vSphere ESXi 5. Here is a brief rundown of our environment and the goals we are trying to reach. We use 1 Gb NICs on Brocade VDX 6710 fabric switches.

We have a small ESXi 5 vSphere cluster running Standard edition. We have six physical NICs in each host: two are teamed for the VM network, two are teamed for vMotion/management, and the last two are VMkernel ports on two separate iSCSI subnets connected via MPIO to our Compellent SAN. This has been our test bed, and we have been using the NIC teaming within ESXi 5 with "Route based on the originating virtual port ID" and no specific switch-side config. We have other standalone servers using LACP (802.3ad) configured on both the host and the switch, and they work great; that gives us better failure protection since we use two switches and plug one link into each. We would like to do the same with the ESXi hosts.

Our new project is coming up to virtualize a much larger number of systems than we currently serve. We are looking to expand our VM use to include a large number (30, which is big for us) of SQL servers. The basic functions of these systems require a decent amount of back-end SAN I/O. The conversion would consolidate the physical servers at a density of roughly 4:1, up to 8:1. We are worried that just the two MPIO iSCSI NIC paths won't be enough to support the increased I/O load.

We would like to know whether using LACP on both iSCSI subnet connections, joining two or more NICs per connection, is viable in ESXi 5 with iSCSI, and what setup parameters we should configure to do this.

Also, this project would be using VMware vSphere 5 Enterprise edition. Do DRS or distributed switching introduce any further complications or benefits for this setup?

Thanks for any helpful input or direction to already published documentation.

Scott

AndreTheGiant
Immortal

Usually iSCSI does not use LACP or NIC teaming; it simply uses multipathing.

Check the Compellent documentation on how to configure your vSphere environment.
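
If you want to see what multipathing is already in play before changing anything, something like this from the ESXi shell will show it; the naa device ID below is only a placeholder for one of your Compellent volumes:

    # List every device claimed by NMP along with its current Path Selection Policy
    esxcli storage nmp device list

    # Show all physical paths to a single LUN (substitute your own device ID)
    esxcli storage core path list --device naa.60000000000000000000000000000000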

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Serverman76
Contributor

Is there a recommended way to increase the available bandwidth to the iSCSI subnets? Would I just add additional VMkernel ports on new IPs and then add them to the MPIO config on each VM?

I guess the question is: how do I add more bandwidth to my iSCSI connections on ESXi 5? Do I add more MPIO ports, use LACP, or am I forced to jump to 10 Gb?

Also, is it best to do this configuration at the VMware host level or at the VM OS level? Ease of management aside, I'm just looking for a solution that will work and perform; I'm not worried about how long it takes to set up or how complicated the process is.

kjb007
Immortal

You can certainly create multiple VMkernel interfaces for iSCSI and bind them to physical NICs. With Round Robin, you can use all of the interfaces, effectively increasing your bandwidth by the number of interfaces in the server. You can scale the 1 Gb connections as needed to add to your available bandwidth without having to jump to 10 Gb.
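
As a rough sketch of the Round Robin piece from the ESXi shell (the device ID is a placeholder, and you should first check which SATP is claiming your Compellent volumes):

    # Set Round Robin on a single LUN
    esxcli storage nmp device set --device naa.60000000000000000000000000000000 --psp VMW_PSP_RR

    # Or make Round Robin the default PSP for everything claimed by the default active/active SATP
    esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR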

LACP is bound to src/dst pairs, so it typically does not help when you have a low number of iSCSI targets.

iSCSI is typically easier to do at the host level, to avoid multiple configurations at the VM level.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise
chriswahl
Virtuoso
Accepted solution

I find LACP / EtherChannel to be rarely effective or useful in VMware environments.

For iSCSI storage, my standard configuration is two uplinks with iSCSI port binding. Here are screenshots of the config.

[Screenshot: iscsi-switch-layout.png]
[Screenshot: iscsi-portgroup-layout.png]
[Screenshot: iscsi-bindings.png]
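
For anyone who prefers the command line, roughly the same port-binding setup can be done from the ESXi shell. The port group, vmnic, vmk, and vmhba names below are examples only; substitute your own, and remember each iSCSI port group must be overridden to a single active uplink before the binding is accepted:

    # Give each iSCSI port group exactly one active uplink
    esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-A --active-uplinks vmnic4
    esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-B --active-uplinks vmnic5

    # Bind the two iSCSI VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

    # Confirm the bindings took
    esxcli iscsi networkportal list --adapter vmhba33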

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
Serverman76
Contributor

Thanks for the answers everyone.

Multiple VMkernel ports in an iSCSI binding sounds like the way to go for the datastores, and the VMs can use their local MPIO setup on top of that to connect the various paths for the direct iSCSI SAN LUN mappings.

gerdesj
Contributor

iSCSI uses its own networking when done properly (vmknic binding), as you have already discovered. LACP is of no use at all here.

Now, I don't know the Compellent gear, but many SAN vendors provide multipathing extension modules which might allow you to use multiple iSCSI paths simultaneously. This can be more efficient than the default VMware Round Robin plugin.
Configure multiple iSCSI vmknics as usual. Install the extension (probably rebooting the ESXi host). Then go into the path selection settings for each volume and set it to use the new module if necessary.
You may find that you now have Active (I/O) against each path rather than just one of them.  The (I/O) is the important bit here.
Not all SANs are the same so check the docs. 
If you can spare an ESXi host for testing then I suggest doing some throughput testing before and after.
Another thought: are you using iSCSI offload and/or jumbo frames? If you haven't considered jumbo frames, note that nearly all of the postings I've seen forget to mention setting the do-not-fragment bit on the jumbo vmkping when they tell you how to test it. Setting up jumbo frames in vSphere 5 is a lot simpler than in 4; it can all be done through the GUI (apart from testing, for which you'll need to SSH to the host and run vmkping). Also note that the Broadcom 5709 can't do offload and jumbo frames at the same time; I don't know about other NICs.
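
Concretely, the jumbo-frame settings and the vmkping test described above look something like this in the ESXi 5 shell; the vSwitch name, vmk interface, and target IP are examples only:

    # Raise the MTU on the iSCSI vSwitch and on each iSCSI VMkernel port
    esxcli network vswitch standard set --vswitch-name vSwitch2 --mtu 9000
    esxcli network ip interface set --interface-name vmk1 --mtu 9000

    # Test with the do-not-fragment bit set; 8972 = 9000 minus 28 bytes of IP/ICMP headers
    vmkping -d -s 8972 192.168.50.10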
Cheers
Jon