I have a single primary server with 4 physical NICs dedicated to that server in a vSwitch. I wanted to know the proper way to set up load balancing in vSphere to allow for a theoretical 4Gbps throughput. The options I have, as you probably know, are: route based on the originating virtual port ID, route based on source MAC hash, and route based on IP hash.
The first two sound to me like they would not give me what I want, since the physical NIC selection is based upon the originating VM, not the destination. So, since I only have one VM, the load balancing would only ever use one physical NIC and I'd be stuck with 1Gbps.
The third option sounds more promising, but I do not have much exposure to switch management. Would LACP be akin to creating a trunk on my 3Com 2848 baseline switch? It sounds likely, but I wanted to run it by the experts to make sure.
Thanks!
EDIT: I think I just found the killer. Quoted from the user manual for my switch:
Does this mean I have no hope of accomplishing my 4Gbps with this switch and I HAVE to get a switch with LACP?
You'll need either LACP or EtherChannel on your upstream switch to support the IP Hash configuration, according to the Install, Configure, Manage Volume 1 book for ESXi 5.0. With IP Hash, a VM's MAC address could appear on any of the physical interfaces available to the vSwitch (at least for outbound traffic), which would cause problems on the physical switch side unless some type of link aggregation method (LACP or EtherChannel) is enabled.
mike
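To make the IP Hash behavior concrete, here is a rough sketch of how the policy picks an uplink per source/destination IP pair. The XOR-of-last-octets formula follows VMware's published description of the policy, but treat the exact hash as an assumption, and the IP addresses are made up for illustration:

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Sketch of vSphere's 'Route based on IP hash' uplink selection:
    XOR the last octets of the source and destination IPs, then take
    the result modulo the number of active uplinks. (Approximation
    based on VMware's published description, not the exact code.)"""
    src_last = int(src_ip.split(".")[-1])
    dst_last = int(dst_ip.split(".")[-1])
    return (src_last ^ dst_last) % num_uplinks

# One server talking to several clients: each src/dst IP pair always
# hashes to the same NIC, but different clients can land on different NICs.
server = "10.0.0.10"
for client_last in (21, 22, 23, 24):
    client = "10.0.0.%d" % client_last
    print(client, "-> vmnic%d" % ip_hash_uplink(server, client, 4))
```

The key point is that the hash is per IP pair, which is why the physical switch must treat all four ports as one aggregate (LACP/EtherChannel): any of the four NICs can carry frames with the VM's MAC depending on which client is on the other end.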
Luckily I found the 3Com 2948 does have LACP and is affordable for me.
But with IP hash and LACP I should be able to get a 4Gbps channel from the VM to my clients? (shared between them)
You'll get more bandwidth than a single 1Gbps interface, but I doubt you'll see 4Gbps due to overhead.
mike
Well I was more referring to "theoretical" 4Gbps...I realize there will be overhead. I should at least get 3Gbps out of it though right?
It doesn't quite work like that ...
Theoretically you could get 3Gbps, but that would require a lot of VMs (or client connections) going over the bond/team ...
This discussion might be of interest to you: http://communities.vmware.com/message/2105268
So I'm confused by that, the last two posts seem to contradict each other. Which is it?
I have ONE VM attached to ONE vSwitch consisting of 4 NICs teamed by IP hash with all 4 links going to ONE physical switch which would be configured with LACP.
Will the VM be able to use all 4 NICs via teaming if there are enough clients or will the VM restrict itself to a single NIC?
Assuming I have a consistent 10+ clients, would I be able to get at least 3Gbps+ from the server shared across all clients? (I know each client would be limited to 1Gbps, but that's fine, I just need a wider downstream from the server due to client load.)
If those 10 clients are connecting to your VM through those 4 NICs, theoretically you could get 4Gbps; in any case the traffic will be "load balanced" over these NICs ... I don't think 10 clients will really benefit much though, unless the traffic is really heavy.
I can't really think of real world examples so I can't really compare... Though I have set up LACP/Teaming before which works without issues if done correctly, but that wasn't to improve throughput/performance.
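To illustrate the point about many clients: because each client's traffic hashes to exactly one uplink, a single client never exceeds one NIC's speed, but the aggregate from several clients spreads across the team. A small simulation, using the same XOR-of-last-octets approximation of IP hash (an assumption, with made-up addresses):

```python
import collections

def uplink_for(src_last: int, dst_last: int, num_uplinks: int) -> int:
    # Same XOR-of-last-octets sketch of the IP hash policy as described
    # in VMware's docs; an approximation, not the exact implementation.
    return (src_last ^ dst_last) % num_uplinks

server_last = 10                    # hypothetical server at x.x.x.10
clients = range(100, 110)           # ten hypothetical clients, .100-.109
counts = collections.Counter(uplink_for(server_last, c, 4) for c in clients)
print(dict(counts))                 # clients per uplink, vmnic0-vmnic3

# A single client always hashes to the same uplink, so its traffic is
# capped at one NIC's speed; the combined load of many clients spreads
# roughly evenly across all four NICs.
```

With enough concurrent clients the distribution evens out, which is exactly the OP's scenario: each individual client stays at 1Gbps, but the server's total downstream can approach the team's aggregate.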
I'm using the server as a launch point for files for OS deployment. So the load would be heavy as clients increase, each client accessing a few GB of data, and the load is pretty constant as clients are switched out. It's not an office environment, it's production.
But from the sound of it, this will be a big improvement over our single gigabit connection shared amongst 30+ clients that I currently have. When the clients start nearing capacity, the network becomes almost unusable.
Thanks for the input guys!
Another quick question about LACP. It seems I have a switch with LACP support (a 3Com 3824). I attached the IP-hash-teamed NICs to ports 1-4 and enabled LACP for those ports. Is that all I need to do? Should the aggregation show up anywhere, or do I just trust that it is working? I can manually put the ports in a LAG, but the LAG tells me that LACP is not operational for that LAG.
I don't really know how the 3Com switches handle the LACP, I trust it's kinda like HP switches.
Maybe this KB will give you some hints: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100404...
But basically it should just be enabled yes ...
I trust this is for 5.1, where LACP is possible; otherwise you'll need to make it static.