VMware Cloud Community
dennes
Enthusiast

Port trunking (link aggregation) not working

Hi,

I have a single ESX 4 server (HP DL380 G5) with 2 pNICs connected to an HP 2510-48G gigabit switch. I created a trunk on the switch for ports 1 and 2 and hooked them up to both ESX pNICs.

I only have 1 vSwitch in ESX and local storage (no VMkernel); it carries the Service Console and the VM network. I enabled load balancing based on IP hash on this vSwitch (vSwitch0) to create a theoretical trunk over my 2 Gb NICs. Both NICs are part of vSwitch0.

Now when I read/write data to the VMs, only one of the trunk ports seems to handle the data, not both at the same time and with the data rate/utilization I would expect.

Is this expected behaviour? Am I doing something wrong or misunderstanding something here?

Thanks,

Dennes

3 Replies
MKguy (Accepted Solution)
Virtuoso

I take it you are aware that with the IP-hashing mechanism, packets from one IP-X to another IP-Y are always transmitted through only one physical port, never through multiple ports at the same time?! Hence, you will never achieve an increase in throughput for a single "conversation" between IP-X and IP-Y, no matter how many physical uplinks your trunk has. This is due to potential layer 2 MAC learning and other issues.

Did you test it with a number of connections between different source/destination IPs?

If you only tested a small number of connections, you could have been "unlucky" that the hashing produced the same pNIC uplink for all connections.

You could also check the esxtop network view to see which pNIC handles what kind/loads of traffic.
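
A minimal sketch of that "unlucky" case, assuming a simplified XOR/modulo hash as a stand-in for the actual ESX IP-hash calculation:

import random
from collections import Counter

def select_uplink(src_last_octet, dst_last_octet, uplinks=2):
    # Simplified stand-in for the IP-hash policy: XOR the last octets
    # of source and destination IP, then take the remainder modulo the
    # number of active uplinks (an assumption, not the exact ESX formula).
    return (src_last_octet ^ dst_last_octet) % uplinks

dst = 50  # a single destination host, e.g. the machine you copy data to
for n in (2, 3, 20):
    sources = random.sample(range(1, 250), n)
    spread = Counter(select_uplink(s, dst) for s in sources)
    print(n, "connections ->", dict(sorted(spread.items())))

With only 2 or 3 connections it is quite possible that all of them hash to the same pNIC; with 20 different source IPs the load spreads roughly evenly across both uplinks.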

-- http://alpacapowered.wordpress.com
bulletprooffool
Champion

This depends on your IPs.

IP hashing simply does a mod on the IP address and uses the remainder to determine which NIC to use.

In its simplest terms: if you have 2 NICs, the IP is divided by 2 and the remainder (1 or 0) determines which path the traffic takes.

So if your IPs are all even-numbered, you'll always get remainder 0 and always use NIC0.

To test, give 2 VMs consecutive IPs, generate a bunch of load and monitor the outcome.

Also, make sure that your physical switch is aware of the trunk and has the VLANs etc. configured and channelled correctly.
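
A minimal sketch of that modulo behaviour with consecutive IPs, again assuming an XOR-of-last-octets hash as a simplification of what ESX actually computes:

def select_uplink(src_ip, dst_ip, uplinks=2):
    # Simplified IP-hash: XOR the last octets of source and destination
    # IP and use the remainder modulo the number of uplinks (assumption).
    src_last = int(src_ip.split(".")[-1])
    dst_last = int(dst_ip.split(".")[-1])
    return (src_last ^ dst_last) % uplinks

# One VM talking to one client: every packet of that conversation hashes
# to the same pNIC, so a single large copy never exceeds one NIC's bandwidth.
print(select_uplink("192.168.1.10", "192.168.1.50"))  # -> 0

# Two VMs with consecutive IPs talking to the same client: the two
# conversations hash to different uplinks, so both pNICs carry traffic.
print(select_uplink("192.168.1.11", "192.168.1.50"))  # -> 1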

One day I will virtualise myself . . .
dennes
Enthusiast

OK, thank you both for the clarification.

I was under the impression that it would trunk the 2 NICs as one fat pipe, but there seems to be a lot more going on than I thought. :)

So in the end, is it useful to set this up (trunk on the switch and LB on IP hash), or is there not much difference in performance compared to the default setup (LB on originating port ID) when you want to bind multiple NICs?

Thanks,

Dennes
