Hi. I have 4 ESX hosts placed in an external farm.
We asked network support to configure their switches for link aggregation with two NICs.
So they moved the two NICs onto the same physical switch (Nortel), and after one day they told me that link aggregation was up and running.
But how can I check whether this is actually true?
They asked me for additional money (this kind of activity is extra budget), so I'd like to be sure that link aggregation is really up and running.
Please help me.
Link aggregation??? If you mean that 2x 1Gb NICs will give you 2Gb of speed, that is not possible in ESX.
You can however connect 2 NICs to a virtual switch, connect multiple VMs to that virtual switch, and have more than 1Gb of total throughput. But one data stream is never split over multiple NICs.
What load balancing algorithm does the vswitch use?
If the pNICs are hooked up to an 802.3ad trunk, I don't believe you will have connectivity unless the vswitch is set to "route based on IP hash".
I'm very confused. My purpose was to bond/link/aggregate, that is:
take 2 NICs on the ESX host, use them as one NIC with 2Gb of bandwidth, and carry trunks over this link.
At the moment, in my VC configuration, everything seems to be working with "route based on IP hash". I can migrate and create VMs, but I don't understand whether my bandwidth is now 2Gb. And (maybe a stupid question) I've read about LACP, EtherChannel and so on, and now you tell me that link aggregation is impossible.
Please help me understand.
Ok, I'll try to be more specific.
When a VM starts talking over the network to other VMs, it matters whether these VMs are on the same virtual switch. If VM-A and VM-B are on the same virtual switch in one ESX host and no routing is involved, the traffic will NOT go over the physical NICs! The max throughput can be more than 1Gbit; only memory speed is the limit.
When VM-A starts talking to VM-B (or a physical server) on a different host, then we have network traffic. Even if both ESX hosts have multiple 1Gb NICs, the max throughput of that one stream will always be 1Gb. So even if you bond 4x 1Gb NICs, your max from VM-A to VM-B is 1Gb.
Now when VM-A starts talking to VM-B AND VM-C... depending on the balancing algorithm, you could have 1Gb to VM-B and 1Gb to VM-C, provided the traffic to VM-B goes through nic1 and the traffic to VM-C goes through nic2. But still, the max for any one-to-one stream is 1Gb.
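To make this concrete, here is a minimal sketch of how "route based on IP hash" style uplink selection behaves. This is an illustrative hash, not VMware's exact algorithm: the point is only that one source/destination IP pair always maps to the same NIC, while different destinations can land on different NICs.

```python
# Illustrative sketch of IP-hash uplink selection (NOT VMware's exact
# algorithm): a hash of the source/destination IP pair picks one uplink,
# so a given one-to-one stream always rides a single physical NIC.
import ipaddress

def select_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index from a hash of the IP pair (illustrative)."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# VM-A talking to VM-B always hashes to the same uplink, so that stream
# can never exceed one NIC's bandwidth...
a_to_b = select_uplink("10.0.0.10", "10.0.0.20", 2)
print(a_to_b == select_uplink("10.0.0.10", "10.0.0.20", 2))  # True: same NIC every time

# ...while VM-A talking to VM-C may hash to the other uplink, which is
# where the extra aggregate bandwidth comes from.
a_to_c = select_uplink("10.0.0.10", "10.0.0.21", 2)
print(a_to_b, a_to_c)
```

With more concurrent IP pairs, the streams spread (more or less) across the uplinks, which is exactly why total throughput can exceed 1Gb even though each single stream cannot.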
Conclusion: you will not get more throughput with more NICs for a one-to-one connection. But if you have multiple VMs talking over multiple NICs, you get more bandwidth in total. For fault tolerance it is always best to use multiple NICs for one vswitch.
Hope this helps you. If I'm still not clear, please say so and I'll try to help.
If you've got port aggregation ("Multi Link Trunking" in Nortel-speak) turned on at the Nortel side, and "route based on IP-hash" on the ESX side, you're good to go.
You can verify that you're getting load balancing (more or less) by viewing the TX/RX stats of each bonded vmnic:
Check the TX/RX values for each vmnic in the team. They won't be identical, but they should show traffic being distributed across the team. If the RX counters are not incrementing on one or more vmnics, the aggregation is not set up correctly on the Nortel side.
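The check above boils down to sampling each vmnic's RX counter twice and flagging any NIC whose counter did not move. A minimal sketch, with made-up counter values standing in for the real ESX stats:

```python
# Sketch of the RX-counter check described above. The counter values are
# invented examples; on a real host you would read them from the ESX
# network stats for each vmnic in the team.
def idle_vmnics(before: dict, after: dict) -> list:
    """Return vmnics whose RX counter did not increase between samples."""
    return [nic for nic in before if after[nic] - before[nic] <= 0]

sample1 = {"vmnic0": 1_200_345, "vmnic1": 980_112}
sample2 = {"vmnic0": 1_450_901, "vmnic1": 980_112}  # vmnic1 stuck: no RX

print(idle_vmnics(sample1, sample2))  # ['vmnic1'] -> aggregation likely misconfigured
```

An empty result means all NICs in the team are receiving traffic, which is what a working Multi Link Trunk should show.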