VMware Cloud Community
marcelovvm
Contributor

Link aggregation in ESXi 6.0U3 (free)

Dear friends,

     I have a question. I have a fileserver VM, Windows 2012 R2, installed on ESXi 6.0U3 (free version), and I would like to increase the network bandwidth by adding one more NIC (link aggregation). Previously this fileserver was installed directly on an HP server with a NIC teaming configuration of 4 interfaces of 1 Gb each, using LACP. Our switch is an HPE 1920-48G Switch JG927A, which supports LACP.

     How can I configure this link aggregation scenario (4 interfaces of 1 Gb each, using LACP) in ESXi? Do I add the 4 interfaces to the same vSwitch and connect it to the VM? Or do I add one NIC to each vSwitch, attach the 4 vSwitches to the fileserver, and then configure NIC teaming inside Windows 2012?

     Remember that we cannot use a distributed switch (vDS), because the free version of ESXi does not include it.

Live long and prosper,
Marcelo Magalhães
Rio de Janeiro - BR
10 Replies
Beingnsxpaddy
Enthusiast

Hi marcelovvm, you do not have to worry about the configuration on the ESXi side: just add all uplinks (physical NICs) to the same vSwitch and make the uplinks active/active.

That will make sure you have the required bandwidth available.

In this screenshot I have kept them active/standby; you need to make them all active.

[Screenshot: vSwitch NIC teaming configuration showing the uplinks]
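
If you prefer the command line (ESXi Shell/SSH) over the host client, here is a minimal sketch; vSwitch0 and vmnic1-vmnic3 are just example names, so substitute whatever your host actually shows:

    # List the physical NICs the host has detected
    esxcli network nic list

    # Attach the additional uplinks to the existing vSwitch (example names)
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

    # Verify the uplinks are attached
    esxcli network vswitch standard list --vswitch-name=vSwitch0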

Regards

Pradhuman

VCIX-NV, VCAP-NV, VCP2X-DCVNV

marcelovvm
Contributor

Ok, I did it. But do I need to do any kind of configuration inside the port group? There is a panel called NIC Teaming in the port group configuration, with Policy Exceptions check boxes and combo boxes. Do I need to configure anything there?

Live long and prosper,
Marcelo Magalhães
Rio de Janeiro - BR
sjesse
Leadership

The defaults are fine for most people.

marcelovvm
Contributor

So just by adding two NICs to my vSwitch and swapping the virtual NIC adapter to VMXNET3 (10 Gb), I already have link aggregation?

And I don't need to configure anything in the NIC Teaming panel?

And on my HPE switch, do I have to create a link aggregation, or do I leave it as it is? How will the physical switch know that it can reach my server (which has an IP configured in NIC Teaming on Windows 2012) through two ports?

Live long and prosper,
Marcelo Magalhães
Rio de Janeiro - BR
sjesse
Leadership

You can't use LACP without a distributed switch, but you do have other options. Take a look below for the requirements:

VMware Knowledge Base

Beingnsxpaddy
Enthusiast

Your physical switch ports should be trunk ports carrying all the VLANs that will pass through that vSwitch. Once you add the server's physical NICs as active/active uplinks, your work is done. There is no need to do anything on the NIC Teaming panel: by default, port groups inherit the uplink configuration from the vSwitch until you override it on a specific port group.

Hence, a simple and robust configuration would be (see the command sketch below):

  • Trunk ports configured on the physical switch. (If you have 4 uplinks, they should be connected to 2 switches: first port of the first NIC to the first switch, second port to the second switch; first port of the second NIC to the first switch, second port to the second switch.)
  • Add all physical NICs in active/active mode in the vSwitch uplink configuration.

No need for complicated configurations. You are good to go.
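
For reference, the same active/active setup can be done from the ESXi Shell; a hedged sketch, again assuming the default vSwitch0 and uplinks named vmnic0-vmnic3 (adjust to your host):

    # Mark all four uplinks as active; leaving --load-balancing out keeps the default policy
    esxcli network vswitch standard policy failover set \
        --vswitch-name=vSwitch0 \
        --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3

    # Confirm the resulting teaming/failover settings
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0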

Regards

Pradhuman

VCIX-NV, VCAP-NV, VCP2X-DCVNV

If my answer resolved your query, don't forget to mark it as "Correct Answer".

marcelovvm
Contributor

I do not use VLANs, and at the moment my concern is not HA but performance. I need to increase the communication bandwidth between the server and the HP switch. When I configure the switch ports for link aggregation with LACP, I lose the connection (which is expected, since I'm not using a vDS). But when I configure the link aggregation without LACP, I also lose the connection between the switch and the server. I do not know what else to do!

Live long and prosper,
Marcelo Magalhães
Rio de Janeiro - BR
sjesse
Leadership

See if the load balancing method in teaming is set to Route Based on IP Hash; there is a command sketch after the excerpt below.

Configure NIC Teaming, Failover, and Load Balancing on a vSphere Standard Switch or Standard Port Gr...

Route based on IP hash

Select an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data at those fields to compute the hash.

IP-based teaming requires that the physical switch is configured with EtherChannel.

Route Based on IP Hash
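
If you try IP hash, the change has to happen on both sides: a static EtherChannel/port trunk (not LACP) on the HPE switch ports, and the IP hash policy on the vSwitch, ideally done from a console session (iLO) so you don't lose access mid-change. A sketch of the ESXi side, assuming vSwitch0, uplinks vmnic0-vmnic3, and the default port group name "VM Network":

    # Switch the vSwitch to IP hash teaming; the physical switch ports must be
    # in a matching static EtherChannel, otherwise connectivity will drop
    esxcli network vswitch standard policy failover set \
        --vswitch-name=vSwitch0 \
        --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3 \
        --load-balancing=iphash

    # Port groups inherit the vSwitch policy unless overridden; check with
    esxcli network vswitch standard portgroup policy failover get --portgroup-name="VM Network"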

sjesse
Leadership

Read both of these, though, and this part of the second one too:

Considerations for Route Based on IP Hash:

Advantages
  • A more even distribution of the load compared to Route Based on Originating Virtual Port and Route Based on Source MAC Hash, as the virtual switch calculates the uplink for every packet.
  • A potentially higher throughput for virtual machines that communicate with multiple IP addresses.

Disadvantages
  • Highest resource consumption compared to the other load balancing algorithms.
  • The virtual switch is not aware of the actual load of the uplinks.
  • Requires changes on the physical network.
  • Complex to troubleshoot.

If the disadvantages outweigh the advantages, just use the defaults: keep standard settings on the switches and have two uplinks. That uses Route Based on Originating Virtual Port; look at the chart below.

Advantages
  • An even distribution of traffic if the number of virtual NICs is greater than the number of physical NICs in the team.
  • Low resource consumption, because in most cases the virtual switch calculates uplinks for virtual machines only once.
  • No changes on the physical switch are required.

Disadvantages
  • The virtual switch is not aware of the traffic load on the uplinks and it does not load balance the traffic to uplinks that are less used.
  • The bandwidth that is available to a virtual machine is limited to the speed of the uplink that is associated with the relevant port ID, unless the virtual machine has more than one virtual NIC.

This also increases bandwidth for the environment, but specific VMs are limited to the bandwidth of one physical NIC.
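
If you stick with the defaults, you can confirm (or revert to) the standard policy from the ESXi Shell and then watch how traffic actually spreads across the uplinks; vSwitch0 is again just the usual default name:

    # Confirm or revert to the default "Route based on originating virtual port"
    esxcli network vswitch standard policy failover set \
        --vswitch-name=vSwitch0 --load-balancing=portid

    # Then run esxtop, press 'n' for the network view, and check the TEAM-PNIC
    # column to see which physical NIC each VM port is currently pinned to
    esxtop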

Beingnsxpaddy
Enthusiast

You don't have to do link aggregation at the switch level; with the teaming policy set to active/active, each port connected to the vSwitch already contributes to the aggregate bandwidth.
