Dear friends,
I have a question. I have a VM file server, Windows Server 2012 R2, installed on ESXi 6.0 U3 (free version), and I would like to increase network bandwidth by adding more NICs - link aggregation. Previously this file server was installed directly on an HP server with a NIC teaming configuration of 4 × 1 Gb interfaces using LACP. Our switch is an HPE 1920-48G (JG927A), which supports LACP.
How can I configure this link aggregation scenario (4 × 1 Gb interfaces using LACP) in ESXi? Should I add the 4 interfaces to the same vSwitch and connect it to the VM? Or should I add one NIC to each of 4 vSwitches, plug the 4 vSwitches into the file server, and then configure NIC teaming inside Windows 2012?
Keep in mind that we cannot use a vDS, because the free version of ESXi does not include the Distributed Switch.
Live long and prosper,
Marcelo Magalhães
Hi marcelovvm, you do not have to worry about the configuration with ESXi: just add all uplinks (physical NICs) to the same vSwitch and make the uplinks active/active.
That will make sure you have the required bandwidth available.
In this screenshot I have kept them active/passive; you need to make them all active.
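For reference, the same active/active uplink setup can also be done from the ESXi shell. This is only a sketch: the vSwitch name `vSwitch0` and the `vmnic0`–`vmnic3` names are assumptions, so check yours first with `esxcli network nic list`.

```shell
# Add the extra physical NICs as uplinks on the standard vSwitch
# (vmnic0 is assumed to be attached already)
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# Make all four uplinks active (none standby)
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3
```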
Regards
Pradhuman
VCIX-NV, VCAP-NV, VCP2X-DCVNV
Ok, I did it. But do I need to do any kind of configuration inside the port group? There is a panel called NIC Teaming in the port group configuration, with Policy Exceptions check boxes and combo boxes. Do I configure anything here?
The defaults are fine for most people.
So, just by adding two NICs to my vSwitch and swapping the virtual adapter to VMXNET3 (10 Gb), I already have link aggregation?
And I don't configure anything in the NIC Teaming panel?
And on my HPE switch, do I have to create a link aggregation, or do I leave it as is? Will the physical switch know that it can reach my server (which has an IP configured in NIC Teaming on Windows 2012) through two ports?
Live long and prosper,
Marcelo Magalhães
You can't use LACP without a Distributed Switch, but you do have other options. Take a look below at the requirements.
Your physical switch ports should be trunked, with all VLANs that will pass through that vSwitch available. Once you add the server's physical NICs as uplinks in active/active mode, your work is done. No need to do anything on the NIC Teaming panel: by default, port groups inherit the uplink configuration of the vSwitch until you override it specifically on a port group.
Hence a simple and robust configuration would be:
Trunk ports configured on the physical switch. (With 4 uplinks, they should be spread across 2 switches for redundancy, alternating: the first and third NICs to the first switch, the second and fourth to the second switch.)
Add all physical NICs in active/active mode in the vSwitch uplink configuration.
No need to do complicated configurations. You are good to go.
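To confirm the result, the vSwitch layout and teaming policy can be inspected from the ESXi shell. A sketch, assuming the vSwitch is named `vSwitch0` (adjust to your host):

```shell
# List all vSwitches with their uplinks and port groups
esxcfg-vswitch -l

# Show the teaming/failover policy for the vSwitch, including
# which uplinks are active and the load-balancing method
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```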
Regards
Pradhuman
VCIX-NV, VCAP-NV, VCP2X-DCVNV
If my answer resolved your query, don't forget to mark it as "Correct Answer".
I do not use VLANs, and at the moment my concern is not HA but performance. I need to increase the communication bandwidth between the server and the HPE switch. When I configure the switch ports for link aggregation with LACP, I lose the connection (which is expected, since I'm not using a vDS). But when I configure link aggregation without LACP, I also lose the connection between the switch and the server. I don't know what else to try!
See if the load-balancing method in teaming is set to Route Based on IP Hash.
Route based on IP hash: selects an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data in those fields to compute the hash. IP-based teaming requires that the physical switch is configured with EtherChannel.
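To see how IP-hash teaming spreads traffic, here is a simplified model in Python. The XOR-and-modulo hash below is an assumption for illustration only (ESXi's actual hash computation differs), but the behavior it shows is the point: each source/destination pair is pinned to one uplink, so a single flow never exceeds one NIC's speed and many clients are needed to fill all links.

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Toy model of 'Route based on IP hash' (not VMware's exact hash)."""
    # Hash the source/destination address pair, then map it onto one uplink.
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return uplinks[(s ^ d) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

# A given src/dst pair always lands on the same uplink:
assert pick_uplink("10.0.0.5", "10.0.0.200", uplinks) == \
       pick_uplink("10.0.0.5", "10.0.0.200", uplinks)
```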
Read both of these, though, and this part of the second one too:
(Considerations table from the VMware documentation: advantages and disadvantages of Route Based on IP Hash.)
If the disadvantages outweigh the advantages, just use the defaults and standard settings on the switches, with two uplinks. That uses Route Based on Originating Virtual Port; look at the chart below.
(Considerations table from the VMware documentation: advantages and disadvantages of Route Based on Originating Virtual Port.)
This also increases bandwidth for the environment, but a specific VM is limited to the bandwidth of one physical NIC.
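That per-VM limit can be seen in a toy model of Route Based on Originating Virtual Port. The modulo mapping below is an assumption for illustration (ESXi's real scheme differs), but the behavior matches: each vNIC's virtual port is pinned to one uplink.

```python
def uplink_for_port(port_id: int, uplinks: list) -> str:
    # Toy model: the uplink is derived from the vNIC's virtual port ID,
    # so all traffic from one vNIC uses a single physical NIC.
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]

# Different VMs (different port IDs) spread across the NICs...
assert {uplink_for_port(p, uplinks) for p in range(4)} == {"vmnic0", "vmnic1"}
# ...but one VM's vNIC is always pinned to the same NIC.
assert uplink_for_port(7, uplinks) == uplink_for_port(7, uplinks)
```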
You don't have to do link aggregation at the switch level: with the teaming policy set to active/active, the vSwitch already aggregates across the ports connected to it.