VMware Cloud Community
ikt
Enthusiast

Optimal teaming/aggregate vSwitch ports against HP switch

I have only Essentials Plus so I'm stuck with vSwitches 😐

However, I have a dual-port copper NIC which I want to team/aggregate against an HP switch. What are the optimum settings on both sides to achieve the best performance?  LACP or not, etc...   I hope I will see more than 1 Gb of throughput over this link...?

Thanks a lot for comments and advice

Tor

11 Replies
nileshm
Enthusiast

Please refer to LACP Support on a vSphere Distributed Switch.

In my opinion LACP is the best option. But please note that LACP is supported only on the vSphere Distributed Switch; the vSphere Standard Switch does not support LACP.

Thanks!

HassanAlKak88
Expert

Hello,

Kindly check the following vSphere Networking document: https://docs.vmware.com/en/VMware-vSphere/6.5/vsphere-esxi-vcenter-server-65-networking-guide.pdf

Please consider marking this answer "CORRECT" or "Helpful" if you think your question has been answered correctly.

Cheers,

VCIX6-NV|VCP-NV|VCP-DC|

@KakHassan

linkedin.com/in/hassanalkak


If my reply was helpful, I kindly ask you to like it and mark it as a solution

Regards,
Hassan Alkak
ikt
Enthusiast

nileshm: 

I stated expressly that I'm on the Essentials Plus license and have no Distributed Switch option, so you'll understand that LACP is not an option for me.

ikt
Enthusiast

I have read this document and know these principles. 

But I hoped to get tips and practical experience on optimum settings for both sides when teaming vSwitch ports against HP switch ports (since LACP is not an option for me here).

regards Tor

daphnissov
Immortal

Your best option is going to be the option that's available within that license level, which is to simply use "Route Based on Originating Virtual Port ID". This requires no special upstream configuration. A VM's vNIC will select one physical adapter in the team and another VM will select another. You're still limited to the link speed of a single vmnic, but that's no different than if you used a static LAG.
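For reference, that policy can also be applied from the ESXi shell. This is a sketch; vSwitch0 and vmnic0/vmnic1 are assumptions about your setup, so substitute your own names:

```
# Set the load-balancing policy to "Route Based on Originating
# Virtual Port ID" (portid) and mark both ports of the dual-port
# card as active uplinks.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=portid \
    --active-uplinks=vmnic0,vmnic1

# Verify the resulting policy.
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```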

ikt
Enthusiast

Ok, that was useful info.

But what if more VMs need network access than there are NICs in the team?  Will two VMs have to share one team NIC then?

regards Tor

daphnissov
Immortal

Yes, multiple VMs will share a given uplink (selected by ESXi) in that case. However, again, this is not unique. All other teaming algorithms would behave the same way, so if you want to spread that out you'd simply add more vmnics to your team and put them all in an active state. ESXi will take it from there.
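As a toy illustration (not ESXi's actual implementation), originating-port-ID teaming behaves roughly like a modulo mapping from virtual port to uplink, so with more VM ports than uplinks, some uplinks necessarily carry more than one VM:

```shell
#!/bin/sh
# Toy model: 6 virtual ports spread over 2 uplinks.
# Ports 0, 2, 4 land on vmnic0; ports 1, 3, 5 land on vmnic1.
UPLINKS=2
for PORT in 0 1 2 3 4 5; do
    echo "virtual port $PORT -> vmnic$((PORT % UPLINKS))"
done
```

Adding a third or fourth active vmnic simply widens the pool the mapping spreads over.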

Dave_the_Wave
Hot Shot

I asked a question about this too a while ago, my screenshots may help you:

How to use vmnic0 and vmnic1 properly for performance?

What I did was team up all my physical adapters and then patch them all into an HP JE009A, all default settings. I was going to set up some VLANs and other things, but so far it works great out of the box.

ikt
Enthusiast

Thanks for all the useful input and comments.  I have just one final question:

I have four 'free' Gb copper NIC uplinks.  Would it be bad practice to add ALL of those to vSwitch0 (which already contains the default port groups VM Network and Management Network) AND add 6-7 VM networks (using VLAN IDs) to the same switch?  That puts a lot of connections on this one switch (could it cause some kind of congestion?), but the advantage is that we get four uplinks to distribute the load.  Any comments on this..?

My iSCSI traffic is on separate port groups/vSwitches, so it's not part of this scenario.
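If you do go that route, the setup can be sketched from the ESXi shell like this (the vmnic numbers, port-group name, and VLAN ID are made-up examples; check your host with `esxcli network nic list` first):

```
# Add two of the free NICs as extra uplinks on vSwitch0.
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# Create a VM port group and tag it with a VLAN ID.
esxcli network vswitch standard portgroup add --portgroup-name="VM Network 10" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VM Network 10" --vlan-id=10
```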

daphnissov
Immortal

Do you *need* this many vmnics connected? If you aren't seeing congestion then don't make your infrastructure more complex. Separating virtual machine port groups through VLAN tags is fine, just remember that each upstream port needs to be a trunk with all the same VLANs allowed.
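On the HP side, "a trunk with all the same VLANs allowed" looks roughly like this in ProCurve-style CLI. This is a sketch under assumptions: VLAN IDs 10 and 20 and ports 1-4 are examples, and the exact syntax varies by switch model, so check your model's manual:

```
vlan 10
   tagged 1-4
   exit
vlan 20
   tagged 1-4
   exit
```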

ikt
Enthusiast

I just wondered if it was bad practice to mix vmkernel port groups and VM network port groups on a single vSwitch with four uplinks.

The alternative is to assign two uplinks to the vmkernel groups and the remaining uplinks to the VM network groups...

In my opinion that is a more restrictive solution, but I don't know what you big guys think about this 🙂
