VMware Cloud Community
rbmadison
Contributor

vSphere Teaming Question

We are deploying vSphere ESXi on blades with four 1 Gb NICs connected to Cisco switches that have dual 10 Gb uplinks. We are going to be using the VMware vDS and not the Cisco Nexus 1000V vDS. My plan was to port-channel all 4 ports on the switches for each blade and load balance across the NICs in the port group on the vSwitch. Will this theoretically give me 4 Gb of throughput to each blade? Is this the best way to do this, and what load balancing method should I use? Or does anyone have a different or easier design? We want the most performance we can get out of the system. Forgive me, but I'm not a Cisco or network guru, and I've read a lot of documents from Cisco and VMware, as well as forum posts, but I don't know what the best direction for a vSphere deployment would be.

Thanks!

6 Replies
kjb007
Immortal

I usually don't recommend teaming more than 2 physical NICs together in a bond. While this seems easier to set up in the beginning, when you have to troubleshoot, it gets a bit more difficult to figure out which VM/path is going out which NIC. You can do this now with esxtop and such, but it's still a hassle.

I also don't like mixing VM network traffic with server management/console traffic, so I separate these traffic types by vSwitch, putting server management traffic on one vSwitch and VM traffic on another.

That being said, you can only port-channel interfaces going to the same switch (there are exceptions in the case of stackable switches, but then you're effectively making one switch out of multiple stacked switches), and it sounds like you have more than one switch you are connecting to. A port channel, once configured, only completes the switch-side configuration; you will still need to team the NICs on the ESX side and change the load balancing policy to IP hash to actually get link aggregation working from the ESX side. Remember, with this configuration, you are allowing a VM to use more than a single 1 Gb path to the network, but you're not going to actually see 4 Gb of bandwidth for any single stream. What you will be able to do is talk to 4 different source-destination combinations, at 1 Gb each, simultaneously.
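To see why this balances conversations rather than multiplying the bandwidth of any single one, here's a rough illustration of a source/destination IP hash picking an uplink. This is simplified and not the exact hash ESX uses, and the addresses and uplink names are made up:

```python
# Simplified illustration only: a src/dst-IP hash pins each conversation to
# one uplink, so a single flow never exceeds 1 Gb even with 4 NICs teamed.
import ipaddress

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

def pick_uplink(src, dst):
    # XOR the two 32-bit addresses, then take the result modulo the team size.
    h = int(ipaddress.IPv4Address(src)) ^ int(ipaddress.IPv4Address(dst))
    return UPLINKS[h % len(UPLINKS)]

# One VM talking to several destinations spreads across the team...
for dst in ("10.0.20.11", "10.0.20.12", "10.0.20.13", "10.0.20.14"):
    print("10.0.10.5 ->", dst, "uses", pick_uplink("10.0.10.5", dst))

# ...but a given src/dst pair always hashes to the same uplink, so any one
# conversation is capped at a single 1 Gb link.
print(pick_uplink("10.0.10.5", "10.0.20.11"))
```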

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
s1xth
VMware Employee

kjb-

Would you not recommend teaming two/three NICs on the ESXi side and using VLANs to segregate vMotion, management, and VM traffic on one vSwitch? I have a dedicated iSCSI network/vSwitch, so that will be on its own. What are your thoughts on that?

Thanks

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
rbmadison
Contributor

Thanks for the reply. You are correct that there are 4 switches, and they are stacked. From your comments, is there any actual benefit to teaming the NICs across the stacked switches? Would it be easier to just team them on the vSwitch and load balance there? Since I'm not getting an increase in bandwidth and am only getting dual paths to each VM, I don't know if it's even worth it. Should I just create 2 vSwitches, one for management and one for VM traffic, assign 2 NICs to each vSwitch, and team and load balance that way? What would your design be?

Thanks again for the help.

Rick

kjb007
Immortal


In a perfect situation, I recommend separating as much as is practical while maintaining redundancy. If you had the luxury of an abundance of physical NICs, and only a handful of VLANs over which your VMs travel, then yes, I would separate vMotion from management, and management from VM traffic. That would lead to an ideal of 6 NICs, provided you have one VM network. More practically, with fewer NICs, I would team vMotion and management traffic onto one vSwitch with VLANs, and dedicate one NIC as primary for vMotion and standby for management, and vice versa for the 2nd NIC. That eliminates 2 NICs that would otherwise be required.

I would also make sure to separate iSCSI, as you do, with separate NICs teamed on a separate vSwitch. Then, lastly, I would separate the VM traffic onto its own vSwitch. Depending on how many NICs you have, I would spread the VM traffic over those pNICs. If you only have a couple, then VLANs over the pair work well, which is what I do in some environments; if I have more pNICs available, then I balance my VM traffic over those physical NICs, while still maintaining 2-pNIC teams. It's easier, when it comes time to troubleshoot, to monitor 2 pNICs than 3 or 4. Of course, it all depends on your individual needs.
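If it helps, here's a rough sketch of that vMotion/management layout using the vSphere API (shown with the pyVmomi Python bindings). The host name, credentials, vmnic names, and VLAN IDs are all placeholders; it only lays out the vSwitch and port groups, so the VMkernel interfaces themselves would still need to be added:

```python
# Rough sketch only (vSphere API via pyVmomi). One vSwitch bonded to two
# uplinks, plus two port groups whose active/standby NIC order is reversed,
# so vMotion and management each get their own primary NIC but can fail over.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.com", user="root", pwd="***", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# vSwitch teamed over two physical NICs
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=64,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"]))
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

def portgroup(name, vlan, active, standby):
    # Per-port-group override: one active uplink, the other on standby.
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=[active], standbyNic=[standby]))
    return vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))

netsys.AddPortGroup(portgrp=portgroup("Management", 10, "vmnic0", "vmnic1"))
netsys.AddPortGroup(portgrp=portgroup("vMotion", 20, "vmnic1", "vmnic0"))
# The VMkernel interfaces themselves (AddVirtualNic) are omitted here.
Disconnect(si)
```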

With stacked switches, you do have the ability to etherchannel/aggregate your switch ports across switches, so you would get the benefit of link aggregation. To get the proper configuration for both transmit and receive load balancing, you need to configure both the switch (with an ether/port channel), which gives you the Rx load balancing, and team the NICs together on the vSphere side using IP hash as your load balancing policy, which gives you the Tx load balancing. With this config, you will be able to utilize more than one pNIC for your VM traffic. That being said, the algorithm balances VM source-destination combinations over the physical NICs in the team. So you're not really getting 2 Gb of bandwidth, per se, by using 2 pNICs; you have 2 x 1 Gb individual paths to the network.
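On the ESX side, setting that IP hash policy on an existing vSwitch through the API looks roughly like this. Again, just a sketch: host, credentials, and vSwitch name are placeholders, and it assumes you can still reach the host while making the change:

```python
# Sketch only (vSphere API via pyVmomi): switch an existing vSwitch to the
# IP hash policy so it matches a static EtherChannel on the physical switch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.com", user="root", pwd="***", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

for vss in netsys.networkInfo.vswitch:
    if vss.name == "vSwitch0":
        spec = vss.spec
        if spec.policy is None:
            spec.policy = vim.host.NetworkPolicy()
        if spec.policy.nicTeaming is None:
            spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
        spec.policy.nicTeaming.policy = "loadbalance_ip"  # "Route based on IP hash"
        netsys.UpdateVirtualSwitch(vswitchName=vss.name, spec=spec)

Disconnect(si)
```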

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
rbmadison
Contributor

Thanks for the help. The information is really making it easier to understand. We are doing some testing and we are running into issues. This is what we have done:

We have a blade chassis with 4 physical switches stacked together. Each blade has 4 NICs, and a port-channel group has been created on the physical switch side for all 4 ports each blade is connected to. We tried booting ESXi 4.0 U1, selected all 4 NICs for the management network, and set a static IP on a VLAN that is accessible on that port-channel group, but we can't connect. Since this is ESXi, I cannot go to the console to set the load balancing on the host to IP hash. Does anyone have any suggestions, or is this a configuration that isn't suggested or supported? I really appreciate everyone's help!

Rick

kjb007
Immortal

What are your port-channel settings on the switch? You will need to set your port-channel mode to "on". It cannot use LACP or any other dynamic protocol; ESX only supports static port-channel configuration.

You can start by disabling all but 1 NIC, and set your configuration that way. This is one of those difficult-to-troubleshoot types of scenarios I wrote about above.
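Once you can reach the host again over that single NIC, something like this (a sketch via the vSphere API with pyVmomi; host and credentials are placeholders) will show you which uplinks and load-balancing policy each vSwitch is actually using before you re-enable the other ports:

```python
# Sketch only (vSphere API via pyVmomi): read back what each vSwitch is
# actually using, so you can confirm uplinks and load-balancing policy.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.com", user="root", pwd="***", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netinfo = host.configManager.networkSystem.networkInfo

for vss in netinfo.vswitch:
    teaming = vss.spec.policy.nicTeaming if vss.spec.policy else None
    print(vss.name,
          "| uplinks:", list(vss.pnic),          # keys include the vmnic names
          "| policy:", teaming.policy if teaming else "(default)")

for pnic in netinfo.pnic:
    link = pnic.linkSpeed                         # None when the link is down
    print(pnic.device, "link:", "%s Mb" % link.speedMb if link else "down")

Disconnect(si)
```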

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB