ChristianWickha
Contributor

Multi-gigabit uplinks to VMs with Flex-10 LACP

I have the following hardware;

HP BL 490 G6 with Flex-10 (Firmware 2.12) connected to Cisco 3750 core switches (4 in a stack)

ESX 4.0 with Virtual Center 4, vNetwork Distributed Switch.

The BL 490 blades have 2x LOMs, which have been set up as 4x 5Gbps links.

Both Flex-10 modules have 3 uplinks to the Cisco 3750 switches, each at 1Gbps, all in one vNet. These are assigned to LOM1-a and LOM2-a (we have other links assigned to LOM1-b and LOM2-b).

We are trying to get multi-Gigabit connectivity to Virtual Machines in ESX4, using the above infrastructure. In HP Virtual Connect, we can see that 3 links are active (and 3 are standby) - so we would expect at least 3Gbps. These links have been configured on the Cisco switch with:

interface Port-channel17
 description HP Blades
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface GigabitEthernet1/0/49
 description HP Blades
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-protocol lacp
 channel-group 17 mode active
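Only one member port is shown in that excerpt; the remaining uplinks follow the same pattern, roughly like this (GigabitEthernet1/0/50 and 1/0/51 are placeholders for whichever ports the other uplinks actually use):

! Placeholder port numbers - substitute the actual uplink ports
interface GigabitEthernet1/0/50
 description HP Blades
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-protocol lacp
 channel-group 17 mode active
!
interface GigabitEthernet1/0/51
 description HP Blades
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-protocol lacp
 channel-group 17 mode active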

In the ESX console, we can see that the network cards are operating at 5000 Full Duplex, but none of the virtual machines are getting more than 1Gbps.

We have two network cards in a team in the dvPortGroup. Each network card is set to 5Gbps and has 6x 1Gbps uplinks behind it (3 active, 3 have gone into standby mode), and the network adapters are configured for "Route based on IP hash" (we also tried "Route based on originating port").
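For reference, a couple of checks on the Cisco side should show whether the three active uplinks have actually joined channel-group 17 (the 3750-stack prompt is just a placeholder for the switch name):

3750-stack# show etherchannel 17 summary
3750-stack# show lacp 17 neighbor

In the summary output, member ports flagged (P) are bundled into the port-channel; ports shown as stand-alone or suspended would mean the links are not actually aggregating.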

Can anyone help us to get more than 1Gbps to each VM?

3 Replies
jpdicicco
Hot Shot

What type of NIC are you using in your VMs? What are your vmnic speed settings on your virtual switches?

As an aside, why have you set up your network to have standby adapters at the VC level? If you create separate vNets, you can use them all at once. From VMware, you just set the load-balancing to Port ID or MAC hash, and you don't have MAC address table issues on your switch. I'm doing this with 1Gb VC modules...

JP

Happy virtualizing! JP
Please consider awarding points to helpful or correct replies.
ChristianWickha
Contributor

Err, I said it twice in my original posting - the speed is 5Gbps and it shows as 5000 Full.

The vmnic is set to Auto negotiate. Options are either Auto Negotiate, 1Gbps or 10Gbps.

None of our adapters at a VC level are set to standby, but when we have multiple links going to one adapter, there are standby links.

When you set your 1Gb VC modules to use multiple links, do you get more than 1Gbps? If so, I will copy your configuration settings.

jpdicicco
Hot Shot

Sorry for not being more clear. I'm asking what type of NIC you are using in your guest OS (i.e. VMXNET, e1000, etc.). The virtual h/w can determine what NIC speed shows to the guest OS. Am I correct in understanding that your issue is the guest OS connection speed and not the vmnic/pnic speed to the VC module?

For the VC config, the key is to not use VC for redundancy directly, but to use VMware's failover capabilities. I'll try to explain it here with the why's, but there are many relevant parts:

Physical setup-

  • We have 8 interconnects: 1,2,5,6,7,8 are 1/10G ethernet; 3,4 are FC

  • Our hosts have the 2x 1G LOM interfaces and a quad-port card in Mezz 2, with one interface connecting to each of bays 5-8

I have 1 Shared Uplink Set (SUS) for each VC module (since we cannot have active ports in 1 SUS from 2 different modules; I'm not sure how this breaks out on Flex-10 modules). This is the key to keeping all ports active in our config, that we have 1 SUS per group of uplinks that can be trunked as active ports.

On the switch side, we have a Cisco Cat6500, using etherchannel for aggregation. An example config is attached.
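In rough outline, each group of uplinks from one VC module lands in its own etherchannel trunk. The sketch below is only the general shape, not the attached config: the interface and channel-group numbers are placeholders, and LACP (mode active) is assumed.

! One etherchannel per VC module / Shared Uplink Set - placeholder numbers
interface Port-channel10
 description VC module 1 uplinks (SUS 1)
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface GigabitEthernet3/1
 description VC module 1 uplink
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 10 mode active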

In the VMware networking config, we have 1 LOM vmnic (vmnic0) and 1 vmnic from the mezzanine card (vmnic2) assigned to our standard vSwitch. This vSwitch carries our SC and VMotion connections, each with an active/standby order as follows: SC = vmnic0/vmnic2; VMotion = vmnic2/vmnic0. So under normal conditions, each port group has a dedicated vmnic (best practice).

The remaining vmnics are on a dvSwitch. All port groups use all vmnics active/active, with Port ID load balancing.

Summary / rules governing our setup-

  • 1 SUS per interface or group of interfaces that can be active/active

  • For each SUS that will be on a given vSwitch, every VLAN ID must be in each SUS (example in attached file "SUS networks list.png")

  • VMware load balancing CANNOT be set to IP hash. IP hash will cause MAC thrashing on your physical switch! (A quick switch-side check for this is sketched after the downsides list.)

Upsides-

  • All VC interfaces are active and usable at any given time

  • Load sharing occurs based on vSwitch "load balancing" setting of Port ID

Downsides-

  • The stacking links go unused (not such a big deal)

  • When all uplinks in a module go offline, our vmnic is set to link down (based on Smart Link being enabled). Thus, we lose 1 uplink worth of bandwidth to the VC modules as well.

  • Lots of manual setup in VC.
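Regarding the IP-hash / MAC thrashing point above: a quick way to spot it is to watch which port the physical switch is learning a given VM's MAC address on; if it keeps moving between ports, you have thrashing. The MAC and prompt below are placeholders, and on some IOS releases the keyword is spelled mac-address-table:

! Run this a few times and watch whether the learned port changes
6500# show mac address-table address 0050.56aa.bbcc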

Happy virtualizing! JP
Please consider awarding points to helpful or correct replies.