VMware Cloud Community
UnisonNZ
Contributor

Network Configuration Recommendation - Teaming hp bl25p

Guys

We have a couple of hp BL25p blades that we are planning to run VI3 Enterprise on.

Our blade enclosure has the GbE2 interconnect switches; each switch is uplinked to our core switch using link aggregation (802.3ad) over 4 ports per switch.

Each BL25p gets 2 NICs from each switch (giving a total of 4), so I'm thinking of doing one of the following:

Option A

Switch A Port 1 - Service Console

Switch A Port 2 - Virtual Machine Network - NIC Team

Switch B Port 1 - Virtual Machine Network - NIC Team

Switch B Port 2 - VMotion

Option B

Team all four ports together and run all services over the one team.

So going with either option, I have a NIC team that spans two switches. Is any configuration required on the physical switches to allow these to load balance?

Thanks

David

9 Replies
eliot
Enthusiast

I'm just pondering the exact same problem on the same kit.

In the real world, you can't EtherChannel to separate physical switches, so I'm thinking you can't team the cards together down to separate switches (in the chassis) either.

I was considering having two production LAN vSwitches, one connected to each pSwitch (in the chassis), with each vSwitch configured to use the other connection as its standby. That way I'd achieve a static load balance across my pSwitches, with failover, without any EtherChannel.

hope that makes sense...
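For what it's worth, the "static balance plus failover, no EtherChannel" idea can be pictured as an explicit active/standby uplink order: each production connection prefers the NIC wired to its own chassis switch and only falls back to the other when link is lost. Below is a minimal Python sketch of that selection logic; it is purely illustrative, and the NIC names (vmnic0/vmnic1) and link states are made up for the example.

```python
# Toy illustration (not ESX code) of an explicit active/standby uplink order:
# traffic stays on the preferred (active) NIC and only moves to the standby NIC
# when every active NIC has lost link. NIC names and link states are examples.

def pick_uplink(active, standby, link_state):
    """Return the first healthy active uplink, else the first healthy standby."""
    for nic in active + standby:
        if link_state.get(nic, False):
            return nic
    return None  # everything is down

link_state = {"vmnic0": True, "vmnic1": True}

# "Production A" prefers the NIC on chassis switch A; "Production B" prefers switch B.
print(pick_uplink(["vmnic0"], ["vmnic1"], link_state))  # vmnic0 (switch A path)
print(pick_uplink(["vmnic1"], ["vmnic0"], link_state))  # vmnic1 (switch B path)

link_state["vmnic0"] = False  # chassis switch A (or its uplink) fails
print(pick_uplink(["vmnic0"], ["vmnic1"], link_state))  # vmnic1: traffic fails over
```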

esiebert7625
Immortal

Why load balance at the physical switch level? You can do this in a vSwitch instead.

Just create a vSwitch for the Service Console and assign one NIC to it, another vSwitch for the VM Network and assign two NICs to it, and a third vSwitch for VMotion with the last NIC. VMotion requires a dedicated switch port, so you should not team it with anything else. Your Option A looks good; this is exactly how I do it on our ESX servers with 4 NICs.

Option A

Switch A Port 1 - Service Console

Switch A Port 2 - Virtual Machine Network - NIC Team

Switch B Port 1 - Virtual Machine Network - NIC Team

Switch B Port 2 - VMotion
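Purely to make that layout concrete, here is a small Python sanity check. It is not ESX configuration; the vmnic names and the NIC-to-chassis-switch mapping are assumptions for the example (check your enclosure's port mapping before relying on it).

```python
# Toy model of the Option A layout above (not ESX code). Assumed mapping:
# vmnic0/vmnic1 land on chassis switch A, vmnic2/vmnic3 on chassis switch B.

nic_to_switch = {"vmnic0": "A", "vmnic1": "A", "vmnic2": "B", "vmnic3": "B"}

layout = {
    "Service Console": ["vmnic0"],            # Switch A port 1
    "VM Network":      ["vmnic1", "vmnic2"],  # team: Switch A port 2 + Switch B port 1
    "VMotion":         ["vmnic3"],            # Switch B port 2
}

for vswitch, nics in layout.items():
    switches = sorted({nic_to_switch[n] for n in nics})
    shared = [other for other, others in layout.items()
              if other != vswitch and set(others) & set(nics)]
    print(f"{vswitch:15s} uplinks={nics} chassis switches={switches} "
          f"shares NICs with={shared if shared else 'nothing'}")

# Expected: the VM Network team touches both chassis switches (one GbE2 failure
# does not take out VM traffic), and VMotion keeps its own dedicated NIC.
```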

Also check out these networking guides...

http://www.vmware.com/pdf/esx3_vlan_wp.pdf

http://download3.vmware.com/vmworld/2006/TAC9689-A.pdf

http://download3.vmware.com/vmworld/2006/tac9689-b.pdf

eliot
Enthusiast

But the 4 NICs lead to two different physical switches (inside the blade chassis), therefore you can't load balance (EtherChannel) between them, if I understand correctly.

esiebert7625
Immortal

As long as the ESX server can see 4 NICs, you can load balance at the vSwitch level. When you create the vSwitch you just assign those two separate NICs to it; VMware then handles the load balancing, not the physical switch it is plugged into. So you can have two NICs plugged into two separate physical switches and still load balance, because the load balancing is done at the VMware level and not at the physical switch level. Physical NICs can be connected to different physical switches as long as they remain in the same broadcast domain.

eliot
Enthusiast

OK.

I was basing my assumption on the fact that when you connect a pair (or more) of NICs from a vSwitch into a Cisco switch, you create an EtherChannel and then ensure the load-balancing method on the pSwitch matches the vSwitch (IP hash, MAC hash, etc.).

esiebert7625
Immortal

Yes, setting up an EtherChannel or bonded ports on the physical switch is not necessary. If you bond two ports on the switch, you are essentially only presenting one NIC to the server.

UnisonNZ
Contributor

Guys, great information...

How does the physical layer load balance incoming traffic?

Ta

David

esiebert7625
Immortal

I don't think it's true load balancing; once a VM attaches to a particular NIC, it continues to use that same NIC. Pages 23-28 of this doc go into it in detail: http://download3.vmware.com/vmworld/2006/tac9689-b.pdf

Port ID / source MAC based:

- Outbound NIC is chosen based on source MAC or originating port ID

- Client traffic is consistently sent to the same physical NIC until there is a failover

- Replies are received on the same NIC, as the physical switch learns the MAC / switch-port association

- Better scaling if the number of vNICs > the number of pNICs

- A VM cannot use more than one physical NIC unless it has two or more virtual NICs

IP hash based:

- Outbound NIC is chosen based on the source-destination L3 address pair

- Scalability depends on the number of TCP/IP sessions to unique destinations; no benefit for bulk transfer between hosts

- Physical switch will see the client MAC on multiple ports

  - Can disrupt MAC address learning on the physical switch

  - Inbound traffic is unpredictable

- Broadcast / multicast packets return to the VM through other NICs in the team

- Most guest OSes ignore the duplicate packets
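To make those two policies a little more concrete, here is a toy Python sketch. It is not VMware's actual hashing code (the real algorithm isn't reproduced here); it only shows that the port-ID / source-MAC policies pin each virtual NIC to one physical NIC, while IP hash varies the choice per source/destination pair. The uplink names are examples.

```python
import zlib

UPLINKS = ["vmnic1", "vmnic2"]  # the two NICs in the VM Network team (example names)

def uplink_by_port_id(virtual_port_id: int) -> str:
    """Port-ID / source-MAC style: one stable choice per vNIC until a failover."""
    return UPLINKS[virtual_port_id % len(UPLINKS)]

def uplink_by_ip_hash(src_ip: str, dst_ip: str) -> str:
    """IP-hash style: the choice depends on the source/destination address pair."""
    return UPLINKS[zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % len(UPLINKS)]

# A VM on virtual port 7 always leaves on the same pNIC with the default policy...
print(uplink_by_port_id(7), uplink_by_port_id(7))

# ...while IP hash can use different pNICs for different destinations, which is
# why it only helps when there are many sessions to many unique destinations.
print(uplink_by_ip_hash("10.0.0.5", "10.0.1.20"))
print(uplink_by_ip_hash("10.0.0.5", "10.0.2.30"))
```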

jhanekom
Virtuoso

At this point, EtherChannel is impractical for most blade environments using discrete, internal switches.

The concept of "using your NICs more efficiently" sounds very sexy on paper, but in practical terms you will find that very few ESX implementations use a substantial portion of a single Gigabit link on a regular basis (save for say, during backup windows), let alone a single VM filling two. In just about every case, port-based load balancing (which is the default) is more than sufficient in making effective use of your NICs.

Something else to consider: put your Service Console and VMotion functions on the same virtual switch and team the adapters. This isn't ESX 2.x - you have the ability to create a fault-tolerant connection for your service console, so why not use it? If you insist on separating VMotion, put it on a different VLAN.
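For illustration, here is that alternative in the same toy Python form as the earlier layout sketch; it is not ESX configuration, and the vmnic names and VLAN IDs are assumptions for the example.

```python
# Alternative layout: Service Console and VMotion share one vSwitch with two
# teamed NICs (fault tolerance for the console), VMotion on its own VLAN, and
# the remaining two NICs stay teamed for VM traffic. Names/IDs are examples.
alternative = {
    "SC + VMotion vSwitch": {"uplinks": ["vmnic0", "vmnic3"],
                             "port_groups": {"Service Console": 10, "VMotion": 20}},
    "VM Network vSwitch":   {"uplinks": ["vmnic1", "vmnic2"],
                             "port_groups": {"VM Network": 30}},
}

for name, cfg in alternative.items():
    print(name, "uplinks:", cfg["uplinks"], "port groups (VLAN):", cfg["port_groups"])
```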

Also, as to your "Option B", do not team more than two adapters together in a single vSwitch with all of them in an active state; others have reported performnce problems doing so.
