VMware Cloud Community
dellboy
Enthusiast

Multiple Bonded Pairs and NIC allocation

I have 10 x GigE ports available on my PE2950 III running ESX 3.5.0 Update 1 (2 onboard, plus 2 quad-port GigE cards): 6 to divvy up between Service Console(s) and VMkernel(s) for iSCSI and VMotion, and 4 for VM traffic. I was hoping to use 802.3ad Link Aggregation to bond pairs of GigE NICs together for faster throughput.
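For reference, the vmnic numbering I use below is an assumption: the two onboard ports usually enumerate first (vmnic0, vmnic1) and the two quad-port cards after them (vmnic2-5, vmnic6-9). It can be confirmed from the service console:

    esxcfg-nics -l    # lists each vmnic with its driver, PCI bus/slot, speed and link state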

VM TRAFFIC

Is it possible to create two bonded pairs using 802.3ad Link Aggregation and add them both to the same vSwitch?

Option 1

If this is at all possible, I'm guessing we may have to create two port groups: one using the two pNICs in Bond1, the other using the two pNICs in Bond2 (both using Route based on IP hash, and following the physical switch and port configuration requirements).
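If it is possible, I imagine the plumbing would look roughly like this from the service console (just a sketch: the vSwitch and port group names are placeholders, and the per-port-group NIC assignment and the Route based on IP hash policy would still be set in the VI Client, since esxcfg-vswitch doesn't manage teaming policy):

    esxcfg-vswitch -a vSwitch1                       # one vSwitch carrying all VM traffic
    esxcfg-vswitch -L vmnic2 vSwitch1                # Bond1 uplinks
    esxcfg-vswitch -L vmnic6 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1                # Bond2 uplinks
    esxcfg-vswitch -L vmnic7 vSwitch1
    esxcfg-vswitch -A "VM Network Bond1" vSwitch1    # port group restricted to the Bond1 pNICs (override in the VI Client)
    esxcfg-vswitch -A "VM Network Bond2" vSwitch1    # port group restricted to the Bond2 pNICs (override in the VI Client)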

Option 2

Or would I have to create two separate VM vSwitches, one for each bonded pair?
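The Option 2 version would just be the same pairs split onto their own vSwitches (again, names are placeholders, and the IP hash teaming policy is set per vSwitch in the VI Client):

    esxcfg-vswitch -a vSwitch1                       # Bond1 gets its own vSwitch
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic6 vSwitch1
    esxcfg-vswitch -A "VM Network 1" vSwitch1
    esxcfg-vswitch -a vSwitch2                       # Bond2 gets its own vSwitch
    esxcfg-vswitch -L vmnic3 vSwitch2
    esxcfg-vswitch -L vmnic7 vSwitch2
    esxcfg-vswitch -A "VM Network 2" vSwitch2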

Option 3...?

iSCSI TRAFFIC

Well, we did want to do the same here: use two bonded pairs with the software iSCSI initiator. However, after scouring the communities, it looks like this isn't possible. Additionally, although 802.3ad is supported for iSCSI, it appears only one path is ever used (even with Route based on IP hash and the physical ports configured).

With that, what options do I have for creating redundant paths between the ESX host and the two physical switches? By the sound of it, I might be better off just using the default load balancing, splitting the physical NICs across the two physical switches, and not using Link Aggregation at all; but not having done this before, I'm not completely sure what my options are.
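In case it helps, this is roughly what I'm picturing for the no-Link-Aggregation approach (IP addresses and names are placeholders): one VMkernel port plus the second Service Console port the ESX 3.x software initiator needs, on a vSwitch with one uplink patched to each physical switch and the default port-ID teaming left in place:

    esxcfg-vswitch -a vSwitch3                        # iSCSI vSwitch, no 802.3ad
    esxcfg-vswitch -L vmnic4 vSwitch3                 # one uplink to each physical switch
    esxcfg-vswitch -L vmnic8 vSwitch3
    esxcfg-vswitch -A "iSCSI" vSwitch3
    esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 "iSCSI"    # VMkernel port for the software initiator (placeholder IP)
    esxcfg-vswitch -A "Service Console 2" vSwitch3
    esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.0.1.11 -n 255.255.255.0    # SC port on the iSCSI network (placeholder IP)
    esxcfg-swiscsi -e                                 # enable the software iSCSI initiator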

VMKernel and Service Console

The two remaining ports were going to be used for the Service Console, which would possibly share VMotion. However, given my thoughts on iSCSI above, it may be better to dedicate two pNICs to iSCSI (scrapping 802.3ad) and two to VMotion (using 802.3ad), along the lines of the allocation below (a rough service console sketch follows the list):

10 x GigE (2 onboard, 2 quad-port cards):

  • Service Console (vmnic0, vmnic1)

  • Virtual Machines (Bond1: vmnic2, vmnic6; Bond2: vmnic3, vmnic7)

  • iSCSI/Service Console2 (vmnic4, vmnic8)

  • VMotion (Bond3: vmnic5, vmnic9)
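Sketched out from the service console (the Service Console vSwitch and vswif0 already exist on a default install, so that part is only illustrative; the iSCSI piece would be the same as the sketch in the iSCSI section above; names, IPs and vSwitch numbers are placeholders, and VMotion itself is enabled on the VMkernel port in the VI Client):

    esxcfg-vswitch -L vmnic1 vSwitch0                 # add the second uplink to the default Service Console vSwitch
    esxcfg-vswitch -a vSwitch4                        # VMotion (Bond3)
    esxcfg-vswitch -L vmnic5 vSwitch4
    esxcfg-vswitch -L vmnic9 vSwitch4
    esxcfg-vswitch -A "VMotion" vSwitch4
    esxcfg-vmknic -a -i 10.0.2.10 -n 255.255.255.0 "VMotion"    # VMkernel port; tick 'Use this port group for VMotion' in the VI Client (placeholder IP)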

Any insight/experiences/recommendations would be greatly appreciated (my apologies for this turning into more than just a question about bonded pairs!)

Thanks

Matt


2 Replies
dellboy
Enthusiast

OK, so you can only have one software iSCSI initiator per ESX host, and given that you have to patch a bonded pair into the same switch (or switch stack), you could potentially lose your redundant paths.

As such, I'm inclined to go with the NIC allocation in the VMKernel and Service Console section above, although I'm not sure bonded pairs are necessary for VMotion.

kjb007
Immortal

Bonded pairs are not really "required" for anything. They help spread load across physical interfaces and provide redundancy at both the interface and the switch level.

That being said, separating all functions using pairs of NICs is a good idea, especially for your VM networks: different VMs will go out different NICs, so no single interface becomes a bottleneck.

I have 6 NICs available to me, and I use 1 pair for service console/VMotion, 1 pair for VM network 1, and 1 pair for VM network 2. You also get good separation if you split into multiple vSwitches, so keep your management traffic (SC/VMkernel) on separate vSwitches from your VMs.
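Roughly, that split looks like this on my hosts (port group names, vmnic numbers and the IP are just illustrative):

    esxcfg-vswitch -L vmnic1 vSwitch0                 # second uplink for the SC/VMotion vSwitch (vmnic0 is there by default)
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vmknic -a -i 10.0.2.20 -n 255.255.255.0 "VMotion"    # VMkernel port; enable VMotion on it in the VI Client
    esxcfg-vswitch -a vSwitch1                        # VM network 1 pair
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VM Network 1" vSwitch1
    esxcfg-vswitch -a vSwitch2                        # VM network 2 pair
    esxcfg-vswitch -L vmnic4 vSwitch2
    esxcfg-vswitch -L vmnic5 vSwitch2
    esxcfg-vswitch -A "VM Network 2" vSwitch2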

Good luck,

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise