jcorbin
Contributor

Load Based Teaming Sanity Check

Can anyone give me a sanity check on this networking design for vSphere 6.5 (and any opinions on doing it better)?

Each ESXi host will have 4 connections to a primary switch and 4 to a failover

vSphere will use a single distributed switch with multiple portgroups (1 per VLAN)

Every other host will be opposite (i.e. esx01 primary is sw1, failover is sw2; esx02 primary is sw2, failover is sw1; etc.)

I would like to run 4 trunk ports from the 2960s to each ESXi host

vSphere - I'd like to use load-based teaming (LBT) on the 4 connections to the distributed switch

We have an Enterprise Plus license
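For reference, a rough PowerCLI sketch of the teaming change - the vCenter, vDS, and portgroup names below are placeholders, not our real object names:

    # Sketch only - vCenter, vDS, and portgroup names are placeholders.
    Connect-VIServer -Server vcenter.lab.local

    # "Route based on physical NIC load" is the load-based teaming policy
    # (distributed switch / Enterprise Plus only).
    Get-VDPortgroup -VDSwitch "dvSwitch01" -Name "VM Network" |
        Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased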

We will use these VLANs:

     management

     Storage (NFS) - this will be phased out, as we have most VMs running on vSAN datastores

     VM Network (includes outbound connectivity & DMZ View server)

     Test

     Dev
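For illustration, creating those portgroups on the single vDS could look roughly like this in PowerCLI - the vDS name and every VLAN ID below are placeholders, not our real values:

    # Sketch only - the vDS name and all VLAN IDs are placeholders.
    $vds = Get-VDSwitch -Name "dvSwitch01"

    $vlans = [ordered]@{
        "Management"  = 10   # placeholder VLAN ID
        "Storage-NFS" = 20   # placeholder VLAN ID
        "VM Network"  = 30   # placeholder VLAN ID
        "Test"        = 40   # placeholder VLAN ID
        "Dev"         = 50   # placeholder VLAN ID
    }

    # One portgroup per VLAN on the single distributed switch.
    foreach ($pg in $vlans.GetEnumerator()) {
        New-VDPortgroup -VDSwitch $vds -Name $pg.Key -VlanId $pg.Value
    }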

The overall driving factor is that we currently have no routing going on.

We need to be able to get admins/testers/developers from their VDI sessions to Test or Dev; currently VDI is multi-homed, as are most infrastructure services, which is very ugly.

Thanks!

John

4 Replies
daphnissov
Immortal

Ok, trying to follow along. Let me ask questions/make comments in order.

Each ESXi host will have 4 connections to a primary switch and 4 to a failover

So 8 vmnics per ESXi host, correct? 10 GbE or 1 GbE?

Every other host will be opposite (i.e. esx01 primary is sw1, failover is sw2; esx02 primary is sw2, failover is sw1; etc.)

Where odd-numbered ESXi hosts are all active to the same switch and even-numbered ESXi hosts are active to the other switch? Are these switches stacked?

The overall driving factor is that we currently have no routing going on.

We need to be able to get admins/testers/developers from their VDI sessions to Test or Dev; currently VDI is multi-homed, as are most infrastructure services, which is very ugly.

Trying to understand how not having routing (which...how do you not have routing??) is going to be solved by anything you do on the vDS. Sounds like we're missing a piece or two.

arieldaveport
Enthusiast

You would probably not want to pin your hosts to alternating switches like that - mainly because it's more complicated than it needs to be, and secondly because it could cause all your inter-host communication to traverse the stacking link rather than stay on the same switch. In the example below you would have odd-numbered NICs connected to switch 1 and evens to switch 2; if your switches are set up as a single stack, you should be able to set up a port channel across both of them. Dell calls it MLAG; on Cisco it's vPC. If your switches don't support that, you would still cable them that way, but you wouldn't be able to set up a LAG on your vDS and would probably need to pick another load-balancing algorithm.

[attached diagram: odd-numbered NICs cabled to switch 1, even-numbered NICs to switch 2]

-Ariel
arieldaveport
Enthusiast

Just realized that you are using 2960s. If you don't have stacking cables, plug odds into sw1 and evens into sw2. Set up 2-4 trunk ports between the switches, and set the odds to active and the evens to standby; this should keep most of the traffic from having to cross those trunk links. If you really want a LAG, set up two of them, one to each switch.
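Something like this in PowerCLI would do the active/standby split on a dvPortgroup - just a sketch, and the vDS, portgroup, and dvUplink names below are placeholders you'd swap for your own:

    # Sketch only - names are placeholders; map the uplinks so the odd-numbered
    # vmnics (cabled to sw1) are active and the even-numbered ones (sw2) standby.
    Get-VDPortgroup -VDSwitch "dvSwitch01" -Name "VM Network" |
        Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy `
            -ActiveUplinkPort  "dvUplink1","dvUplink3","dvUplink5","dvUplink7" `
            -StandbyUplinkPort "dvUplink2","dvUplink4","dvUplink6","dvUplink8"
    # Repeat (or pipe the other portgroups) for the rest of the VLAN portgroups.

The same failover order can also be set in the web client under the portgroup's Teaming and failover settings.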

-Ariel
jcorbin
Contributor

Ok, trying to follow along. Let me ask questions/make comments in order.

Each ESXi host will have 4 connections to a primary switch and 4 to a failover

So 8 vmnics per ESXi host, correct? 10 GbE or 1 GbE?   1 GbE NICs - 2 quad-port cards in each host

Every other host will be opposite (i.e. esx01 primary is sw1, failover is sw2; esx02 primary is sw2, failover is sw1; etc.)

Where odd-numbered ESXi hosts are all active to the same switch and even-numbered ESXi hosts are active to the other switch? Are these switches stacked?  No - 2960s, no stacking

The overall driving factor is that we currently have no routing going on.

We need to be able to get admins/testers/developers from their VDI sessions to Test or Dev; currently VDI is multi-homed, as are most infrastructure services, which is very ugly.

Trying to understand how not having routing (which...how do you not have routing??) is going to be solved by anything you do on the vDS. Sounds like we're missing a piece or two.

So not having routing is not directly related to the switch configs/setup. We have a Barracuda FW that has 1 link right now; it connects to a 2960, and everything in the lab connects to that 2960. So the 10.x.x networks in Test and Dev never see the 192.168.x network that VDI uses (currently there are NO VLANs). We multi-home the machines the VDI users need to get to and put a 192.168 NIC on those as well. This setup will allow 2 bonded links to come from the FW to the 2960s carrying the public networks, and a link each from the FW to the 2960s for Test and for Dev. This will allow a route to be placed on the FW between these networks; then, using FW rules, we can allow or deny traffic from specific hosts/VDI user workstations.

John
