Hi there,
I have three ESXi 5.5 hosts running in vCenter Server. They all have the following network card configuration:
2 x NICs = Management Network
2 x NICs = Storage vMotion Network
2 x NICs = DATA VMs network (the virtual machines)
All hosts are set to "Route based on IP hash" for NIC teaming, with network failover detection set to "Link status only".
They are all connected to a Cisco Catalyst 3850 L3 switch stack.
The switch configuration is like this:
interface GigabitEthernet1/0/1
 description ESX1
 switchport access vlan 1014
 switchport mode access
 channel-group 1 mode active
 spanning-tree portfast
!
interface GigabitEthernet1/0/2
 description ESX1
 switchport access vlan 1016
 switchport mode access
 channel-group 2 mode active
 spanning-tree portfast
!
interface GigabitEthernet1/0/3
 description ESX1
 switchport mode trunk
 channel-group 4 mode active
 spanning-tree portfast trunk
end
The second switch in the stack is configured exactly the same.
From the documentation I have been reading, because I don't have a vSphere Distributed Switch I have to use a static EtherChannel with the following configuration:
channel-group XX mode on
But when I change the ports above on both stack members, I lose connectivity to one of the storage arrays. I haven't had the nerve to test the host with virtual machines on it, as this is a production system and I would need to schedule downtime.
All hosts are presently set with:
channel-group XX mode active
which means the ports never actually bundle, and I occasionally get MAC flap messages on the switch side.
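For reference, IP-hash teaming on a standard vSwitch is paired with an unconditional (static) bundle on the switch side, so the change being described would look something like this for one host's management pair (interface, channel, and VLAN numbers are simply the ones from the example above, not a recommendation):

```
interface range GigabitEthernet1/0/1-2
 description ESX1 mgmt
 switchport access vlan 1014
 switchport mode access
 channel-group 1 mode on
 spanning-tree portfast
```

Note that with "mode on" neither side negotiates, so the bundle only works if the ESXi side is already set to IP hash; changing one side before the other can itself cause a brief loss of connectivity.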
Can any of you experts offer some advice on this?
Are all these port groups on the same vSwitch, or is there a separate vSwitch for each port group? If they are all on the same vSwitch, the problem is that VMware only supports a single port channel per vSwitch, and based on your configuration you have three port channels (Po1, Po2, and Po4).
Hi,
Yes they are all on separate vSwitches, I missed that part out.
2 NICs per vSwitch and 1 port channel per vSwitch
With Cisco Catalyst switches you should be using channel-groups for LACP:
interface range GigabitEthernet1/0/1-2
description ESX1
channel-group 1 mode active
or....
interface GigabitEthernet1/0/1
description ESX1
channel-group 1 mode active
interface GigabitEthernet1/0/2
description ESX1
channel-group 1 mode active
Then....
interface po1
description ESX1
switchport mode trunk
switchport trunk allowed vlan 1014,1016
The first group of commands sets the two ports to act together as a single logical port. The second group of commands sets the parameters for that group.
Hi, what you have stated are basic IOS commands, which I am quite familiar with.
My question is: for "access" and "trunk" ports, what configuration works well with multiple ESXi hosts?
Something just came to me and I cannot believe I missed this:
I have 3 hosts:
Host 1:
Mgmt network: VLAN 15, 2 x NICs, vSwitch 1. I bundle all the NICs from vSwitch 1 on every host into one logical port channel on the Cisco switch.
vMotion network: VLAN 20, 2 x NICs, vSwitch 2. I bundle all the NICs from vSwitch 2 on every host into one logical port channel on the Cisco switch.
Virtual machine network: VLAN 25, 2 x NICs, vSwitch 3. I bundle all the NICs from vSwitch 3 on every host into one logical port channel on the Cisco switch.
Host 2:
Mgmt network VLAN 15 x 2 NICs vSwitch 1
vMotion network VLAN 20 x 2 NICs vSwitch 2
Virtual machine network VLAN 25 x 2 NICs vSwitch 3
Host 3:
Mgmt network VLAN 15 x 2 NICs vSwitch 1
vMotion network VLAN 20 x 2 NICs vSwitch 2
Virtual machine network VLAN 25 x 2 NICs vSwitch 3
Is the above config correct?
You are still limited to one EtherChannel per vSwitch.
In all of my clusters we set up at least three vSwitches, each with two NICs connected to Cisco equipment, with all of the ports configured as trunks with specific allowed VLANs.
For example, our management vSwitch has two vmnics assigned to it (nic1 and nic2). Those are linked to Cisco switches with each port set to trunk (802.1Q). We then specify that only our mgmt VLAN is allowed. We set the native VLAN to 999 and don't allow it on the trunk (to prevent hopping between VLANs). We then let ESXi pick which NIC to use.
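A sketch of one such uplink port as described above (using VLAN 15 as the mgmt VLAN from the earlier posts; 999 is the unused native VLAN, and it is deliberately left out of the allowed list):

```
interface GigabitEthernet1/0/1
 description ESXi mgmt uplink - no channel-group, ESXi handles failover
 switchport mode trunk
 switchport trunk native vlan 999
 switchport trunk allowed vlan 15
 spanning-tree portfast trunk
```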
We stopped using LACP and EtherChannels back at 5.0. It was simply too much trouble and not worth the added complexity. Instead we just hand ESXi multiple single-member uplinks and let ESXi load balance. It's far easier to maintain and to add new hosts.
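On the ESXi side, the matching teaming policy can be set per vSwitch with esxcli; a sketch, assuming a vSwitch named vSwitch0 (the option names are from the 5.x esxcli namespace as I recall them, so verify with `esxcli network vswitch standard policy failover set --help` before running):

```
# Route based on originating virtual port ID (the default), with link-status failure detection
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid --failure-detection=link
```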