We configured a standalone ESXi host running a single VM (for now); the box was originally a physical server, which is now the VM on the ESXi host. The server used to have two NICs connected to two switches running HSRP, both configured as access ports in the same VLAN. One of those ports was reconfigured as a dot1q trunk to the new ESXi host to carry the management and VM network traffic within the ESXi host. I want to add a redundant link, so I configured the port on the second switch the same way, as a dot1q trunk. When I added the second NIC in the ESXi network configuration, it was added as an uplink, but as soon as I enabled the port on the second switch, the steady ping I had running to the VM dropped. NIC teaming was set to the default (Route based on originating port ID), and even after I changed it to Route based on IP hash, the ping still dropped. What did I do wrong? How do I add a redundant link to the second switch?
Same VLANs allowed on both trunks?
Silly question, I know. But if ESXi management was still working and you only lost connection to your virtual machine, it looks as if only the management VLAN was allowed on the second switch.
Remember to also allow all necessary VLANs on the uplink of the second physical switch 🙂
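For example, something like this on the second switch (a sketch only; the host-facing interface name is taken from your config, the uplink interface name is a placeholder):

```
! Host-facing trunk: allow both the VM and management VLANs
interface GigabitEthernet1/0/25
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100,200
!
! Uplink toward the rest of the network (interface name assumed)
interface GigabitEthernet1/0/48
 switchport mode trunk
 switchport trunk allowed vlan 100,200
```

If the trunks allow all VLANs by default (no `switchport trunk allowed vlan` line), this is not your problem, but it's worth a quick `show interfaces trunk` to confirm.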
Please can you post the switchport configuration?
Hello,
Kindly share the switch config and advise whether there is a link between the switches, a stack configuration, or something similar.
Please consider marking this answer "CORRECT" or "Helpful" if you think your question has been answered correctly.
Cheers,
VCIX6-NV|VCP-NV|VCP-DC|
It's simple. Right now it's just one physical NIC going to a port on a Cisco switch, but I'm looking to add another physical NIC to another port on a second switch running HSRP:
Vlan 100 configured (VMs)
Vlan 200 configured (ESXi hosts and management subnet)
HSRP Sw1:
interface GigabitEthernet1/0/25
description ESXi_NIC1
switchport trunk encapsulation dot1q
switchport mode trunk
end
Vlan 100 configured (VMs)
Vlan 200 configured (ESXi hosts and management subnet)
HSRP Sw2:
interface GigabitEthernet1/0/25
description ESXi_NIC2
switchport trunk encapsulation dot1q
switchport mode trunk
end
On the ESXi:
vSwitch0:
Port Groups (2 defaults):
VM Network (Assigned Vlan ID: 100)
Management Network (Assigned Vlan ID: 200)
vmnic0
vmnic1
NIC Teaming: Route based on originating port ID
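For reference, the teaming and failover policy that is actually applied can be read back from the ESXi shell (a sketch; "vSwitch0" is the vSwitch name from my setup):

```
esxcli network vswitch standard policy failover get -v vSwitch0
```

This shows the load-balancing policy and the active/standby uplink lists, which should contain both vmnic0 and vmnic1 after the second NIC is added.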
Problem:
After adding an available NIC (vmnic1) to vSwitch0, I configured the Cisco switchport as a trunk like the first one and enabled it, and then the VM stopped pinging.
Initial setup:
The way I originally set it up: I first connected the ESXi host to the physical switch on an access port to stand it up and configure it. Once ESXi was up and running, I logged in and assigned VLAN IDs to the two port groups. When I do that, traffic to the ESXi host drops until I change the physical switchport to a trunk, and then it takes around 30 seconds before the host comes back up and is accessible again. Once it does, I can create VMs on the VM Network (VLAN 100), which is a different subnet from the ESXi management IP (VLAN 200 / Management Network); both are trunked from the physical switch over one cable to one switch. I want to add redundancy using another NIC. Thanks in advance.
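The two switchport states described above would look roughly like this (interface name and VLAN numbers are the ones from my configs; exact syntax may vary by IOS version):

```
! Initial stand-up: plain access port in the management VLAN
interface GigabitEthernet1/0/25
 switchport mode access
 switchport access vlan 200
!
! Later: converted to a dot1q trunk carrying both VLANs
interface GigabitEthernet1/0/25
 switchport trunk encapsulation dot1q
 switchport mode trunk
```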
Do you have promiscuous mode and forged transmits set to accept on the vSwitches?
The switch port configurations seem to be ok, unless the default VLAN on one of the switches is 100.
Did you already check the vSwitch configuration, e.g. whether the port groups inherit the vSwitch settings?
As a side note: to avoid the 30 - 45 sec. delay, you may add
spanning-tree portfast trunk
to the switch port's configuration.
André
They're set to the defaults so promiscuous mode is set as "reject" and forged transmits is "accept".
The switch only has the following options:
(config-if)#spanning-tree portfast ?
disable Disable portfast for this interface
edge Enable portfast edge on the interface
network Enable portfast network on the interface
The default VLAN on the switch is not 100; it is something else. So is the delay normal to expect? Does that mean I had it right and only needed to wait longer? I was skeptical about having to wait at all, since I was expecting no downtime on the VM Network when just adding a redundant link. What should the NIC teaming setting be? The default is "Route based on originating port ID".
Edge is the right one.
Try setting promiscuous mode to accept and see if that fixes it.
With the default policies, the uplink a VM's port uses should actually not change when adding a new vmnic.
You may monitor the vmnic usage for the VMs using esxtop (press 'n' for networking) from the command line.
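A sketch of that workflow from the ESXi shell:

```
# Start esxtop, then press 'n' for the networking view.
# The USED-BY and TEAM-PNIC columns show which vmnic each VM port is using.
esxtop
```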
Anyway, the available Cisco commands can differ between switch models. In your case it's likely
spanning-tree portfast edge trunk
which reduces the time that the spanning tree algorithm usually takes.
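To confirm that the trunk and the portfast edge setting took effect, show commands along these lines can be used (exact output varies by platform):

```
show interfaces GigabitEthernet1/0/25 trunk
show spanning-tree interface GigabitEthernet1/0/25 detail
```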
André