So I have a customer where, whenever we reboot one of his ESX servers, NLB stops working. As long as all ESX servers are up, multicast NLB works fine. I am using Cisco switches and have verified my setup per the KB articles. The only thing I am doing a little differently, I think, is that the vSwitch my VMs are using is on trunk ports. Any ideas?
Thanks
-Craig
I have the same problem.
My NLB works fine with all my member servers running.
If I reboot or shut down one member, the NLB will stop replying to pings and return "Reply from 172.*.*.*: TTL expired in transit" for about 15 seconds, but it always starts replying again.
This happens randomly with any of the NLB member servers; it is not consistently the same server.
I'm using NLB in multicast mode.
Each of my NLB servers is on its own host server.
The NLB vswitch is on a trunk port.
Have a ticket open but no callback yet.
Got an answer from Cisco.
Either create the EEM scripts as described below, or just don't shut the servers down.
May investigate replacing Windows NLB with a hardware load balancer like F5.
The issue is that Microsoft broke the RFC by using a multicast MAC address with a unicast IP address. Cisco worked around this by adding a static ARP entry. This static ARP entry creates a static CEF entry for forwarding.
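For reference, this is roughly what that static mapping looks like on an IOS device; the cluster IP, VLAN, and interfaces here are made up for illustration, and the MAC is just the standard NLB multicast prefix 03-BF followed by the cluster IP 10.2.3.4 in hex:
! static ARP entry: map the cluster IP to the NLB multicast MAC (hypothetical addresses)
arp 10.2.3.4 03bf.0a02.0304 ARPA
! static MAC entry so the multicast MAC is only flooded to the ports facing the NLB hosts
! (older IOS trains use "mac-address-table static" instead)
mac address-table static 03bf.0a02.0304 vlan 10 interface GigabitEthernet1/0/1 GigabitEthernet1/0/2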
Typically, when a host is not there, the CEF entry will be removed when
the ARP entry times out. This causes the router to drop the packet
instead and (optionally) send back an ICMP unreachable.
So I wandered over to the routing protocols folks, instead of requeuing the case to them, to ask about the issue. Their only suggestion was to track the host and, when it is removed, have the router install a route to null0 to drop the packets.
http://www.cisco.com/en/US/docs/ios/12_3/12_3x/12_3xe/feature/guide/dbackupx.html#wp1071672
For me, that workaround is what causes this issue. Not letting the server go down is the easiest and least complicated path to resolution. But if you really want it to work, you can schedule competing EEM scripts with the tracked object to monitor each and every host. These monitor the value of the last IP SLA test (1 = OK, 2 = lost connectivity to the host). Each host would have to be a separate tracked object (the number on the end of the OID would be the IP SLA number), and each host would have two competing scripts.
event manager applet host_10_2_3_4_up
 event snmp oid 1.3.6.1.4.1.9.9.42.1.2.10.1.2.1 get-type exact entry-op eq entry-val 1 exit-op ge exit-val 2 poll-interval 5
 action 1.0 cli command "enable"
 action 2.0 cli command "config t"
 action 3.0 cli command "no ip route 10.2.3.4 255.255.255.255 null0"
event manager applet host_10_2_3_4_down
 event snmp oid 1.3.6.1.4.1.9.9.42.1.2.10.1.2.1 get-type exact entry-op ge entry-val 2 exit-op eq exit-val 1 poll-interval 5
 action 1.0 cli command "enable"
 action 2.0 cli command "config t"
 action 3.0 cli command "ip route 10.2.3.4 255.255.255.255 null0"
ip sla 1
 icmp-echo 10.2.3.4
 timeout 2000
 threshold 2
 frequency 60
ip sla schedule 1 life forever start-time now
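On IOS trains that support the track event detector, the same idea could also be wired through a tracked object instead of polling the SNMP MIB directly; this is only a sketch (the track number and applet names are made up) reusing ip sla 1 above:
! tie a track object to the reachability result of the ip sla probe
! (older IOS uses "track 1 rtr 1 reachability")
track 1 ip sla 1 reachability
!
! host stopped answering: black-hole its /32 so packets are dropped instead of looping
event manager applet host_10_2_3_4_track_down
 event track 1 state down
 action 1.0 cli command "enable"
 action 2.0 cli command "config t"
 action 3.0 cli command "ip route 10.2.3.4 255.255.255.255 null0"
!
! host answering again: remove the null route
event manager applet host_10_2_3_4_track_up
 event track 1 state up
 action 1.0 cli command "enable"
 action 2.0 cli command "config t"
 action 3.0 cli command "no ip route 10.2.3.4 255.255.255.255 null0"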
Essentially, the issue now is that it works when the host is up; when the host is down, the router keeps doing the recursive lookup and forwarding the packet until the TTL hits zero and the packet is dropped.
You might want to overlook this in your testing and just keep the hosts up as the easiest resolution.
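If you do go the EEM/null-route path anyway, a few standard show commands (the instance numbers just match the sample config above) should confirm whether the probe and the static route are behaving as expected:
! check the latest probe result for the host
show ip sla statistics 1
! see whether the /32 null route is currently installed
show ip route 10.2.3.4 255.255.255.255
! confirm the applets are registered with EEM
show event manager policy registered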