VMware Cloud Community
NathanEly
Contributor

Windows NLB

First of all, let me state that I despise Windows NLB.

Given that - I'd like to know how I can get it working when each guest node 'lives' on a separate ESX host.

I have tried unicast with multiple NICs, multicast, etc. I cannot get it working. Once I enable it, one node cannot reach the other. I have read a few articles on configuring it, but they have been no help; they refer mainly to disabling the 'Notify Switches' setting.

Anyone ever configured this before on VI3?

Thanks

10 Replies
LarsLiljeroth
Expert

We are kind of looking for the same answer.

For now we have our NLB servers running on the same host, and sometimes it works, sometimes it doesn't.

So I really hope some of the big guys have a few notes on this one. ;)

// Lars Liljeroth -------------- *If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Oakland
Contributor

Try connecting from a 3rd host. We've had this problem before; I think you need to connect in from a third machine to see both nodes and configure the cluster.

NathanEly
Contributor

Yeah - I've tried that, too. I was able to connect to the cluster through a 3rd server, and all nodes looked good initially. But when I began testing (i.e. drainstops, etc.), the node(s) would not reconverge when brought back to the original state.


FWIW, I'd just like to get an idea of others' configurations:

1) Single NIC

2) Multiple NIC

3) Unicast

4) Multicast

Thanks

formulator
Enthusiast

We have a somewhat complicated NLB config for one of our production environments, load balancing requests to a couple of app servers. The app servers are on different hosts and it works fairly well, even though I also hate MS NLB.

We use a single NIC and multicast. The only special thing I needed was to add a static ARP entry in our Catalyst 3750 stack for every VIP so that it can be reached from other subnets. This may be a problem on other Cisco devices because of the type of MAC address NLB generates.

NathanEly
Contributor

Yep - adding the ARP entry worked for us in multicast mode; however, every time the Cisco gear is restarted, you have to remember to re-add the entry manually. Not ideal, but it's working...

Thanks everyone.

formulator
Enthusiast

I don't see why you would need to re-add the static ARP entries after power cycling the router or switch if they're saved to the startup config.

NathanEly
Contributor

I had conflicting opinions from two different engineers. We were able to add the entry to the running config and save it, so it will be there the next time the routers are restarted.

Basically, you're right.
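
For anyone else hitting this, the entry and the save step look roughly like the following on IOS (a sketch with a hypothetical cluster IP of 192.168.1.5; in multicast mode NLB derives the cluster MAC as 03-BF followed by the four octets of the cluster IP, so 192.168.1.5 becomes 03bf.c0a8.0105):

>conf t

>arp 192.168.1.5 03bf.c0a8.0105 ARPA

>end

>copy running-config startup-config

The copy to startup-config is what makes the entry survive a reload.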

Nokin
Enthusiast

I am at a client site that has had an NLB cluster configured for some time; however, it has apparently never worked properly. When connecting to NLB Manager from a system that is not part of the cluster, the messages show "loading configuration from host xxxxx" for both nodes of the cluster, but only one of the two nodes is displayed in the management pane. Status is converged.

The configuration is

2 nodes

Each Node is Unicast

Cluster NIC is in a portgroup with 'Notify Switches' set to No

Each node has 2 NICS

Cluster IP xx.xx.xx.5 (reflected in the NIC settings of both nodes, with the correct cluster-generated MAC)

Node1 Dedicated ip xx.xx.xx.20

Node 1 second NIC (for non-cluster access) xx.xx.xx.100

Node2 Dedicated IP xx.xx.xx.21

Node2 second NIC (for non-cluster access) xx.xx.xx.101

I am no NLB expert, but it looks like it is configured properly. The only VMware-related thing I found was the 'Notify Switches' setting, but changing it did not appear to make a difference.

Any suggestions?

stuart_ling
Contributor

We have an NLB configuration using multicast with a single NIC, and it works okay. The multicast settings are set to use IGMP. The ESX hosts are in a 4-node DRS/HA cluster; 2 of the servers are connected to one Cisco 6500 switch and the other two to another 6500. IGMP is enabled on the switches, and they can see the MAC address of the NLB cluster as it moves around the ESX servers. On the router modules for the 6500s there is a static ARP entry mapping the NLB IP address to the NLB MAC address. Does this help you out?
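
For comparison with the non-IGMP case, note that with IGMP enabled NLB uses a different cluster MAC format: 01-00-5E-7F followed by the last two octets of the cluster IP. So the static ARP entry on the router modules looks roughly like this (a sketch with a hypothetical cluster IP of 192.168.1.5):

>conf t

>arp 192.168.1.5 0100.5e7f.0105 ARPA

>end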

MarkE100
Enthusiast

Agree with Stuart. I've been using NLB across ESX 3.0.x clusters for a while and have had no issues at all; works a treat.

Using a single NIC, cluster mode multicast, IGMP not enabled.

As mentioned, a static ARP entry needs to be added on the Cisco switches connected to the ESX servers, mapping the cluster IP address to the cluster MAC address, e.g.

>conf t

>arp <IP Address> <AAAA.BBBB.CCCC> ARPA

>end

>wr

where <IP Address> is the cluster IP and AAAA.BBBB.CCCC is the cluster MAC address.
