I am trying to use NLB to connect two VMs that serve as SharePoint 2007 front-end web servers. The two VMs reside on different ESX hosts. I have created two vNICs in each VM: one with an IP address and gateway, and one with an IP address but no gateway, both on the same VLAN. I've tried unicast and multicast, and what seems to happen is that once the NLB hosts converge, they lose connectivity to each other, thus breaking the bond. I've tried using both vNICs as the cluster addresses for the hosts, but it fails either way. Has anyone succeeded at this? I've got a dev environment where the two VMs are on the same vSwitch and it works OK, but that won't fly for prod. Thanks for any advice.
Btw, both are on ESX 3.0.2 hosts, and the servers are Win2K3 R2 SP2 x64.
Check out this link:
If you configure NLB unicast mode, all the members of the NLB cluster must be on the same virtual switch. You should be able to get this to work by using NLB multicast mode as described in the KB article.
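One reason multicast mode behaves differently on the switch side is the cluster MAC address NLB synthesizes. Per Microsoft's NLB documentation, the cluster MAC is derived from the cluster IP: unicast clusters use a 02-BF prefix and multicast clusters use 03-BF, followed by the four octets of the cluster IP. A small sketch of that derivation (the cluster IP 10.1.128.50 below is a hypothetical example, not from this thread):

```python
def nlb_cluster_mac(cluster_ip: str, mode: str = "multicast") -> str:
    """Derive the NLB cluster MAC from the cluster IP.

    Unicast clusters use 02-BF-W-X-Y-Z and multicast clusters use
    03-BF-W-X-Y-Z, where W.X.Y.Z is the cluster IP address.
    """
    octets = [int(o) for o in cluster_ip.split(".")]
    prefix = 0x02 if mode == "unicast" else 0x03
    return "-".join(f"{b:02X}" for b in [prefix, 0xBF] + octets)

# Hypothetical cluster IP:
print(nlb_cluster_mac("10.1.128.50"))             # 03-BF-0A-01-80-32
print(nlb_cluster_mac("10.1.128.50", "unicast"))  # 02-BF-0A-01-80-32
```

Knowing the derived MAC is useful later if you need to add static ARP or MAC entries on the physical switches, since many switches refuse to learn a multicast MAC dynamically.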
I was following that; it turns out it is working, but only within the subnet where it is configured. How do I access the cluster IP from a different subnet?
Both NICs should still include the gateway. Instead of using load balancing, install the quality service and create a bridge between the NICs. That way both NICs are bonded, which accomplishes the same thing, and it doesn't require both NICs to be on the same subnet.
Right-click one NIC, select the other NIC (both should be selected), and click Bridge. The NICs will then be joined in a bridge.
I don't understand how bonding two vNICs on each front-end server would create a balance between the two VMs. Can you elaborate, please?
It doesn't, but then neither does a load-balancing service on one VM affect another VM. That kind of load balancing refers to outgoing data on a particular NIC: if one NIC is busy, it defers to the other, keeping traffic distributed across the NICs.
You can't load balance two VMs using internal services on a VM; for that you need a load balancer device.
You were having disconnects, and I was attempting to fix that issue and simplify things, since a bridge uses one IP instead of two and bonds the two NICs into a team. It could help with your issue.
I think we are talking about two different things. I am trying to set up Microsoft Network Load Balancing on two front-end web servers, so that if one goes down, clients will be redirected to the other. I'm not worried about sharing the load between two NICs on the same server. One of the best practices for NLB is to have two NICs: one for the cluster, the other for background traffic. It doesn't matter which of the two NICs on each server I bind to the NLB cluster; I can only access the website from the same subnet the cluster lives in. For instance, my server lives in VLAN 128 and my computer (client) lives in VLAN 40: that doesn't work. However, another server in VLAN 128 can access the cluster without problems.
I can get it to work on the same vSwitch using unicast. Multicast just doesn't do it. Not sure why, but it only works in the same subnet. It might be a setting on the physical switches somewhere.
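When narrowing down a "works in-subnet, fails cross-subnet" symptom like this, a few standard checks help separate an NLB convergence problem from an ARP/routing problem. A rough diagnostic sequence (the cluster IP 10.1.128.50 is a hypothetical placeholder for your own):

```
REM On each NLB node (Windows Server 2003), confirm the cluster has
REM converged and which hosts are members:
wlbs query

REM From a client in the other VLAN, test reachability of the cluster IP:
ping 10.1.128.50

REM On the client's default router, check whether the cluster IP has an
REM ARP entry at all -- with multicast NLB, many routers refuse to learn
REM a unicast IP that resolves to a multicast MAC, which produces exactly
REM this "same subnet works, routed traffic doesn't" behavior.
```

If `wlbs query` shows the cluster converged but the router never learns the cluster's ARP entry, the fix is usually on the network gear, not in Windows.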
MS NLB unicast will only work if both VMs are on the same host; multicast needs a router to work.
We have both types working here: multicast for the web servers and unicast for another application.
Here are a couple of links for you:
issue regarding duplicate GUIDs on NIC cards in cloned machines
TechNet NLB - a good article on NLB
Also, if you are using NLB across hosts, you will need to add a static ARP entry to your central switch's configuration to stop spanning tree; this is most probably what you are suffering from.
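On Cisco gear, the static entries for a multicast NLB cluster typically pair a static ARP entry with static MAC-table entries pointing at the ports facing the ESX hosts (since the switch will not learn a multicast MAC dynamically). A sketch only; the IP, MAC, VLAN, and interface names below are hypothetical, and exact syntax varies by platform and IOS version:

```
! Map the cluster IP to the NLB multicast MAC (03-BF + cluster IP octets)
arp 10.1.128.50 03bf.0a01.8032 ARPA

! Constrain the multicast MAC to the ports facing the two ESX hosts
mac-address-table static 03bf.0a01.8032 vlan 128
    interface GigabitEthernet1/1 GigabitEthernet1/2
```

Without the MAC-table entries, frames for the cluster MAC are flooded to every port in the VLAN, which is another common reason network teams object to unicast/multicast NLB across physical switches.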
Tom Howarth
VMware Communities User Moderator