VMware Cloud Community
NuggetGTR
VMware Employee

Forcing Unicast traffic externally

OK, I just have a question I hope someone can clear up for me.

With Windows NLB running in unicast mode, I know ESX will not forward the packets externally if there is a virtual machine with a matching MAC address running on the host where the request originated. It even says so in the white paper:

"Because the virtual switch operates with complete data about the underlying MAC addresses of the virtual NICs inside each virtual machine, it always correctly forwards packets containing a MAC address matching that of a running virtual machine. As a result of this behavior, the virtual switch does not forward traffic destined for the Network Load Balancing MAC address outside the virtual environment into the physical network, because it is able to forward it to a local virtual machine."

If the NLB node were on a different port group and/or vSwitch from the requesting machine, would the traffic be forced externally, or would it still be forwarded internally?

I'm assuming it would be forced externally, but I think I have some on different port groups (same VLAN/network) and I still never see the traffic go external.
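The behaviour I'm describing can be sketched as a toy model (illustrative Python only, nothing like ESX internals, and the MAC/VLAN values are made up). The point is that the forwarding table is per vSwitch, so port groups on the same vSwitch and VLAN share one lookup domain:

```python
# Toy model of the vSwitch forwarding decision described in the white paper.
# Illustration only, not real ESX code: port groups add VLAN tagging and
# policy, but they do not partition the per-vSwitch MAC table.

class VSwitch:
    def __init__(self):
        # (vlan_id, mac) -> locally connected VM
        self.mac_table = {}

    def connect(self, vlan, mac, vm):
        self.mac_table[(vlan, mac)] = vm

    def forward(self, vlan, dst_mac):
        """Return where a frame for dst_mac on this VLAN gets delivered."""
        if (vlan, dst_mac) in self.mac_table:
            # A running local VM owns this MAC: deliver internally,
            # never out the physical uplink.
            return f"local:{self.mac_table[(vlan, dst_mac)]}"
        # Unknown destination MAC: send out the physical uplink.
        return "uplink"

vswitch = VSwitch()
# NLB node registers the shared unicast cluster MAC (made-up value).
vswitch.connect(vlan=100, mac="02:bf:0a:01:01:0a", vm="nlb-node-1")

# Client on a *different port group* but the same vSwitch and VLAN:
print(vswitch.forward(100, "02:bf:0a:01:01:0a"))  # local:nlb-node-1
# Only a separate vSwitch (a separate table) would miss the lookup
# and push the frame to the uplink.
```

In this model, moving the client to another port group changes nothing; moving the NLB node to another vSwitch is what would force the traffic external.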

Cheers

________________________________________ Blog: http://virtualiseme.net.au VCDX #201 Author of Mastering vRealize Operations Manager
VTsukanov
Virtuoso

Why can't you use multicast mode? We have implemented several NLB cluster installations in this mode without any problems.

Also, take a look at Microsoft NLB not working properly in Unicast Mode.

NuggetGTR
VMware Employee

Haha, I would like to use multicast... in fact, I would like them to use F5s. There are many reasons why unicast is being used, and the sheer scale (hundreds of NLB clusters, each with about 4 to 8 nodes) makes the manual ARP entries a nightmare; that is one of the main reasons.
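To give a feel for that ARP burden: in multicast mode each cluster VIP needs a static ARP entry mapping the VIP to the cluster MAC, which is derived from the VIP itself (02-bf prefix for unicast, 03-bf for multicast). A minimal sketch (made-up VIPs; the Cisco-style "arp" command format is an assumption, verify against your own switches):

```python
# Generate NLB cluster MACs and static ARP entries from cluster VIPs.
# VIP addresses below are hypothetical; the router command syntax is a
# Cisco-style example, not universal.

def nlb_cluster_mac(vip: str, mode: str = "multicast") -> str:
    """Cluster MAC: 02-bf (unicast) or 03-bf (multicast) + VIP octets in hex."""
    prefix = "03-bf" if mode == "multicast" else "02-bf"
    octets = "-".join(f"{int(o):02x}" for o in vip.split("."))
    return f"{prefix}-{octets}"

def cisco_arp_entry(vip: str) -> str:
    """Static ARP line in Cisco dotted-quad MAC notation."""
    mac = nlb_cluster_mac(vip, "multicast").replace("-", "")
    dotted = ".".join(mac[i:i + 4] for i in range(0, len(mac), 4))
    return f"arp {vip} {dotted} ARPA"

# A few of the hundreds of VIPs (made-up addresses):
for vip in ["10.1.1.10", "10.1.2.20", "10.1.3.30"]:
    print(cisco_arp_entry(vip))
# e.g. arp 10.1.1.10 03bf.0a01.010a ARPA
```

Multiply that by hundreds of clusters, on every layer-3 device that fronts them, and you can see why it was ruled out here.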

Hopefully they will slowly move over to hardware load balancing, but I'm looking for a short-term fix that doesn't involve a complete redesign.

As it sits now, NLB is working fine; we just have to make sure that a client server wanting an NLB cluster VIP is not on the same host as one of that cluster's nodes (otherwise it breaks, because only the node on that ESX host will get the request). So a lot of our environment has DRS disabled because of this (disabled at the machine level, not the cluster level). This is made harder by the fact that some critical application paths involve one NLB cluster talking to another NLB cluster, which then talks to two different NLB clusters, hahaha.

Man, Windows NLB and ESX together haunt my dreams!!!!

________________________________________ Blog: http://virtualiseme.net.au VCDX #201 Author of Mastering vRealize Operations Manager