Hi,
I have enabled hairpinning in my vSphere environment through the following steps:
1) I created a vSphere Distributed Switch (vDS) and added the ESXi host to it.
2) Assigned two virtual NICs to one of the VMs on the ESXi host (VM_HairPin) and connected one of them (eth0) to a port group (external) that has a direct connection to the uplink adapter.
3) All the other VMs on that host, plus the second NIC of VM_HairPin (eth1), I added to a separate port group (isolated) with no connection to the uplink adapter.
4) To enable VM_HairPin to receive the traffic from all the other VMs, I edited the per-port properties and allowed promiscuous mode on VM_HairPin's port.
5) After that, I set up NAT to route traffic through the hairpin VM, following the steps given here:
http://www.revsys.com/writings/quicktips/nat.html
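For reference, the NAT setup on VM_HairPin boils down to IP forwarding plus masquerading, roughly as in the linked quicktip. A minimal sketch (the interface names eth0/eth1 match my setup above; your exact rules may differ):

```shell
# Enable IPv4 forwarding so the hairpin VM can route between its two NICs
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic leaving via eth0 (the external-facing NIC)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward traffic from the isolated side (eth1) out to the external side (eth0),
# and allow replies back in for established connections
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```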
Now, when I run an iperf client on one of the VMs (VM1) and try to communicate with a machine on the external network using the command:
$ iperf -c 192.168.1.1xxx
I get a bandwidth of 660 Mbps when both machines (VM_HairPin and VM1) use the E1000 adapter, but the performance drops to about 64 Kbps when I connect them to the VMXNET3 adapter. I have read on several blogs that performance is greatly enhanced with the VMXNET3 adapter, but that's not the case here.
Also, when I compare the performance of a VM communicating directly with an external network machine (without hairpinning), the results are as follows:
Bandwidth with E1000 adapter - 863 Mbps
Bandwidth with VMXNET3 adapter - 892 Mbps
I was wondering what the reason is for this drop in performance when using the VMXNET3 adapter in the hairpinned VMs.
Any help would be much appreciated.
Thanks.
What I remember is that VMXNET3 isn't supported for all types of guest OS; it has a fairly limited scope. I think the Linux OS used by your hairpin VM may not be supported (you can check the VMware HCL for this).
Is your VM a Linux VM? If so, that may be the reason.
JP
Hi TelakEng,
Thanks much for your reply.
I am doing the benchmarking with TCP packets using iperf. Also, the results when testing the performance with the native VM were not affected by using the VMXNET3 adapter (rather, there was a slight improvement from 863 to 892 Mbps). The problem arises only when I do the benchmarking through the hairpinned VM.
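For completeness, the benchmark is a plain iperf TCP test between VM1 and the external machine; roughly (the address 192.168.1.100 is just an illustrative placeholder for the external host, and the interval/duration flags are my usual choices, not anything special):

```shell
# On the external machine: start the iperf server (TCP mode is the default)
iperf -s

# On VM1: run a 30-second TCP test, reporting every 5 seconds
iperf -c 192.168.1.100 -t 30 -i 5
```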