I have a request from one of our customers and thought I would discuss it here.
As per their new policy, even VMs in the same VLAN should not communicate within the vSwitch; instead, all traffic should hit the core switch and return to the intended VM.
I would appreciate it if someone could shed some light on whether this is a valid request and whether it is achievable. If yes, what tools are involved in achieving it?
Thanks in advance.
There may be some solutions through dVS Northbound API programming that make it possible to behave like a passthrough module, but in general NSX provides distributed policies at the vNIC level, which gives performance and microsegmentation benefits while keeping traffic local to the hypervisor. If the VM moves to another host (possibly a remote DC with Cross-vCenter NSX), the policy is carried with the VM.
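To make the vNIC-level enforcement concrete, here is a rough sketch of the kind of payload an NSX-v Distributed Firewall rule takes. This is an assumption-heavy illustration, not a recipe: the section and rule names and the security-group IDs are placeholders, and the API path should be checked against your NSX version's documentation.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of the XML body shape the NSX-v Distributed Firewall
# REST API accepts (POST /api/4.0/firewall/globalroot-0/config/layer3sections).
# All names and security-group IDs below are placeholders.
def build_dfw_section(section_name, rule_name, action, source_sg, dest_sg):
    section = ET.Element("section", name=section_name)
    rule = ET.SubElement(section, "rule", disabled="false", logged="true")
    ET.SubElement(rule, "name").text = rule_name
    ET.SubElement(rule, "action").text = action
    sources = ET.SubElement(rule, "sources", excluded="false")
    src = ET.SubElement(sources, "source")
    ET.SubElement(src, "type").text = "SecurityGroup"
    ET.SubElement(src, "value").text = source_sg
    dests = ET.SubElement(rule, "destinations", excluded="false")
    dst = ET.SubElement(dests, "destination")
    ET.SubElement(dst, "type").text = "SecurityGroup"
    ET.SubElement(dst, "value").text = dest_sg
    return ET.tostring(section, encoding="unicode")

# A rule like this is enforced at each VM's vNIC, so even two VMs on the
# same VLAN and the same host are filtered without the traffic ever
# leaving the hypervisor.
payload = build_dfw_section("bank-policy", "web-to-db", "allow",
                            "securitygroup-10", "securitygroup-11")
print(payload)
```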
From a technical perspective, this request defeats the purpose of virtual switches. Assuming we have workload mobility, when VMs reside on the same VLAN and the same host we expect higher throughput and lower network latency, with or without NSX. Also, sending all the traffic to the core without a valid reason defeats the physical network design as well.
Can you let us know the exact reason they are asking for this new policy, and what they want to achieve by having this traffic flow via the core switch?
Is it because they have ACLs on the core switch?
This customer is a bank, and their network team designed this strange policy; now the VMware administrator is looking for help in this regard. As per their network policy, all traffic, even between VMs on the same host, must hit the core switch. They have a firewall in between so that all traffic is filtered. Thanks.
I figured it would be something like that. I'm curious how they're accomplishing a true firewall inside the physical switch, but going with the theory that they really have that for a moment...
The only way to do what they want is to pass the hypervisor's NICs through to the VM. That eliminates the virtual switch altogether, and the VM puts its L2 traffic straight onto the physical switch just like a physical server would. Some NICs only allow a 1:1 VM-to-physical-NIC-port mapping, while others allow multiple VMs to share a single physical NIC. The problem is, when you do this, you give up a lot of functionality on the VM. Most importantly, the VM can no longer vMotion (unless you're specifically running UCS and VM-FEX, which is rare). So they have to *really* want this.
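For reference, DirectPath I/O passthrough shows up in the VM's configuration as pciPassthruN entries in its .vmx file. A minimal sketch of those entries follows; the PCI address and device/vendor IDs are placeholders for whatever the actual host reports, and the exact set of keys may vary by ESXi version.

```python
# Hedged sketch: the .vmx key/value pairs that attach one passed-through
# (DirectPath I/O) PCI device, such as a NIC, to a VM. The address and
# IDs below are placeholders, not values from any real host.
def passthrough_vmx_entries(index, pci_address, device_id, vendor_id, system_id):
    prefix = f"pciPassthru{index}"
    return {
        f"{prefix}.present": "TRUE",
        f"{prefix}.id": pci_address,       # e.g. the NIC's address on the host
        f"{prefix}.deviceId": device_id,   # from the device's PCI config space
        f"{prefix}.vendorId": vendor_id,
        f"{prefix}.systemId": system_id,   # host identifier recorded by ESXi
    }

entries = passthrough_vmx_entries(0, "04:00.0", "0x10fb", "0x8086", "BYPASS")
for key, value in sorted(entries.items()):
    print(f'{key} = "{value}"')
```

Note that a VM using passthrough also needs its full memory reserved, which is another operational cost of going this route.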
My general advice would be to look at the NSX distributed firewall. It will perform the same function they're after, but in the virtual switch, and without loss of vMotion or other functionality.
I understand your question all too well. I have such customers too, and I was in a design meeting just two days ago talking about this very subject. This is a typical case of a security department having monitoring/IPS and firewalling requirements, but for whom the use of virtualization poses difficulties in reaching those goals.
What I did, and maybe it helps you, is suggest using NSX distributed firewalling, but instead of managing the rule base from within the vCenter/NSX web GUI (which they will dislike doing, for technical and organizational reasons), couple NSX to solutions like Check Point or Palo Alto, where the NSX firewall rules are created and maintained from those products' GUIs but injected into NSX. That way, their regular perimeter firewalls and NSX are managed from one platform.
That is the only way to achieve their requirement of being able to control everything that a VM does, even between Layer 2 neighbours on the same physical host, while retaining the controls that such external firewall solutions offer.
The suggestion of using DirectPath I/O has so many disadvantages that I would not go that route.