Since my backup network is totally isolated, I would like to separate my production traffic and backup traffic into two different TZs with dedicated VTEPs. Is this possible? If so, please share the docs.
While you can add an ESXi cluster to more than one transport zone, you cannot dedicate a VTEP to a particular TZ.
But why try to do this in the first place? You are trying to do physical isolation in a virtual world.
Logical switches are isolated from each other. Different TZs (even on the same VTEP) provide even more isolation, and you can leverage DFW to segment and isolate your environment further.
For example: the backup network is totally isolated from prod with dedicated core switches, and for security reasons I must not allow backup traffic into the prod network. The only thing I can do is P2V bridging using VXLAN-capable hardware; however, it will still use the same VTEPs.
Another choice may be using a VLAN-based port group for backup traffic, since VLAN- and VXLAN-based port groups can coexist on the same host (it is possible to use DFW rules on VLAN-based port groups as well). If putting the backup vNIC on a VXLAN is not a requirement, the backup vNICs may be collected on a VLAN-based backup port group, and separate physical uplinks may be given to this port group. This may also have performance benefits.
As a cluster-based choice: during the host preparation phase, a single vDS that is common to all ESXi hosts must be chosen. The VTEP ports are vmkernel-type interfaces that are bound to this vDS (distributed vSwitch). There are single-VTEP and dual-VTEP designs for ESXi hosts, but multiple VTEPs are mostly for increasing the total throughput of VXLAN-backed port groups. There is a one-to-one mapping between VTEPs and physical uplinks on each host.
If both the backup vNIC and data vNIC port groups have to be VXLAN-based, there may be a mechanism that allows pinning backup vNICs to a certain physical uplink (i.e., choosing one VTEP for data traffic and the other VTEP for backup traffic).
This link explains multiple VTEPs on a single host; the main aim is increasing the total throughput of VTEP traffic, which includes VXLAN-backed VMs. Load Balance - SRCMAC and Route Based on Originating Port are the supported teaming and failover modes for multi-VTEP support, and most probably it will choose any of the four physical uplinks, mixing backup and data physical uplinks.
Transport VLAN
After the NSX installation is complete, each vSphere host will have a new vmkernel NIC specifically used by NSX as the VXLAN Tunnel End Point (VTEP). When virtual machines on different hosts are attached to NSX virtual networks and need to communicate, the source host VTEP will encapsulate VM traffic with a standard VXLAN header and send it to the destination host VTEP over the Transport VLAN. Each host will have one VTEP, or multiple VTEPs, depending on the VXLAN vmknic teaming policy you've chosen.
LACP = Single VTEP using multiple uplinks
Fail Over = Single VTEP using one uplink
Load Balance = Multiple VTEPs each using one uplink
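To make the encapsulation step above concrete, here is a minimal sketch of the standard 8-byte VXLAN header (per RFC 7348, not NSX internals) that a source VTEP prepends to the original L2 frame before sending it over UDP port 4789 on the Transport VLAN; the function names are illustrative only:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte with the I-bit set,
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag: the VNI field is valid
    return struct.pack("!B3s3sB", flags, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to the original L2 frame. The outer
    UDP/IP headers toward the destination VTEP are added by the
    network stack and are omitted here."""
    return vxlan_header(vni) + inner_frame
```

The 24-bit VNI is what keeps logical switches (and hence different transport zones) separated on the wire even when they share the same VTEP.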
Another possible way is to have two distributed switches, each with its own set of uplinks: one switch for VXLAN-based production traffic, another for VLAN-based backup traffic.
During cluster preparation for NSX you will have to select a VDS for VXLAN traffic; select the production one.
But keep the following limitations in mind: