vMotion: I think it is not technically possible to give it a VNI.
I imagine VNIs mean independent logical switches, only for workload VMs.
The vMotion vmkernel interface (dvs-backed) will be used for vMotion by all the VMs connected to the various logical switches/VNIs.
With respect to vMotion, with the top-of-rack switch being the L2-L3 boundary and the ESXi clusters/hosts scattered across the DC:
Therefore, for vMotion to work, the vMotion VLAN would have to extend to the other ESXi hosts across Layer 3.
Now the question is how the packet moves for vMotion.
Let me list the technical gaps I have in understanding NSX 🙂
Outbound traffic:
East-west traffic: what is the path?
Is it the DLR that decides whether traffic is east-west, north-south, or within the same host?
Since every ESXi host has the VTEP, ARP, and MAC address information for a transport zone, does it mean traffic will not go to the ESG but will
get encapsulated by the VTEP and sent towards the L3 gateway on the physical network device?
Another question: which VLAN SVI will this traffic hit, and how? The VXLAN vmkernel interface is assigned a VLAN, but the
question is: would it mean I would need to extend this transport VLAN across all the ESXi hosts under a single transport zone?
Same question for vMotion using the VTEP: extending the VTEP-associated VLAN everywhere. This would not be very useful if true.
Would appreciate your comments.
vMotion: I think it is not technically possible to give it a VNI.
I imagine VNIs mean independent logical switches, only for workload VMs.
The vMotion vmkernel interface (dvs-backed) will be used for vMotion by all the VMs connected to the various logical switches/VNIs.
With respect to vMotion, with the top-of-rack switch being the L2-L3 boundary and the ESXi clusters/hosts scattered across the DC:
Therefore, for vMotion to work, the vMotion VLAN would have to extend to the other ESXi hosts across Layer 3.
Now the question is how the packet moves for vMotion.
We need to use the VMkernel stack for vMotion/FT/Management traffic. However, whether we need L3 or L2 for vMotion is totally a design choice. For example, if your existing setup is stretched via OTV for all workloads, then for everything except the VMkernel traffic you can leverage VXLAN via NSX. Technically you can break the extended L2 and run it purely as an L3 network, and that way you don't need to stretch the VLAN. Remember, vMotion is supported over L3. The only challenging part would be the subnet change on one of the complete DC stacks. Not all components support an IP change, so watch out for that and plan accordingly.
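To make the routed-vMotion point concrete, here is a small sketch (all addresses hypothetical, and this is an illustration, not an NSX or ESXi API) of the decision a host's IP stack effectively makes: if the peer's vMotion vmkernel IP is in the local subnet it is reached directly over L2, otherwise the frame goes to the L3 gateway. This is why vMotion only needs IP reachability between racks, not a stretched VLAN:

```python
import ipaddress

def next_hop(local_ip: str, prefix_len: str, peer_ip: str, gateway: str) -> str:
    """Return where a frame for peer_ip is sent: directly to the peer
    (same subnet, resolved via ARP) or to the configured L3 gateway."""
    subnet = ipaddress.ip_network(f"{local_ip}/{prefix_len}", strict=False)
    peer = ipaddress.ip_address(peer_ip)
    return peer_ip if peer in subnet else gateway

# Hypothetical design: rack A vMotion vmk on 10.10.1.0/24, rack B on 10.10.2.0/24.
print(next_hop("10.10.1.11", "24", "10.10.1.12", "10.10.1.1"))  # same rack: direct
print(next_hop("10.10.1.11", "24", "10.10.2.12", "10.10.1.1"))  # other rack: routed
```

The second case is exactly the "vMotion over L3" scenario: each rack keeps its own vMotion subnet behind the top-of-rack L2-L3 boundary.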
Outbound traffic:
East-west traffic: what is the path?
Is it the DLR that decides whether traffic is east-west, north-south, or within the same host?
Yes, as long as you are using the DLR, that is the deciding factor, because for the VM the DLR is the first hop. If you use the Edge as the first hop, traffic will flow north-south whenever the Edge and the respective source/destination VMs are on different blades.
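A toy model of that first-hop decision may help (the function and switch names are illustrative, not an NSX API): the DLR kernel module on the source host routes between logical switches, so routed east-west traffic stays in the data plane and never has to hairpin through the ESG; only destinations outside the NSX domain go north-south:

```python
def classify_flow(src_ls, dst_ls, nsx_logical_switches):
    """Classify a flow from the DLR's point of view (toy model).
    src_ls/dst_ls: the logical switch (VNI) each endpoint sits on,
    or None when the destination is outside the NSX domain."""
    if dst_ls is None or dst_ls not in nsx_logical_switches:
        return "north-south via ESG"                       # leaves the NSX domain
    if src_ls == dst_ls:
        return "east-west, L2 on the same logical switch"  # no routing needed
    return "east-west, routed by DLR on the source host"   # kernel-level routing

switches = {"web-5001", "app-5002", "db-5003"}  # hypothetical VNIs
print(classify_flow("web-5001", "app-5002", switches))
print(classify_flow("web-5001", None, switches))
```

The point of the model: the east-west/north-south split is decided at the first hop, which is why putting the DLR (rather than the Edge) there changes the path.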
Since every ESXi host has the VTEP, ARP, and MAC address information for a transport zone, does it mean traffic will not go to the ESG but will
get encapsulated by the VTEP and sent towards the L3 gateway on the physical network device?
The ESG would be peered with the external L3 gateway via one of the supported routing protocols. ARP/MAC/VTEP learning is a separate process (whether you are learning for the first time or the tables are already populated) from the actual traffic flow.
To keep the explanation simple: if VMs on two blades need to communicate, the L2/L3 data traffic will always leave the respective blade.
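As an aside on what "encapsulated by the VTEP" means on the wire: per RFC 7348, the original L2 frame is prefixed with an 8-byte VXLAN header (the I flag set, the 24-bit VNI in bytes 4-6), which then rides inside UDP/IP from source VTEP to destination VTEP. A minimal sketch of just that header (not VMware code):

```python
import struct

VXLAN_FLAG_I = 0x08  # "valid VNI" flag bit, per RFC 7348

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header:
    flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """What the VTEP logically does: prepend the VXLAN header to the
    original Ethernet frame (outer UDP/IP headers are added below this)."""
    return vxlan_header(vni) + inner_frame

print(vxlan_header(5001).hex())  # -> 0800000000138900
```

So the underlay only ever sees VTEP-to-VTEP UDP/IP packets; the VM's MAC addresses and the VNI travel inside the payload.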
Another question: which VLAN SVI will this traffic hit, and how? The VXLAN vmkernel interface is assigned a VLAN, but the
question is: would it mean I would need to extend this transport VLAN across all the ESXi hosts under a single transport zone?
VLAN is optional for the VTEP. You should certainly read this thread: VTEP VLAN limitation