Hi to all.
I would like to understand how traffic is routed through a vSS (vSphere Standard Switch).
Let's assume 2 scenarios:
Network       Netmask          Gateway        Interface
10.0.10.10    255.255.255.0    Local Subnet   vmk0
172.16.1.10   255.255.255.0    Local Subnet   vmk1
default       0.0.0.0          10.0.10.1      vmk0
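To make the table above concrete, here is a small sketch of how a VMkernel-style routing table picks an interface for kernel-services traffic (Management, vMotion, and so on) via longest-prefix match. The network values (10.0.10.0/24 and 172.16.1.0/24) are assumptions derived from the netmasks shown; this is an illustration, not ESXi's actual implementation.

```python
import ipaddress

# Routes mirroring the table above: (network, interface).
# "Local Subnet" entries are connected routes; the /0 entry is the
# default route via 10.0.10.1 on vmk0. Networks are assumed from the
# /24 netmasks shown in the question.
ROUTES = [
    (ipaddress.ip_network("10.0.10.0/24"), "vmk0"),   # local subnet
    (ipaddress.ip_network("172.16.1.0/24"), "vmk1"),  # local subnet
    (ipaddress.ip_network("0.0.0.0/0"), "vmk0"),      # default gateway
]

def lookup(dst: str) -> str:
    """Return the vmk interface chosen by longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifc) for net, ifc in ROUTES if addr in net]
    # The most specific (longest) prefix wins; /0 only matches last.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("172.16.1.99"))  # vmk1 (connected route)
print(lookup("8.8.8.8"))      # vmk0 (default gateway on vmk0)
```

Note that this lookup only ever applies to VMkernel traffic; as discussed below, virtual machine traffic never consults this table.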
In the second scenario, in/out L2 Production traffic uses vSS2, right?
For in/out L3 Production traffic, which vSS will be used?
It should always use vSS2, even though the routing table has the gateway on vmk0, which is on vSS0, but why?
Could you explain to me how this works?
As for best practices: should Mgmt and Production be on the same vSS?
Thanks so much
In the second scenario, in/out L2 Production traffic uses vSS2, right?
Yes. "Production" = virtual machine traffic in this case.
For in/out L3 Production traffic, which vSS will be used?
It should always use vSS2, even though the routing table has the gateway on vmk0, which is on vSS0, but why?
Also vSS2, because the Production port groups are configured only there. The routing table makes no difference, because it applies to VMkernel services, not to virtual machines.
Do not confuse VMkernel services such as Management and vMotion with virtual machine traffic. The two are separate types of traffic and are handled differently. Wherever a port group is configured and VMs are connected to it, those VMs will use the uplinks assigned to the switch on which the port group lives for all ingress and egress traffic. Again, the routing table inside the kernel has no effect on these traffic flows because it does not apply to them.
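The point above can be sketched as a simple lookup: a VM's frames go out the uplinks of whichever switch hosts its port group, and the destination IP plays no role in that choice. The switch, port-group, and uplink names are assumptions taken from the scenario in the question.

```python
# Hypothetical mapping: port group -> hosting vSwitch and its uplinks.
# VM traffic placement is this lookup, not a routing decision.
PORT_GROUPS = {
    "Management": {"switch": "vSS0", "uplinks": ["vmnic0"]},
    "Production": {"switch": "vSS2", "uplinks": ["vmnic2", "vmnic3"]},
}

def egress_uplinks(vm_port_group: str) -> list:
    """A VM's frames leave on the uplinks of the switch hosting its
    port group; the destination IP is never consulted here."""
    return PORT_GROUPS[vm_port_group]["uplinks"]

# Both L2 and L3 traffic from a "Production" VM egresses vSS2's uplinks:
print(egress_uplinks("Production"))  # ['vmnic2', 'vmnic3']
```

Whether the destination is on the same subnet (L2) or behind a router (L3), the frame leaves on the same uplinks; routing happens on the physical network.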
OK, so the traffic of the VMs (L2/L3) is sent to the uplinks connected to the physical switches, which then route the traffic according to their routing tables, right?
What is the BP, if there is one?
Mgmt traffic and VM traffic on the same vSS, or better on two separate vSSes?
OK, so the traffic of the VMs (L2/L3) is sent to the uplinks connected to the physical switches, which then route the traffic according to their routing tables, right?
Yes. The exception is two VMs connected to the same port group that wish to communicate with each other (L2). This traffic does not egress the ESXi host but is internally switched. A second exception is with NSX and the DLR (Distributed Logical Router). If VM A wishes to communicate with VM B and both are connected to the internal DLR, that traffic (L3) is switched at the kernel level and does not egress the ESXi host.
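The first exception above boils down to a simple decision: two VMs on the same port group on the same host are switched internally, while everything else goes out a physical uplink. A minimal sketch, with names assumed from the scenario:

```python
# Hypothetical sketch of "does this frame leave the host?" for the
# same-port-group case described above (the NSX/DLR case is omitted).
def leaves_host(src_port_group: str, dst_port_group: str,
                dst_on_same_host: bool) -> bool:
    """Return True if the frame must egress via a physical uplink."""
    if dst_on_same_host and dst_port_group == src_port_group:
        return False  # switched internally, never hits the wire
    return True       # otherwise out an uplink to the physical network

print(leaves_host("Production", "Production", True))   # False: internal
print(leaves_host("Production", "Production", False))  # True: via uplinks
```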
What is the BP, if there is one?
What does "BP" mean?
Mgmt traffic and VM traffic on the same vSS, or better on two separate vSSes?
The answer is "it depends". Very often this is done, but you may not want to do this for a few reasons. One reason could be if you use a backup application that uses NBD mode to pull data through the management interface, you might want that on a set of dedicated uplinks and vSS to isolate the traffic.
BP = Best Practices, sorry...
Thanks for your explanations.