We are planning to create two distributed switches (VDS) in the same vCenter.
The first two ports of an ESXi server will be uplinks of one VDS.
The third and fourth ports of the same ESXi server will be uplinks of the second VDS.
We want both networks to be isolated; that's why we are not trunking all the VLANs through a single port.
Can you please confirm whether this configuration is supported?
For vMotion, do we need to create separate VMkernel adapters on each switch, or is a single VMkernel adapter enough?
Please advise if any additional recommendations should be followed for this configuration.
The ESXi and vCenter version is 6.5.
Network isolation in a VDS design depends largely on your network security requirements. If you need to isolate traffic in the virtual infrastructure the same way it is isolated in the physical network, it is recommended to create more than one VDS and also to separate the uplinks. In other words, the VLANs and subnets must be kept isolated in both the virtual and the physical networking infrastructure.
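As a quick sanity check from the host side, you can confirm the uplink split with esxcli. This is only a sketch: the actual uplink-to-VDS assignment is done in vCenter (Add and Manage Hosts wizard), and the vmnic numbering below is an assumption.

```shell
# Confirm the four physical NICs are present on the host
esxcli network nic list

# Show each VDS this host participates in, including which
# uplinks (vmnics) back it. For the design described above you
# would expect vmnic0/vmnic1 under the first VDS and
# vmnic2/vmnic3 under the second.
esxcli network vswitch dvs vmware list
```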
Before choosing a separate VMkernel port for the vMotion traffic, you should estimate the transfer rate (I/O) and decide based on factors such as the speed of the existing physical uplinks. For example, with 10 Gbps pNICs there is usually no need to create separate VMkernel adapters. Whether to separate the VMkernel services or use a single VMkernel port depends on the following factors:
1. Security: you may want to separate the Management, vMotion, FT, and Replication subnets, or even their VLAN IDs.
2. The existing SAN storage structure and storage communication design (physical HBAs, SAN switches, and so on).
3. Scalability: future extension of the network infrastructure.
4. Using separate physical switches to completely split off the vMotion traffic.
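If you do decide on a dedicated vMotion VMkernel adapter on the second VDS, it can be created from the host with esxcli. This is a sketch under stated assumptions: the VDS name "DSwitch-B", the distributed port ID "100", and the IP addressing are all placeholders for your environment.

```shell
# Create vmk1 on a distributed port of the second (vMotion) VDS.
# --dvs-name and --dvport-id here are hypothetical values.
esxcli network ip interface add --interface-name=vmk1 \
    --dvs-name=DSwitch-B --dvport-id=100

# Give it a static address on an isolated, non-routed subnet
# (example addressing, adjust to your design).
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --type=static --ipv4=192.168.50.11 --netmask=255.255.255.0

# Enable the vMotion service on this VMkernel interface.
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

Keeping the vMotion subnet non-routed is itself a common isolation measure, which fits the two-VDS design you describe.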
It seems to me a normal configuration where you want to keep networks divided at the hardware level.
I have 8 distributed switches spread across 52 hosts in 4 clusters; each cluster has 2 switches with 2 uplinks per switch.
vMotion has its own VMkernel ports because of faster 25 GbE uplinks, but this is not a must.
It works without any problems.
Moderator: Moved to vSphere vNetwork