Testing distributed vSwitches on homelab systems (three servers, each with 4 NICs). I have the switch and its port group working, using two NICs for VM traffic. I created a new port group on VLAN 30 and assigned it NIC4. Each host gets a DHCP address from the switch (on 10.0.5.x), separate from my 192.168.5.x internal network.
When I vMotion a VM, NIC1 (port0) on the two servers, which already carries traffic, kicks in and begins to move the powered-off VM, ignoring the vMotion NIC. Each host shows vmk0 on the regular vSwitch and vmk1 on the vMotion port group, bound to the vMotion TCP/IP stack.
From each host I can run vmkping -I vmk1 -S vmotion 10.0.5.102 and each host replies.
What am I doing wrong that makes vSphere ignore the vMotion stack and use the regular stack to migrate the VMs? My license includes vMotion.
Hi @gmerideth
If I understand correctly, the VMs are powered off when they don't use the vMotion network, right?
The migration of a powered-off VM is called a cold migration. By default, the data for cold migrations, snapshots, and clones goes over the management interface; this traffic is called provisioning traffic. If you configure a dedicated interface (or interfaces) for provisioning traffic, that interface will be used instead.
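If you want vmk1 to carry that provisioning traffic, you can tag it from the ESXi shell. A hedged sketch follows; the interface name vmk1 matches your setup, but the exact tag spelling can vary between ESXi builds, so check the tag list on your hosts first. (The same thing can be done in the vSphere Client by editing the VMkernel adapter and enabling the "Provisioning" service.)

```shell
# Assumption: vmk1 is the VMkernel port you want to dedicate to
# provisioning traffic; verify the tag names your build accepts first.
esxcli network ip interface tag get -i vmk1

# Add the provisioning tag so cold-migration, clone, and snapshot data
# uses vmk1 instead of the management interface (tag spelling is an
# assumption -- confirm it against the output above):
esxcli network ip interface tag add -i vmk1 -t VSphereProvisioning

# Confirm the tag was applied:
esxcli network ip interface tag get -i vmk1
```

Repeat on each host. Note that the provisioning and vMotion services can share a VMkernel port, but VMware generally recommends separate ports if you have NICs to spare.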
Regards,
>>> ... and begin to move the powered-off VM.
The vMotion VMkernel port is only used when you migrate powered-on VMs, i.e. do a live migration.
André