I have two Dell R710 servers, each with 4 on-board NICs and 4 additional Intel quad-port NICs; a physical vCenter 4.1 server; an EqualLogic 4000 SAN; and 2 Dell PowerConnect 5424 switches.
I have attached screenshots of my vSwitches.
When I attempted to migrate a VM from one host to another, I received the following error:
The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network.
Check the vMotion network settings and physical network configuration.
vMotion migration [168428044:1307464788623913] failed to create a connection with remote host <10.10.2.11>:
The ESX hosts failed to connect over the VMotion network
Migration [168428044:1307464788623913] failed to connect to remote host <10.10.2.11>:
Timeout
ports g1 - g12 on both switches are in VLAN 100 (dedicated for iSCSI)
port g20 on both switches is in VLAN 200 (dedicated for vMotion traffic)
I have not yet set up a trunk between the two switches to allow VLAN traffic to pass from one to the other (I don't really know Dell PowerConnect switches).
Any suggestions on a possible root cause?
Have you tried a vmkping between the two vMotion interfaces?
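For reference, vmkping runs from the ESXi Tech Support Mode (or SSH) shell and sends traffic through the VMkernel network stack, so it exercises the same path vMotion uses. The 10.10.2.11 address below is the remote vMotion IP from the error message; the 10.10.2.10 address in the reverse test is assumed for illustration:

```shell
# On host A, ping host B's vMotion VMkernel IP
# (10.10.2.11, per the error message above):
vmkping 10.10.2.11

# Then repeat from host B toward host A's vMotion IP
# (10.10.2.10 here is a hypothetical address -- use your own):
vmkping 10.10.2.10
```

If either direction times out, the problem is on the physical path between the two vMotion VMkernel ports, not in vCenter.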
Dell PowerConnect switches are quite simple to configure.
You can do it through the web interface under Switch / VLAN:
define the VLAN, then define port membership.
For a trunk port, you must first change the port mode from access to trunk.
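The same steps can also be done from the switch CLI. This is a sketch of the general Cisco-like syntax the PowerConnect 54xx family uses; the choice of port g24 as the inter-switch uplink is an assumption, and you should verify the exact commands against the 5424 CLI reference guide:

```shell
# PowerConnect 5424 CLI -- run the same config on both switches.
# Assumes g24 is the free port used as the inter-switch uplink.
console# configure
console(config)# interface ethernet g24
console(config-if)# switchport mode trunk
console(config-if)# switchport trunk allowed vlan add 100,200
console(config-if)# exit
```

With both g24 ports cabled together, tagged traffic for VLANs 100 and 200 can then pass between the two switches.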
Andre
Have a look at the following KB:
Good luck.
Regards
Franck
I have not yet set up a trunk between the two switches to allow VLAN traffic to pass from one to the other (I don't really know Dell PowerConnect switches).
Is this not the source of your problem?
If each switch has only one port (g20) in VLAN 200, and (I assume) each host is connected to a separate switch, then you have not created any path between the two vMotion interfaces, so vMotion fails.
How would one set up said route between said hosts? If I had a VLAN trunk between the two switches, wouldn't that constitute a "route"? Or is this something that may need to be handled by a router/firewall/L3 switch? Can I assign static routes to the ESXi hosts?
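On the static-route question: a trunk keeps everything in the same Layer-2 VLAN, so no route is needed at all; routing only comes into play if the two vMotion interfaces sit in different IP subnets with an L3 hop between them. For completeness, on ESX(i) 4.x the VMkernel routing table can be inspected and modified with esxcfg-route. This is a sketch; the 10.10.3.0/24 destination and 10.10.2.1 gateway are hypothetical values for illustration:

```shell
# Show the current VMkernel routing table.
esxcfg-route -l

# Add a static VMkernel route (only needed if the remote vMotion
# subnet is behind an L3 hop; addresses here are hypothetical).
esxcfg-route -a 10.10.3.0/24 10.10.2.1
```

In your setup both vMotion IPs appear to be in 10.10.2.0/24, so a working Layer-2 path (the trunk) should be all that is required.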
It appears my issue is with the PowerConnect switches and some sort of "routing" between the two interfaces: vmkping is not able to reach the IP of the opposing interface.
Now I need to know why, and whether the fix belongs on the physical switch or in some static routing that needs to be applied somewhere.
I figured it out! I had assigned the vMotion network its own IP in its own subnet, but I had left it in the same VLAN as the rest of my LAN.
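In other words, the VMkernel port group's VLAN ID has to match the VLAN the physical switch ports expect (VLAN 200 on ports g20 here). On classic ESX(i) this can be checked and corrected from the shell; the vSwitch and port group names below are assumptions for illustration:

```shell
# List vSwitches and port groups with their current VLAN IDs.
esxcfg-vswitch -l

# Tag the vMotion port group with VLAN 200 to match ports g20
# on the PowerConnect switches (names assumed for illustration).
esxcfg-vswitch -v 200 -p "VMkernel-vMotion" vSwitch1
```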
To add my two cents: if you do this on nested ESXi servers (virtual ones), remember to edit the properties of the Virtual Machine Port Group that your nested ESX(i) hosts are connected to on the physical ESX(i) host, and set the VLAN ID to 4095 (this enables trunking, so all VLAN IDs are allowed). Otherwise your tagged traffic will be dropped there.
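From the shell of the physical host, that 4095 setting would look roughly like this (the port group and vSwitch names are assumptions for illustration):

```shell
# On the PHYSICAL ESX(i) host, set the port group carrying the
# nested ESXi uplinks to VLAN 4095 (VGT mode: all VLAN tags pass
# through untouched). Names are hypothetical.
esxcfg-vswitch -v 4095 -p "Nested-ESXi-Net" vSwitch0
```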
--
Pawel