VMware Cloud Community
infarmed
Contributor

Failed to migrate VMs from an ESXi 5.0 host to a 5.5 host

Hi,

I have upgraded vCenter/vSphere from 5.0 to 5.5.

The vCenter upgrade gave me no problems; everything went smoothly. The problem is with the hosts. I upgraded 3 hosts from 5.0 to 5.5 without issues, but I can't migrate VMs from a 5.0 host to a 5.5 host.

I receive the following:

"... 
  The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network. Check the vMotion network settings and physical network configuration.
  vMotion migration [-1062728319:1381258372348912] failed to create a connection with remote host <192.168.13.135>: The ESX hosts failed to connect over the VMotion network
  Migration [-1062728319:1381258372348912] failed to connect to remote host <192.168.13.135> from host <192.168.13.129>: Timeout
  The vMotion failed because the destination host did not receive data from the source host on the vMotion network. Please check your vMotion network settings and physical network configuration and ensure they are correct.
..."

The migration works without problems between hosts with the same version.

Any ideas?

Thanks

Carlos Baptista

20 Replies
abhilashhb
VMware Employee

Can you ping the management IP from the 5.0 host to the 5.5 host and vice versa?

Is vMotion enabled on the VMkernel port groups on both hosts?
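
For reference, here is a quick way to check both from the ESXi shell (just a sketch; I'm assuming the vMotion interface is vmk1, adjust to your setup):

     List the VMkernel interfaces and note which one carries vMotion:
     ~ # esxcli network ip interface list

     Ping the other host's vMotion IP (on 5.1/5.5 you can force a specific interface with -I; on 5.0 use plain vmkping):
     ~ # vmkping -I vmk1 <remote vMotion IP>

     If vMotion turns out to be disabled on the interface, this should enable it:
     ~ # vim-cmd hostsvc/vmotion/vnic_set vmk1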

Abhilash B
LinkedIn : https://www.linkedin.com/in/abhilashhb/

admin
Immortal

Hi,

Did you get a chance to check this KB?

VMware KB: vMotion fails with connection errors

Thanks,
Avinash

infarmed
Contributor

Thanks for the reply

I can ping (vmkping) all the interfaces between hosts of the same version; it only fails when I try to ping the vMotion interface between different versions:

Host 5.5 pinging host 5.0

     vMotion interface

     ~ # vmkping 192.168.13.124
     PING 192.168.13.124 (192.168.13.124): 56 data bytes
     --- 192.168.13.124 ping statistics ---
     3 packets transmitted, 0 packets received, 100% packet loss
    

     management interface

     ~ # vmkping 192.168.43.124
     PING 192.168.43.124 (192.168.43.124): 56 data bytes
     64 bytes from 192.168.43.124: icmp_seq=0 ttl=64 time=0.451 ms
     64 bytes from 192.168.43.124: icmp_seq=1 ttl=64 time=0.507 ms

Host 5.5 pinging host 5.5

     ~ # vmkping 192.168.13.134
     PING 192.168.13.134 (192.168.13.134): 56 data bytes
     64 bytes from 192.168.13.134: icmp_seq=0 ttl=64 time=0.478 ms
     64 bytes from 192.168.13.134: icmp_seq=1 ttl=64 time=0.478 ms
     64 bytes from 192.168.13.134: icmp_seq=2 ttl=64 time=0.359 ms

     --- 192.168.13.134 ping statistics ---
     3 packets transmitted, 3 packets received, 0% packet loss
     round-trip min/avg/max = 0.359/0.438/0.478 ms

     ~ # vmkping 192.168.43.134
     PING 192.168.43.134 (192.168.43.134): 56 data bytes
     64 bytes from 192.168.43.134: icmp_seq=0 ttl=64 time=0.427 ms
     64 bytes from 192.168.43.134: icmp_seq=1 ttl=64 time=0.373 ms
     64 bytes from 192.168.43.134: icmp_seq=2 ttl=64 time=0.356 ms

     --- 192.168.43.134 ping statistics ---
     3 packets transmitted, 3 packets received, 0% packet loss
     round-trip min/avg/max = 0.356/0.385/0.427 ms

Host 5.0 pinging host 5.0

~ # vmkping 192.168.13.125
PING 192.168.13.125 (192.168.13.125): 56 data bytes
64 bytes from 192.168.13.125: icmp_seq=0 ttl=64 time=0.363 ms
64 bytes from 192.168.13.125: icmp_seq=1 ttl=64 time=0.230 ms
64 bytes from 192.168.13.125: icmp_seq=2 ttl=64 time=0.204 ms

--- 192.168.13.125 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.204/0.266/0.363 ms

~ # vmkping 192.168.43.135
PING 192.168.43.135 (192.168.43.135): 56 data bytes
64 bytes from 192.168.43.135: icmp_seq=0 ttl=64 time=0.419 ms
64 bytes from 192.168.43.135: icmp_seq=1 ttl=64 time=0.426 ms

--- 192.168.43.135 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.419/0.422/0.426 ms

Abhilash, what do you mean by "Is vMotion enabled on the VMkernel port groups on both hosts?"

What are VMkernel port groups?

Thanks

Carlos Baptista

infarmed
Contributor

Hi,

I have redone the process with the same results. I installed a 5.0 cluster with 4 hosts, then upgraded vCenter and 2 of the hosts to 5.5.

I can't ping the vMotion IP between different versions, only between hosts on the same version.

Could this be a bug?

Thanks

Carlos Baptista

infarmed
Contributor

Update:

Now I have 2 hosts 5.0, 2 hosts 5.1 and 2 hosts 5.5.

vMotion doesn't work from 5.0 to any destination. I can migrate VMs from 5.1 to 5.5 and from 5.5 to 5.1, but not from 5.0 to 5.1 or 5.5. So the problem is with 5.0.

Any suggestions?

Thanks

mnaylor85
Contributor

I ran into this same issue and resolved it by doing the following:

We had 2 physical NICs in the active state on the vSwitch. I made one of them a "Standby Adapter" in the NIC Teaming settings on the vSwitch, and this immediately got vMotion working.
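
For anyone who prefers the CLI, something like this should do the same (a sketch; I'm assuming the vMotion vSwitch is vSwitch1 with uplinks vmnic2 and vmnic3, adjust the names to your environment):

     Move vmnic3 from active to standby on the vSwitch:
     ~ # esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic2 --standby-uplinks=vmnic3

     Verify the resulting teaming policy:
     ~ # esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1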

Hope this helps someone else with this error.

Mike

MisterFrog
Contributor

We had the same issue after the migration of one ESX from 5.0 to 5.5.

mnaylor85's solution worked for us as well.

ebooysen
Contributor

Same here; the only difference is I upgraded from 5.1 to 5.5.

I tried the "one NIC in standby" suggestion, but it didn't work for me.

Some help would be greatly appreciated.

DHeinen82
Contributor

We've got the same issue here.

The solution from mnaylor85 doesn't work for us either.

Is there anybody who has more experience with this problem or knows other solutions?

Google also doesn't know a lot about this.

Greets

==========================

UPDATE 1: If you shut down the VMs on the 5.0 host, you can move them to a 5.5 host! Strange...

==========================

UPDATE 2: With the trick from PJ I could move the VMs between the hosts. But now, after moving one to a new host, the VM's NIC isn't working anymore...

==========================

pratjain
VMware Employee

Is the vMotion network different from the Management network? If yes, can you try enabling vMotion on the Management network and check whether that helps.

Check the vMotion box on the Management network, somewhat similar to the screenshot below, and uncheck it on the vMotion network.

Let me know if this helps. Also see if you can re-create the vMotion port group, or the whole vSwitch, on the hosts and check whether it makes a difference.

[Screenshot: mdac_checkboxes.png — the vMotion checkbox enabled on the Management network port group]
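
If the client is being difficult, the same change should be possible from the ESXi shell (a sketch; I'm assuming vmk0 is the Management interface and vmk1 the current vMotion interface):

     Remove the vMotion designation from the old interface:
     ~ # vim-cmd hostsvc/vmotion/vnic_unset vmk1

     Designate the Management VMkernel interface for vMotion:
     ~ # vim-cmd hostsvc/vmotion/vnic_set vmk0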

Regards, PJ. If you find this or any other answer useful, please mark it as correct or helpful.
DHeinen82
Contributor

That's the answer to the problem!

You're awesome!!!! Thanks a lot!

I changed the vMotion Network to the Management Network and it worked!

You saved my day!

DHeinen82
Contributor

So your trick worked as well.

I could migrate VMs from one host to another.

But when I do, the VM is not reachable through the network???

I'm starting to think my ESXi farm is trying to make fun of me... for today I hate my ESXi server farm... :smileyangry:

I recreated the whole vSwitch, but it doesn't work...

Any ideas?

ngchunchit
Contributor

I am upgrading 7 ESXi servers from 5.0 to 5.5. vMotion was not working because the vMotion setting on the VMkernel interface was automatically disabled after the upgrade on all 7 hosts. vMotion worked again after I re-enabled it on the VMkernel interface.
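
On 5.1/5.5 hosts you can also check and fix this via the VMkernel interface tags (a sketch; assuming vmk1 is the vMotion interface):

     Show which services are tagged on the interface:
     ~ # esxcli network ip interface tag get -i vmk1

     Re-add the vMotion tag if it went missing after the upgrade:
     ~ # esxcli network ip interface tag add -i vmk1 -t VMotion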

DHeinen82
Contributor

!!! This solution is for Emulex/Broadcom network adapters !!!

For us, this finally worked:

esxcli system module set --enabled=false --module=elxnet

esxcli system module set --enabled=true --module=be2net
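
Note that module changes like these only take effect after a reboot. You can verify which module is enabled with something like:

     ~ # esxcli system module list | grep -i net

elxnet is the native ESXi 5.5 driver for Emulex OneConnect adapters and be2net is the legacy vmklinux driver, so this effectively rolls the NIC back to the older driver.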

regards

ch1ta
Hot Shot

We've also come across this issue previously. Unfortunately, back in the day we were not able to find the solution and just migrated the required VMs using Quick Migration, which ships with the Veeam free edition; a nice tool to bypass whatever problem you run into while migrating VMs.

Cheers.

HawkieMan
Enthusiast

Check the gateway and subnet mask on your vMotion network.

I noticed some of your addresses are 192.168.43.* and others are 192.168.13.*; assuming the netmask is 255.255.255.0, they would not be in the same subnet, and in that case you would need routing enabled.
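
Both are easy to verify from the shell (adjust interface names as needed):

     Show the IP address and netmask of each VMkernel interface:
     ~ # esxcli network ip interface ipv4 get

     List the VMkernel routing table and default gateway:
     ~ # esxcfg-route -l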

iteric
Contributor

@DHeinen82

You've really made my day! :smileyhappy:

jalak
Contributor

I had the same issue. We have 1 cluster with 2 ESXi 4.1 hosts.

After upgrading 1 of the hosts to ESXi 5.5 via Update Manager, vMotion didn't work.

vmkping requests timed out.

I managed to resolve the issue by disabling and re-enabling vMotion.
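
For the record, the same disable/re-enable toggle can be done from the shell (a sketch; assuming vmk1 carries vMotion):

     ~ # vim-cmd hostsvc/vmotion/vnic_unset vmk1
     ~ # vim-cmd hostsvc/vmotion/vnic_set vmk1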

fjmarquez
Contributor

From vCenter:

1) Go to the host > Configuration > Networking.

2) Select Properties for the vSwitch that carries vMotion.

3) Click "Edit" for the vSwitch configuration.

4) Go to the NIC Teaming section and check that the load balancing method is set to "Route based on IP hash".

That should be enough to allow migration from a 5.0 host to a 5.5 host without having to set one of the two NICs to standby mode.
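
The CLI equivalent would be something like the following (a sketch; I'm assuming vSwitch1, and keep in mind that "Route based on IP hash" normally requires a matching EtherChannel/static trunk on the physical switch ports):

     ~ # esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash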
