4 Replies Latest reply on Mar 2, 2017 3:51 AM by lalitsharma

    vmotion woes vmk_preemptive_fail

    alwaysnodowntime Enthusiast

      Hello community! I've just started having issues with a host in my cluster.  There are 3 nodes, and 1 node will not allow a VM to vMotion off of it, nor will it accept an incoming vMotion.  The vMotion network is fine: I've tested DNS, vmkping and ping, all of which were successful. I also restarted the management agents.  I've taken a look at esxtop and don't see anything that jumps out at me.
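      For anyone following along, the checks described above look roughly like this from the ESXi shell. This is a hedged sketch, not commands from the thread: the vmknic name `vmk1`, the peer IP, and the hostname are placeholders you'd substitute with your own values.

```shell
# Placeholder values: vmk1 = your vMotion vmknic, 10.0.0.12 = the other
# host's vMotion IP, esx02.example.local = the other host's FQDN.
vmkping -I vmk1 10.0.0.12       # reachability over the vMotion vmknic
nslookup esx02.example.local    # DNS resolution test
/etc/init.d/hostd restart       # restart the management agents
/etc/init.d/vpxa restart
```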

       

      When I attempt to vMotion, whether "high priority" or "standard", the vMotion starts at 3 percent and then goes to 9 percent. Shortly thereafter I get the following message:

       

      Migrate virtual machine
      VMViewTrans1.lldc.us19.local
      A general system error occurred: Failed to initialize migration at source.
      Error 0xbad00a4.
      VMotion failed to start due to lack of cpu or memory resources.


      The vmkerrcode utility decodes 0xbad00a4 as "vmk_preemptive_fail".

       

      Each server shows plenty of CPU and memory, and the cluster is balanced across the 3 connected hosts, each having 98 GB of RAM with CPUs barely working, maybe 2% at most on each.  When I power off a machine, I can vMotion it to another host and vice versa, but not while powered on.  When I try to vMotion a powered-on machine back as a test, I get the message below.  Please remember, only 1 host is having the problem: I can vMotion fine between the other two, all the servers are identical, and vMotion worked fine before.  The resource pools have never changed and I have no reservations whatsoever.


      Failed to initialize migration at source.  Error 195887268.  VMotion failed to start due to lack of cpu or memory resources.
      Failed to allocate migration heap.
      Failed to create a migrate heap of size 30879984: Not found
      Failed to reserve a migration memory slice.  This host has reached its concurrent migration limit.  Please wait for another migration to complete before retrying.
      The vMotion failed because the destination host did not receive data from the source host on the vMotion network.  Please check your vMotion network settings and physical network configuration and ensure they are correct.
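      Side note for anyone comparing the two error reports: the decimal code in this message is the same VMkernel status as the hex code 0xbad00a4 quoted earlier, and the migrate heap the host failed to create is only about 29 MiB, tiny next to 98 GB of RAM, which suggests VMkernel heap exhaustion or fragmentation rather than a real shortage of memory. A quick sanity check in plain Python (nothing ESXi-specific):

```python
# The decimal error code from the log is the same value as the
# hex code reported by vmkerrcode earlier in the thread.
assert 195887268 == 0xbad00a4
print(hex(195887268))              # -> 0xbad00a4

# The migrate heap size from the log, converted to MiB: roughly
# 29 MiB, small compared to the host's 98 GB of RAM.
print(round(30879984 / 2**20, 1))  # -> 29.4
```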


      Thanks in advance for any advice on how to resolve.

        • 1. Re: vmotion woes vmk_preemptive_fail
          alwaysnodowntime Enthusiast

          Update: I have resolved this issue by rebooting the host. I was trying to avoid downtime, but since I could only vMotion the VMs off the host while they were powered off, I had no choice.  Issue is resolved.

          • 2. Re: vmotion woes vmk_preemptive_fail
            JasonPearceAtRiverview Novice

            I'm having a similar issue on ESXi 5.1 U1 hosts. For one of the hosts in my cluster, I can vMotion Powered Off VMs to any host in the cluster, but not Powered On VMs. Powered On VMs generate these errors when a vMotion is attempted to any host in the cluster.

             

            Migration to host <172.22.xxx.xxx> failed with error Connection closed by remote host, possibly due to timeout (195887167).

            vMotion migration [-1407835923:1378328194401787] failed to send init message to the remote host <172.22.xxx.xxx>

            vMotion migration [-1407835923:1378328194401787] (0-71648912871752) failed to receive 68/68 bytes from the remote host <172.22.xxx.xxx>: Connection closed by remote host, possibly due to timeout

            Failed to start migration: VMotion failed to start due to lack of cpu or memory resources.

            Failed to allocate migration heap.

            Failed to create a migrate heap of size 85290174: Not found

            Failed to reserve a migration memory slice. This host has reached its concurrent migration limit. Please wait for another migration to complete before retrying.

             

            This host serves VMware View desktops. How would I non-disruptively remove these powered-on desktops from this host so that I may reboot it? I was thinking I could attempt to put the host in Maintenance Mode and then use VMware View to rebuild the desktop pools. Any other ideas?

            • 3. Re: vmotion woes vmk_preemptive_fail
              rogerlundarray Lurker

              Any other cases of this?

              • 4. Re: vmotion woes vmk_preemptive_fail
                lalitsharma Lurker

                I have another case of this very same issue affecting our ESXi 5.5 infrastructure.

                While vMotioning VMs, I continuously get these error messages. Other than a HOST REBOOT, which I cannot afford, can anyone suggest something? It would be a great help.

                ::

                Failed to start migration: VMotion failed to start due to lack of cpu or memory resources.

                Failed to allocate migration heap.

                Failed to create a migrate heap of size 43401850: Not found

                Failed to reserve a migration memory slice. This host has reached its concurrent migration

                ::