VMware Cloud Community
briggsb
Contributor

Migrate VM fails? Could not complete network copy for file...

Hi,

I am not sure why this has stopped working, because we have done this before without issue.

If I try to migrate a powered-down VM to another host and datastore, I get an error...

'Could not complete network copy for file /vmfs/PATH/FILE.vmdk'

I am not sure where to start with diagnosing this - can anybody help me out, please?

I did try this article, but it didn't help - VMware KB: Storage migration fails with the error: Could not complete network copy for file

Many thanks, Alan

8 Replies
Sateesh_vCloud

From the given scenario, the possible reasons are:

1)  Snapshots - are there any? Is a disk consolidation pending?

2)  ESXi management agents not responding (you already tried that KB)

3)  FILE.vmdk in use or locked by another application (you mentioned the VM is powered off, so in theory there should be no lock)

     Does any backup software have access to the datastores?

4)  Change the destination ESXi host / datastore and give it a try

5)  Finally, do a cold vMotion first, followed by a Storage vMotion

Let me know how it goes ... (a quick command sketch for checks 1-3 is below)
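If it helps, here is a rough command sketch for checks 1-3 from the ESXi shell. This assumes SSH or local shell access is enabled on the source host; <vmid>, <datastore> and <vmname> are placeholders to replace with your own values.

# 1) Snapshots / pending consolidation: list registered VMs, then query one by its ID
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.get <vmid>

# 2) Unresponsive management agents: restart hostd and vpxa on the host
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

# 3) File locks: ask VMFS who holds the lock on the disk
vmkfstools -D /vmfs/volumes/<datastore>/<vmname>/<vmname>-flat.vmdk
# the "owner" field shows the MAC of the lock holder; all zeros usually means no other host has it locked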

-------------------------------------------------------------------------
Follow me @ www.vmwareguruz.com
Please consider marking this answer "correct" or "helpful" if you found it useful.
T. Sateesh, VCIX-NV, VCAP 5-DCA/DCD, VCP 6-NV, VCP 5 DCV/Cloud/DT, ZCP
IBM India Pvt. Ltd
briggsb
Contributor

Thanks for the reply.

1. There are no snapshots. We don't use them manually; we use Veeam, which cleans up its own snapshots anyway.

2. Yes, I tried the KB, and I'm pretty sure I followed it all correctly.

3. No locks, I assume. I will reboot the source host (it's a test server anyway) just in case.

4. I can't change the destination datastore, as I only have enough free space on one of the datastores. Thinking about it, the datastore is the newest thing I introduced into this infrastructure. It's a Synology NAS box, running off one of the single hosts (the one I'm trying to move to) as an iSCSI device. Are there any logs I can check, to see what might be the cause? Any issues migrating to a NAS box? (The source server uses internal SCSI disks.) I've sketched the storage checks I can run at the end of this reply.

5. Cold vMotion? This server is not part of the cluster, so I am using the Migrate feature within the vSphere Client. It will be a cold migration anyway, won't it?
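For what it's worth, these are the storage checks I was thinking of running from the destination host's shell, to confirm it sees the Synology-backed iSCSI device and datastore properly. This assumes SSH is enabled and the standard software iSCSI adapter is in use - I'm not sure this is even the right place to look.

# confirm the iSCSI adapter, the backing device and the VMFS datastore are all visible
esxcli iscsi adapter list
esxcli storage core device list
esxcli storage vmfs extent list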

Thanks, Alan

briggsb
Contributor

OK, I created a test VM on the source host. I powered it off, tried the migration to the same destination host, and it fails too. It fails on the live datastore as well as the Synology NAS. Same error.

Any ideas anyone? This is halting my progress!

Thanks, Alan

briggsb
Contributor

ANYONE???

Alistar
Expert

Hi there,

can you please post a vmkernel.log from the source host which contains the timeframe of a failed migration? It could tell us a lot.

Thanks in advance.
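If it is easier than a full log export, pulling just the window around the failed attempt from the source host's shell should be enough. A minimal sketch, assuming SSH is enabled on the host - <timestamp-prefix> is a placeholder for the date/hour of whenever the migration failed:

# show vmkernel entries from around the failure
grep "<timestamp-prefix>" /var/log/vmkernel.log

# or follow the log live while you retry the migration
tail -f /var/log/vmkernel.log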

Stop by my blog if you'd like 🙂 I dabble in vSphere troubleshooting, PowerCLI scripting and NetApp storage - and I share my journeys at http://vmxp.wordpress.com/
briggsb
Contributor

I used the export facility to export the logs - I assume this is the correct way of getting to the vmkernel.log?

I'm afraid it's rather empty!

2015-04-08T14:43:52.976Z cpu11:8203)<4>hpsa 0000:05:00.0: Device:C5:B0:T0:L1 Command:0x85 CC:05/20/00 Illegal Request.

2015-04-08T14:43:52.976Z cpu7:8199)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x85 (0x4124014fc800, 9293) to dev "naa.600508b1001c9d95f47e71b54935b277" on path "vmhba0:C0:T0:L1" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2015-04-08T14:43:52.976Z cpu7:8199)ScsiDeviceIO: 2331: Cmd(0x4124014fc800) 0x85, CmdSN 0x360 from world 9293 to dev "naa.600508b1001c9d95f47e71b54935b277" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2015-04-08T14:43:52.986Z cpu7:8199)ScsiDeviceIO: 2331: Cmd(0x4124014fc800) 0x4d, CmdSN 0x361 from world 9293 to dev "naa.600508b1001c9d95f47e71b54935b277" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2015-04-08T14:43:52.987Z cpu7:8199)ScsiDeviceIO: 2331: Cmd(0x4124014fc800) 0x1a, CmdSN 0x362 from world 9293 to dev "naa.600508b1001c9d95f47e71b54935b277" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

I hope it makes some sense to somebody?

Thanks, Alan

Alistar
Expert

Hmm, those are SCSI errors against the local storage controller, and they wouldn't cause a cold migration to fail. We need something the migration daemon would log - something like "migrating xxx to host yyy". If you could post the whole vmkernel.log, that would be great.
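In case it helps: cold-migration and NFC ("network file copy") activity usually shows up more clearly in the management-agent logs than in vmkernel.log, so on the source host something along these lines might catch it. These are the standard ESXi 5.x log paths; adjust the search terms as needed.

# search the host agent and vCenter agent logs for migration / network-copy messages
grep -i -E "migrat|nfc|network copy" /var/log/hostd.log
grep -i -E "migrat|nfc|network copy" /var/log/vpxa.log

# and grab the whole current vmkernel.log for posting
cat /var/log/vmkernel.log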

Could you please elaborate further on

"It's a Synology NAS box, running off one of the single hosts (the one I'm trying to move to) as an iSCSI device."

I do not quite understand how the NAS box is interconnected with the rest of your infrastructure.

Thanks for the answers in advance!

Stop by my blog if you'd like 🙂 I dabble in vSphere troubleshooting, PowerCLI scripting and NetApp storage - and I share my journeys at http://vmxp.wordpress.com/
briggsb
Contributor

Hi, and thanks for contributing to this post, it's driving me mad!

First thing: the vmkernel.log is full of the same kind of stuff - are you sure it's the vmkernel.log file I should be looking in? The only lines in there are the same sort of thing I already posted, just blocks of similar output roughly every 30 minutes by the looks of it. There is nothing in there about the migration; the timeframe of the failed migration adds nothing more to that file.
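For what it's worth, this is roughly the kind of search I ran over the failure window (the timestamp prefix matches the failed attempt I posted the log lines from) - nothing migration-related comes back, just the SCSI entries I already posted:

grep "2015-04-08T14:4" /var/log/vmkernel.log | grep -i -E "migrat|nfc|copy"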

The Synology NAS is attached directly to a NIC on one of the physical hosts, so it's accessible only to that single host server. This is the destination host. But we tried a test migration to the shared storage (the SAN) instead of the Synology and got the same problem, so I can probably rule out the Synology, though it was the last thing of significance that we changed in our setup. So what we have is a live cluster: 2 x host servers accessing the shared storage (SAN), all networked together for HA failover etc. One of these servers also has the Synology hanging off the back of it, on its own isolated network. The server we are migrating from is located in another area of the building. It's also on the same network, but it does not have access to the shared storage. It's not in the HA cluster, but it is still managed by vCenter, and it uses internal disks for its datastore.

So, migrating from the server in the other area of the building fails regardless of the destination host (either of the two) or datastore (SAN or Synology).

Thanks, Alan

