cliff962
Contributor

EXPECTED FILE_DATA message. Got: SESSION_COMPLETE

My VMs (XP, Linux) run perfectly on ESXi 4 on an IBM x3400.

They start, stop and suspend OK and have not had any problems - except:

I can't get them OFF the server. If I use the datastore browser, I can download about half of any of the VMs before I get this error:

EXPECTED FILE_DATA message. Got: SESSION_COMPLETE

It doesn't work with VMware Converter either - it gets about halfway and I get an error.

Also tried Fast SCP - same problem - it gets about halfway and I get an error.

How can I get these off the server for backup?
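
For what it's worth, the same files can also be fetched over the host's HTTPS file interface, which as far as I can tell is what the datastore browser download uses under the covers. A rough Python sketch of that - host name, credentials, datastore name and file path are placeholders, and I'm assuming the /folder URL format is right:

    import requests

    HOST = "esxi.example.local"            # placeholder host name
    USER, PASSWORD = "root", "secret"      # placeholder credentials
    DATASTORE = "datastore1"               # placeholder datastore name
    REMOTE_PATH = "MyVM/MyVM-flat.vmdk"    # placeholder path inside the datastore

    url = "https://%s/folder/%s" % (HOST, REMOTE_PATH)
    params = {"dcPath": "ha-datacenter", "dsName": DATASTORE}

    resp = requests.get(url, params=params, auth=(USER, PASSWORD),
                        stream=True, verify=False)
    resp.raise_for_status()

    copied = 0
    with open("MyVM-flat.vmdk", "wb") as out:
        # Stream in 1 MB chunks and keep a running byte count, so when the
        # transfer dies we at least know roughly where it stopped.
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            out.write(chunk)
            copied += len(chunk)

    print("copied %d bytes" % copied)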

Thanks

10 Replies
lcockcroft
Contributor

I have the same issue. Did you ever get it resolved?

cliff962
Contributor

Unfortunately, no.

I have 2 VMs that are stuck on the server - they both previously ran on VMware Server 1.8 on Linux.

I have another 2 VMs that ran on VMware Server 1.8 on Windows XP - no problem with these, but they are now on a different server.

All of these were placed on the ESXi servers with Converter Standalone.

The VMs run perfectly as far as I can tell - but that is not much good if I can't back them up.

So I'll keep trying. Good luck!

rmcs-chad
Contributor

Ditto! I'm having the same issue on my ESXi 4 box. I have 2 VMs which are stored on an iSCSI LUN. The VM that is giving me problems is divided into 3 disks: SCSI 0:0 is a 100GB system disk with Server 2008, SCSI 1:0 is a 50GB disk with Exchange databases, and SCSI 2:0 is a 160GB disk with data files.

Backup of the VM fails at the same point each time. I have tried copying the .vmdk directly from the vSphere Client. I have also tried using Trilead VM Explorer. The error message from VM Explorer is: "Could not read enough data (got 64414003200, expected 107374182400)". For me, the backup fails after copying about 62GB of data.
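
Incidentally, that "expected 107374182400" figure is just the declared size of the 100GB disk: the small descriptor .vmdk lists each extent in 512-byte sectors, and 209715200 x 512 = 107374182400. So one quick sanity check is to compare whatever did come across against the descriptor. A rough Python sketch, with placeholder file names:

    import os
    import re

    DESCRIPTOR = "MyVM.vmdk"        # the small text descriptor (placeholder name)
    FLAT_COPY = "MyVM-flat.vmdk"    # the partially copied data file (placeholder name)

    SECTOR = 512
    expected = 0
    with open(DESCRIPTOR, "r") as f:
        for line in f:
            # Extent lines look like:  RW 209715200 VMFS "MyVM-flat.vmdk"
            m = re.match(r"\s*RW\s+(\d+)\s+", line)
            if m:
                expected += int(m.group(1)) * SECTOR

    got = os.path.getsize(FLAT_COPY)
    print("expected %d bytes, got %d bytes, short by %d"
          % (expected, got, expected - got))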

If I exclude SCSI 0:0 from the backup, it completes fine and copies the other .vmdk files with no problems.

As other users have posted, the VM seems to function OK. Obviously I'm a little worried as I can't seem to get a complete backup and the one .vmdk that bombs is the system drive.

Where should we start troubleshooting?

cliff962
Contributor

Sounds like this is happening on several types of systems.

I have a really simple setup - 2 x 250GB SATA drives set up as RAID 0 on an IBM x3400 tower server.

Datastore 1 covers all the available space on the disk.

One of my VMs bombs at about 8GB (of 15GB), the other at about 52GB (of 80GB).

The problem occurs whether the VMs are powered on, suspended or turned off.

I have tried copying via the datastore browser, with VMware Converter and with Fast SCP. All fail.

I cannot even copy a VM to another location on the same datastore (on the same disk).

But I CAN put another VM on the disk (same datastore) from outside ESXi, and it runs perfectly.

So the problem must lie somewhere in the disks or in ESXi. I suspect ESXi, because prior to running ESXi the same server ran CentOS perfectly.

I have been trying part-time to get these VMs off this server for several weeks now; eventually I gave up, which is why I posted.

I have not found any helpful documentation yet.

Does anybody know if ESXi comes with disk checking/repair tools?

Do I have to go into maintenance mode and run disk checking from the command prompt?

lcockcroft
Contributor

I used SCP to move the files over from one ESXi server to the other.

Check out this page.

http://ukstokes.com/blog/2008/03/17/how-to-migrate-a-vm-using-scp/

rmcs-chad
Contributor

I would try that if I had multiple ESXi servers to copy between. I need a solution that copies the files off the server and saves them to an NTFS partition.
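
Something along these lines is what I'm after - pulling the files straight down to my Windows box over the network. A rough Python sketch using the paramiko package, assuming SSH access is enabled on the host and that it accepts SFTP (if it only speaks SCP, the same idea works with an scp client); host, credentials and paths are placeholders:

    import os
    import paramiko

    HOST = "esxi.example.local"                    # placeholder host name
    USER, PASSWORD = "root", "secret"              # placeholder credentials
    REMOTE_DIR = "/vmfs/volumes/datastore1/MyVM"   # placeholder VM folder on the host
    LOCAL_DIR = "vm-backup"                        # placeholder local folder (must already exist)

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    sftp = client.open_sftp()
    for name in sftp.listdir(REMOTE_DIR):
        remote_path = REMOTE_DIR + "/" + name
        local_path = os.path.join(LOCAL_DIR, name)
        print("copying", remote_path)
        sftp.get(remote_path, local_path)   # raises an IOError if a read fails mid-file

    sftp.close()
    client.close()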

cliff962
Contributor

Thanks for the tip, but SCP did not work for me. Again, halfway through I get an I/O error.

However I did solve my problem.

Because the VMs run perfectly on my ESXi install, it occurred to me that I could install Converter Standalone on each of the running VMs.

I then exported "this physical computer" (which is really the VM) with Converter Standalone to another ESXi host, or to a standalone VMware Server.

I had to select SCSI disks in the export setup, but BINGO, it works beautifully. The VMs fired up first go on the new hosts.

So my problem is solved completely. I have a full backup solution that does not require me to even turn off the VM.

I think my I/O errors are something to do with RAID, but I have no idea what.

This is not a problem on another server with an identical setup.

cliff962
Contributor

If the VM is running OK, install Converter Standalone ON THE VM. Then you can export the running VM ("this physical computer") to any VMware system you like, including a VMware Server etc. as a standalone VM. Make sure to select SCSI disks if you are moving to another ESXi host.

This does not answer why the problem occurs in the first place, but at least you can get a backup of the VM.

TechFreakZ
Contributor

Interesting...

For a while I have been noticing a "General Fault" relating to the VMFS file system in the event log on my ESXi server, but since everything was working just fine, I thought there was nothing to worry about. Now that I have come to transfer my VMs from that machine to a new system, all of a sudden I am having the same problem as mentioned above.

It occurs to me that perhaps this is a VMFS filesystem problem. Maybe it only presents itself when what may be a corrupted VMDK file is accessed and read by any means other than from inside the VM.
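
One way to test that theory would be to read the whole virtual disk end to end from inside the guest: if every block reads fine from in there but the host-side copy still dies at the same offset, that would point at the host/VMFS side rather than the guest data. A rough Python sketch for a Linux guest (needs root; /dev/sda is just an assumed device name):

    # Read the entire disk device in 1 MB chunks and report where, if
    # anywhere, a read error occurs.
    DEVICE = "/dev/sda"      # assumed device name - adjust to the disk to test
    CHUNK = 1024 * 1024

    total = 0
    with open(DEVICE, "rb") as disk:
        while True:
            try:
                data = disk.read(CHUNK)
            except IOError as err:
                print("read error at offset %d: %s" % (total, err))
                break
            if not data:
                print("reached the end of the disk cleanly at offset %d" % total)
                break
            total += len(data)

    print("read %d bytes in total" % total)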

Are there any tools out there for correcting errors in the VMFS filesystem from within ESXi?

Cheers

TFZ

rmcs-chad
Contributor

Just thought I would post a reply on how I was able to work around this issue. This topic seems to be viewed quite often, so I thought I would revisit it.

I'm still not sure of the root cause of the issue, only what worked for me. Some people suggested using VMware Converter to move the VM from one ESXi server to another. At the time I didn't have another server, but I was getting frustrated with this issue and built a 2nd ESXi server just to see if that would work. Unfortunately, for me, it did NOT. Others may have success, but for me the process errored out in somewhat similar fashion.

What DID work for me was to install an imaging solution inside the VM and image the entire system. Once the image was created, I was able to completely remove the "corrupted" VM and recreate it on my ESXi server. Then I restored the image and everything worked as normal! Now I can copy individual .vmdk files as I please.

Hope this helps anyone who is still experiencing this issue.
