I have a machine running Red Hat Enterprise Linux 7.6. I am trying to convert it to a VMware VM using vCenter Converter 6.2.0. I did this repeatedly with few problems on RHEL 7.5 machines as recently as 5 months ago.
Here are the steps that I do prior to starting the conversion...
1. Make sure root login is enabled via ssh
2. Make sure the connection is good between the Windows machine running VMware Converter, the VMware cluster that will host the newly converted machine, and the original RHEL machine.
3. Make sure all unnecessary processes running on the machine have been stopped.
4. Make sure /var/tmp is unmounted
5. Make sure no processes are using /tmp
6. Remount /tmp as executable (mount -o remount,exec /tmp)
Start up the Converter and set the IP options, along with the helper VM IP.
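For what it's worth, the pre-flight part of the checklist (steps 1, 2, 5, and 6) can be sketched as shell commands. This is a rough sketch, run as root; the hostnames are hypothetical placeholders, so substitute your own:

```shell
# Rough sketch of the pre-conversion checks; run as root on the source machine.

# 1. Confirm root login over SSH is enabled
grep -i '^PermitRootLogin' /etc/ssh/sshd_config

# 2. Confirm connectivity to the Converter host and the target cluster
#    (converter-host.example.com and vcenter.example.com are placeholders)
ping -c 3 converter-host.example.com
ping -c 3 vcenter.example.com

# 5. Confirm nothing is holding /tmp open
lsof +D /tmp

# 6. Remount /tmp as executable (the Converter helper needs to run from it)
mount -o remount,exec /tmp
mount | grep ' /tmp '   # verify noexec is gone
```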
The converter chugs along until it gets to 98%, and the last message in Task Progress is "Creating initial ramdisk".
The task times out with the error:
"SysCommandWrapper: Error encountered in SysCommand: SysCommand failed to wait /usr/lib/vmware-converter/chrootAndExec.sh to terminate"
I seem to have no problems with any 7.5 machine, but this happens on all the 7.6 machines that I have tried.
I have searched the error log file and Googled for anyone else hitting this error on 7.6, with no luck.
Any comments or hints of action would be helpful. If I am posting in the wrong place, let me know the proper place to ask my question.
TIA
Dave McFerren
What versions are you running of:
vCenter -
ESXi Host -
Converter -
VMware Tools -
Also, how much memory does the VM have allocated to it?
We are seeing this with CentOS 7.6 also.
vCenter 6.5 / ESXi 6.5
converter 6.2.0 build-8466193
I tried it initially giving the VM 16 GB, and tried again with 32 GB - no difference.
Hello,
I'm experiencing the same issue with CentOS 7.6. Did you have any luck solving this?
We have support so I sent them the log files to look at - they responded with this today:
2019-02-27T18:11:22.227Z error vmware-converter-helper[F2FB3B70] [Originator@6876 sub=task-1] SysCommandWrapper: Error encountered in SysCommand: SysCommand failed to wait /usr/lib/vmware-converter/chrootAndExec.sh to terminate
2019-02-27T18:11:22.228Z error vmware-converter-helper[F2FB3B70] [Originator@6876 sub=task-1] SysCommandWrapper: Error encountered in SysCommand: SysCommand failed to wait /usr/lib/vmware-converter/chrootAndExec.sh to terminate
2019-02-27T18:11:22.228Z warning vmware-converter-helper[F2FB3B70] [Originator@6876 sub=task-1] SysCommandWrapper ignores exception in destructor: SysCommandWrapper: Error encountered in SysCommand: SysCommand failed to wait /usr/lib/vmware-converter/chrootAndExec.sh to terminate
2019-02-27T18:11:22.228Z error vmware-converter-helper[F2FB3B70] [Originator@6876 sub=task-1] Patching initrd image failed, trying to recover old config/initrd file
Above is the error causing the failure. We have a couple of similar cases for the same issue; VMware is working on it at the moment. We might be opening a PR for this and will update you on the status of the case once I have additional information.
I will update the thread when/if they ever get back to me with a fix.
I am replying to my own question to give an update. VMware support came back to me and had me boot the to-be-converted machine on an older kernel, then try the V2V while running that older kernel.
As I have successfully run the converter against other 7.5 machines, this sounded promising.
I booted the 7.6 machine on the latest 7.5 kernel instead of the original 7.6 kernel. The machine booted properly, but the conversion failed as before: it times out after getting to 98% with the same error of...
FAILED: An error occurred during the conversion: 'SysCommandWrapper: Error encountered in SysCommand:
SysCommand failed to wait /usr/lib/vmware-converter/chrootAndExec.sh to terminate'
Still awaiting the response from VMware.
Confirmed/reproduced on CentOS 7.6 (build 1810). This is my first use of vCenter Converter Standalone, so I was chasing my tail.
Exact same issue here. It previously worked on this same CentOS machine before it moved to 7.6 (we didn't go live on the converted VM the first time, hence converting again).
Any updates from VMware support?
I'm on hold awaiting a fix. Can I offer up a set of logs to help with debugging? dgmcferre, any news from your support request?
We see the same issue with P2V on RHEL 7.6 with the latest yum updates.
We could reproduce it with several different machines.
A VMware SR was opened on 13 February 2019 and we are still waiting for a fix.
I also had this issue. I couldn't wait on a fix, so if you are desperate you can do the following.
(If you can get into rescue mode, you may be able to skip the Veeam step and try the repair on the existing failed conversion)
Use Veeam Backup & Replication. If you don't have it you can download a 60 day trial for evaluation.
Quick description of how to get this migrated.
Perform a Linux-based agent backup of the physical CentOS 7.6 server. Once the backup completes, right-click on the backup and choose 'Export to virtual disk', selecting VMDK.
Upload the VMDK disks to your VMware datastore. Then attach them to the failed VM that died at 98%, replacing the existing disks from the conversion. (You could also just create a new VM.)
On initial boot it failed with "dracut-initqueue timeout starting timeout scripts" errors mounting the root volume.
Boot into rescue mode and rebuild with the following command.
(replace the initramfs below with the version you are using)
dracut -f /boot/initramfs-3.10.0-514.26.2.el7.x86_64.img 3.10.0-514.26.2.el7.x86_64
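If you'd rather not hard-code the kernel version, the same rebuild can be written against the installed kernel. A sketch; note that inside a chroot from rescue media, `uname -r` reports the rescue kernel, so substitute the installed version explicitly if they differ:

```shell
# Rebuild the initramfs for the running kernel version.
KVER=$(uname -r)
dracut -f "/boot/initramfs-${KVER}.img" "${KVER}"
```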
After this completes, the VM will boot without issues. If you get an error in the VM properties in vSphere stating the disk is 0 MB even though it boots fine, migrate the VM to another datastore and the disk descriptor will fix itself.
Hope this helps, but hoping for support for 7.6 as I have a few of these to do.
Regards
Luke
Hi,
Our issue was solved yesterday by VMware support with a provisional workaround:
Workaround:
Before migration:
yum downgrade lvm2-libs-2.02.180-10.el7_6.2.x86_64 lvm2-2.02.180-10.el7_6.2.x86_64 device-mapper-event-libs-1.02.149-10.el7_6.2.x86_64 device-mapper-1.02.149-10.el7_6.2.x86_64 device-mapper-libs-1.02.149-10.el7_6.2.x86_64 device-mapper-event-1.02.149-10.el7_6.2.x86_64
After migration, do a yum upgrade of these packages.
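Spelled out, the workaround amounts to the following (run as root; package versions exactly as given by support):

```shell
# Before migration: downgrade LVM/device-mapper to the el7_6.2 builds
yum downgrade \
    lvm2-2.02.180-10.el7_6.2.x86_64 \
    lvm2-libs-2.02.180-10.el7_6.2.x86_64 \
    device-mapper-1.02.149-10.el7_6.2.x86_64 \
    device-mapper-libs-1.02.149-10.el7_6.2.x86_64 \
    device-mapper-event-1.02.149-10.el7_6.2.x86_64 \
    device-mapper-event-libs-1.02.149-10.el7_6.2.x86_64

# After migration: bring the same packages back up to date
yum upgrade lvm2 lvm2-libs device-mapper device-mapper-libs \
    device-mapper-event device-mapper-event-libs
```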
HTH,
Dominique
Thanks Dominique,
I will give this a crack today.
Regards
Luke
This didn't work, or I didn't follow it correctly.
I booted from the CD ISO into rescue mode, got the disk, and chrooted into it. Then I ran the dracut command.
This is the error:
Failed to install module BusLogic
I also regenerated the grub2 config, but that wasn't it. The menu comes up fine.
I can confirm the downgrade of the packages did work. It balked at first because a Python library for GNOME or something was in the way, so I deleted that, then downgraded. Then the conversion worked fine.
Edit /etc/dracut.conf. It looks like Converter adds two lines to this file. You will see a line that begins with add_drivers, with BusLogic in the list. Dracut kept throwing errors about not being able to load the BusLogic module. So frustrating.
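If you hit the same BusLogic errors, stripping the driver out of the add_drivers line can be scripted. This sketch works on a temporary sample file (the sample line is an assumption about what Converter appends) so you can verify the sed expression before pointing it at the real /etc/dracut.conf:

```shell
# Strip "BusLogic" from any add_drivers line in a dracut.conf-style file.
# Demonstrated on a temp copy; point it at /etc/dracut.conf once verified.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# sample of what Converter appends (assumed contents)
add_drivers+=" BusLogic vmw_pvscsi "
EOF

# Remove the whole word "BusLogic", leaving the other drivers alone
sed -i 's/\bBusLogic\b//g' "$conf"
grep add_drivers "$conf"
```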
Running
$ yum downgrade lvm2-libs-2.02.180-10.el7_6.2.x86_64 lvm2-2.02.180-10.el7_6.2.x86_64 device-mapper-event-libs-1.02.149-10.el7_6.2.x86_64 device-mapper-1.02.149-10.el7_6.2.x86_64 device-mapper-libs-1.02.149-10.el7_6.2.x86_64 device-mapper-event-1.02.149-10.el7_6.2.x86_64 --skip-broken
results in these flagged dependency problems:
Error: Package: 7:lvm2-python-libs-2.02.180-10.el7_6.3.x86_64 (@updates)
Requires: lvm2-libs = 7:2.02.180-10.el7_6.3
Removing: 7:lvm2-libs-2.02.180-10.el7_6.3.x86_64 (@updates)
lvm2-libs = 7:2.02.180-10.el7_6.3
Downgraded By: 7:lvm2-libs-2.02.180-10.el7_6.2.x86_64 (updates)
lvm2-libs = 7:2.02.180-10.el7_6.2
Available: 7:lvm2-libs-2.02.180-8.el7.i686 (base)
lvm2-libs = 7:2.02.180-8.el7
Available: 7:lvm2-libs-2.02.180-10.el7_6.1.i686 (updates)
lvm2-libs = 7:2.02.180-10.el7_6.1
Error: Package: 7:lvm2-cluster-2.02.180-10.el7_6.3.x86_64 (@updates)
Requires: lvm2 = 7:2.02.180-10.el7_6.3
Removing: 7:lvm2-2.02.180-10.el7_6.3.x86_64 (@updates)
lvm2 = 7:2.02.180-10.el7_6.3
Downgraded By: 7:lvm2-2.02.180-10.el7_6.2.x86_64 (updates)
lvm2 = 7:2.02.180-10.el7_6.2
Available: 7:lvm2-2.02.180-8.el7.x86_64 (base)
lvm2 = 7:2.02.180-8.el7
Available: 7:lvm2-2.02.180-10.el7_6.1.x86_64 (updates)
lvm2 = 7:2.02.180-10.el7_6.1
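Reading the output above, the two blockers are lvm2-python-libs and lvm2-cluster, both pinned to the el7_6.3 builds. The earlier report of deleting the in-the-way Python library suggests removing them first. A sketch, not verified here; note what you remove so you can reinstall it after the migration:

```shell
# Remove the packages that pin lvm2/lvm2-libs at el7_6.3, then retry
# the downgrade without --skip-broken. Reinstall them after the migration.
yum remove lvm2-python-libs lvm2-cluster
```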
Running the yum downgrade with --skip-broken indicates that these four el7_6.2 packages won't be installed.
Packages skipped because of dependency problems:
7:device-mapper-event-1.02.149-10.el7_6.2.x86_64 from updates
7:device-mapper-event-libs-1.02.149-10.el7_6.2.x86_64 from updates
7:lvm2-2.02.180-10.el7_6.2.x86_64 from updates
7:lvm2-libs-2.02.180-10.el7_6.2.x86_64 from updates
I did not run this, as I'm guessing my system likely won't boot afterwards. I do have a backup to fall back to, but that's no fun. vCenter Converter 6.2.0 is from December 2018. When is a new version expected, and would this bug (compatibility with RHEL 7.6 / CentOS 7.6) likely be fixed in it?
Other thoughts and suggestions?
thx,
cjn
I really am dead in the water for this conversion. Any help?
thx,
cjn
Have you read this KB article?
Hello,
Yes, I have read the KB article. But potentially downgrading several hundred servers before a conversion and upgrading them again afterwards sounds neither fun nor very efficient.
Is there an estimate of when there will be an updated version of VMware vCenter Converter Standalone with RHEL 7.6 support?
Thanks!