I understand what it is. I didn't know that you couldn't use a GUI.
Boot a new VMware virtual machine with a rescue CD and use netcat to stream the disk from the running container to the VM running from the rescue CD. You would still need to recreate the boot partition and fit in things like driver modules etc. If you have multiple different OS containers it may be more trouble than it is worth. Could try converting the host machine to virtual including all the containers.
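For reference, the netcat streaming step could look something like this (a sketch only; the device names, IP address, and port are placeholders to adapt, and note that the `nc -l` listening syntax varies between netcat variants):

```shell
# On the VM booted from the rescue CD (receiving side), listen on a
# port and write the incoming stream straight to the virtual disk.
# /dev/sda and port 9000 are example values:
nc -l -p 9000 | dd of=/dev/sda bs=1M

# On the source host (sending side), stream the container's block
# device or disk image to the rescue VM (192.168.1.50 is a placeholder):
dd if=/dev/vz/container101 bs=1M | nc 192.168.1.50 9000
```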
You can try running VMware vCenter Converter Standalone against the running OpenVZ container.
OpenVZ containers do not contain any kernel or boot files, so the Converter cannot be used.
What OS?
Converter can be installed directly in the OS. There is a Linux version, which unfortunately requires a GUI.
"OpenVZ is container-based virtualization for Linux. OpenVZ creates multiple secure, isolated containers (otherwise known as VEs or VPSs) on a single physical server...Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files." (c) http://wiki.openvz.org
But containers use /boot (kernel etc.) from the host server, and none of our containers have a GUI.
Here is my short procedure; perhaps it will be helpful to someone.
1. create 2Gb clean image on openvz host
dd if=/dev/zero of=/opt2/VHD/speed.img bs=516096c count=4000
fdisk -u -C4000 -S63 -H16 /opt2/VHD/speed.img
2. mount image, create fs
losetup -o32256 /dev/loop0 /opt2/VHD/speed.img
mke2fs -b1024 /dev/loop0 2015968
tune2fs -j /dev/loop0
3. mount image with fs
mount -text3 /dev/loop0 /mnt/speed/
4. copy container files to image
cp -av /var/lib/vz/private/131/* /mnt/speed/
5. umount image
umount /mnt/speed
losetup -d /dev/loop0
6. convert image to VMDK format
kvm-img convert -f raw speed.img -O vmdk speed.vmdk
7. on esxi host create VM without hdd
8. copy speed.vmdk to esxi (best way - use FTP)
9. on esxi convert VMDK image to thin format
vmkfstools -i speed.vmdk -d thin speed-thin.vmdk
10. connect speed-thin.vmdk to VM (edit VM properties->add hard disk-> use existing...), start VM, boot from rescue CD:
- vi /etc/network/interfaces
- vi /etc/apt/sources.list
- apt-get update
- apt-get install linux-image-2.6-686 grub-pc
- add in /etc/inittab
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/getty 38400 tty6
- echo "/dev/sda1 / ext3 defaults,errors=remount-ro 0 1" > /etc/fstab
- update-grub
11. boot VM normally
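The constants in steps 1-3 all follow from the fake disk geometry (16 heads, 63 sectors/track, 512-byte sectors); a quick check of the arithmetic:

```shell
# One cylinder = heads * sectors/track * bytes/sector:
echo $((16 * 63 * 512))                       # 516096, the dd block size
# The first partition starts at sector 63, giving the losetup offset:
echo $((63 * 512))                            # 32256
# 1024-byte filesystem blocks that fit after the offset (integer division):
echo $(( (4000 * 516096 - 32256) / 1024 ))    # 2015968, the mke2fs count
```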
I realize this is a very old post. Thank you for the information, inoyat; I appreciate all the work you put into it. I have followed the instructions, and I am able to boot off a rescue CD, mount the hard drive, and see the files from the converted OpenVZ machine. I'm using a single ESXi 6.7-based VM with CentOS 6 32-bit (i686), as that was the type of the old VM. I've been able to chroot into the old drive, I can get online, and I can use yum to install packages in the chrooted environment.
I have done these steps after conversion:
1. Booted from Centos 6.8 live cd
2. mkdir /tmp/speed
3. mount /dev/sda1 /tmp/speed
4. chroot /tmp/speed
5. yum install kernel grub (and dependencies)
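One thing worth noting about the steps above: grub-install generally cannot probe drives from inside a chroot unless the live system's /dev, /proc, and /sys are bind-mounted into it first. A sketch of the usual pattern (mount point and device names are taken from the steps above; verify they match the actual VM):

```shell
# From the live CD, before entering the chroot:
mount /dev/sda1 /tmp/speed
mount --bind /dev  /tmp/speed/dev
mount --bind /proc /tmp/speed/proc
mount --bind /sys  /tmp/speed/sys
chroot /tmp/speed /bin/bash

# Inside the chroot, install the bootloader to the whole disk (the MBR),
# not to a partition and not to /dev/simfs:
grub-install /dev/sda
```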
Inside of /boot I have:
vmlinuz-2.6.32-754.30.2.el6.i686
initramfs-2.6.32-754.30.2.el6.i686.img
/boot/grub/grub.conf:
------------
default=0
timeout=10
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title CentOS 6.8 i686
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-754.30.2.el6.i686 ro root=/dev/sda1 rhgb quiet
initrd /boot/initramfs-2.6.32-754.30.2.el6.i686.img
------
In the chrooted environment, when I look at /dev there is no sda1; there's just /dev/simfs, the same file structure as existed on the original machine.
/etc/fstab has this:
------------------------------------------------------------
/dev/sda1 / ext3 defaults,errors=remount-ro 0 1
------------------------------------------------------------
/etc/inittab has this:
-----------------------------------------------------------
id:3:initdefault:
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
-----------------------------------------------------
When I boot up the guest I get this:
Booting from local disk ...
Network boot from VMware VMXNET3
PXE-EC8: !PXE structure was not found in UNDI driver code segment
PXE-MOF: Exiting Intel PXE Rom.
Operating system not found
I've tried changing the SCSI controller, under the VM's settings, to every available option; that hasn't made a difference.
I can't tell if the guest VM is not seeing the hard drive at all, or if it is seeing the drive and cannot boot from it because there's a problem with grub or the partition or something.
When I try to run grub-install /dev/simfs, I get: "no suitable drive was found in the generated device map". The same message comes up for /dev/sda1 and /dev/sda.
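With legacy GRUB, that message usually means /boot/grub/device.map still describes the old environment (or is missing), so grub-install refuses every device. One possible fix, assuming the virtual disk really does appear as /dev/sda inside the chroot (that assumption should be verified first):

```shell
# Inside the chroot: rewrite the device map so GRUB maps hd0 to the
# VMware virtual disk (contents are hypothetical; check your layout):
echo '(hd0) /dev/sda' > /boot/grub/device.map
# --recheck makes grub-install regenerate and verify the map before
# installing to the MBR:
grub-install --recheck /dev/sda
```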
Any help would greatly be appreciated.
Chris
Hi
I never looked into OpenVZ containers, but 3 years ago I made a LiveCD that I used for this kind of job:
- boot into Ubuntu LiveCD running on a new VM with a newly created empty vmdk.
- that LiveCD then partitioned the new vmdk, installed grub and syslinux and copied a kernel and init ramdisk in place.
- in that case I then configured that vmdk to chainload into a squash.fs image.
So I used a LiveCD with "Admin-tools" to preconfigure a copy of itself to autostart from a newly setup vmdk.
Sounds like quite a similar task.
I haven't looked at that installer ISO for years, but if you don't get any other help I can dig it up, if you can wait a few days.
Ulli
If you could look into it, I would appreciate it. Because of the system requirements of the Progress database the old system uses, it runs CentOS 6.8 i386, on a VM with a 32-bit CentOS 6 base configuration. I can try to run your steps, but I don't know whether the individual commands line up with the tutorial I followed before.
I've tried to tackle the problem from two directions:
1. Convert the existing working OpenVZ container into a VMware VM, with all services installed and working.
2. Create a new VMware VM running the same OS and services, then try to re-configure / re-install Progress 10.1C as it was set up before. I've never done a bare install of this database, and our support contract with the vendor expired 3+ years ago, so there are no support options without paying an arm and a leg for a "best effort, no guarantees" outcome. I've been reading through the Progress getting-started and installation guides to see if I can duplicate the steps that were once taken to get the system up and running, but I'll try your steps too.
Just for some background: the existing machine is an AMD Opteron-based OpenVZ host with 2 cores and 4 GB of RAM, and the container can see all cores and all of the RAM. The new machine is a dual Xeon Silver 2.1 GHz machine with 256 GB of RAM, where I gave the VM the same number of cores and the same amount of RAM for option #2.
I look forward to any insight you have on the process, and any help anyone else can provide with the conversion.