micy01
Contributor

SUSE with LVM doesn't boot

Hello everybody.

I have converted openSUSE 10.2 64-bit with Converter 5 Standalone. The conversion type was volume-based.

The converter created 2 virtual disks: the first with the /boot partition and the second with the rest of the volumes. At the end there was an error during the configuration of the virtual machine.

I repeated the conversion with the reconfigure option unchecked ("don't configure"). Then I booted the VM into rescue mode, did a chroot and ran mkinitrd, but the volume group "system" was not recognized.
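
Roughly what I ran in the rescue shell (from memory; device names are from my setup and may differ):

    # boot the VM from the install media into rescue mode, then:
    vgchange -ay                      # activate any volume groups that are found
    mount /dev/system/system /mnt     # root LV of the "system" VG
    mount /dev/sda1 /mnt/boot         # the separate /boot disk
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt
    mkinitrd                          # rebuild the initrd for the VM's hardware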

Where is the problem?

Thanks a lot.

Rescue: rescue.jpg

Boot: boot.jpg

patanassov
VMware Employee

Hello

Converter creates a separate virtual disk for LVM volume groups but keeps the /boot volume as a basic volume. That's why you got 2 virtual disks. This is normal and expected.

As for the reconfiguration error - you ought to post the logs for someone to be able to get an idea of what has gone wrong.

You can also try again and manually rebuild the initrd image afterwards, but do not uncheck the reconfigure option. If the error happens during reconfiguration, then all the data have already been copied successfully. OTOH, reconfiguration does more than rebuilding the initrd image (e.g. patching /etc/fstab), which you would otherwise miss.
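
If you do end up reconfiguring by hand, the fstab patching boils down to checking which devices the VM actually sees and pointing each entry at them. Something along these lines in the chroot (illustrative only, nothing converter specific):

    # inside the chroot on the destination VM:
    fdisk -l           # disks as the VM sees them
    blkid              # filesystem labels/UUIDs, handy for matching volumes
    vi /etc/fstab      # fix any entry pointing to a device that changed names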

HTH

Plamen

micy01
Contributor

Hi,

thanks. I will try it again and post the log or more information.

micy01

micy01
Contributor

Hi,

I tried to convert the physical machine again, this time with the "configure the VM" option switched on. At the end there was this error:

20 Error: Unable to reconfigure the destination virtual machine.

    FAILED: An error occurred during the conversion: ' * got kernel major revision as ERROR:
    kernel version has to be in format 2.6.*, version 2.6.18.8-0.3-default is not supported (return code 1)'

19 Creating initial ramdisk (initrd).

18 Patching the mount point entries in fstab.

17 Installing the GRUB boot loader.

16 Starting the reconfiguration of the destination virtual machine.

Then I booted the VM. The situation was the same as on the second screenshot in my original post.

I booted the VM into rescue mode and did a chroot. I listed /etc/fstab; it looks like this:

chroot-fstab.jpg

This is the original fstab from the physical machine:

/dev/system/system   /                    reiserfs   acl,user_xattr        1 1
/dev/sda1            /boot                reiserfs   acl,user_xattr        1 2
/dev/system/home     /home                reiserfs   acl,user_xattr        1 2
/dev/system/opt      /opt                 reiserfs   acl,user_xattr        1 2
/dev/system/srv      /srv                 reiserfs   acl,user_xattr        1 2
/dev/system/tmp      /tmp                 reiserfs   acl,user_xattr        1 2
/dev/system/usr      /usr                 reiserfs   acl,user_xattr        1 2
/dev/system/var      /var                 reiserfs   acl,user_xattr        1 2
/dev/system/swap     swap                 swap       defaults              0 0
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
#/dev/system/documentrepository /srv/documentrepository reiserfs   acl,user_xattr        1 2
/dev/md0             /srv/documentrepository ext3    defaults              1 2

What is wrong?

Thanks

patanassov
VMware Employee

This is strange

There is a regression in Converter 5.1 that reports a kernel version error when the source is not from the 3 supported distros (RHEL, SLES, Ubuntu). I expected openSUSE to be treated as SLES, but maybe I am wrong. You may try Converter 5.0 in case that is the issue.

The fstab on the destination is the same as on the source. This is unexpected, especially since the log highlights say the patching step has passed.

By the way, you cannot clone /dev/md0 (it should be skipped)
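
Which also means the destination fstab should not try to mount it. Based on the fstab you posted, you would comment that entry out in the cloned VM, e.g.:

    # in the VM's /etc/fstab - md0 was not cloned, so disable the entry:
    #/dev/md0             /srv/documentrepository ext3    defaults              1 2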

Can you please upload the logs? Export them from the failed task, not the job. If you don't want to upload the whole bundle, just the helperVM log. You will need to dig a little to find it - there is a worker task bundle inside the task bundle, an agent task bundle inside the worker bundle, and in there lurks the helperVM log 🙂

micy01
Contributor

I know that I can't convert the md device. What I posted above is the red message from the Status tab added to the converter's log highlights. 😄

I prepared the logs before. They are here:

micy01
Contributor

And another:

micy01
Contributor

Hi,

I have tried to convert the openSUSE machine with Converter 5.0.1, but the result was the same. The reconfiguration of the VM was OK; then I went into rescue, ran mkinitrd and grub-install, and tried to repair /etc/fstab, but the VG "system" still wasn't found.

I don't know what to do. Perhaps the kernel isn't supported by the converter.

I will have to install a new VM with the same version of openSUSE and the same packages, and move the application from the physical machine to the VM.

micy

patanassov
VMware Employee

Failing with the same error in 5.0 is strange, I haven't seen such a thing 😞

Apart from reinstalling from scratch, I can only suggest as alternatives:
  - further investigation into why LVM is not working. However, I have no idea what specifically to look for (a few generic commands to start with are below the list).

  - converting without LVM. E.g. you could convert only the root and boot volumes without LVM, then set LVM up inside the VM, create the other volumes, and copy the data over.
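
For the first option, these are just generic LVM diagnostics to run from the rescue shell, nothing converter specific:

    # from the rescue shell, before any chroot:
    pvs             # does LVM see the cloned disk as a physical volume?
    vgscan          # scan all block devices for volume groups
    vgchange -ay    # try to activate whatever was found
    lvs             # list the logical volumes, if the VG activated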

Regards

micy01
Contributor

Hi,

I have tried the conversion again, this time with the "To basic" option (without LVM) in the converter.

Then I had to run fsck on every FS, and the VM now boots normally.

Then I wanted to move the FSes onto LVM. I executed these steps (briefly; a condensed command sketch follows the list):

1. Make a VG named vg00, the LVs, and ext3 filesystems on them.

2. Mount the new root FS under the /rootlv directory, copy the root FS into it, mount the other new FSes under the appropriate directories in /rootlv, and copy those too.

3. Boot into rescue, mount the new root FS under /mnt and the other FSes under their directories in /mnt, and chroot.

4. Fix /etc/fstab, fix /boot/grub/menu.lst, execute "grub-install /dev/sda", and run mkinitrd (with the "lvm2" option too).

5. Reboot.
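
Condensed, the commands looked roughly like this (device names and sizes are examples from my setup):

    # 1. new VG and LVs on the added disk, ext3 on top
    pvcreate /dev/sdb1
    vgcreate vg00 /dev/sdb1
    lvcreate -L 8G -n rootlv vg00
    mkfs.ext3 /dev/vg00/rootlv
    # 2. copy the running root FS into the new LV
    mkdir /rootlv
    mount /dev/vg00/rootlv /rootlv
    cp -ax / /rootlv
    # 3. + 4. from rescue: mount the new root under /mnt and chroot
    vgchange -ay vg00
    mount /dev/vg00/rootlv /mnt
    chroot /mnt
    # now inside the chroot:
    vi /etc/fstab /boot/grub/menu.lst
    grub-install /dev/sda
    mkinitrd -f lvm2      # the "lvm2" option from step 4
    # 5. exit and reboot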

But the result is the same as in my first post - the volume group vg00 is not found.

Where is the problem? I think mkinitrd creates an initrd which doesn't know about LVM (bad kernel, libraries, modules, ...).
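
Is there a way to verify that? Assuming the initrd is a gzipped cpio archive (as it seems to be on this SUSE), something like this should show whether the LVM pieces made it in:

    # list the initrd contents and look for LVM bits:
    zcat /boot/initrd | cpio -it | grep -i -e lvm -e dm-mod
    # SUSE also keeps the extra module list for mkinitrd here:
    grep INITRD_MODULES /etc/sysconfig/kernel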

Do you have any idea?

Thanks a lot.

Milan

patanassov
VMware Employee

I am sorry, but I have no idea. It doesn't seem converter related, rather something specific to this SUSE machine.

I would try to look for some more information that could give a hint about why LVM isn't working; perhaps dmesg?
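
For example, something like:

    # look for device-mapper and disk detection messages
    dmesg | grep -i -e device-mapper -e lvm -e sda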

Just to confirm: the /boot volume is a basic one, not a logical volume, isn't it?

micy01
Contributor

Yes, /dev/sda1.

micy01
Contributor

I would be interested in the problem, but I am leaving the server without LVM because of its short remaining life. It will be replaced by a new one in a few months.

Thanks for your time, and when you have a solution to the problem above, let me know.

Milan
