I tried a hot clone of a RHEL 4 server via VMware Converter v4.0.1 to an ESXi 3.5u4 host, but I discovered that even though LVM on the source is supported, LVM is simply not preserved in the target environment. Instead, every LVM volume is converted into a plain partition. (See the data file snippet for before and after changes.)
I saw these articles, but they are too impractical to follow for multiple server conversions:
Does anyone have the steps for easily converting back to an LVM volume group configuration rather than having multiple VM disks assigned to the VM?
The only other option I see would be to build out a new guest VM, reconfigure it (e.g., add packages, custom application installs, etc.), and then rsync the data from the physical server over to the virtual one.
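For reference, here is a rough sketch of what converting a flat partition back into an LVM layout inside the new VM might look like. All device names, mount points, volume group/LV names, and sizes below are assumptions for illustration (a spare virtual disk at /dev/sdb, the converted flat data mounted at /mnt/flat) — adjust to your environment, and back up first:

```shell
# Assumes: spare virtual disk added to the VM at /dev/sdb, and the converted
# flat partition's data mounted read-only at /mnt/flat. Example names only.

# Turn the spare disk into an LVM physical volume and recreate a volume group
pvcreate /dev/sdb
vgcreate vg00 /dev/sdb

# Recreate a logical volume roughly matching the original source layout
lvcreate -L 20G -n lv_data vg00

# Make a filesystem and copy the data across
mkfs.ext3 /dev/vg00/lv_data
mkdir -p /mnt/lvm
mount /dev/vg00/lv_data /mnt/lvm
rsync -avH /mnt/flat/ /mnt/lvm/

# Finally, update /etc/fstab to reference /dev/vg00/lv_data,
# remount at the original mount point, and remove the flat disk from the VM.
```

This still has to be repeated per volume, so it doesn't scale much better than the rebuild-and-rsync approach for many servers.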
I have been searching all day to find an answer. I am a VMware and storage guy, not a Linux admin. I noticed that all of the source server's LVM volumes were stripped out in the new VM after conversion. There is nothing in the user guide about this; however, the release notes mention:
"The number of LVM logical volumes per volume group is limited to 12 for powered-on Linux sources "
"During the conversion of powered-on Linux machines, Converter Standalone converts LVM volume groups into new disks on the target virtual machine. The number of LVM logical volumes on a source LVM volume group cannot exceed 12. "
What I am struggling to understand is why Converter does this. As far as I understand, LVM gives you flexibility with your volumes — e.g., spanning a partition across multiple physical disks, extending partitions easily, or creating software RAID (which I don't recommend). Overall, can we achieve the same results without using LVM? In vSphere we can extend the VMDK, then use fdisk to grow the partition and extend the filesystem with the ext3 tools. On the other hand, LVM is the default volume manager in Red Hat 5. Should we look for a way to convert this back to LVM or not?
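To make the non-LVM grow path concrete, here is a sketch of what it might involve inside the guest after extending the VMDK in vSphere. The device /dev/sdb1 and the SCSI rescan path are examples, not your actual values; this assumes an offline resize (on RHEL 4, ext2online can grow a mounted ext3 filesystem instead), and repartitioning with fdisk is risky, so back up first:

```shell
# Assumes: the VMDK has already been grown via the vSphere client or
# vmkfstools, and the filesystem to grow lives on /dev/sdb1 (example device).

# 1. Make the guest see the larger disk (rescan path is an example):
echo 1 > /sys/class/scsi_device/0:0:1:0/device/rescan

# 2. In fdisk, delete the partition and recreate it with the SAME starting
#    sector but the new, larger end, then write the table. Data is untouched
#    as long as the start sector does not move.
fdisk /dev/sdb

# 3. With the filesystem unmounted, check it and grow it to fill the partition:
e2fsck -f /dev/sdb1
resize2fs /dev/sdb1
```

So the same end result is reachable without LVM, but it requires a reboot or unmount and a manual fdisk step, whereas with LVM an lvextend plus filesystem resize would do it.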