Cosz3
Enthusiast

A vmdk image (converted from a qcow2 or raw image) cannot boot on vCenter.


Hello,

Hello, I am using OpenStack together with vSphere, and I am stuck on an image problem.

I have some qcow2- and raw-format images in OpenStack. I try to convert these images to VMDK with the following commands (here I use the lsilogic adapter):

for raw : qemu-img convert -f raw -O vmdk -o adapter_type=lsilogic centos70.dsk centos70.vmdk

for qcow2 : qemu-img convert -f qcow2 -O vmdk -o adapter_type=lsilogic  centos70.dsk centos70.vmdk

Then I register these VMDK images with OpenStack Glance like this (telling Glance to use the lsiLogic adapter):

openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="lsiLogic" --property vmware_disktype="sparse" test-image < centos70.vmdk

Then I try to use this image to start up a VM on vCenter:

openstack server create --image test-image --flavor 1 --network 4217efe4-f2e4-4e68-8419-52117384016c test-vm

Finally, I check the VM on vCenter and find that it hangs forever, like below:

[screenshot: QQ20180307-0.jpg]

I suspect the image is missing some drivers. Following this doc: http://pubs.vmware.com/esx254/admin/wwhelp/wwhimpl/common/html/wwhelp.htm?context=admin&file=esx25ad... I checked a VM booted from the same raw (qcow2) image on another KVM hypervisor, and I can find the driver there.

I do not know what goes wrong while the VM is booting. Has anyone met the same problem before, and how did you solve it? Please help; thanks in advance.

11 Replies
daphnissov
Immortal

From the documentation here.

VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the qemu-img conversion, the command to upload the VMDK disk should be something like:

$ openstack image create \
    --container-format bare --disk-format vmdk \
    --property vmware_disktype="sparse" \
    --property vmware_adaptertype="ide" \
    trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk

Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set to ide in the previous command.

.....

Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller and likewise disks with one of the SCSI adapter types (such as, busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.

Cosz3
Enthusiast

Hello daphnissov, I noticed what you pointed out. In fact, I had already changed the adapter type when I converted the image from raw/qcow2 to VMDK.

Here, as you can see, the adapter_type is "lsilogic", not "ide":

for raw : qemu-img convert -f raw -O vmdk -o adapter_type=lsilogic centos70.dsk centos70.vmdk

for qcow2 : qemu-img convert -f qcow2 -O vmdk -o adapter_type=lsilogic  centos70.dsk centos70.vmdk

Therefore, the image registration command was also changed to "lsiLogic":

openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="lsiLogic" --property vmware_disktype="sparse" test-image < centos70.vmdk

Checking the image with head -20 shows ddb.adapterType = "lsilogic":

[root@172-18-211-195 shengping]# head -20 centos70.vmdk
KDMV [binary bytes]
# Disk DescriptorFile
version=1
CID=76133d86
parentCID=ffffffff
createType="monolithicSparse"
# Extent description
RW 16777216 SPARSE "centos70.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "1044"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"

I think the Glance image registration command is OK; or is there something wrong that I have not noticed?

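As an aside, `head -20` also dumps the binary bytes that surround the embedded descriptor. A small helper (my own sketch, not from this thread) that prints only the adapter-type line, using `grep -a` to force text matching on the binary file:

```shell
# Hypothetical helper: print the ddb.adapterType line from a VMDK's
# embedded text descriptor. grep -a treats the binary file as text,
# so only the matching descriptor line is printed, without the noise.
vmdk_adapter_type() {
  grep -a -m1 'ddb.adapterType' "$1"
}
```

For the image above, `vmdk_adapter_type centos70.vmdk` would print the `ddb.adapterType = "lsilogic"` line on its own.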
Cosz3
Enthusiast

I have a guess: KVM uses vd* (vda, vdb, ...) for disk names, but vSphere uses sd* (sda, sdb, ...). So I suspect an image prepared for KVM may not boot on vSphere, since the guest cannot find a disk named sd*. Is there any way to change the disk names from vd* to sd*?

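For what it's worth, the vd*/sd* names follow the storage driver rather than the hypervisor: virtio disks show up as vd*, SCSI disks as sd*. A KVM-built guest can normally boot from a vSphere SCSI controller provided its initramfs contains the SCSI module and grub and /etc/fstab reference filesystems by UUID or label rather than /dev/vda. A sketch for a CentOS 7 guest, to be run inside the guest before converting the image (the module list here is my assumption, not something from this thread):

```shell
# Drop-in dracut config: pull the VMware-relevant SCSI modules into the
# initramfs so the root disk is found as sd* when booting on vSphere.
# mptspi = LSI Logic Parallel, mptsas = LSI Logic SAS,
# vmw_pvscsi = VMware Paravirtual SCSI.
cat > /etc/dracut.conf.d/vmware-scsi.conf <<'EOF'
add_drivers+=" mptspi mptsas vmw_pvscsi "
EOF
dracut -f   # rebuild the initramfs for the running kernel
```

If /etc/fstab still names /dev/vda1 explicitly, switch it to UUID= entries first (blkid shows them), or the guest will hang waiting for a device that no longer exists.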
daphnissov
Immortal

Quoting again from the documentation snippet I provided:

VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type.

They are ALWAYS using an IDE adapter type. So in your conversion, specify IDE rather than lsiLogic and see if it boots.

Cosz3
Enthusiast

Indeed, the VM can boot up with "ide" and "sparse".

qemu-img convert -f qcow2 -O vmdk centos74.qcow2 centos74-ide-from-qcow2.vmdk

--------------------------------------------------------------
head -20 centos74-ide-from-qcow2.vmdk
KDMV [binary bytes]
# Disk DescriptorFile
version=1
CID=270f282d
parentCID=ffffffff
createType="monolithicSparse"
# Extent description
RW 41943040 SPARSE "centos74-ide-from-qcow2.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "41610"
ddb.geometry.heads = "16"
ddb.geometry.sectors = "63"
ddb.adapterType = "ide"
--------------------------------------------------------------

openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="ide" --property vmware_disktype="sparse" centos74-ide-from-qcow2 < centos74-ide-from-qcow2.vmdk

--------------------------------------------------------------

openstack server create --image centos74-ide-from-qcow2 --flavor 1 --network 4217efe4-f2e4-4e68-8419-52117384016c centos74-ide-from-qcow2

The VM hung for quite a while during boot:

[screenshot: F9C01F32-DB1B-4835-B095-3F9EE99D7155.jpg]

Then it started:

[screenshot: 63B1879C-D9A6-460C-9432-558781C3B3DC.jpg]

However, when I try to attach an iSCSI LUN to this VM, it cannot be attached. I checked the VM: it does not have any SCSI controller configured, as shown below:

[screenshot: CEAC98C4-7486-48F0-9DA0-DC69EAB75F18.jpg]

I did configure a software iSCSI adapter in vSphere, and it can discover the target and LUN, like this:

[screenshot: 34A1819E-CA58-41B4-AC32-DBFB1BECF883.jpg]

I also checked the OpenStack code two days ago: only when the registered Glance image has the property vmware_adaptertype="lsiLogic" does the booted VM get a SCSI adapter configured, to which an iSCSI target LUN can be attached.

The OpenStack code checks whether the image has vmware_adaptertype="lsiLogic"; if it does, the vmx file gets a SCSI adapter configuration, otherwise it won't.

[screenshot: F689B6C3-B282-4439-B1B6-D1FECF1DF0D5.jpg]

That is why I put the property vmware_adaptertype="lsiLogic", rather than "ide", in the command below:

openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="lsiLogic" --property vmware_disktype="sparse" test-image < centos70.vmdk

But I still want to reuse the existing KVM raw/qcow2 images rather than create new ones, so I tried converting them to VMDK with adapter_type=lsilogic. However, the VM then cannot boot (the question I asked initially).

That is the whole background for my initial question.

I am confused about two things:

1. Is this an OpenStack vmwareapi driver bug? I found quite a few bugs in that code.

2. Is it possible to convert my previous KVM qcow2 images with lsiLogic and preallocated (as the doc says, this pair may work)? I tried the commands below, but the result is still monolithicSparse:

qemu-img create -f qcow2 -o preallocation=metadata centos74-test.qcow2 1G
Formatting 'centos74-test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
qemu-img convert -p -f qcow2 -O vmdk -o adapter_type=lsilogic centos74-test.qcow2 centos74-lsi-pre-from-qcow2.vmdk
    (100.00/100%)

head -20 centos74-lsi-pre-from-qcow2.vmdk
KDMV [binary bytes]
# Disk DescriptorFile
version=1
CID=a7b129d9
parentCID=ffffffff
createType="monolithicSparse"
# Extent description
RW 2097152 SPARSE "centos74-lsi-pre-from-qcow2.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "130"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
[binary data]

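One note on question 2: `preallocation=metadata` only preallocates the intermediate qcow2; it has no effect on the VMDK that qemu-img writes. The VMDK layout is controlled by qemu-img's `subformat` creation option instead. A sketch (filenames are mine, and I have not verified this against this exact setup):

```shell
# subformat selects the VMDK layout; monolithicFlat is a preallocated
# (flat) variant, while the default is monolithicSparse.
qemu-img convert -p -f qcow2 -O vmdk \
  -o adapter_type=lsilogic,subformat=monolithicFlat \
  centos74.qcow2 centos74-lsi-flat.vmdk

# The descriptor should now report the flat layout:
grep -a createType centos74-lsi-flat.vmdk
```

Beware that monolithicFlat writes two files, a small descriptor plus a separate centos74-lsi-flat-flat.vmdk extent, so a plain `< file` redirect into `openstack image create` would upload only the descriptor; check how your Glance release expects vmware_disktype="preallocated" images to be packaged before relying on this.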
Cosz3
Enthusiast

I used a CentOS VMDK with lsiLogic and sparse; it prints "random: crng init done" and hangs forever.

[screenshot: 0363AA53-8B36-4AFA-B40E-FFF36C82B678.jpg]

I found these on Google:

Debian hangs at boot with "random: crng init done"?

Bug #1685794 “Boot delayed for about 90 seconds until 'random: c...” : Bugs : linux package : Ubuntu

daphnissov
Immortal

Use IDE and let the VM boot. After it's booted, you can shut it down and add an additional SCSI-type controller. Ensure VMware Tools installs the driver for it. You can have two controllers on the same VM, then use the SCSI controller to attach your device.

Cosz3
Enthusiast

Yes, I did that yesterday. The volume can be attached, but I am still curious why the "sparse" and "lsilogic" pair cannot work. I will check the code again.

daphnissov
Immortal

The reason it can't be is what I quoted twice already:

VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type.

So anytime you use qemu-img to do a conversion, you are *always* getting an IDE adapter type, because it doesn't support injecting any SCSI controller drivers. And because of that, when you run openstack image create you have to specify the same type of controller; you can't just pick any controller you like.

Cosz3
Enthusiast

Thanks for all your help. I have changed some code and fixed the problem.
