Hello,
I am using OpenStack with vSphere, and I am stuck on an image problem. I have some qcow2- and raw-format images in OpenStack, and I tried to convert these images to vmdk with the following commands (using the lsilogic adapter):
for raw : qemu-img convert -f raw -O vmdk -o adapter_type=lsilogic centos70.dsk centos70.vmdk
for qcow2 : qemu-img convert -f qcow2 -O vmdk -o adapter_type=lsilogic centos70.dsk centos70.vmdk
Then I registered these vmdk images with OpenStack Glance like this (telling Glance to use the lsiLogic adapter):
openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="lsiLogic" --property vmware_disktype="sparse" test-image < centos70.vmdk
Then I tried to use this image to start up a VM in vCenter:
openstack server create --image test-image --flavor 1 --network 4217efe4-f2e4-4e68-8419-52117384016c test-vm
Finally I checked the VM in vCenter and found that it hangs forever, like below:
I suspect the image is missing some drivers. Following this doc: http://pubs.vmware.com/esx254/admin/wwhelp/wwhimpl/common/html/wwhelp.htm?context=admin&file=esx25ad... I checked a VM booted from the same raw (qcow2) image on another KVM hypervisor, and the driver is present there.
I do not know what goes wrong while the VM is booting. Has anyone met the same problem before, and how did you solve it? Please help; thanks in advance.
From the documentation here:
VMDK disks converted through qemu-img are *always* monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the qemu-img conversion, the command to upload the VMDK disk should be something like:
$ openstack image create \
    --container-format bare --disk-format vmdk \
    --property vmware_disktype="sparse" \
    --property vmware_adaptertype="ide" \
    trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk
Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set to ide in the previous command.
.....
Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller, and likewise disks with one of the SCSI adapter types (such as busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.
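In other words, the pairing rule from the docs reduces to a simple mapping. As a sketch (this is not actual nova code, just the rule restated):

```shell
# Sketch of the pairing rule quoted above: a boot disk's adapter type
# determines the controller family it can be attached to.
controller_for() {
    case "$1" in
        ide) echo ide ;;
        busLogic|lsiLogic|lsiLogicsas|paraVirtual) echo scsi ;;
        *) echo "unknown adapter type: $1" >&2; return 1 ;;
    esac
}

controller_for lsiLogic   # prints: scsi
controller_for ide        # prints: ide
```

So an image whose descriptor says ide simply cannot land on a SCSI controller, and vice versa.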
Hello daphnissov, I noticed what you pointed out. In fact, I had already changed the adapter type when I converted the images from raw/qcow2 to vmdk.
Here, as you can see, the adapter_type is "lsilogic", not "ide":
for raw : qemu-img convert -f raw -O vmdk -o adapter_type=lsilogic centos70.dsk centos70.vmdk
for qcow2 : qemu-img convert -f qcow2 -O vmdk -o adapter_type=lsilogic centos70.dsk centos70.vmdk
Therefore, the image register command was also changed to "lsiLogic":
openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="lsiLogic" --property vmware_disktype="sparse" test-image < centos70.vmdk
Checking the image with head -20 shows ddb.adapterType = "lsilogic":
[root@172-18-211-195 shengping]# head -20 centos70.vmdk
KDMV��
# Disk DescriptorFile
version=1
CID=76133d86
parentCID=ffffffff
createType="monolithicSparse"
# Extent description
RW 16777216 SPARSE "centos70.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "1044"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
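Incidentally, since the descriptor is plain ASCII embedded near the start of the file, the field can be pulled out directly instead of eyeballing the head output. A small sketch (the helper name vmdk_adapter is mine; -a makes grep treat the mostly binary VMDK as text):

```shell
# Print the quoted value of ddb.adapterType from a VMDK descriptor;
# grep -a forces text mode on the otherwise binary file.
vmdk_adapter() {
    grep -a '^ddb.adapterType' "$1" | cut -d'"' -f2
}
```

For the image above, vmdk_adapter centos70.vmdk should print lsilogic.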
I think the Glance image register command is okay, or is there something wrong that I have not noticed?
My guess: KVM uses vd* names (such as vda, vdb) for disks, while vSphere uses sd* (such as sda, sdb). So I suspect an image prepared for KVM may fail to boot on vSphere because the guest cannot find a disk named sd*. Is there any way to change the disk name from vd* to sd*?
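On the driver side, the kernel module behind VMware's lsilogic adapter is mptspi, so one thing worth checking (a sketch; paths assume a stock CentOS 7 guest using dracut) is whether that module is actually in the guest's initramfs, and rebuilding it if not:

```shell
# Run inside the guest. mptspi is the kernel module for the LSI Logic
# parallel SCSI adapter; if it is missing from the initramfs, the guest
# cannot see its root disk on a SCSI controller and hangs early in boot.
lsinitrd /boot/initramfs-$(uname -r).img | grep mptspi

# If nothing matches, rebuild the initramfs with the driver included:
dracut --force --add-drivers "mptspi" /boot/initramfs-$(uname -r).img $(uname -r)
```

A KVM-built image typically only carries virtio drivers in its initramfs, which would explain a hang exactly like this regardless of the vd*/sd* naming.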
Quoting again from the documentation snippet I provided:
VMDK disks converted through qemu-img are *always* monolithic sparse VMDK disks with an IDE adapter type.
They ALWAYS use an IDE adapter type. So in your conversion, specify ide rather than lsiLogic and see if it boots.
Indeed, the VM can boot up with "ide" and "sparse".
qemu-img convert -f qcow2 -O vmdk centos74.qcow2 centos74-ide-from-qcow2.vmdk
--------------------------------------------------------------
head -20 centos74-ide-from-qcow2.vmdk
KDMV��
�
# Disk DescriptorFile
version=1
CID=270f282d
parentCID=ffffffff
createType="monolithicSparse"
# Extent description
RW 41943040 SPARSE "centos74-ide-from-qcow2.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "41610"
ddb.geometry.heads = "16"
ddb.geometry.sectors = "63"
ddb.adapterType = "ide"
--------------------------------------------------------------
openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="ide" --property vmware_disktype="sparse" centos74-ide-from-qcow2 < centos74-ide-from-qcow2.vmdk
--------------------------------------------------------------
openstack server create --image centos74-ide-from-qcow2 --flavor 1 --network 4217efe4-f2e4-4e68-8419-52117384016c centos74-ide-from-qcow2
The VM was pending for quite a while before it booted up; then it started.
However, when I try to attach an iSCSI LUN to this VM, it cannot be attached. I checked the VM; it does not have any SCSI controller configured, as shown below:
I did configure a software iSCSI adapter in vSphere, and it can discover the target and the LUN, something like this:
I also checked the OpenStack code two days ago: only when the registered Glance image has the property vmware_adaptertype="lsiLogic" does the booted VM get a SCSI adapter configured, which an iSCSI target LUN can then be attached to.
The OpenStack code checks whether the image has vmware_adaptertype="lsiLogic"; if it does, the vmx will have a SCSI adapter configured, otherwise it won't.
That is why I put the property vmware_adaptertype="lsiLogic" in the command below, rather than "ide".
openstack image create --disk-format vmdk --container-format bare --property vmware_adaptertype="lsiLogic" --property vmware_disktype="sparse" test-image < centos70.vmdk
But I still want to reuse the previous raw/qcow2 images from KVM rather than create new ones, so I tried converting the old images to vmdk with adapter_type=lsilogic. As described above, such a VM cannot boot up.
That is the whole background for the question I asked initially.
I am confused:
1. Is this an OpenStack vmwareapi driver bug? I have found quite a few bugs in that code.
2. Is it possible to convert my previous KVM qcow2 image to lsiLogic and preallocated (as the doc says, this pair may work)? I tried the commands below, but the result still seems to be sparse (monolithicSparse):
qemu-img create -f qcow2 -o preallocation=metadata centos74-test.qcow2 1G
Formatting 'centos74-test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
qemu-img convert -p -f qcow2 -O vmdk -o adapter_type=lsilogic centos74-test.qcow2 centos74-lsi-pre-from-qcow2.vmdk
(100.00/100%)
head -20 centos74-lsi-pre-from-qcow2.vmdk
KDMV ���
# Disk DescriptorFile
version=1
CID=a7b129d9
parentCID=ffffffff
createType="monolithicSparse"
# Extent description
RW 2097152 SPARSE "centos74-lsi-pre-from-qcow2.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "130"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
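For what it's worth, preallocation=metadata only preallocates the qcow2 source; the layout of the output VMDK is controlled by qemu-img's subformat output option. A preallocated (flat) lsilogic VMDK would be requested like this (a sketch; the output filename is my own choice):

```shell
# subformat=monolithicFlat writes a small descriptor .vmdk plus a
# full-size -flat.vmdk extent, instead of the default monolithicSparse.
qemu-img convert -p -f qcow2 -O vmdk \
    -o adapter_type=lsilogic,subformat=monolithicFlat \
    centos74-test.qcow2 centos74-lsi-flat.vmdk
```

You would then register it with vmware_disktype="preallocated" instead of "sparse"; since monolithicFlat produces two files (descriptor plus -flat extent), it is worth verifying against the driver code you were reading which of the two Glance expects.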
With a CentOS vmdk using lsiLogic and sparse, the VM prints "random: crng init done" and hangs forever.
I found these on Google:
Debian hangs at boot with "random: crng init done"?
Bug #1685794 “Boot delayed for about 90 seconds until 'random: c...” : Bugs : linux package : Ubuntu
Use IDE and let the VM boot. After it's booted, you can shut it down and add an additional controller that is a SCSI-type controller. Ensure VMware Tools installs the driver for it. You can have two controllers on the same VM and then use the SCSI controller to attach your device.
Yes, I did so yesterday. The volume can now be attached, but I am still curious why the "sparse" and "lsilogic" pair cannot work. I will check the code again.
The reason it can't be is what I quoted twice already:
VMDK disks converted through qemu-img are *always* monolithic sparse VMDK disks with an IDE adapter type.
So any time you use qemu-img to do a conversion, you *always* get an IDE adapter type, because qemu-img doesn't inject any SCSI controller drivers. And because of that, when you do openstack image create you have to declare the same type of controller; you can't just pick any controller you like.
Thanks for all your help. I have changed some code and fixed the problem.