VMware Communities
BillT2
Contributor

Having trouble importing VM

I was provided a VM created under vSphere, in OVF format.  It appears to import fine into my copy of VMware Workstation 10.0.1, but when I try to power it on, it fails.  For each of the four virtual drives in the image, I get a message like:

Disk 'C:\Users\username\Documents\Virtual Machines\Vmname\Vmname-disk2.vmdk' cannot be opened for writing. It might be shared with some other VM.

Cannot open the disk for writing

Cannot open the disk 'C:\Users\username\Documents\Virtual Machines\Vmname\Vmname-disk2.vmdk' or one of the snapshot disks it depends on.

I've checked, and I have full access under Windows to the 'C:\Users\username\Documents\Virtual Machines\Vmname' folder.

What might I be missing and/or what should I check?

Thanks,

Bill

a_p_
Leadership

Welcome to the Community,

Do you see any related errors or warnings in the VM's vmware.log file?

André

BillT2
Contributor

The log file does contain messages for each drive, but they seem basically identical to the pop-up warning messages that appear during power on.  One of the lines from the log file is:

2014-01-29T11:44:57.754-05:00| Worker#0| I120: [msg.disk.noBackEnd] Cannot open the disk 'C:\Users\username\Documents\Virtual Machines\Vmname\Vmname-disk1.vmdk' or one of the snapshot disks it depends on.

(This appears for drive1, drive2, drive3, and drive4.)

The log then finishes with:

2014-01-29T11:44:58.534-05:00| vmx| I120: [msg.disk.noBackEnd] Cannot open the disk 'C:\Users\username\Documents\Virtual Machines\Vmname\Vmname-disk2.vmdk' or one of the snapshot disks it depends on.
2014-01-29T11:44:58.534-05:00| vmx| I120: [msg.moduletable.powerOnFailed] Module DiskEarly power on failed.
2014-01-29T11:44:58.534-05:00| vmx| I120: [msg.vmx.poweron.failed] Failed to start the virtual machine.
2014-01-29T11:44:58.534-05:00| vmx| I120: ----------------------------------------
2014-01-29T11:45:01.167-05:00| vmx| I120: Vix: [6124 mainDispatch.c:3985]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0
2014-01-29T11:45:01.167-05:00| vmx| I120: Transitioned vmx/execState/val to poweredOff
2014-01-29T11:45:01.168-05:00| vmx| I120: Vix: [6124 mainDispatch.c:3985]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=0 additionalError=0
2014-01-29T11:45:01.168-05:00| vmx| I120: Vix: [6124 mainDispatch.c:4024]: Error VIX_E_FAIL in VMAutomation_ReportPowerOpFinished(): Unknown error
2014-01-29T11:45:01.168-05:00| vmx| I120: Vix: [6124 mainDispatch.c:3985]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0
2014-01-29T11:45:01.168-05:00| vmx| I120: Transitioned vmx/execState/val to poweredOff
2014-01-29T11:45:01.169-05:00| vmx| I120: VMIOP: Exit
2014-01-29T11:45:01.170-05:00| vmx| I120: Vix: [6124 mainDispatch.c:861]: VMAutomation_LateShutdown()
2014-01-29T11:45:01.170-05:00| vmx| I120: Vix: [6124 mainDispatch.c:811]: VMAutomationCloseListenerSocket. Closing listener socket.
2014-01-29T11:45:01.170-05:00| vmx| I120: Flushing VMX VMDB connections
2014-01-29T11:45:01.200-05:00| vmx| I120: VmdbDbRemoveCnx: Removing Cnx from Db for '/db/connection/#1/'
2014-01-29T11:45:01.200-05:00| vmx| I120: VmdbCnxDisconnect: Disconnect: closed pipe for pub cnx '/db/connection/#1/' (0)
2014-01-29T11:45:01.204-05:00| vmx| I120: VMX exit (0).
2014-01-29T11:45:01.204-05:00| vmx| I120: AIOMGR-S : stat o=4 r=24 w=0 i=0 br=223232 bw=0
2014-01-29T11:45:01.204-05:00| vmx| I120: OBJLIB-LIB: ObjLib cleanup done.
2014-01-29T11:45:01.204-05:00| vmx| I120: FileTrack_Exit: done

Reply
0 Kudos
a_p_
Leadership
Leadership

It's actually the part before the error message that is important. Please use the advanced editor and attach the vmware.log file to a reply post.


André

BillT2
Contributor

Got it.  I'm new to the Community, so I apologize in advance if I don't follow protocol.  Please note, the log below is directly from the folder containing the VM.  "username" above is "wat" and "Vmname" above is "20140109_sl2013-db".

a_p_
Leadership

That's indeed weird! I think we can rule out file system permissions (you already checked them, and the vmware.log gets created). Did you also check the file attributes of the .vmdk files (e.g. read-only)?

What you may try is to start VMware Workstation with "Run as Administrator" to see whether UAC is involved. Another possible issue could be an on-access virus scan application (just a guess).
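
Just as a quick sketch, you could check the attributes from a Windows command prompt (the path below is the example folder from your first post). An "R" in the output means read-only, and "attrib -R *.vmdk" would clear it:

    cd "C:\Users\username\Documents\Virtual Machines\Vmname"
    attrib *.vmdk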

André

BillT2
Contributor

Thank you for investing time and effort in my issue.  I was hopeful I might be making a simple mistake that could be corrected easily, but I'll keep digging.


In specific follow-up:

I right-clicked the 4 *.vmdk files and clicked Properties.  The Read-only attribute is clear (unchecked) for all of them.

I started VMware Workstation with "Run as Administrator" (FWIW, the domain account I've been logged into for my testing has local administrator rights on the laptop) and got the same errors; the image does not power on.

It's been a while since I spent a lot of time on any involved VMware setup, so it's quite possible I'm forgetting to do something basic and simple.

I turned off my on-access virus-scanning (Malwarebytes), and got the same error.

As one further data point, I have 5 images from the same publisher.  Three boot without a problem -- but two of them generate these errors on every .vmdk file when I try to power on after importing.

Thanks,

Bill

a_p_
Leadership

With either the VM's tab closed or VMware Workstation closed, are there any files with an .lck extension in the VM's folder? If yes, delete them.
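
For example, something like this from a command prompt should list any leftover lock files or folders (using the example folder path from earlier in the thread):

    dir /a /s "C:\Users\username\Documents\Virtual Machines\Vmname\*.lck"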

André

BillT2
Contributor

No *.lck files or folders are present after I shut down VMware Workstation.  They are only there while Workstation is running (which I'd expect).

Bill

BillT2
Contributor

OK, I'm really not experienced here, but I decided to compare the disk-related sections of the OVF file for a VM that worked against the one for the VM that failed.

The one that worked contained this in the OVF XML:

      <Item>
        <rasd:Address>0</rasd:Address>
        <rasd:Description>SCSI Controller</rasd:Description>
        <rasd:ElementName>SCSI controller 0</rasd:ElementName>
        <rasd:InstanceID>3</rasd:InstanceID>
        <rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>
        <rasd:ResourceType>6</rasd:ResourceType>
      </Item>
      <Item>
        <rasd:Address>1</rasd:Address>
        <rasd:Description>IDE Controller</rasd:Description>
        <rasd:ElementName>IDE 1</rasd:ElementName>
        <rasd:InstanceID>4</rasd:InstanceID>
        <rasd:ResourceType>5</rasd:ResourceType>
      </Item>

The one that failed (which all my posts above relate to) contained:

      <Item>
        <rasd:AddressOnParent>0</rasd:AddressOnParent>
        <rasd:ElementName>Hard disk 1</rasd:ElementName>
        <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
        <rasd:InstanceID>8</rasd:InstanceID>
        <rasd:Parent>3</rasd:Parent>
        <rasd:ResourceType>17</rasd:ResourceType>
        <vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false" />
      </Item>
      <Item>
        <rasd:AddressOnParent>1</rasd:AddressOnParent>
        <rasd:ElementName>Hard disk 2</rasd:ElementName>
        <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
        <rasd:InstanceID>9</rasd:InstanceID>
        <rasd:Parent>3</rasd:Parent>
        <rasd:ResourceType>17</rasd:ResourceType>
        <vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false" />
      </Item>
      <Item>
        <rasd:AddressOnParent>2</rasd:AddressOnParent>
        <rasd:ElementName>Hard disk 3</rasd:ElementName>
        <rasd:HostResource>ovf:/disk/vmdisk3</rasd:HostResource>
        <rasd:InstanceID>10</rasd:InstanceID>
        <rasd:Parent>3</rasd:Parent>
        <rasd:ResourceType>17</rasd:ResourceType>
        <vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false" />
      </Item>
      <Item>
        <rasd:AddressOnParent>3</rasd:AddressOnParent>
        <rasd:ElementName>Hard disk 4</rasd:ElementName>
        <rasd:HostResource>ovf:/disk/vmdisk4</rasd:HostResource>
        <rasd:InstanceID>11</rasd:InstanceID>
        <rasd:Parent>3</rasd:Parent>
        <rasd:ResourceType>17</rasd:ResourceType>
        <vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false" />
      </Item>

What might these lines refer to?

        <vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false" />

Could they have led to something being put into the .vmx file or the other VM files built by the import that's causing my problem?  I don't understand the structure well enough to know what to look for.

Thanks,

Bill

BillT2
Contributor

Never mind that idea.  I deleted the extra lines in the OVF (the ones that read <vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false" />).

When I reimported that into a new Workstation VM, it also failed to power on, with identical errors on each virtual drive.
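
(As a side note for anyone following along: the reimport can also be done outside the Workstation UI with VMware's OVF Tool, which ships with Workstation.  A rough sketch, with both paths only as placeholders:

    ovftool --lax "path\to\source.ovf" "C:\path\to\NewVM\NewVM.vmx"

The --lax option relaxes strict OVF validation, which can matter after hand-editing an OVF.)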

danregan
Contributor

I just ran into the same issue, but I tried creating a full clone of the imported VM and it works (I saw this question: Exported vApp won't start when imported to player/workstation).  It doesn't explain why it happened, but at least it's a workaround.
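
For anyone who wants to script the workaround, a full clone can probably also be made with vmrun, assuming your Workstation version's vmrun supports the clone command (the paths below are just placeholders):

    vmrun -T ws clone "C:\VMs\Imported\Imported.vmx" "C:\VMs\Clone\Clone.vmx" full

In the Workstation UI, the equivalent should be VM > Manage > Clone, choosing a full clone.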

BillT2
Contributor

Dan:

Thanks.  Cloning the VM allowed me to power it on.  I greatly appreciate the suggestion.  It would be really nice to know why, but I'm pleased to be able to move past this.

Best regards,

Bill
