I am seeing this more and more often on ESXi 4.1.
A VM cannot be started and fails with the error [msg.disk.configureDiskError] Reason: Invalid argument.
A typical log extract looks like:
VMXVmdb_LoadRawConfig: Loading raw config
DISK: OPEN scsi0:0 '/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c.vmdk' persistent R[]
AIOGNRC: Failed to open '/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c-flat.vmdk' : Invalid argument (5634) (0x2013).
DISKLIB-VMFS : "/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c-flat.vmdk" : failed to open (Invalid argument): AIOMgr_Open failed. Type 3
DISKLIB-LINK : "/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c.vmdk" : failed to open (Invalid argument).
DISKLIB-CHAIN : "/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c.vmdk" : failed to open (Invalid argument).
DISKLIB-LIB : Failed to open '/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c.vmdk' with flags 0xa Invalid argument (1441801).
DISK: Cannot open disk "/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c.vmdk": Invalid argument (1441801).
Msg_Post: Error
[msg.disk.noBackEnd] Cannot open the disk '/vmfs/volumes/494b5559-aba4a48e-80c6-001a64689db9/domino/domino_c.vmdk' or one of the snapshot disks it depends on.
The vmdk in question has NO stale locks - at least vmkfstools -D does not show any.
The vmdk is not open in any other process.
Rebooting the ESXi host does not help.
Moving the vmdk is not possible.
Renaming it does not work.
The Datastore Browser cannot copy it.
WinSCP cannot copy it.
vmkfstools cannot clone it.
vMotion is not available or fails.
Converter fails.
If the VM is stored on a local disk of the ESXi host, I can copy it with a LiveCD using vmfs-tools.
What do I do in such a case if the VM is stored on a SAN?
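In case it helps others reading along, the lock check I run looks roughly like this. The datastore path is hypothetical, and the touch probe at the end is just a quick writability test (on VMFS it fails if another host still holds a lock on the file):

```shell
# Run from the ESXi Tech Support Mode shell; paths are examples only.
# Dump the on-disk lock record for both the descriptor and the flat file:
vmkfstools -D /vmfs/volumes/datastore1/domino/domino_c.vmdk
vmkfstools -D /vmfs/volumes/datastore1/domino/domino_c-flat.vmdk

# Quick writability probe: touch updates the timestamp and fails with
# "Device or resource busy" if another host holds a lock on the file.
touch /vmfs/volumes/datastore1/domino/domino_c-flat.vmdk
```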
Can you try a copy via the vcbMounter utility?
I wish I could reproduce this problem so I can experiment with it here.
I will keep your suggestion in mind for the next time I see it.
I will also try to use vmware-mount from the VDDK.
But honestly, I doubt it will work.
I think the reason for this must be some kind of in-use flag set by the ESXi kernel.
I think I need to find a way to forcefully remove these leftover artifacts ...
If it wasn't for the fact that WinSCP couldn't copy it either, I would then try to generate a new descriptor file for the matching -flat.vmdk...
/Rubeck
That was one of the things I tried - but it never made any difference.
Spooky stuff, indeed... 😞
Please keep us posted regarding this issue... Like you, I'm quite interested in what the cause might be. Never seen this issue myself.
/Rubeck
It looks like here is another case:
http://communities.vmware.com/thread/334315?tstart=0
In the last 3 weeks I have had more than 10 cases of this particular error 😢
This is a strange one. It is like the disk has been flagged as locked, but it isn't. I haven't seen this issue yet. Have you raised a case with VMware?
It seems like you have tried pretty much everything, but I would try this:
Unregister the VM.
Log out of vCenter.
Log on to the vSphere host directly with the vSphere Client.
Re-register VM.
Hopefully start VM.
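The unregister/re-register cycle above can also be done from the ESXi shell. A sketch (the VM ID and .vmx path below are made up - substitute your own):

```shell
# List registered VMs and note the Vmid of the affected one:
vim-cmd vmsvc/getallvms

# Unregister it (42 is a placeholder Vmid):
vim-cmd vmsvc/unregister 42

# Register it again from its .vmx file (example path):
vim-cmd solo/registervm /vmfs/volumes/datastore1/domino/domino.vmx
```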
Hi
That was one of the things I tried, without luck.
It does not even help to create a new VM with new descriptor vmdks, so that you would only need the flat or delta vmdks from the corrupted VM.
Re-installing ESXi does not help either.
That is even stranger. I do not know of anything within the VMDK standard that could flag your disks like this. Can you still export as OVF? Also, can you convert to the Workstation vmdk file format? Is there anything special about the VMs that get this bug?
Also, do you have a case open with VMware? It seems unlikely that you will find a solution to something this strange on the communities.
Last week I found the first case of this issue on VMFS 5 and ESXi 5.
I made some noise and VMware is now looking into it.
Hopefully they find a way to fix this problem that does not require reformatting the datastore.
I will update this post when I know more
Hello continuum,
I have a similar issue on ESXi 4.1 (build 260247).
After an ESXi server crash (the server was not responding / frozen) and a hard reboot, some VMs cannot be powered on and show as unknown.
Stored on local disks.
log extract:
Jan 17 16:51:35.294: vmx| VMXVmdb_LoadRawConfig: Loading raw config
Jan 17 16:51:35.300: vmx| DISK: OPEN scsi0:0 '/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS.vmdk' persistent R[]
Jan 17 16:51:35.317: vmx| AIOGNRC: Failed to open '/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS-flat.vmdk' : Invalid argument (5634) (0x2013).
Jan 17 16:51:35.317: vmx| DISKLIB-VMFS : "/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS-flat.vmdk" : failed to open (Invalid argument): AIOMgr_Open failed. Type 3
Jan 17 16:51:35.317: vmx| DISKLIB-LINK : "/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS.vmdk" : failed to open (Invalid argument).
Jan 17 16:51:35.317: vmx| DISKLIB-CHAIN : "/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS.vmdk" : failed to open (Invalid argument).
Jan 17 16:51:35.317: vmx| DISKLIB-LIB : Failed to open '/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS.vmdk' with flags 0xa Invalid argument (1441801).
Jan 17 16:51:35.317: vmx| DISK: Cannot open disk "/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS.vmdk": Invalid argument (1441801).
Jan 17 16:51:35.318: vmx| Msg_Post: Error
Jan 17 16:51:35.318: vmx| [msg.disk.noBackEnd] Cannot open the disk '/vmfs/volumes/4cb35eeb-9c162916-9b59-00259013b287/Ubuntu Server 10.04 LTS/Ubuntu Server 10.04 LTS.vmdk' or one of the snapshot disks it depends on.
Jan 17 16:51:35.318: vmx| [msg.disk.configureDiskError] Reason: Invalid argument.
Jan 17 16:51:35.326: vmx| Module DiskEarly power on failed.
Did you fix the problem?
Fixing the problem is out of reach - all you can do is damage control: try to recover what can be recovered.
As you use VMFS 3, I would suggest you power off the ESXi host and use the LiveCD I made for this purpose to copy the VM to an external USB drive.
See http://communities.vmware.com/message/1877235#1877235
If you need assistance during the process, let me know.
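For reference, a rough sketch of the LiveCD copy using the vmfs-fuse tool from vmfs-tools. The device names and mount points are assumptions - adjust them to your system:

```shell
# From a Linux LiveCD with vmfs-tools installed.
mkdir -p /mnt/vmfs /mnt/usb

# Mount the VMFS partition (read-only) and the external USB drive;
# /dev/sda3 and /dev/sdb1 are placeholders for your actual devices.
vmfs-fuse /dev/sda3 /mnt/vmfs
mount /dev/sdb1 /mnt/usb

# Copy the whole VM directory, preserving attributes:
cp -a "/mnt/vmfs/domino" /mnt/usb/

umount /mnt/vmfs /mnt/usb
```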
Thank you very much for your answer.
I've already restored a previous backup to a new VM.
Unfortunately, I cannot physically access the ESXi host (it is a dedicated server at the OVH hosting company).
Remote access only, via SSH or the vSphere client.
Is there any good way to recover the disk data? I am facing the same problem as you and am in trouble now. Please give me some help.
I don't know if there is The One good way to handle this - I decide from case to case which approach to use.
If the VM can still be launched with the damaged files used in read-only mode, it is quite easy - but sometimes no ESXi command can handle the file in question, and then it may be required to use dd commands to extract as much as possible while not reading the damaged sections.
Call me on Skype (sanbarrow) and we can arrange something - or post details.
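A minimal sketch of such a dd rescue copy, with hypothetical paths (and assuming the host's dd supports conv=noerror,sync - true for GNU dd, busybox builds vary):

```shell
# Copy the flat file block by block. Unreadable blocks do not abort the
# copy (noerror) and are padded with zeros (sync) so offsets stay aligned.
dd if=/vmfs/volumes/datastore1/domino/domino_c-flat.vmdk \
   of=/vmfs/volumes/backup/domino_c-flat.vmdk \
   bs=1M conv=noerror,sync
```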
Can you still run vmkfstools -t0 /vmfs/volumes/datastore/dir/name.vmdk > /tmp/name.vmdk.map?
If yes - post the result
Ulli
Hi,
This looks like heartbeat corruption on VMFS; file a support request with VMware. To confirm, check the vmkernel.log of the ESXi host for corruption messages.
Regards
Mohammed Emaad