VMware Cloud Community
shawonpaul
Contributor

Incompatible device backing specified for device '0'

I am facing this error message while trying to configure a shared disk for two guests located on two different hosts.

I installed Windows 2008 Enterprise 64-bit on both nodes.

Both nodes are on different hosts. I wanted to create a failover cluster.

I assigned an RDM disk to the first node, selecting a datastore located on the SAN.

Then I tried to assign the same disk to the second node by selecting "Use an existing virtual disk" and choosing the same "Disk File Path".

But I just get the error "Incompatible device backing specified for device '0'".

I am using ESXi 5.0 evaluation.

I tried some fixes like renaming the .vmdk files and deleting the CD-ROM from Add Hardware - but none of them actually worked.
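For reference, a physical-mode RDM pointer like the one described above is normally created with vmkfstools from the ESXi shell; the device ID and paths below are placeholders, not the actual values from this setup:

```shell
# List SAN LUN device IDs (naa.*) visible to this host
ls /vmfs/devices/disks/

# Create a physical compatibility mode (pass-through) RDM pointer
# on a shared VMFS datastore; -z = physical mode, which is what
# cluster-across-boxes MSCS configurations require
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
    /vmfs/volumes/SAN-Datastore/Node1/quorum-rdm.vmdk
```

The second node then points at that same pointer .vmdk via "Use an existing virtual disk".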

24 Replies
GreatWhiteTec
VMware Employee

Make sure you have the correct controller type (LSI Logic SAS) and that you are NOT using SCSI 0:x. Select SCSI 1:0, for example.
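For a cluster across boxes, the shared RDM also has to sit on a controller with physical bus sharing enabled on both VMs. A sketch of what the relevant .vmx entries should look like (the file path is a placeholder):

```
# Second SCSI controller, dedicated to the shared disk
scsi1.present = "TRUE"
# LSI Logic SAS, as recommended for Windows 2008 guests
scsi1.virtualDev = "lsisas1068"
# "physical" bus sharing is required when the nodes run on different hosts
scsi1.sharedBus = "physical"
# The shared disk on SCSI 1:0, pointing at the RDM pointer vmdk
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/SAN-Datastore/Node1/quorum-rdm.vmdk"
scsi1:0.mode = "independent-persistent"
```

If the vSphere Client shows the controller's SCSI Bus Sharing as "None" on either VM, that alone can produce a backing mismatch.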

shawonpaul
Contributor

Well, I actually used the same settings you mentioned here, but with no positive result.

shawonpaul
Contributor

I have had this document from the beginning; well, the documents aren't much help in this case.

I guess some serious configuration change is required here; this is not a by-the-book problem.

AxelGonzalez
Contributor

shawonpaul

I'm experiencing the same issue. Were you able to solve it?

continuum
Immortal

The vmware.log where that message occurs would be helpful.


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

AxelGonzalez
Contributor

This:

2012-10-26T12:42:38.684Z| vmx| Log for VMware ESX pid=4135407 version=5.0.0 build=build-469512 option=Release
2012-10-26T12:42:38.684Z| vmx| The process is 64-bit.
2012-10-26T12:42:38.684Z| vmx| Host codepage=UTF-8 encoding=UTF-8
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_BackdoorHintsMPN                :       0        3        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_HV                              :       2        2        2
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_VNPTShadow                      :       0        0        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_VNPTBackmap                     :       0        0        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_SVMIDT                          :       0        2        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_CallStackProfAnon               :       0        0        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_Numa                            :      93      723       91
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_GPhysTraced                     :     157      335      103
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_GPhysEPT                        :     251     1687      175
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_GPhysNoTrace                    :      14       83       10
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_BTScratchPage                   :       1        1        1
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_StateLoggerBufferPA             :       0        1        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_TraceALot                       :       0        0        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_VIDE                            :       0        3        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_VMXNETWake                      :       0        0        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_BusLogic                        :       0        8        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_PVSCSIShadowRing                :       0        0        0
2012-10-26T12:42:38.684Z| vmx| OvhdAnon OvhdMon_LSIRings                        :      16       16       16
2012-10-26T12:42:38.684Z| vmx| OvhdAnon Total                                   :    4693     7648
2012-10-26T12:42:38.684Z| vmx|
2012-10-26T12:42:38.684Z| vmx| OvhdMem maximum overheads: paged 12550 nonpaged 3091 anonymous 7124
2012-10-26T12:42:38.684Z| vmx|
2012-10-26T12:42:38.684Z| vmx| OvhdMem: average total user: 4179 anon: 4040
2012-10-26T12:42:38.684Z| vmx| OvhdMem: memsize 3072 MB VMK fixed 74 pages var 0 pages cbrcOverhead 0 pages total 1611 pages
2012-10-26T12:42:38.684Z| vmx| VMMEM: Maximum Reservation: 80MB (MainMem=3072MB SVGA=8MB) VMK=6MB
2012-10-26T12:42:38.685Z| vmx| VMXVmdb_SetToolsVersionStatus: status value set to 'ok', 'current', install possible
2012-10-26T12:42:38.688Z| vmx| Destroying virtual dev for scsi1:0 vscsi=8677
2012-10-26T12:42:38.688Z| vmx| VMMon_VSCSIStopVports: Invalid handle
2012-10-26T12:42:38.688Z| vmx| VMMon_VSCSIDestroyDev: Not found
2012-10-26T12:42:38.688Z| vmx| Destroying virtual dev for scsi0:0 vscsi=8676
2012-10-26T12:42:38.688Z| vmx| VMMon_VSCSIStopVports: Invalid handle
2012-10-26T12:42:38.688Z| vmx| VMMon_VSCSIDestroyDev: Not found
2012-10-26T12:42:38.688Z| vmx| SOCKET 2 (87) disconnecting VNC backend by request of remote manager
2012-10-26T12:42:38.689Z| vmx| MKS local poweroff
2012-10-26T12:42:38.691Z| vmx| scsi1:0: numIOs = 0 numMergedIOs = 0 numSplitIOs = 0 ( 0.0%)
2012-10-26T12:42:38.691Z| vmx| Closing disk scsi1:0
2012-10-26T12:42:38.692Z| vmx| DISKLIB-VMFS  : "/vmfs/volumes/49bfc291-f238c3b8-6dda-00215e2c75b6/Guanipa1/Guanipa1_1-rdmp.vmdk" : closed.
2012-10-26T12:42:38.692Z| vmx| scsi0:0: numIOs = 0 numMergedIOs = 0 numSplitIOs = 0 ( 0.0%)
2012-10-26T12:42:38.692Z| vmx| Closing disk scsi0:0
2012-10-26T12:42:38.693Z| vmx| DISKLIB-VMFS  : "/vmfs/volumes/4942c063-0aead0e8-e88c-00215e2c75b8/Guanipa2/Guanipa2-flat.vmdk" : closed.
2012-10-26T12:42:38.714Z| vmx| WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
2012-10-26T12:42:38.797Z| vmx| Vix: [4135407 mainDispatch.c:4084]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1873, success=1 additionalError=0
2012-10-26T12:42:38.798Z| vmx| Vix: [4135407 mainDispatch.c:4103]: VMAutomation: Ignoring ReportPowerOpFinished because the VMX is shutting down.
2012-10-26T12:42:38.816Z| vmx| Vix: [4135407 mainDispatch.c:4084]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0
2012-10-26T12:42:38.816Z| vmx| Vix: [4135407 mainDispatch.c:4103]: VMAutomation: Ignoring ReportPowerOpFinished because the VMX is shutting down.
2012-10-26T12:42:38.816Z| vmx| Transitioned vmx/execState/val to poweredOff
2012-10-26T12:42:38.816Z| vmx| VMX idle exit
2012-10-26T12:42:38.816Z| vmx| VMIOP: Exit
2012-10-26T12:42:38.819Z| vmx| Vix: [4135407 mainDispatch.c:900]: VMAutomation_LateShutdown()
2012-10-26T12:42:38.819Z| vmx| Vix: [4135407 mainDispatch.c:850]: VMAutomationCloseListenerSocket. Closing listener socket.
2012-10-26T12:42:38.820Z| vmx| Flushing VMX VMDB connections
2012-10-26T12:42:38.820Z| vmx| VmdbDbRemoveCnx: Removing Cnx from Db for '/db/connection/#1/'
2012-10-26T12:42:38.820Z| vmx| VmdbCnxDisconnect: Disconnect: closed pipe for pub cnx '/db/connection/#1/' (0)
2012-10-26T12:42:38.824Z| vmx| VMX exit (0).
2012-10-26T12:42:38.825Z| vmx| AIOMGR-S : stat o=2 r=6 w=0 i=0 br=98304 bw=0
2012-10-26T12:42:38.825Z| vmx| VMX has left the building: 0.

continuum
Immortal

Attach the vmware.log please - this one looks truncated.



AxelGonzalez
Contributor

I have configured

log.rotateSize = 10000

log.keepOld = 5

5 file attachments in this .zip

Thanks!!!

continuum
Immortal

log.rotateSize = 10000
is a really bad idea - why did you set that?

It means that your logs are completely useless for troubleshooting.



AxelGonzalez
Contributor

continuum,

Here is a more useful log...

Thanks

continuum
Immortal

Please provide at least one complete, unthrottled log that shows the error.



AxelGonzalez
Contributor

Here it is...

Thanks

continuum
Immortal

Looks like your host does not like the CD drive.
Try assigning ISO files instead.



AxelGonzalez
Contributor

I assigned ISO files; no problems with the CD drive.

CredoR1
Contributor

Hello,

I'm having the exact same issue here and can't seem to find the solution. The first VM was configured per the VMware documentation EN-000628-01, "Setup for Failover Clustering and Microsoft Cluster Service".

When I try to connect the drive to the second VM, I receive this same error.

Has anyone come across the solution to this problem? Any help would be greatly appreciated.

Thank you in advance,

Chris.

AxelGonzalez
Contributor

This issue was solved. I was using two disks: one (Quorum) from EMC storage and one (Data) from IBM storage. Now all disks are on one storage array, and all is OK.

Thanks to all...

Exxxpert
Contributor

This helped me:

Changed the LUN presentation so that the LUN ID is the same on both hosts.

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=110&prodSeriesId=...

Biyouk
Contributor

I ran into a similar issue in the past. You need to make sure the RDM is presented to each ESXi host with the same LUN ID number.
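A quick way to check this from the ESXi 5.x shell on each host (the naa device ID below is a placeholder for the RDM's actual device):

```shell
# Run on EACH host and compare the output: the "LUN:" number
# reported for the RDM's backing device must match across hosts
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx | grep -i "LUN:"
```

If the numbers differ, fix the presentation on the storage array side so the LUN is exported with the same ID to both hosts, then rescan the adapters.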
