I am facing this error message while trying to configure a shared disk for two guests located on two different hosts.
I installed Windows 2008 Enterprise 64-bit on both nodes.
The nodes are on different hosts; I want to create a failover cluster.
I assigned an RDM disk to the first node and selected a datastore located on the SAN.
Then I tried to assign the same disk to the second node by selecting "Use an existing virtual disk" and pointing at the same "Disk File Path".
But I just get the error "Incompatible device backing specified for device '0'".
I am using ESXi 5.0 evaluation.
I tried some fixes, such as renaming the .vmdk files and removing the CD-ROM drive under Add Hardware, but none of them actually worked.
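For reference, the usual way to set this up from the ESXi shell is to create one RDM pointer file on the shared datastore and attach that same pointer to both VMs. This is only a sketch: the naa device ID, datastore name, and file names below are placeholders, not values from this thread.

```shell
# Sketch (ESXi 5.x shell). The naa ID and paths are placeholders.
# On node 1's host, create a physical-mode RDM pointer on the shared datastore:
vmkfstools -z /vmfs/devices/disks/naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
    /vmfs/volumes/SharedSAN/Node1/quorum_rdm.vmdk

# On node 2, add this existing quorum_rdm.vmdk with "Use an existing
# virtual disk", placed on a separate SCSI controller (e.g. SCSI 1:0)
# whose bus sharing is set to Physical.
```

A common cause of "Incompatible device backing specified for device '0'" in this scenario is a mismatch between the two nodes' SCSI controllers: both VMs need the shared disk on the same controller position, with the same controller type and bus-sharing mode, and the target host must see the same LUN.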
Welcome to the communities.
Hopefully using an ISO resolves the problem, as suggested by AP.
I'm having the same problem on vSphere (ESXi 6.0, HP custom image) running on an SSD inside an HP MicroServer Gen8 (no CD-ROM installed):
"Incompatible device backing specified for device '#'."
Has anyone found a fix for this?
Can you tell how to do that?
Thanks in advance.
Rebooting all the ESX hosts and hitting "Refresh" under Storage Adapters fixed the out-of-sync storage LUN and that insane "Incompatible device backing..." error message for me.
vCenter just "goes stupid" sometimes: even though all the storage LUNs are presented uniformly to the ESX hosts, it still thinks there is a mismatch.
For some strange reason the individual ESX hosts sometimes get out of sync with their storage configuration.
You should only have to go to one ESX host's Configuration tab and click "Rescan All...".
All the ESX hosts that are supposed to see the shared storage volumes should automagically update when the first one is done.
But I have seen that setting custom names on one ESX host's configuration (instead of the default naa name) on shared storage doesn't replicate across all ESX hosts simultaneously.
Once you reboot all the ESX nodes, they all refresh and synchronize the changes correctly.
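The same "Rescan All" can be done from each host's command line, which is handy if the vSphere Client is the part that has gone stupid. A minimal sketch, assuming SSH access to each ESX/ESXi host (commands as of ESXi 5.x):

```shell
# Rescan all HBAs for new or changed LUNs (equivalent of "Rescan All..."):
esxcli storage core adapter rescan --all

# Re-read VMFS volumes so new/updated datastores show up:
vmkfstools -V
```

Running this on every host in the cluster, rather than rebooting them all, is often enough to get their storage views back in sync.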
As those Help Desk guys always ask: "Did you reboot your computer first?"
Anthony Maw, Vancouver, Canada
- The former storage engineer had 3 hosts registered on the storage array as a host group, with one RDM exported as LUN 0.
- He was later forced to add 3 more hosts and, instead of adding them to LUN 0, he assigned them to LUN 1.
Years later I came along, and I can tell you that the MSFC misbehaved many times.
This time was another story ...
My 3-node cluster with a shared disk failed: only one node could still access its RDM disk (LUN 0); the two others had halted/crashed and were in a VM powered-off state, both at the same time.
They were precisely the cluster-forming VMs on LUN 1. Coincidence?
I was only able to boot one of the two halted VMs and regain the majority of the MSFC (2 nodes + quorum, out of 3).
The 3rd, however, didn't feel like it. When I powered on the 3rd VM I got "The operation is not allowed in the current state of the host".
I checked the Events & Tasks section of the ESXi host in question, and it pointed to a ramdisk-full issue.
This was due to the Lenovo image of ESXi 5.5 that was later upgraded to ESXi 6.0.
A log file, cimple_log_err_messages, fills up the ramdisk.
When it gets full, all sorts of strange things happen.
This is a Lenovo custom VMware image bug; I have no idea why Lenovo wants to write hardware error messages into something as crucial as VMware's /tmp folder.
Every 2 months one needs to clear this 'garbage' log file from vSphere's ramdisk in /tmp.
If the disk gets full:
1. You get connection failures using the vSphere/web client, either directly or via vCenter.
2. HA no longer functions, with a similar 503 server-connection failure.
3. In fact, the hostd & vpxa agents silently crash all the time (restarting them has no effect).
After emptying cimple_log_err_messages, I was able to reconfigure the vSphere HA agent. I did this because the host was unable to read its network configuration and kept displaying "Loading" when trying to show the storage configuration.
Then, thanks to "Reconfigure for vSphere HA" on the ESXi host in question, my stuck VM was automatically migrated to another host AND autostarted!
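For anyone hitting the same ramdisk-full condition, the cleanup steps above look roughly like this from the host's shell. A sketch only: it assumes the offending log file sits directly under /tmp, as described in this thread.

```shell
# Check ramdisk usage on the ESXi host (look for a full /tmp):
vdf -h

# Locate the oversized log file written by the vendor image:
ls -lh /tmp

# Truncate it in place (assumed path, per the Lenovo-image issue above):
> /tmp/cimple_log_err_messages

# Restart the management agents that were silently crashing:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
```

Once /tmp has free space again and hostd/vpxa are back, "Reconfigure for vSphere HA" on the host should succeed as described.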
It was precisely the VM on this cumbersome ESXi host (with the ramdisk-full condition) that wouldn't boot. I initially attempted to remove the RDMs on this one, rescan the devices, and re-add the RDM,
but that failed with the message "Incompatible device backing specified for device '0'", which is the subject of this topic.
Conclusion: the ESXi host was in a tangled situation due to a storage issue; then, because its ramdisk was full, it lost track of its storage/network config, and many operations failed.
Freeing the space and reconfiguring for vSphere HA unlocked all my problems!