psguser
Contributor

RDM status (free/attached to VM) not the same across ESXi hosts

Hi everyone,

I have an issue with a certain RDM disk that occurred recently, in an ESXi 5.5.1 environment within a single VMware cluster.

We've begun a process of installing vShield Manager on all ESXi hosts, restarting each host as part of the rollout. When we reached the last few, we found that one particular RDM, which is attached to a VM, isn't seen the same way across all the hosts. The hosts that haven't been restarted yet (no reboot in the past month) see the RDM correctly - attached to the VM. All the other hosts (the ones already restarted) think the RDM is free - they see the storage device, but don't recognize it as being attached to that VM. Meaning:

1. When trying to vMotion the VM to one of those hosts, validation fails with the error "Virtual disk ..... is a mapped direct-access LUN that is not accessible...Unable to access file ds:///vmfs/volume.....vmdk".

2. When checking the .ConfigManager.DatastoreSystem.QueryAvailableDisksForVmfs property via PowerCLI, I see that the not-yet-restarted hosts don't show the disk (they don't see it as "free" for datastore purposes), while the restarted hosts do show it (a sketch of the check is below).
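
For reference, the per-host check looks roughly like this - the cluster name and the LUN's canonical name are placeholders for my real values, and it assumes a live Connect-VIServer session to vCenter:

# Placeholders - substitute your own cluster and the RDM LUN's canonical name
$clusterName = "MyCluster"
$rdmCanonicalName = "naa.600000000000000000000000000000"

foreach ($esx in Get-Cluster $clusterName | Get-VMHost) {
    # Get the HostDatastoreSystem view for this host
    $dsSys = Get-View $esx.ExtensionData.ConfigManager.DatastoreSystem
    # Ask the host which disks it considers free/usable for a new VMFS datastore
    $freeDisks = $dsSys.QueryAvailableDisksForVmfs($null)
    $seenAsFree = $freeDisks | Where-Object { $_.CanonicalName -eq $rdmCanonicalName }
    [pscustomobject]@{
        Host       = $esx.Name
        SeesAsFree = [bool]$seenAsFree   # $true = host thinks the LUN is unclaimed
    }
}

The restarted hosts return $true for SeesAsFree, the others return $false.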

I already tried removing the RDM from the VM, rescanning all storage, and then re-attaching it, but nothing changed. The only thing I haven't tried in terms of storage connectivity is to completely remove the device (unmap it on the storage side) and present it again, but I'd rather not, since it's a critical VM and I'd much prefer to solve the issue without shutting it down again.
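
For completeness, the remove / rescan / re-attach I tried was roughly the following - VM name, cluster name and canonical name are placeholders, and it assumes a physical-mode RDM (adjust the disk type if yours is virtual mode):

# Placeholders for my real names
$vmName = "CriticalVM"
$clusterName = "MyCluster"
$rdmCanonicalName = "naa.600000000000000000000000000000"

$vm  = Get-VM -Name $vmName
$rdm = Get-HardDisk -VM $vm -DiskType RawPhysical, RawVirtual |
       Where-Object { $_.ScsiCanonicalName -eq $rdmCanonicalName }

# Detach the mapping from the VM (no -DeletePermanently, so the pointer VMDK stays)
Remove-HardDisk -HardDisk $rdm -Confirm:$false

# Rescan HBAs and VMFS on every host in the cluster
Get-Cluster $clusterName | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null

# Re-attach the LUN as a physical-mode RDM
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/$rdmCanonicalName"

After this, the restarted hosts still report the LUN as free.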

Could it somehow be related to permissions? Is it possible that after a restart an ESXi host loses permissions to a certain VMDK file, or becomes unable to read its information? Could it be a vCenter problem?

Just for clarification - we have dozens of RDMs in this cluster, all configured exactly the same way. Only one of them has this issue.
