I have a vSphere 4.1 cluster with a VM that has an RDM to a SAN LUN. I accidentally removed the hosts' access to the LUN in Unisphere. After I added the LUN back to the Storage Group in Unisphere, the original ESXi host could see the LUN again, but the other two ESXi hosts did not see it properly.

I removed the RDM from the VM a couple of times, but the original ESXi host was the only one that could run the VM without complaining: "Virtual disk 'Hard disk 2' is a mapped direct-access LUN that is not accessible." The VM ran fine on the original host, but I could not vMotion it to either of the other two hosts in the cluster.

When I browse the datastore, I can still see the first two RDM pointer files as well as the third one currently in use, computername_3.vmdk. Do I need to remove those two older files? They all reference the same LUN I re-added while trying to resolve the situation.

I then powered down the VM and removed it from inventory. After adding it back to inventory, only the two ESXi hosts that were NOT the original host (the one the VM was running on when I removed the LUN in my SAN config software) can successfully run the VM. Now the original host reports the same message about the inaccessible LUN.
I have also tried rebooting the host, disconnecting and reconnecting it to the cluster, and removing it from the cluster and re-adding it, but the problem persists. The host in question does see the LUN, presented on the same LUN number (8) it was originally mapped to. As things stand, the VM can only run on two of the three hosts in the cluster.
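For reference, this is roughly how I have been checking LUN visibility and the RDM mapping on each host, from the ESXi Tech Support Mode shell. The datastore path, VM folder, and vmhba name below are placeholders from my environment; substitute your own:

```shell
# Rescan the HBA so the host picks up any presentation changes from Unisphere
# (vmhba1 is a placeholder; use the adapter name from your host).
esxcfg-rescan vmhba1

# List SCSI devices in compact form and look for the LUN's NAA ID,
# to confirm all three hosts see the same device on LUN 8.
esxcfg-scsidevs -c | grep -i naa

# Query the RDM pointer file to see which physical device it maps to
# (path and filename are from my environment).
vmkfstools -q /vmfs/volumes/datastore1/computername/computername_3.vmdk
```

Comparing the NAA ID reported by `esxcfg-scsidevs -c` on each host against the device the RDM pointer resolves to is how I have been checking whether the hosts agree on the device's identity, since a stale or mismatched pointer file seems like it could explain the "not accessible" error.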