VMware Cloud Community
edelsol
Contributor

Unable to add an existing RDM disk

Hello,
I'm unable to add an existing RDM disk (attached to the primary server) to the secondary server.
I've done this on TEST servers and it works fine.
The steps I followed:
- Add RDM disk to the primary server --> Edit Settings --> Add New Device / RDM Disk --> OK
- Add Existing Hard Disk to the secondary server --> Edit Settings --> Add Existing Hard Disk --> choose the .vmdk attached to the primary server --> KO
Error:
Failed to add disk scsi3:12. File system specific implementation of Ioctl[file] failed File system specific implementation of OpenFile[file] failed File system specific implementation of OpenFile[file] failed File system specific implementation of OpenFile[file] failed File system specific implementation of OpenFile[file] failed File system specific implementation of OpenFile[file] failed File system specific implementation of OpenFile[file] failed Failed to lock the file Cannot open the disk '/vmfs/volumes/56cb0537-ae11638a-54ab-0090fab8de2e/caprwsql11/caprwsql11_28.vmdk' or one of the snapshot disks it depends on. Failed to power on scsi3:12.

vSphere Client version 7.0.3.01400 - VMware ESXi, 6.7.0, 20497097


We don't have any snapshots, we've already restarted both servers, and we still have the same issue.

Any clue?

Thanks in advance.
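
For reference, the manual "Add Existing Hard Disk" step described above corresponds roughly to the pyVmomi call sketched below. This is a minimal sketch, not the poster's exact environment: the vCenter address, credentials, secondary VM name, datastore label, controller key and temporary device key are placeholder assumptions; only the pointer-file name and the scsi3:12 slot come from the error message in the thread.

```python
# Minimal pyVmomi sketch: attach an existing RDM pointer VMDK (already created
# on the primary VM) to the secondary VM. All names/keys below are placeholders
# except the pointer file and the scsi3:12 slot quoted in the error above.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; use valid certs in production
si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
content = si.RetrieveContent()

def find_vm(name):
    """Return the first VM whose name matches, using a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.DestroyView()

secondary = find_vm("caprwsql12")                 # hypothetical secondary node name

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.fileName = "[datastore1] caprwsql11/caprwsql11_28.vmdk"  # placeholder datastore label
backing.compatibilityMode = "physicalMode"        # physical RDM, needed for bus sharing
backing.diskMode = "independent_persistent"

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1003                         # key of the shared SCSI 3 controller (assumed)
disk.unitNumber = 12                              # matches the scsi3:12 slot from the error
disk.key = -1                                     # temporary key for a device being added

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.device = disk                              # no fileOperation: reuse the existing pointer file

task = secondary.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```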

6 Replies
Alfista_PS
Hot Shot

When you tested it, did you have the same vSphere configuration? (VCSA 7, ESXi 6.7)

Maybe that is where the problem is.

Please check the VM folder for snapshots, and check the VM's configuration file to see whether anything snapshot-related is written there. You can also try to consolidate, if that option is enabled, before you try to read the VM disk.
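
If it helps, here is a minimal read-only sketch for that check, assuming the pyVmomi connection and the find_vm() helper from the sketch in the first post, and a hypothetical VM name: it reports registered snapshots and whether ESXi thinks the disks need consolidation, without changing anything.

```python
# Hedged sketch: check for snapshots and pending consolidation, read-only.
# Assumes find_vm() from the earlier sketch; the VM name is a placeholder.
from pyVmomi import vim

vm = find_vm("caprwsql11")

if vm.snapshot is not None:
    names = [s.name for s in vm.snapshot.rootSnapshotList]
    print("Snapshot tree exists:", names)
else:
    print("No snapshots registered for this VM.")

# ESXi flags VMs whose delta disks were left behind by a failed snapshot removal.
if vm.runtime.consolidationNeeded:
    print("Consolidation is required (vm.ConsolidateVMDisks_Task() would trigger it).")
else:
    print("No consolidation needed.")
```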

 

Alfista
----------------------
Audio-Video Accessories
Selling and Integration of Audio & Video Accessories and Technology
If my answer has resolved your problem, please mark it as RESOLVED, or if it was only a good help, give me a KUDOS. Thanks.
michelkeus_stwg
Enthusiast

@edelsol Do you have your VM SCSI controllers set up for SCSI Bus Sharing? You need to configure that before you are able to share disks between VMs.

See here for reference: VMware Docs: Change the SCSI Bus Sharing Configuration in the VMware Host Client 

 

[Screenshot: SCSI controller Bus Sharing setting in the VM's Edit Settings dialog]
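
For completeness, a hedged pyVmomi sketch of adding such a controller, under the same assumptions as the earlier sketches (find_vm() helper, placeholder VM name and bus number); per the linked docs, the bus sharing setting is changed while the VM is powered off.

```python
# Sketch: add a paravirtual SCSI controller with physical bus sharing.
# Assumes find_vm() from the earlier sketch; VM name and bus number are placeholders.
from pyVmomi import vim

vm = find_vm("caprwsql12")

ctrl = vim.vm.device.ParaVirtualSCSIController()
ctrl.busNumber = 3                                # free bus number on this VM (assumed)
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing
ctrl.key = -101                                   # temporary key for a new device

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.device = ctrl

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```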

 

edelsol
Contributor

Yes, on other TEST servers we had the same config.

I've checked everything and don't see any snapshot or similar files. I didn't try to consolidate because it's a PROD server and it has 28 disks.
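
As a read-only alternative to consolidating, a minimal pyVmomi sketch (same assumed session and find_vm() helper as above, placeholder VM name) can list the files vCenter tracks for the VM and flag anything snapshot-related:

```python
# Sketch: flag snapshot-related files in the VM's layout, without consolidating.
# Assumes find_vm() from the earlier sketch; the VM name is a placeholder.
from pyVmomi import vim

vm = find_vm("caprwsql11")
for f in vm.layoutEx.file:
    if "snapshot" in f.type.lower() or "-delta" in f.name or "-00000" in f.name:
        print("possible snapshot artifact:", f.type, f.name)
```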

edelsol
Contributor

We have the same config on all servers, even on TEST servers that worked fine.

AnaghB
Enthusiast
Accepted Solution

Hello @edelsol ,

 

The error message shows that the second ESXi host cannot place a lock on the VMDK pointer file of the RDM. This means that the RDM disk settings on the primary node are incorrect. There are four things to make sure of (see the check sketch after the list):

1. The RDM is in physical compatibility mode, with the SCSI controller set to physical bus sharing.

2. The SCSI controller ID should be the same on both VMs.

3. Since we are using physical bus sharing, the two VMs should be on separate ESXi hosts.

4. For RDMs, the LUN ID of the RDM disk should be the same on all ESXi hosts, as the LUN ID contributes to the vml ID, which is important for RDMs.
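
A minimal sketch for comparing these settings side by side, under the same assumptions as the earlier sketches (pyVmomi session, find_vm() helper, placeholder VM names); it only prints the values the four checks refer to:

```python
# Sketch: dump controller sharing, RDM mode, device node and backing device
# for both nodes so checks 1-4 can be compared. VM names are placeholders.
from pyVmomi import vim

for name in ("caprwsql11", "caprwsql12"):
    vm = find_vm(name)
    print("===", name, "on host", vm.runtime.host.name)      # check 3: separate ESXi hosts
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualSCSIController):
            # Checks 1 and 2: bus sharing mode and controller bus number.
            print(f"  SCSI{dev.busNumber} (key {dev.key}): sharing={dev.sharedBus}")
        elif isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
                dev.backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            # Checks 1 and 4: RDM compatibility mode and the backing (vml) device,
            # which should resolve to the same LUN on every host.
            print(f"  RDM {dev.backing.fileName}: mode={dev.backing.compatibilityMode}, "
                  f"device={dev.backing.deviceName}, "
                  f"controllerKey={dev.controllerKey}, unit={dev.unitNumber}")
```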

If you still have any further concerns, let me know and we can get on a Zoom session and fix it; it will hardly take 30 minutes or so.

 

Anagh B
VCIX-DCV6.5, VSAN Specialist
Please mark as helpful or correct if my answer is useful to you.
edelsol
Contributor

Hi, @AnaghB 

Thanks a lot for your reply; I think you're right.

1. The RDM is in physical compatibility mode, with the SCSI controller set to physical bus sharing. --> OK

2. The SCSI controller ID should be the same on both VMs. --> KO (here's the problem)

3. Since we are using physical bus sharing, the two VMs should be on separate ESXi hosts. --> OK

4. For RDMs, the LUN ID of the RDM disk should be the same on all ESXi hosts, as the LUN ID contributes to the vml ID, which is important for RDMs. --> OK

The problem is that when the secondary node was created (it was not by me), SCSI:0 was not configured with SCSI Bus Sharing set to "Physical", so I have to use another SCSI controller instead of SCSI:0. I've just made a test with another disk that can share exactly the same Virtual Device Node, and it works perfectly.

Next steps: either I reconfigure the new disks on the primary node (I can't, because we have 48 disks and they don't all fit at 15 per SCSI controller), or I change the SCSI:0 bus sharing on the secondary node. If I'm not mistaken, this last option has to be done with the VM powered off.
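
A hedged sketch of that last option, under the same assumptions as the earlier sketches (find_vm() helper, placeholder VM name), guarded so it only runs against a powered-off VM:

```python
# Sketch: switch the existing SCSI 0 controller of the secondary node to
# physical bus sharing. Assumes find_vm() from the earlier sketch; the VM
# name is a placeholder, and the VM must be powered off first.
from pyVmomi import vim

vm = find_vm("caprwsql12")
assert vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff, \
    "power the VM off before changing SCSI bus sharing"

ctrl0 = next(d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualSCSIController) and d.busNumber == 0)
ctrl0.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
change.device = ctrl0

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```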

Thanks a lot for your time and experience, I really appreciate it 😉

 
