VMware Cloud Community
Ayan1
Contributor

Shared RDM for Cluster Node

How do I configure a shared RDM between the two nodes of a Windows/Linux failover cluster?

3 Replies
a_p_
Leadership

Please take a look at the MSCS documentation for the vSphere version you are using (http://kb.vmware.com/kb/1004617).

André

JPM300
Commander

Like a_p_ said, take a good look at the KB article, as there are lots of little gotchas. That said, it is a good idea to always keep both nodes of the cluster on one side of the fence if you use SRM. We had a cluster set up across a stretched network once, and when we had to fail over one of the nodes with SRM it worked, but it was a pain. If you use SRM, keep both nodes on one side, on one SAN.

Wh33ly
Hot Shot

- Make sure to have the RDM in physical mode

     - For VMDKs, the disk must be "Thick Provision Eager Zeroed" and "Independent - Persistent"

Tip: for the shared SCSI node I use SCSI1:X, so I know that everything on SCSI controller 1 is a shared disk (see the PowerCLI sketch right below).
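
For reference, a rough PowerCLI sketch of adding such a disk on its own controller (the device path, size, and VM name are just placeholders; New-ScsiController simply moves the new disk onto a dedicated controller, which shows up as SCSI controller 1 if it is the second one on the VM):

$vm = Get-VM "LINTST"

# Physical-mode (pass-through) RDM pointing at the shared LUN (placeholder naa ID)
$rdm = New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.60000000000000000000000000000001"

# Or, for a shared VMDK instead: eager-zeroed thick and independent-persistent
# $rdm = New-HardDisk -VM $vm -CapacityGB 50 -StorageFormat EagerZeroedThick -Persistence IndependentPersistent

# Put the shared disk on its own SCSI controller
New-ScsiController -HardDisk $rdm -Type VirtualLsiLogicSAS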

- Change the .ctkEnabled setting for the disks on the corresponding SCSI controller to false. This disables Changed Block Tracking, so all changes are written directly to the disk, and it prevents the situation where the CBT file is locked and the other VM can't access the disk.


Add a new row with the option scsi1:0.sharing (or whatever SCSI ID your shared RDM will be) and give it the value "multi-writer", so multiple VMs can connect to the disk. It removes the write protection.

VMware KB: Disabling simultaneous write protection provided by VMFS using the multi-writer flag

If you don't do this, the first powered-on VM locks the disk, and the second VM will not start properly and will throw errors about file locks, etc.

- Make sure you apply these settings on BOTH VMs, because they are per-VM settings. Also make sure you document it properly and test it before using it in production. Try things like extending disks, etc.

You could also use PowerCLI to apply the settings above, for example (assuming the shared disk is scsi1:0):

$vm = Get-VM "LINTST"

New-AdvancedSetting -Entity $vm -Name "scsi1:0.sharing" -Value "multi-writer" -Confirm:$false
New-AdvancedSetting -Entity $vm -Name "scsi1:0.ctkEnabled" -Value "false" -Confirm:$false

Or set the settings for the whole SCSI1 node with a simple loop:

$vm = Get-VM "LINTST"

foreach ($x in (0..15)) {
    New-AdvancedSetting -Entity $vm -Name "scsi1:$x.sharing" -Value "multi-writer" -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "scsi1:$x.ctkEnabled" -Value "false" -Confirm:$false
}
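
To double-check that both nodes ended up with the same values, something along these lines works (Get-AdvancedSetting is standard PowerCLI; the VM names here are just placeholders):

# List every sharing/ctkEnabled row configured on the scsi1 controller of each node
foreach ($node in "LINTST-N1","LINTST-N2") {
    Get-AdvancedSetting -Entity (Get-VM $node) -Name "scsi1:*" |
        Select-Object @{N="VM";E={$node}}, Name, Value
}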

- DRS rules: Don't forget to make a DRS rule to separate both VMs when running them in the same cluster. (In my case I have a cluster with hosts across 2 datacenters.)
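
A rough PowerCLI sketch of such an anti-affinity rule (the cluster and VM names are just placeholders):

# Keep the two cluster nodes on different hosts
New-DrsRule -Cluster (Get-Cluster "PROD-CL01") -Name "Separate-Cluster-Nodes" -KeepTogether:$false -VM (Get-VM "LINTST-N1","LINTST-N2")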

- Linux fencing: I created a separate fencing user; the only privileges the fence user needs are "Interaction - Power Off / Power On".
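
A minimal sketch of setting that up with PowerCLI (the role name, service account, and VM names are placeholders; the two privilege IDs are the standard VirtualMachine.Interact power-off/power-on privileges):

# Role that can only power VMs off and on
$privs = Get-VIPrivilege -Id "VirtualMachine.Interact.PowerOff","VirtualMachine.Interact.PowerOn"
New-VIRole -Name "FenceOnly" -Privilege $privs

# Grant it to the fencing user on the two cluster nodes only
foreach ($node in "LINTST-N1","LINTST-N2") {
    New-VIPermission -Entity (Get-VM $node) -Principal "DOMAIN\svc_fence" -Role "FenceOnly" -Propagate:$false
}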

Hope this will get you going a bit