VMware Cloud Community
adamjg
Hot Shot

Adding RDMs to the 2nd node in a MSCS cluster

First off, sorry for the lengthy text, but I'm having trouble putting this into words.

I'm having issues using PowerCLI to script the addition of 45 RDMs to the 2nd node in an MSCS cluster on Windows Server 2012. I have a working script to add the RDMs to the 1st node. At its most basic level, that code is:

New-HardDisk -VM $vm -DeviceName $CanName -DiskType RawPhysical -Controller $scsi1

I'm pulling in canonical names from a CSV file, and I'm getting the SCSI controller through a simple Get-ScsiController command. This part works; I have code to add the SCSI controller and put the disks in the order that I need them.
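For reference, here's a minimal sketch of what that node-1 loop looks like. Treat the CSV path, the CanonicalName column, the VM name, and the controller type as placeholders for my actual values, not the exact production script:

# Minimal sketch of the node-1 script. CSV path/column, VM name, and
# controller type are placeholders.
$vm   = Get-VM -Name "VMname1"
$luns = Import-Csv -Path "C:\scripts\rdm-luns.csv"   # one CanonicalName per row, in disk order

# -DeviceName wants the console device path, so prepend the disks folder.
$paths = $luns | ForEach-Object { "/vmfs/devices/disks/$($_.CanonicalName)" }

# Create the first RDM, then hang a new physical-mode shared controller
# off it (New-ScsiController attaches to an existing hard disk).
$first = New-HardDisk -VM $vm -DeviceName $paths[0] -DiskType RawPhysical
$scsi1 = New-ScsiController -HardDisk $first -Type VirtualLsiLogicSAS -BusSharingMode Physical

# The remaining RDMs go onto that controller in CSV order.
foreach ($path in ($paths | Select-Object -Skip 1)) {
    New-HardDisk -VM $vm -DeviceName $path -DiskType RawPhysical -Controller $scsi1
}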

Now the trick is to get these same LUNs added in the same order on the 2nd node of the cluster.  My understanding is that this code should work:

$sc = Get-ScsiController -VM $vm | Where-Object {$_.Extensiondata.BusNumber -eq 1}

$vdisk = Get-VM $vm | New-HardDisk -Controller $sc -DiskPath "[Datastore1] VMname1/VMname1_2.vmdk"

However, when I run this I get the error "Incompatible device backing specified for device '0'". The SCSI controllers are set to physical bus sharing and both VMs are powered off, so I don't really know why this isn't working.
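For what it's worth, the two prerequisites the error hints at can be double-checked with something like this (VM names are placeholders):

# Confirm both nodes are powered off and the shared controller really is
# in physical bus-sharing mode; VM names are placeholders.
Get-VM "VMname1","VMname2" | Select-Object Name, PowerState

Get-ScsiController -VM "VMname2" |
    Select-Object Parent, Type, BusSharingMode,
                  @{N='Bus'; E={$_.Extensiondata.BusNumber}}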

What I ended up trying was the first line of code above, but pointed at the 2nd node instead, as sketched below. This works, and both VMs power up seemingly fine. That brings me to the crux of my issue/question; some more details follow.
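Concretely, the workaround is just the node-1 call with the VM swapped; a sketch with placeholder names:

# Same New-HardDisk call as node 1, just pointed at the 2nd node.
# $CanName comes from the same CSV; the VM name is a placeholder.
$vm2   = Get-VM -Name "VMname2"
$scsi2 = Get-ScsiController -VM $vm2 | Where-Object {$_.Extensiondata.BusNumber -eq 1}
New-HardDisk -VM $vm2 -DeviceName $CanName -DiskType RawPhysical -Controller $scsi2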

When we add RDMs manually in the GUI, we essentially create a new mapping for the 1st node, and the 2nd node just points at the existing .vmdk pointer file via "use an existing disk". When scripting with the first line of code above on both nodes, however, we are technically adding a "new" disk to each VM; both "new" disks point to the same LUN, but a second pointer file gets created in the 2nd node's directory.

So, as an example: doing it manually in the GUI, I pick my LUN off the list for the 1st node, then copy/paste that path and add it as an existing disk on the 2nd node. Both paths end up looking something like "[Datastore1] VMname1/VMname1_2.vmdk".

However, if I script it the way that works (the first line of code above, run against both VMs), I get "[Datastore1] VMname1/VMname1_2.vmdk" for the 1st VM but "[Datastore2] VMname2/VMname2_2.vmdk" for the 2nd VM.
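Listing the pointer files side by side makes the divergence obvious; a quick sketch with placeholder VM names (ScsiCanonicalName confirms both pointers back the same LUN):

# Compare pointer files and backing LUNs across both nodes.
foreach ($name in "VMname1","VMname2") {
    Get-HardDisk -VM $name -DiskType RawPhysical |
        Select-Object @{N='VM'; E={$name}}, Name, Filename, ScsiCanonicalName
}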

The question is: is this OK? The physical LUN/datastore mapping information is the same for both disks on both VMs; it's just the .vmdk pointer file that differs. I'm building all of this in a prototype environment, so if it fails miserably it's not the end of the world, but I'd like some sort of semi-official answer before I move this into a real production scenario. I know this is confusing, so I'm more than happy to answer any questions.

2 Replies
adamjg
Hot Shot

I also want to say thanks a ton to LucD. His replies on my previous post, his PMs, and his earlier threads helped me get the code working for the 1st node. Much appreciated!

adamjg
Hot Shot

Well, I talked to VMware support yesterday, and they definitely do not recommend separate pointer .vmdk files for the same LUN: it is likely to interfere with disk locking, which would wreak havoc on the MSCS cluster.
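So the plan is to rebuild node 2 the "GUI way": drop its duplicate pointers and re-attach node 1's existing pointer files. A rough sketch of what I expect that to look like, with placeholder names; note it leans on the same -DiskPath call that threw "Incompatible device backing" earlier, so I still need to get past that error:

# Rough sketch only: re-point node 2 at node 1's existing pointer files.
# VM names are placeholders.
$vm1 = Get-VM -Name "VMname1"
$vm2 = Get-VM -Name "VMname2"
$sc2 = Get-ScsiController -VM $vm2 | Where-Object {$_.Extensiondata.BusNumber -eq 1}

# Remove node 2's duplicate pointer files (the LUN data is untouched;
# -DeletePermanently only deletes the mapping .vmdk).
Get-HardDisk -VM $vm2 -DiskType RawPhysical |
    Remove-HardDisk -DeletePermanently -Confirm:$false

# Re-add node 1's pointers so both nodes share one .vmdk per LUN.
Get-HardDisk -VM $vm1 -DiskType RawPhysical | ForEach-Object {
    New-HardDisk -VM $vm2 -DiskPath $_.Filename -Controller $sc2
}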
