VMware Cloud Community
roconnor
Enthusiast

Best Practice - Adding RDMs to second node of MSCS (W2K3) Virtual Machine Nodes Across Physical Hosts

Couldn't find any other thread on this

When adding RAW disks to the second node in Cluster Virtual Machines Across Physical Hosts / Cluster Across Boxes,

VMware says point shared storage disks to the same location as the first node’s shared storage disks*

-Select Use an existing virtual disk...

-In Disk File Path, browse to the location of the (quorum) disk specified for the first node

-Select the same virtual device node you chose for the first virtual machine’s shared storage disks, ie SCSI (1:0)…

In other words, to add the RDMs to mscs-node2, browse to /vmfs/volumes/lun1/mscs-node1/mscs-node1_2.vmdk (mscs-node1_2-rdmp.vmdk)
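For anyone else reading: a quick way to check what that existing mapping file actually points at is from the ESXi shell (a sketch only; the path is taken from the example above and the output lines are paraphrased):

ls -lh /vmfs/volumes/lun1/mscs-node1/
# mscs-node1_2.vmdk        <- descriptor, only a few hundred bytes
# mscs-node1_2-rdmp.vmdk   <- RDM pointer, reported at the full size of the RAW LUN

vmkfstools -q /vmfs/volumes/lun1/mscs-node1/mscs-node1_2.vmdk
# reports whether the vmdk is a (passthrough) raw device mapping
# and the vml.* identifier of the LUN it maps to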

For years we have directly added the RDMs to the second node, specifying RDM rather than an existing disk. Typically we have to do this directly from the host, not from vCenter, and it seems to work fine.

So what's the safest way? The official method can cause all sorts of problems if you need to deregister RDMs on the first node (this is where I didn't find any official docs).

Do you delete or keep the descriptor file? We tried keeping it but ended up with multiple mappings to the .vmdk/-rdmp.vmdk, so now this system has disk2_.vmdk/disk2_-rdmp.vmdk and disk4_.vmdk/disk4_-rdmp.vmdk pointing to the same RAW device.
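For what it's worth, a quick way to confirm that kind of duplication from the ESXi shell (paths are illustrative, built from the file names above):

# query each suspect descriptor; duplicates will report the same mapped device
vmkfstools -q /vmfs/volumes/lun1/mscs-node1/disk2_.vmdk
vmkfstools -q /vmfs/volumes/lun1/mscs-node1/disk4_.vmdk
# if both print the same vml.* identifier, they are two mapping files for one RAW LUN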

What really worries me is safety; these are very important boxes. I'd prefer to continue having the vmdks and rdmp.vmdk files in separate datastores, and not have this reliance on the primary node.

Feedback please: are we the only virtualization shop configuring MSCS clusters with separate RDM paths, and are there risks associated with this?

*Ref: ‘Setup for Failover Clustering and Microsoft Cluster Service - 4.1’

4 Replies
beckham007fifa

Great question. Hope all is well.

Let me try to understand. Apologies if I am getting it wrong.


When adding RAW disks to the second node in Cluster Virtual Machines Across Physical Hosts / Cluster Across Boxes,

VMware says point shared storage disks to the same location as the first node’s shared storage disks*

That is because of iScsi controllers, and similar iscsi controllers should be selected.

In other words, to add the RDMs to mscs-node2, browse to /vmfs/volumes/lun1/mscs-node1/mscs-node1_2.vmdk (mscs-node1_2-rdmp.vmdk)

For years we have directly added the RDMs to the second node, specifying RDM rather than an existing disk. Typically we have to do this directly from the host, not from vCenter, and it seems to work fine.

Yeah, this is perfect; adding the RDM, choosing the datastore for the mapping file and the physical/virtual compatibility mode does the job (I hope you are doing it in a similar fashion).
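For reference, this is roughly the ESXi shell equivalent of adding the RDM directly from the host (the naa.* device ID is made up; -z creates a physical compatibility mode pointer, -r a virtual one):

# physical compatibility mode RDM pointer created in the second node's folder
vmkfstools -z /vmfs/devices/disks/naa.60060160a1b22f004c6e7d8e9f000000 \
  /vmfs/volumes/lun1/mscs-node2/mscs-node2_2.vmdk

# or virtual compatibility mode
vmkfstools -r /vmfs/devices/disks/naa.60060160a1b22f004c6e7d8e9f000000 \
  /vmfs/volumes/lun1/mscs-node2/mscs-node2_2.vmdk

Doing it this way is what creates a second, independent mapping file on node2 rather than reusing node1's.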



So what's the safest way? The official method can cause all sorts of problems if you need to deregister RDMs on the first node (this is where I didn't find any official docs).

Do you delete or keep the descriptor file? We tried keeping it but ended up with multiple mappings to the .vmdk/-rdmp.vmdk, so now this system has disk2_.vmdk/disk2_-rdmp.vmdk and disk4_.vmdk/disk4_-rdmp.vmdk pointing to the same RAW device.


Keeping it can create problems later if the mapping file ends up present at the destination as well. If it is already present in the target datastore, Storage vMotion finishes quickly but without moving the data (relevant only if you plan to move the disks; bear in mind, though, that vMotion/Storage vMotion is not supported with MSCS). This is because Storage vMotion detects that the source and target datastores for the mapping file are the same and concludes that no movement is needed.

That is why it is good to delete it, or to keep a separate datastore for this, which you generally do. Hats off!
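(If you do end up with an orphaned mapping file that nothing references any more, it can be removed from the ESXi shell; as far as I know this only deletes the pointer/descriptor, not the data on the mapped LUN, but obviously confirm that no node still uses it first. Path is illustrative.)

# delete an orphaned RDM descriptor and its -rdmp pointer
vmkfstools -U /vmfs/volumes/lun1/mscs-node1/disk4_.vmdk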


Please correct me if I went wrong.


Let's get others' responses on this topic as well. Many thanks.



Regards, ABFS
roconnor
Enthusiast

Thanks for the interest

You said it's "because of iScsi controllers, and similar iscsi controllers should be selected."

Surely you mean SCSI controllers, not iSCSI; this is all pure FC SAN.

What VMware is saying is: yes, create another SCSI controller on the second node, but point it to your existing RDM mapping and add it as an existing disk, NOT as an RDM.
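Roughly what the relevant lines in mscs-node2's .vmx end up looking like after doing it the documented way, for a W2K3 cluster across boxes (the values here are only an example):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/lun1/mscs-node1/mscs-node1_2.vmdk"
scsi1:0.mode = "independent-persistent"

The point is that scsi1.sharedBus = "physical" makes the controller shareable across hosts, and scsi1:0.fileName points at node1's existing mapping file rather than a new one.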


What I haven't found is VMware explaining how to "unassign" the RDM, i.e. the customer wants to replace it with a larger RAW device, migrate to a new storage array, or move the VM to a new VMware cluster...

You also said to delete the mapping file when unassigning the RDM on the primary/passive MSCS node, but isn't that suicide? I doubt vCenter would allow it; I'd expect it to say "no way, you are about to kill the path to your RAW on the active MSCS node, disk in use".
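One sanity check before removing or deleting any mapping file: see which registered VMs still reference it. A simple grep from the ESXi shell works, assuming the usual one-folder-per-VM datastore layout:

# list every .vmx on the host's datastores that still references the mapping file
grep -l "mscs-node1_2.vmdk" /vmfs/volumes/*/*/*.vmx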

ScreamingSilenc

You're right, it's a SCSI controller, not iSCSI; I think it's a typo.

What VMware is saying is: yes, create another SCSI controller on the second node, but point it to your existing RDM mapping and add it as an existing disk, NOT as an RDM.


That's right, it is possible to map the RAW RDM from the second node, but VMware doesn't cover this in the docs, and it has a disadvantage: when you add the RAW RDM directly to the second node instead of using "existing disk", an extra mapping file (metadata) for the same RDM is created on the second node.

Please consider marking this answer "correct" or "helpful" if you found it useful.
roconnor
Enthusiast
Accepted Solution

I realise there was an error in my logic

When working with the primary node, if there is a requirement to unmap the RAW disks (move to another VMware cluster, clone the system, etc.):


-Make a note of the location of all the rdmp.vmdks

-Remove each RDM disk without deleting it

To re-add:

-Add as "Use an existing virtual disk" (yeah I know it's raw, but once you create the rdmp the host treats it as virtual)

-Browse to the existing raw device mapping, which will show up as a vmdk*, and add it using the former SCSI location


*The GUI hides the descriptor

A virtual disk has a .vmdk and a -flat.vmdk

A RAW disk has a .vmdk and an -rdmp.vmdk (the -flat is substituted by the -rdmp)
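To illustrate, the difference shows up in the extent line inside the descriptor (roughly what cat shows; the sizes are made-up sector counts):

cat mscs-node1_1.vmdk
# createType="vmfs"
# RW 41943040 VMFS "mscs-node1_1-flat.vmdk"

cat mscs-node1_2.vmdk
# createType="vmfsPassthroughRawDeviceMap"
# RW 104857600 VMFSRDM "mscs-node1_2-rdmp.vmdk"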

A suggestion from one of my colleagues is to locate all the RDM mapping files in a single small datastore; that way visibility of VMs with raw disks is increased.
