Bill_Oyler
Hot Shot

Single RDM mapping file vs. multiple RDM mapping files?

Hello,

When setting up Microsoft Cluster nodes in a cluster-across-boxes (multi-host clustering) scenario (two ESXi hosts, two-node MSCS cluster, with one node on each ESXi host), I realize that Physical Mode (pass-through) RDMs are the way to go.  I also typically locate the C: drive (eager-zeroed thick) for each cluster node on a different VMFS volume to ensure two separate failure domains.  So, for example, the C: drive of Cluster Node 1 (VM1) might be on Datastore1 and the C: drive of Cluster Node 2 (VM2) on Datastore2.

Likewise, I've always created a separate RDM mapping file for each VM, pointing to the same raw LUN.  For example, I would put the RDM mapping file for the shared quorum LUN on Datastore1 for VM1, and a new RDM mapping file pointing to the same quorum LUN on Datastore2 for VM2.  This way, a failure of Datastore1 affects only VM1, and a failure of Datastore2 affects only VM2.  I've been doing this for years (probably since ESX 2.5) and MSCS works great in this configuration.
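
For concreteness, here's roughly what my approach looks like from the ESXi shell (just a sketch; the NAA ID, datastore names, and VM folder names below are placeholders, and in practice the same thing can be done through the vSphere Client's Raw Device Mapping workflow):

    # On the host running VM1: create a physical-mode (pass-through) RDM
    # pointer file on Datastore1, backed by the shared quorum LUN.
    vmkfstools -z /vmfs/devices/disks/naa.600601601234567890abcdef12345678 \
        /vmfs/volumes/Datastore1/VM1/quorum_rdm.vmdk

    # On the host running VM2: create a SECOND pointer file on Datastore2,
    # backed by the very same raw LUN.
    vmkfstools -z /vmfs/devices/disks/naa.600601601234567890abcdef12345678 \
        /vmfs/volumes/Datastore2/VM2/quorum_rdm.vmdk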

However, I was re-reading VMware's official "Setup for Failover Clustering and Microsoft Cluster Service" documentation and noticed that VMware specifically states in several places that "A single, shared RDM mapping file for each clustered disk is required."  In other words, they call for me to select "Existing Hard Disk" on VM2 and point to the RDM mapping file that I created for VM1, rather than creating a fresh RDM mapping file for VM2.  This strikes me as less safe, because a failure of Datastore1, or corruption of RDM mapping file 1, would take down VM2 as well.  A separate RDM mapping file for VM2, on a separate datastore, would ensure two separate fault domains for the two VMs.
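
To make the difference concrete, in .vmx terms VMware's guidance would have both VMs reference the identical pointer file on a shared SCSI controller, something like the following (controller numbering, paths, and names are illustrative, not taken from the guide):

    VM1's .vmx:
    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogicsas"
    scsi1.sharedBus = "physical"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "/vmfs/volumes/Datastore1/VM1/quorum_rdm.vmdk"

    VM2's .vmx ("Existing Hard Disk", same fileName as VM1):
    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogicsas"
    scsi1.sharedBus = "physical"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "/vmfs/volumes/Datastore1/VM1/quorum_rdm.vmdk"

My approach, by contrast, would give VM2 its own pointer file, e.g. scsi1:0.fileName = "/vmfs/volumes/Datastore2/VM2/quorum_rdm.vmdk".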

Does anyone know why VMware requires a single shared RDM mapping file?  Are there specific problems caused by having two separate RDM mapping files?

Thanks,

Bill

Bill Oyler, Systems Engineer
3 Replies
JLackman
Enthusiast

I see what you're saying.

VMware says: use the same mapping .vmdk on each node; that one mapping .vmdk points to the raw LUN.

Your process: create mapping .vmdk ONE and mapping .vmdk TWO, but have both point to the same raw LUN.

I don't know the answer. I've built a bunch of MSCS clusters on 6.0, but I've always followed the VMware-documented option. I'm going to watch and see how this discussion goes.


-Jonathan

Bill_Oyler
Hot Shot
(Accepted Solution)

It looks like VMware has posted a KB that explicitly states that a "single" RDM pointer/mapping file must be used across all nodes of the cluster.  So I guess this settles my question!

Multiple RDM pointer files after storage migration (2131011)

http://kb.vmware.com/kb/2131011
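
For anyone who has an existing cluster built with two separate pointer files (or who just wants to verify a configuration), you can confirm which raw device a given pointer file maps to from the ESXi shell; for example (the path is a placeholder):

    # Query the RDM pointer file; the output includes the vml ID of the
    # backing device, which you can match against /vmfs/devices/disks
    # to confirm both pointer files reference the same LUN.
    vmkfstools -q /vmfs/volumes/Datastore2/VM2/quorum_rdm.vmdk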

Bill

Bill Oyler, Systems Engineer
JLackman
Enthusiast

Good to hear they addressed that. We always used a single file, but I didn't have any specific reason why that was preferable, so it's good to see that VMware weighed in. Thanks for the update!
