VMware Cloud Community
ctg49
Contributor

Storage migration of pRDM Pointer Files

All,

Attempting to storage migrate some pRDM pointer files from one datastore to another.

We're sunsetting a specific LUN configuration (splitting a single large LUN into two to alleviate some storage controller I/O imbalances we're seeing), and as part of that we're migrating all data off the aforementioned large LUN/datastore into a datastore cluster of two smaller LUNs/datastores (on the same storage device, for what that's worth).  All VMs migrated as expected, but I'm having issues with some physical RDM disk pointer files I'm trying to move as well.  The pRDM disks are configured as part of an across-box file server cluster, which I understand isn't the simplest configuration to move, but as far as I can tell the move is supported (or can at least be made supported by restricting access to the RDM disks).

Using vCenter 6.0 (Web Client), ESXi 6.0 U3.  All relevant datastores are VMFS5.  I've powered down one of the cluster nodes and detached the RDM disks from it, so essentially only one VM actually holds the RDM pointers at this point.  As I understand it, that should make the pointers eligible for Storage vMotion, just dragging the pointer files to the new storage.

Method of movement: storage-only migration, Advanced, manually changing the storage for the RDM disks only to the preferred destination for all disks.  This fails with an 'Insufficient space on destination datastore' warning.  The destination datastores definitely have enough space for the pointer files; they definitely don't have enough space for the entire RDM disk (if it were converted into a VMFS-backed virtual disk, for instance).  To clarify, I'm not trying to move the RDM disk itself, just the pointer files, preferably without taking the VM offline.
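One thing that may be relevant to the 'insufficient space' warning: an RDM pointer .vmdk reports the full size of the mapped LUN as its apparent size, even though it only consumes a trivial number of blocks on the VMFS datastore. If the migration wizard's space check counts the apparent size rather than the allocated size, that would produce exactly this warning. A minimal sketch of the apparent-vs-allocated distinction, using an ordinary sparse file as a stand-in for the pointer (file name and size here are made up for illustration, and this is run on a regular Linux shell, not ESXi):

```shell
# Create a 10 GiB sparse file: apparent size is 10 GiB,
# but almost no blocks are actually allocated.
truncate -s 10G fake_rdm_pointer.vmdk

# Apparent size in bytes -- what a naive space check would count.
stat -c %s fake_rdm_pointer.vmdk

# Actual allocated space in KB -- what the file really occupies.
du -k fake_rdm_pointer.vmdk

rm fake_rdm_pointer.vmdk
```

On a real host, comparing `ls -l` against `du` on the pointer file in the VM's directory would show the same gap.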

I've tried:

Disable DRS everywhere, no effect.

Migrate pRDM disks to both DS cluster and specific DS cluster datastore, no effect.

Migrate entire VM storage, including OS VMFS volume, to both DS cluster and specific DS cluster datastore, no effect.

Migrate an entirely separate VM which has a single pRDM disk, no clustering involved at the OS level; the same error occurs.  I'm assuming whatever fix lets me move the not-quite-clustered pRDM disks will also address this.

Have not tried:

Powering off the VM prior to migration - This might work, but based on the documentation I'm seeing and people's responses online/on this forum, it shouldn't be necessary.

Detaching all RDM disks and reattaching on new storage - I'm confident this would work, but also should not be necessary.

Tearing datastore out of DS cluster and re-attempt migration to unclustered DS.

The first two of the above require downtime, which is why I haven't tried them yet.  The last one I only thought of while typing this out.

Would love any input.

Relevant information:

VMware Knowledge Base


Accepted Solutions
ctg49
Contributor

So, I ended up getting a maintenance window set up, gutting the pRDM disks, and rebuilding them on the new datastore.  No migration path could be found, and the closest thing to an explanation I could find was the shared SCSI bus (required for MSCS clusters) blocking the move.  Consider this resolved.
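For anyone landing here later, the rebuild described above amounts to removing the RDM pointer from the VM's configuration and recreating it on the new datastore with vmkfstools (`-z` creates a physical-compatibility pointer). A dry-run sketch of the ESXi shell command involved; the device identifier and datastore paths below are hypothetical placeholders, and the script only prints the command rather than executing it:

```shell
# Hypothetical values -- substitute your own device ID and datastore path.
DEVICE="/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"
NEW_DS="/vmfs/volumes/new-datastore/fileserver"

# Dry run: print the vmkfstools invocation instead of running it,
# since this must be executed on an ESXi host against a real device.
# 'vmkfstools -z' creates a physical-compatibility (pRDM) pointer file.
echo "vmkfstools -z $DEVICE $NEW_DS/fileserver_rdm.vmdk"
```

After recreating the pointer, the disk still has to be reattached to each cluster node with the same physical SCSI bus sharing as before.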

Replies
ctg49
Contributor

Update: A situation arose where I was provided the opportunity to test one of my 'Have not tried' scenarios.

An outage required a reboot of the VM, so I shut it down instead, and while it was down I attempted to migrate the pointer files via the same method as before.  I got the same insufficient-disk-space error, so an offline migration of the pRDM pointer files doesn't work either.

It was rather imperative to get the VM back online, so I didn't have time to detach/reattach all the LUNs (aka a 'manual' migration of the pointer files).
