VMware Cloud Community
mythumbsclick
Enthusiast

Move Disk From 1 Disk Group To Another - Claim Disk or Add A Disk To The Selected Disk Group?

Hi All

I am running a 3-node hybrid vSAN cluster. Each host has 2 disk groups (1 SSD and 5 HDDs in each disk group). vSAN is set to manual mode for adding disks to storage. Somewhere along the line a mistake was made on 1 of the hosts: disk group 1 has 6 capacity disks and disk group 2 has 4.

I need to move 1 disk from group 1 to group 2.

So far I have done the following:

1. Removed disk from disk group 1 - Full data evacuation (What would the difference have been if I chose ensure accessibility?).

2. Allowed vSAN to resync objects

I now want to move the disk over to disk group 2.

1. Do I need to clean the disk first? If so should this be done at RAID level or vSAN (Storage Adaptors - Erase Partitions on the Selected disks)

2. Do I need to "claim" the disk or "add a disk to the selected disk group"? (What is the difference?)

3. Once the disk is back in the group, will vSAN start moving objects onto it or is it recommended to do a manual disk re-balance?

4. Is there anything else I should be aware of?

Many thanks!

5 Replies
TheBobkin
Champion

Hello mythumbsclick​,

"1. Removed disk from disk group 1 - Full data evacuation (What would the difference have been if I chose ensure accessibility?)."

With the Ensure Accessibility (EA) option, vSAN waits for the CLOM repair delay timer (60 minutes by default) to expire before recreating the second replica of the data that resided on this disk - with Full Data Migration (FDM), it recreates the replicas before removing the data from the device you are evacuating.
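If you prefer to drive this from the ESXi shell rather than the Web Client, a rough sketch is below - the device name is a placeholder and option names can differ slightly between ESXi builds, so check the built-in help (esxcli vsan storage remove --help) first:

# Show the current CLOM repair delay timer (minutes)
esxcfg-advcfg -g /VSAN/ClomRepairDelay

# Remove a capacity device from its Disk-Group with full data evacuation;
# use ensureObjectAccessibility instead for the Ensure Accessibility behaviour
esxcli vsan storage remove -d naa.xxxxxxxxxxxxxxxx -m evacuateAllData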

"1. Do I need to clean the disk first? If so should this be done at RAID level or vSAN (Storage Adaptors - Erase Partitions on the Selected disks)"

If the device was successfully removed then it should have no partitions on it - this can be verified in a number of ways, including looking at it directly in /dev/disks/, via Web Client Host > Storage Devices > Select device > Partitions, or by checking whether it is available to add to an existing Disk-Group: Cluster > Manage > vSAN > Disk Management > Select Disk-Group > '+' disk icon.
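From the host's shell the same check looks roughly like this (the naa.* name is a placeholder for your device):

# List the raw devices the host sees
ls /dev/disks/

# Show the partition table of the removed device - a cleanly removed device
# should show no vSAN partitions
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Check whether the device is reported as eligible for use by vSAN
vdq -q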

If these devices are/were individual RAID0 VDs per device (as opposed to passthrough), you shouldn't need to reconfigure anything at the RAID level.

"2. Do I need to "claim" the disk or "add a disk to the selected disk group"? (What is the difference?)"

Not sure if you are referring to autoclaim here - either way, just add it to the Disk-Group with fewer devices.
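For reference, the command-line equivalent of adding it to a Disk-Group is roughly the following - device names are placeholders and you should confirm the exact options with esxcli vsan storage add --help on your build:

# Add the freed capacity disk to the smaller Disk-Group by pointing it at
# that group's cache-tier SSD
esxcli vsan storage add -s naa.<cache-ssd-of-DG2> -d naa.<capacity-disk>

# Confirm the Disk-Group membership afterwards
esxcli vsan storage list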

"3. Once the disk is back in the group, will vSAN start moving objects onto it or is it recommended to do a manual disk re-balance?"

No, by design it won't start moving data unless any device is over 80% full (the default threshold) and the data can be moved there while staying compliant with the Storage Policy applied to it - use a manual rebalance via the GUI, or RVC (where there are extra options), to start moving some data onto it immediately.
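For example, from RVC (connected to vCenter; the cluster path below is a placeholder) a proactive rebalance can be started and monitored like this:

vsan.proactive_rebalance --start /<vcenter>/<datacenter>/computers/<cluster>
vsan.proactive_rebalance_info /<vcenter>/<datacenter>/computers/<cluster>

# Per-disk usage, to watch data land on the newly added device
vsan.disks_stats /<vcenter>/<datacenter>/computers/<cluster>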

Bob

mythumbsclick
Enthusiast

Hi Bob

Thank you for a rapid, detailed response. Massively appreciated - it meant I could confidently complete my maintenance over the festive period!

TheBobkin
Champion

Hello mythumbsclick​,

More than happy to help.

Though to you and anyone else reading this: if you have VMware (or other) support for vSAN, please don't be shy about logging a Support Request with us - even if it is just for clarification or something you think is relatively trivial. We do not bite, and we prefer giving proactive guidance over fixing something broken as a result of an improper procedure, so you may actually be doing us a favour.

Bob

5kyFx
Contributor

Hello Bobkin,

Your information also helps my case (thanks to you both).
Would I run into a problem if I need to move one capacity-tier disk to another DG and the original DG is almost full?

My client has multiple ROBO clusters and needs to enlarge capacity in every cluster. Most of them are straightforward, as they only get another capacity-tier disk added to the existing DG.

BUT there are also clusters that get a second DG. The existing DGs have 1x NVMe + 3x SSD and will get another NVMe + SSD. The problem is that those are already pretty full. Can this SSD "swap" be done after configuring the 2nd DG, or do I have to rebuild the cluster?

TheBobkin
Champion


Hello @5kyFx, happy to hear my posts from years ago are still helpful!

 

If you have a 2-node cluster with disks that are pretty full, then removing any disks with full data evacuation probably won't work, as there is nowhere applicable with enough space to move the data to.
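A quick way to sanity-check that before attempting an evacuation is from RVC (the cluster path is a placeholder):

# Component counts and per-disk usage across the cluster
vsan.check_limits /<vcenter>/<datacenter>/computers/<cluster>

# Whether the cluster could tolerate losing a host's worth of capacity
vsan.whatif_host_failures /<vcenter>/<datacenter>/computers/<cluster>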

 

"BUT there are also clusters that get a second DG. Existing have 1x NVME 3xSSD and get another NVME+SSD."
Just to confirm (as you didn't say it exactly, but it is implied), these currently have single DG and you are adding a second DG and want them to be consistent e.g. each with 2x SSDs as Capacity-tier in each DG?


If deduplication is not enabled here, then you could just add the second DG with a single capacity-tier SSD, then remove (with full data migration) 1 capacity-tier SSD from the original DG, and then add it to the new DG.
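On each host that swap boils down to something like the following sketch - device names are placeholders, deduplication is assumed to be off, and option names should be verified with esxcli --help on your release:

# 1. Create the second Disk-Group from the new NVMe cache plus the new SSD
esxcli vsan storage add -s naa.<new-nvme-cache> -d naa.<new-capacity-ssd>

# 2. Evacuate and remove one capacity SSD from the original Disk-Group
esxcli vsan storage remove -d naa.<old-capacity-ssd> -m evacuateAllData

# 3. Add the freed SSD to the new Disk-Group
esxcli vsan storage add -s naa.<new-nvme-cache> -d naa.<old-capacity-ssd>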

If deduplication is enabled then this poses 2 issues:

1. You can only remove a whole DG, not a single disk, so you cannot do the above.

2. While vSAN permits adding disks to deduplicated DGs after the initial DG creation, it won't actually deduplicate the data on that new disk, so the add-it-later option above wouldn't be advisable.

If dedupe is enabled, then a possible alternative would be:

1. Make sure everything is definitely stored as FTT=1 and the cluster is healthy, and take current backups (and maybe do this during non-business hours, just in case).

2. Remove the DGs from 1 node with the Ensure Accessibility option (all data is effectively FTT=0 now and stored only on the other node).

3. Recreate that node's DGs in the new configuration and repair all data back to the FTT=1 state.

4. Do the same for the other node.
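Between each of those steps it is worth confirming from RVC that resync has finished and all objects are healthy before touching the second node (the cluster path is a placeholder):

# Outstanding resync traffic - wait for this to reach zero
vsan.resync_dashboard /<vcenter>/<datacenter>/computers/<cluster>

# Object health / compliance summary
vsan.obj_status_report /<vcenter>/<datacenter>/computers/<cluster>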
