VMware Cloud Community
Enter123
Enthusiast

vSAN stretched cluster - shutdown secondary site

Hi all,

We have a stretched 16-node cluster. For maintenance reasons we need to shut down all nodes in the secondary site. Is there a known procedure for this task?

Thanks

16 Replies
TheBobkin
Champion

Hello Enter123,

Do you have any VMs/data with local-only (to one site) protection (e.g. PFTT=0,SFTT=1) or is everything just PFTT=1,SFTT=1 (or PFTT=1,SFTT=0)?

If there is no local-only (to one site) data, then nothing will need to be moved to the other site.

Either way the process should be to:

- Check that your data is Healthy (Cluster > Monitor > vSAN > Health > Data).

- Check that the site that will remain up is set as the Preferred-site.

- Change DRS to manual.

- vMotion all VMs to remaining site (or power them off).

- Place all nodes on the site being maintained into Maintenance Mode with 'Ensure Accessibility' (see the sketch after this list).

- Perform maintenance.
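If you would rather script the Maintenance Mode step, here is a minimal pyVmomi sketch - to be clear, the vCenter address, credentials and the 'esx-siteb-' host-name prefix are placeholders of mine, not anything from your setup. 'Ensure Accessibility' in the UI corresponds to the vSAN decommission-mode objectAction 'ensureObjectAccessibility':

import ssl, time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Collect the hosts of the site going into maintenance (placeholder prefix).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
secondary_hosts = [h for h in view.view if h.name.startswith("esx-siteb-")]
view.Destroy()

# 'Ensure Accessibility' == decommission-mode objectAction 'ensureObjectAccessibility'.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="ensureObjectAccessibility"))

for host in secondary_hosts:
    task = host.EnterMaintenanceMode_Task(timeout=0,
                                          evacuatePoweredOffVms=False,
                                          maintenanceSpec=spec)
    # One host at a time; expect the last host of the site to take longest.
    while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
        time.sleep(5)

Disconnect(si)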

Some more points relating to this here:

https://communities.vmware.com/thread/586591

Bob

Enter123
Enthusiast

Hi,

Both PFTT and SFTT are 1. I need to do it next week; hope all goes well :)

thanks

srodenburg
Expert

It's very simple. vMotion all VMs from the site that needs to go down (say, Site A) to the other site (Site B). Then put all the nodes in Site A into maint.mode with the option "No data migration". You will notice that the last one you put into maint.mode takes a bit longer than the others.

After all are in maint.mode, shut them down normally.

One very important thing: make sure that the Witness Appliance always stays connected, ensuring that 2/3rds of the components remain available. It's nothing special; other cluster technologies have the same requirement, but don't take it for granted.

I've done this several times during "datacenter fail-over exercises" where a datacenter is taken down for a whole week. After the week, fire up everything in site A, take the hosts out of maint.mode (which allows vSAN IO to happen again) and let it sync until it's ready. Then vMotion the VMs back and you're done.
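For what it's worth, a rough pyVmomi sketch of that flow ("No data migration", then a clean shutdown) - host names and credentials are placeholders, adapt as needed; 'No data migration' corresponds to objectAction 'noAction':

import ssl, time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
site_a_hosts = [h for h in view.view if h.name.startswith("esx-sitea-")]   # placeholder prefix
view.Destroy()

# 'No data migration' in the UI == decommission-mode objectAction 'noAction'.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="noAction"))

def wait(task):
    while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
        time.sleep(5)

for host in site_a_hosts:
    wait(host.EnterMaintenanceMode_Task(timeout=0,
                                        evacuatePoweredOffVms=False,
                                        maintenanceSpec=spec))

# Once every host is in maint.mode, shut them down normally.
for host in site_a_hosts:
    wait(host.ShutdownHost_Task(force=False))

Disconnect(si)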

Enter123
Enthusiast

Thank you all for response.

One more question is bugging me: how much free space should ideally be available on the vSAN datastore for a successful failover?

Is it enough to have just over 50% free, like 55%? Or does something else need to be taken into account?

TheBobkin
Champion

Hello Enter123,

If you don't intend to change the Storage Policies of the Objects during the maintenance window of the second site and leave them as FTT=1 on one site (as they are PFTT=1), then you shouldn't require any extra space beyond some slack for temporary data (e.g. snapshots) and temporary VMs - the VMware guideline of staying at or below 70-75% utilization should be fine (unless you want to significantly change Storage Policies for a lot of data in the interim, e.g. changing FTM and/or SFTT).

You should also account for space equivalent to the (non-reserved) .vswp files if these are Thick and/or the cluster/host default is plain FTT=1 (with Force Provisioning), as these will then likely rebuild their second mirror on the remaining site.

Bob

Enter123
Enthusiast

Let's try with numbers.

- 16-node stretched cluster - vSAN datastore size: 139.73 TB

- if I power off the entire site (8 nodes) - vSAN datastore size: 69.86 TB

So I should keep datastore utilization at up to 75%. That means I should have at most 52.39 TB of used vSAN datastore space after shutting down the secondary site.

To play it safe in a stretched cluster (my case: 16 nodes, datastore size 139 TB), I should use at most about 52 TB of the datastore, in case a disaster happens.

Am I correct?
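Sanity-checking my own numbers with a trivial Python snippet (the 75% is the upper end of the guideline mentioned above):

total_tb = 139.73             # full 16-node stretched-cluster datastore
one_site_tb = total_tb / 2    # what is left after a site goes down: ~69.86 TB
guideline = 0.75              # upper end of the 70-75% utilization guideline

max_used_tb = one_site_tb * guideline
print(f"keep used space under ~{max_used_tb:.2f} TB")   # -> ~52.39 TB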

vpradeep01
VMware Employee

Hi, just a suggestion here srodenburg:

With PFTT and SFTT both at 1, we could place the secondary site/FD into MM with "Ensure Accessibility". I have done this in the past and it works perfectly with EA. "No data migration" may not be required, since we care about the Witness and the primary site here.

Thanks

srodenburg
Expert

We are testing "what happens when a datacenter is lost", not "it's all nice and good and we do a controlled shutdown". We need to see vSAN's recovery abilities in unplanned "shit really hits the fan" situations. And in such tests, we don't play nice and we don't have time.

The TS scenario is different, as he talks about a controlled but quick shutdown. At least, that's how I interpret it. Doing a site shutdown with "Ensure Accessibility" can take a long time to finish, especially for the last node in the site. And vSAN works fine when doing a "no data migration" shutdown. vSAN versions before 6.6 could end up a smouldering pile of digital doodoo, but nowadays it auto-recovers just fine. Its self-healing abilities have come a long way since the early days.

Enter123
Enthusiast

I did it today and it all went well.

1. first ran a "proactive rebalance" of the disks

2. confirmed that all VMs were compliant with our storage policy

3. disabled HA and DRS

4. manually vMotioned all VMs to the primary site (a scripted sketch of this step is below)

5. put each host into Maintenance Mode with Ensure Accessibility
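In case it helps anyone else, here is a rough pyVmomi sketch of step 4 - the host-name prefixes and the round-robin placement are placeholders for our naming, not anything official:

import itertools, ssl, time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
hosts = list(view.view)
view.Destroy()

secondary = [h for h in hosts if h.name.startswith("esx-siteb-")]   # placeholder prefix
primary = itertools.cycle([h for h in hosts if h.name.startswith("esx-sitea-")])

for host in secondary:
    for vm in list(host.vm):
        if vm.runtime.powerState != vim.VirtualMachine.PowerState.poweredOn:
            continue
        # Compute-only vMotion: the disks stay on the shared vSAN datastore.
        task = vm.MigrateVM_Task(host=next(primary),
                                 priority=vim.VirtualMachine.MovePriority.defaultPriority)
        while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
            time.sleep(2)

Disconnect(si)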

Thank you all for help!

TheBobkin
Champion

Hello Enter123,

Glad to hear it went as planned (as it should!).

But just a few points:

"1. did first "proactive rebalance disks" "

This is unnecessary, but it shouldn't affect entering MM, as it doesn't change compliance and will also only balance data per site (assuming no un-pinned FTT=0 data).

"3. disabled HA and DRS"

While disabling HA is fine, I would typically never advise anyone to disable DRS, as this removes the Resource Pools (unless saved) - setting it to Manual will do the job (and perhaps this is what you meant); a sketch of that is below.
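For anyone scripting this, a minimal pyVmomi sketch of setting DRS to Manual without touching the Resource Pools (the cluster name is a placeholder):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "StretchedCluster")   # placeholder name
view.Destroy()

# Keep DRS enabled, just switch the default automation level to 'manual'.
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True, defaultVmBehavior="manual"))
cluster.ReconfigureComputeResource_Task(spec, modify=True)   # modify=True merges the change

Disconnect(si)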

Bob

vpradeep01
VMware Employee

That's great!
Yeah, EA should work.

Cipo800
Enthusiast

Hi all, I just implemented a 6-host VxRail stretched cluster between two sites.

Yesterday evening I tried the failover procedure; everything went as planned, very fast and very simple compared to our previous Nutanix setup.

Today I'm running on half the hosts, and now I need to increase a VM disk, but vSphere gives me an "insufficient space" error.

I just checked the vSAN capacity and it is 32% used, the same as before the DR test.

Is this correct? Do I need to fail back to all hosts to be able to increase a VM's disk size?

Thanks in advance

TheBobkin
Champion

@Cipo800, It likely isn't complaining about space in the traditional sense of datastore free size; more likely it is refusing to extend a disk while the backing vmdk Object is reduced-availability-with-no-rebuild (e.g. you have 2 of 3 Fault Domains available).

While there are workarounds for this, extending disks isn't exactly a normal task for admins while the cluster is only half a cluster (normally they would be focused on getting the full cluster back).

Cipo800
Enthusiast

Thank you @TheBobkin for the fast reply, you saved me "the first Dell ticket" about VxRail!

I understand the limit; however, I don't agree with this choice by the VMware/vSAN team, as extending a VM disk isn't a strange task in my opinion.

Can you link me to the workaround?

Have a good day!
TheBobkin
Champion

@Cipo800, Don't be shy of the Dell EMC VxRail team - they have some quality engineers whom I work with on a daily basis. Fair enough, many of them don't have as deep an insight into some areas of vSAN as my team (VMware vSAN GS), but that is expected, since they are not working with just the vSAN aspect all day, every day, and that is why we have things in place for them to consult our team as backline when things get 'L3' (and the same goes for HPE, Dell, Cisco, IBM, insert-vendor-that-also-sells-S&S-here).

This is not a design decision relating to just the state your cluster is in right now - it actually won't allow changing the size of anything that is in a reduced-availability or otherwise 'unhealthy' state. I am not aware of when this originated (I'm relatively sure it wasn't the case a few years ago), but I can understand the logic: making changes to something already in an impaired state probably isn't a great idea in the vast majority of cases (what if the other node comes back and doesn't have space to accommodate the change?), and the focus should be on addressing the issue and getting the data redundant/compliant again. Increasing the size of Objects while their Storage Policy is being changed will also be denied.

The workaround for this is the same as if you needed to create snapshots from vmdks with a dual-site mirroring policy (snapshots inherit the same Storage Policy (SP) as the parent vmdk): it is complaining because you are asking it to do an operation that requires 3 Fault Domains (e.g. siteA+siteB+Witness) for component placement, but these are not available. So you tell it to try to create/change the Object with that policy but manage with less if not possible, by applying a policy with Force Provisioning (FP) on it (e.g. try to make the Object/apply the change as FTT=1, but if that is not possible, make it as FTT=0) - with the caveat that anything created in this manner won't have the same redundancy as the parent Object.

I assumed this was the case but tested it just now to be sure - it looks like you actually have to apply the FP=1 policy to the Object to make the change (which makes sense, as you are changing the Object itself, not making a new Object from it like a snapshot), as opposed to just changing the policy rule. So if doing this, I would advise cloning the SP in use, adding the FP=1 rule to it, and applying it only to the vmdks you need to expand (as opposed to applying an FP=1 SP to all the Objects using the original SP) - a rough sketch of that last step is below. Do ensure to revert this change and re-apply the original SP once things are back to normal.
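Something like the following, assuming the FP=1 clone of the SP already exists - this is only an illustrative pyVmomi sketch; the VM name, disk label, size increment and profileId are all placeholders you would substitute:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-vm")   # placeholder VM name
view.Destroy()

# Find the vmdk to expand by its label (placeholder label).
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == "Hard disk 2")

# Grow the disk and attach the cloned FP=1 policy to just this vmdk.
disk.capacityInKB += 100 * 1024 * 1024   # placeholder: +100 GB
change = vim.vm.device.VirtualDeviceSpec(
    device=disk, operation="edit",
    profile=[vim.vm.DefinedProfileSpec(profileId="<cloned-FP-policy-id>")])
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Disconnect(si)

And again: once the second site is back and resynced, revert the vmdk to the original SP.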

 

By the by, just an FYI - I would advise posting questions like this as a new topic; if it isn't 100% what the original thread was discussing, it makes things convoluted.

Cipo800
Enthusiast

@TheBobkin, thank you for the detailed explanation. Following your directions I cloned the current SP (Site disaster tolerance = Dual site mirroring (stretched cluster), FTT=1) with FP and applied it only to the vmdk in question, but I received another error: 11 (resource temporarily unavailable).

I edited the policy again and switched the Site disaster tolerance to "None - keep data on Preferred (stretched cluster)", and finally I could increase the disk size!

Thank you again for the trick and for the teaching; now I understand the vSAN storage concepts better.

P.S. I opened a VMware ticket yesterday evening (Italian time zone), because on the Dell support site I could only choose hardware or licensing problems.

On the VMware portal I couldn't select vSAN, only vSphere was available, so I detailed the request/problem there. This morning an engineer without any vSAN skills called me; after 5 minutes on Zoom he decided to pass the ticket to another engineer. After 3 hours (one hour ago) another person wrote to me asking for the ESXi logs, but I replied that the problem had been solved via the VMware forum.

Have a great weekend :)
