srodenburg
Expert

Migration to new vCenter causes unnecessary re-syncs of VMs

Hello,

Situation: a 2-node ROBO vSAN 6.5 U1 cluster loses its vCenter completely, with no way to get it back. I build a brand-new vCenter 6.7.0d, re-create the vSAN cluster and add the now-orphaned nodes to it. So far so good: no downtime, no funky VMs. All good.

I then upgrade both nodes to 6.7 and upgrade the vSAN on-disk format to v6, so everything is full-on 6.7.

According to the GUI, the VMs end up without any storage policy. But they are running fine, and looking at "Cluster -> vSAN -> Virtual Objects" reveals that everything is still in place. All VMs used to have a policy with a stripe-width of 3 (because the single disk-group in each node has 3 rotational disks), and the components are still spread over all 3 disks of a disk-group. So that's good. Better said: nothing changed.
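
For anyone who wants to check the component placement outside the GUI: below is a rough pyVmomi sketch (untested against this exact setup; the vCenter address, credentials and the VM name "VM-A" are placeholders) that asks a host's vsanInternalSystem for the component tree of each VMDK. As far as I know, RVC's vsan.vm_object_info uses the same call under the hood.

    # Rough sketch -- vCenter address, credentials and "VM-A" are placeholders.
    import json
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "VM-A")

    # Any vSAN node in the cluster can answer the query; use the VM's current host.
    vsan_sys = vm.runtime.host.configManager.vsanInternalSystem

    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            obj_uuid = dev.backing.backingObjectId   # vSAN object UUID of this VMDK
            raw = vsan_sys.QueryVsanObjects(uuids=[obj_uuid])
            print(dev.deviceInfo.label)
            print(json.dumps(json.loads(raw), indent=2))   # RAID_1 / RAID_0 / component tree

    Disconnect(si)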

I say "end up with no policy" but that is not entirely true. Even though one cannot do a "check policy compliance" because there is no policy, the default vSAN policy for that cluster was actually applied. You just don't easily see that in the GUI's.

So, a new policy is defined in this otherwise still-virgin vCenter, with the same properties as the old policy had in the old vCenter (stripe-width = 3).
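
For reference, the same policy could also be created through the SPBM API instead of the GUI. A rough sketch of what that amounts to, reusing the profile_manager handle from the previous sketch (the policy name and the FTT=1 rule are my assumptions, not something from the old policy export):

    # Continues the previous sketch: "profile_manager" is the pbm profileManager
    # from the SPBM connection above. The policy name "SW3-FTT1" is a placeholder.
    def vsan_rule(rule_id, value):
        # One VSAN capability (e.g. stripeWidth=3) wrapped the way SPBM expects it.
        return pbm.capability.CapabilityInstance(
            id=pbm.capability.CapabilityMetadata.UniqueId(namespace="VSAN", id=rule_id),
            constraint=[pbm.capability.ConstraintInstance(
                propertyInstance=[pbm.capability.PropertyInstance(id=rule_id, value=value)])])

    rules = [vsan_rule("stripeWidth", 3),              # same stripe-width as the old policy
             vsan_rule("hostFailuresToTolerate", 1)]   # assumption: FTT=1, as usual for 2-node

    spec = pbm.profile.CapabilityBasedProfileCreateSpec(
        name="SW3-FTT1",
        description="Re-created policy: stripe-width 3, FTT 1",
        resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
        constraints=pbm.profile.SubProfileCapabilityConstraints(
            subProfiles=[pbm.profile.SubProfileCapabilityConstraints.SubProfile(
                name="VSAN", capability=rules)]))

    new_profile_id = profile_manager.PbmCreate(createSpec=spec)
    print(new_profile_id.uniqueId)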

This new policy is then applied to a VM (which, remember, had no policy) and, huh???, this triggers a total re-sync. Afterwards, the VM ends up with its components on the same disks as before. So nothing changed in the end.
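
For what it's worth, at the API level "apply policy to the VM" boils down to a reconfigure with a profile spec on the VM home object and on each disk, and that reconfigure is the point where vSAN decides whether to rebuild the objects. A minimal pyVmomi sketch (placeholder names again; the profileId would be the ID of the new policy, e.g. new_profile_id.uniqueId from the sketch above):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "VM-A")    # placeholder VM name

    policy_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"     # placeholder: the new policy's profileId
    prof = [vim.vm.DefinedProfileSpec(profileId=policy_id)]

    spec = vim.vm.ConfigSpec()
    spec.vmProfile = prof                       # policy for the VM home namespace object
    changes = []
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            edit = vim.vm.device.VirtualDeviceSpec()
            edit.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            edit.device = dev
            edit.profile = prof                 # policy for each VMDK object
            changes.append(edit)
    spec.deviceChange = changes

    task = vm.ReconfigVM_Task(spec=spec)        # this reconfigure is what kicked off the re-sync
    Disconnect(si)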

Example: VM A

Situation before re-sync:

   Harddisk 1

      RAID-1

           RAID-0

                Component on Capacity Disk #1 on ESXi Server/Node 1

                Component on Capacity Disk #2 on ESXi Server/Node 1

                Component on Capacity Disk #3 on ESXi Server/Node 1

           RAID-0

                Component on Capacity Disk #1 on ESXi Server/Node 2

                Component on Capacity Disk #2 on ESXi Server/Node 2

                Component on Capacity Disk #3 on ESXi Server/Node 2

Hours pass by (big VM...)

Situation after re-sync:

   Harddisk 1

      RAID-1

           RAID-0

                Component on Capacity Disk #1 on ESXi Server/Node 1

                Component on Capacity Disk #2 on ESXi Server/Node 1

                Component on Capacity Disk #3 on ESXi Server/Node 1

           RAID-0

                Component on Capacity Disk #1 on ESXi Server/Node 2

                Component on Capacity Disk #2 on ESXi Server/Node 2

                Component on Capacity Disk #3 on ESXi Server/Node 2

So everything is still there / back where it was. What was that re-sync all about? And why re-sync in the first place if the end result is going to be the same anyway (3 stripes etc.)?

I'm stumped...
