Try refreshing the state of these Objects via RVC:
# vsan.check_state -r <PathToCluster>
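For context, a couple of related RVC commands can show what is actually stuck before you force a state refresh. This is a sketch assuming an RVC session connected to vCenter; `<PathToCluster>` is a placeholder for your cluster path:

```shell
# Show current resync activity per object and bytes left to sync
vsan.resync_dashboard <PathToCluster>

# Summarize object health/compliance across the cluster
vsan.obj_status_report <PathToCluster>
```

If the dashboard shows the same objects with no progress over several refreshes, they are good candidates for the state refresh and owner-abdication steps below.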
If this does not start the Objects resyncing properly, then try abdicating ownership of them from the host that is the current DOM owner:
# vsish -e set /vmkModules/vsan/dom/ownerAbdicate <objectUUID>
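To run the abdication on the right host, you first need to identify which host currently owns the object. A sketch of one way to do this from the ESXi shell using CMMDS lookups; the UUIDs are placeholders:

```shell
# On any ESXi host in the cluster: look up the object's DOM entry.
# The "owner" field in the output is the UUID of the current DOM owner host.
cmmds-tool find -t DOM_OBJECT -u <objectUUID>

# Map that owner UUID back to a hostname
cmmds-tool find -t HOSTNAME -u <ownerUUID>

# Then, on that owner host itself, trigger the abdication
vsish -e set /vmkModules/vsan/dom/ownerAbdicate <objectUUID>
```

Alternatively, `vsan.object_info <PathToCluster> <objectUUID>` from RVC also reports the owner in its output.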
Not really advisable, but there are three scenarios in which rebooting a host would fix the stuck resync:
1. The rebooted host was the target of the stuck resync; when it became unavailable to the cluster, that job would be marked as failed and the resync requeued (either on another node, if there were enough available Fault Domains, or on the same node again once it became available).
2. The rebooted host was the source of the resync data; the job was marked as failed when the host became unavailable and restarted once it came back.
3. The rebooted host was the DOM owner of the Object, and ownership switched to another host when it became unavailable (owner abdication achieves the same effect).
I had the same issue; fortunately for me, the disk (VMDK) involved in the stuck vSAN resync didn't have any data on it. So, after running out of options, I deleted the disk from the virtual machine completely and re-added a new disk. This resolved the issue for me. If you don't have much data on the disk where the resync is stuck, you could try adding a temporary disk to the virtual machine, migrating the data over to the temporary disk, and then deleting the stuck disk from the VM completely. If not, I would recommend opening a support case with VMware.
That's a decent alternative workaround too - I haven't had to consider using it since the *good* old days of vSAN 5.5, as owner abdication works reliably in this scenario unless there is something seriously wrong with the Object, the cluster, or the underlying storage device the components are located on (in which case in-guest-OS cloning may be a good alternative).
Another possibility (though very unlikely the case here) is that data components are unable to complete the resync because the target fault domain does not have enough free space. I have seen this in situations with poorly sized clusters/disks, e.g. VMDKs near the size of the node capacity placed on the vsanDatastore. Later versions of vSAN handle this scenario better, as they can stripe components more flexibly to fit the smaller pockets of free space available on individual capacity drives.
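To rule the free-space scenario in or out, RVC can show per-capacity-drive usage across the cluster. A sketch, again assuming an RVC session with `<PathToCluster>` as a placeholder:

```shell
# Per-disk capacity, usage %, and component counts for every
# capacity drive in the cluster - look for drives near full
vsan.disks_stats <PathToCluster>

# Simulate whether the cluster has enough free space to
# re-protect data after a host failure
vsan.whatif_host_failures <PathToCluster>
```

If individual capacity drives are nearly full while the datastore as a whole shows free space, the resync may simply have nowhere contiguous to place the rebuilt components.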