Hello andvm,
This can be a very problematic situation, and I would advise opening a Support Request with VMware Support immediately if you have not done so already.
"This is the likely cause of a VM which is complaining that its vmdk has no free space (yellow bar with click retry after adding more space on the datastore)"
A VM cannot write to disk if even one of its data components resides on the full capacity disk - stop clicking Retry until you have freed up adequate space and have the situation under control.
"There is a vSAN re-sync going on which is almost complete but the specific physical disk usage has not changed (still fully used)."
What specifically is the reason for the resync? E.g., is it attempting a reactive rebalance to move data off the full disk, or is it still trying to move data from placing the node in Maintenance Mode (MM) with Full Data Migration (FDM)? (If the latter, the host won't have entered MM yet.)
In later versions, the resync intent is indicated in the Health checks.
"Should this fail to free up disk space from the specific physical disk, is there any manual intervention that can be done?"
Yes:
- Delete any test or unneeded VMs in the inventory, and identify anything in inventory that wasn't removed (e.g. unregistered VMs that are no longer needed).
- Consolidate snapshots - start with relatively small vmdk snapshots/disks, and don't attempt more than 3-4 at once or you may just slow it down.
- If there is anything, intentional or otherwise, with Thick/OSR=100/proportionalCapacity=100, consider thinning it - but don't do this unless you know what you are doing, or you could incur more resync (as a result of a deep-reconfigure of the Object(s)).
- Changing some unimportant data to FTT=0 could be a last option, but again only if you understand SPBM - e.g. if you try to change an FTT=1,FTM=RAID5 Object to FTT=0, it will temporarily create a new FTT=0,FTM=RAID1 Object and only remove the RAID5 Object once complete.
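To illustrate why that last option can temporarily consume *more* space, here is a back-of-the-envelope sketch (my own illustrative arithmetic, not a vSAN API - the multipliers are the well-known ~1.33x footprint of RAID5 FTT=1 and 1x for a single FTT=0 copy):

```python
# Rough footprint multipliers for a vSAN object (capacity consumed / usable size).
# RAID5 (FTT=1) stripes 3 data + 1 parity segment -> ~1.33x.
# FTT=0 keeps a single copy -> 1x.
RAID5_FTT1 = 4 / 3
FTT0 = 1.0

def peak_footprint_gb(vmdk_gb):
    """Space consumed while an FTT=1,RAID5 object is reconfigured to FTT=0.

    The new FTT=0 object is built before the RAID5 object is removed,
    so both layouts exist on the datastore at the same time.
    """
    return vmdk_gb * (RAID5_FTT1 + FTT0)

# A 100 GB vmdk transiently consumes ~233 GB during the policy change,
# even though the end state is only 100 GB.
print(round(peak_footprint_gb(100)))
```

So on a cluster that is already nearly out of space, a policy change can make things worse before it makes them better.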
"There is around 12% free disk space on the vSAN datastore"
It's not about how much you have, it's where you have it - if a disk has 0% free, then vSAN can't update the components on that disk, and everything else ends up waiting on it.
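A toy model of the point above (illustrative numbers only - in practice you would read per-disk usage from the vSAN Capacity view, not compute it like this):

```python
# Datastore-wide free % can look healthy while one capacity-disk is full.
disks_used_pct = [89, 92, 85, 100, 74]  # per capacity-disk utilisation

# Aggregate view: ~88% used, i.e. ~12% free overall - looks survivable.
datastore_used = sum(disks_used_pct) / len(disks_used_pct)
print(f"datastore used: {datastore_used:.0f}%")

# But any object with a component on the 100%-full disk cannot take writes:
blocked = [i for i, used in enumerate(disks_used_pct) if used >= 100]
print("disks blocking writes:", blocked)
```

The aggregate 12% free tells you nothing about whether any single disk is at 100%.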
"one host forming the same vSAN had been placed in maintenance and its data had been fully migrated."
How many nodes (and how many Disk-Groups each) are in the cluster, and what FTM (Fault Tolerance Method) and FTT (Failures To Tolerate) are in use?
Why did you put a host in MM with FDM when you had inadequate free space? (You should always maintain adequate free-capacity overhead.)
If the host is not in MM due to some failure or critical maintenance on it, then consider taking it out of MM (and in that case you really should have used MM with Ensure Accessibility instead).
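For future reference, a back-of-the-envelope check of whether a cluster can absorb a Full Data Migration (my own rough formula - it assumes data is spread evenly and ignores rebalance thresholds, so treat the answer as optimistic, not an official sizing tool):

```python
def can_absorb_full_migration(num_hosts, used_pct):
    """Rough check: can the remaining hosts absorb one evacuated host's data?

    used_pct is datastore-wide utilisation. Assumes even data distribution.
    """
    per_host_data = used_pct / num_hosts                      # % of cluster capacity on one host
    remaining_free = (num_hosts - 1) * (100 - used_pct) / num_hosts
    return remaining_free >= per_host_data

# 4 hosts at 88% used (12% free): evacuating one host needs 22% of cluster
# capacity, but only 9% is free on the remaining hosts -> not feasible.
print(can_absorb_full_migration(4, 88))
print(can_absorb_full_migration(4, 60))
```

If this comes back False, entering MM with FDM will either fail or grind on while filling the remaining disks - which is roughly the situation described here.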
Bob