I've got a case open with VMware on this, but I thought I'd post to see if anyone else has seen it. Currently running vCenter 6.7U1 with hosts to match. All our storage is NFS-based from a few NetApps. When you go to the Datastores view in vCenter and select any datastore, my understanding of the "Provisioned Space" metric is that it's the sum of the provisioned VMDKs. I know that Free Space comes from the array, where things like dedupe and compression come into play... but I thought Provisioned Space was calculated solely by vCenter from the provisioned VMDKs on that datastore. We're seeing values that are way off in both directions on some datastores, but accurate on others.
Example 1: Evacuated datastore with no VMs on it. vCenter shows provisioned space as 5TB.
Example 2: Datastore with 20 VMs of 500GB each on it. PowerCLI ((Get-Datastore | Get-VM | Measure-Object -Property ProvisionedSpaceGB -Sum).Sum) returns exactly what you'd expect: 10TB. In vCenter, though, Provisioned Space shows 6TB.
Since we're thin provisioned, we use Provisioned Space as our guideline for where to place VMs so as not to oversubscribe. But if the calculation behind Provisioned Space can't be trusted, we may be oversubscribing unintentionally. Anyone seen anything like this before? Is it possible the Provisioned Space property has changed and is no longer just a sum of the VMDKs?
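In case it helps anyone compare: the datastore summary object in the vSphere API exposes capacity, freeSpace, and uncommitted fields, and my understanding (an assumption, not confirmed by VMware) is that the UI's "Provisioned Space" is roughly capacity - freeSpace + uncommitted, which would let array-side dedupe/compression skew it via freeSpace. This sketch dumps both numbers side by side per datastore, assuming PowerCLI is loaded and you're already connected with Connect-VIServer:

```powershell
# Compare vCenter's summary-derived "provisioned" figure against a plain
# sum of per-VM provisioning. Field names come from the vSphere API's
# DatastoreSummary object; the formula itself is my assumption.
foreach ($ds in Get-Datastore) {
    $s = $ds.ExtensionData.Summary
    # Assumed UI formula: capacity - freeSpace + uncommitted (bytes)
    $summaryProvGB = ($s.Capacity - $s.FreeSpace + $s.Uncommitted) / 1GB
    $vmSumGB = (Get-VM -Datastore $ds |
        Measure-Object -Property ProvisionedSpaceGB -Sum).Sum
    [pscustomobject]@{
        Datastore     = $ds.Name
        SummaryProvGB = [math]::Round($summaryProvGB, 1)
        VmdkSumGB     = [math]::Round($vmSumGB, 1)
    }
}
```

If the two columns diverge only on the NetApp-efficiency-heavy datastores, that would point at freeSpace (array-reported) rather than a vCenter summing bug.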