We're currently demoing vSAN 6.6 on an all-flash setup. After the upgrade, we have noticed a huge increase in file system overhead.
With dedup & compression enabled, the file system overhead is about 1.7 TB for a 16 TB vSAN datastore.
With dedup & compression disabled, the file system overhead is still about 200 GB:
Please note that both of these are empty, newly created vSAN datastores containing neither VMs nor any other objects. In previous releases I've seen file system overhead in the range of a couple of megabytes to maybe a few gigabytes, which also matches the screenshots in the official vSAN 6.6 documentation showing 11.34 GB of file system overhead for a 15 TB array.
Is there an explanation for that strange behavior?
A miscalculation in vCenter was my first guess as well, but the df -h command on one of the ESXi hosts returns the same (or very similar) numbers. The cluster in question is under a demo license, pending the processing of our enrollment in the vCloud Air program, so I don't think it's eligible for support just yet.
Not sure how I missed this question previously - I had worked a case with similar display issues and surely would have noted this.
This issue is resolved in 6.5 U1 (vSAN 6.6.1):
"Large File System overhead reported by the vSAN capacity monitor
When deduplication and compression are enabled on a vSAN cluster, the Used Capacity Breakdown (Monitor > vSAN > Capacity) incorrectly displays the percentage of storage capacity used for file system overhead. This number does not reflect the actual capacity being used for file system activities. The display needs to correctly reflect the File System overhead for a vSAN cluster with deduplication and compression enabled.
This issue is resolved in this release."
(Note: df will not give representative usage here either, as the calculation is made before it gets down to this level.)
I have this problem on vSAN 6.7 U3 on a Dell R740XD (config: 2 nodes + a witness, all-flash).
I use the VMware image provided by Dell for the R740XD.
As you can see, 3 thin-provisioned VMs consume 100 GB, while system usage consumes 4210 GB.
Fortunately, deduplication and compression save 1.29 TB (LOL).
I have to go into production in 1 month with real licenses at 25,000 euros.
Anyone have a solution?
Welcome to Communities.
Please note that this is an English-speaking Community and thus to increase the chance of a decent reply you should post in English (or post on the French sub-Community).
I don't necessarily see any problem in your screenshot - vsanDatastore has file system overheads, and some of these are consumed up front (i.e. they won't increase, or won't increase linearly, as data is added).
Please add more data to this datastore and you will see what I mean; it shouldn't take 4 months to do this :smileygrin: .
It is only 80% of a small number - our documentation states the virsto on-disk format uses 1-2%, and the 299 GB of file system usage in your screenshot is what this represents.
"I think, it's also an issue."
Then please feel free to go read the documentation so that you understand that it is not.
'On-disk format version 3.0 and later adds an extra overhead, typically no more than 1-2 percent capacity per device. Deduplication and compression with software checksum enabled require extra overhead of approximately 6.2 percent capacity per device.'
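To put those documented percentages against the numbers reported at the top of this thread, here is a quick back-of-the-envelope check (a sketch only; the capacities come from the earlier posts and the rates from the documentation quote above, ignoring any per-device rounding):

```python
# Sanity check of reported vSAN file system overhead against documented rates.
# Capacities are from this thread; rates from the vSAN documentation quote above.

def expected_overhead_tb(raw_tb, rate):
    """Expected file system overhead in TB for a given overhead rate."""
    return raw_tb * rate

raw_tb = 16.0

# Without dedup & compression: on-disk format v3 adds ~1-2% per device.
low = expected_overhead_tb(raw_tb, 0.01)   # 0.16 TB
high = expected_overhead_tb(raw_tb, 0.02)  # 0.32 TB
print(f"plain: expected {low:.2f}-{high:.2f} TB, reported ~0.20 TB")

# With dedup & compression + software checksum: ~6.2% per device.
dedup = expected_overhead_tb(raw_tb, 0.062)  # ~0.99 TB
print(f"dedup: expected ~{dedup:.2f} TB, reported ~1.70 TB")
```

By this estimate the ~200 GB figure without dedup & compression sits inside the documented 1-2% range, while the 1.7 TB figure is well above the ~6.2% expectation - consistent with this being the display bug fixed in 6.5 U1 rather than real consumption.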
Thanks for that link, we are using 7.0 and having this issue on a fresh installation in test.
As of checking today 24 July 2020 the link suggests "The VMware Engineering team is aware of the issue and is working to have a fix released in the future release of the product."
It does make you wonder how much space usage vCloud Meter would record - Is anyone running that with this bug?
Here is a screenshot of the issue in 7.0: ~500 GB of VM usage (even with FTT=2 that would be at most 1.5 TB), yet usage shows as 13.39 TB.
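For anyone wanting to reproduce that "max 1.5 TB" estimate: with RAID-1 mirroring, vSAN keeps FTT+1 replicas of each object, so the raw capacity consumed is easy to bound (a sketch under that assumption, ignoring dedup/compression, witness components, and file system overhead):

```python
def raw_usage_gb(vm_used_gb, ftt):
    """Raw datastore capacity consumed under RAID-1 mirroring: FTT+1 replicas."""
    return vm_used_gb * (ftt + 1)

# ~500 GB of VM data with FTT=2 -> 3 replicas
print(raw_usage_gb(500, 2))  # 1500 GB, i.e. ~1.5 TB - nowhere near 13.39 TB
```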