
Replying to:
LevaEng
Contributor

Hi

Thank you for your answers.
I will try to expand on and clarify why thick provisioning is better in this case.

As supporting evidence from the wild: I can't verify it right now, but I'm 99% sure KVM/QEMU supports TRIM in the guest and propagates it to the host (I've used it before; see the docs: driver -> discard).
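To make the comparison concrete, this is roughly what the discard pass-through looks like on the KVM/QEMU side. The image path and device IDs below are placeholders, and this is a bare-QEMU sketch rather than my exact configuration:

```shell
# Sketch: enable discard pass-through on a qcow2 disk so guest TRIM
# commands deallocate blocks in the host image (paths are examples).
qemu-system-x86_64 \
  -drive file=/var/lib/libvirt/images/win11.qcow2,if=none,id=disk0,discard=unmap,detect-zeroes=unmap \
  -device virtio-blk-pci,drive=disk0
```

In libvirt terms this corresponds to the `discard='unmap'` attribute on the disk's `<driver>` element.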


So why not a thin-provisioned disk?

My architecture stack is:

  • VMs: Windows 11 with NTFS, 64K cluster size
  • Host: Workstation 17 on Linux
  • Storage network: NFS over a dedicated 10GbE link
  • NAS: ZFS dataset with 64K recordsize and deduplication

With this solution I get the best performance from my system and really good disk savings.
Why: I found that 64K is the sweet spot block size for my usage; it gives a good balance so that, after deduplication and compression, all the VM disks stay nicely hot in the ARC and L2ARC (RAM and NVMe cache respectively).
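For reference, the ZFS side of that tuning can be reproduced with something like the following. Pool and dataset names are placeholders, and the compression algorithm is my assumption (any of ZFS's transparent compressors would do):

```shell
# Sketch of the dataset settings described above (tank/vmstore is a
# placeholder name). Note dedup is RAM-hungry: size the ARC accordingly.
zfs create -o recordsize=64K -o dedup=on -o compression=lz4 tank/vmstore
zfs set sharenfs=on tank/vmstore

# Matching the guest side: format the NTFS volume with a 64K cluster size
# (run inside the Windows VM):
#   format D: /FS:NTFS /A:64K
```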

The problem with thin provisioning is that I have no control over the allocation, granularity and, most importantly, alignment of its internal chunks.
If I put a thin disk on top of ZFS, I'm putting a de-facto sparse data structure on top of a CoW filesystem. This creates really big write amplification and also completely nullifies deduplication, since there is no longer any sector/chunk alignment from Windows/NTFS down to the actual storage file.
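The alignment point can be illustrated outside ZFS entirely. The sketch below (block size and offsets are just illustrative) hashes fixed-size blocks of the same payload stored aligned versus shifted by 4 KiB, the way a thin format's internal allocation can shift guest data relative to record boundaries:

```python
import hashlib
import os

RECORDSIZE = 64 * 1024  # matches the 64K ZFS recordsize above


def block_hashes(data, bs=RECORDSIZE):
    # Hash each fixed-size block, the way block-level dedup
    # identifies duplicate records.
    return {hashlib.sha256(data[i:i + bs]).digest()
            for i in range(0, len(data), bs)}


payload = os.urandom(16 * RECORDSIZE)              # 1 MiB of guest data
aligned = block_hashes(payload)                    # thick disk: blocks line up
shifted = block_hashes(b"\x00" * 4096 + payload)   # thin disk: same data, 4 KiB off

# The identical payload shares zero 64K blocks with its misaligned copy,
# so block-level dedup finds nothing to merge.
print(len(aligned), len(aligned & shifted))  # → 16 0
```

Every byte of the payload is present in both copies, yet no record hashes match once the data is shifted off the 64K grid.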

Why ZFS as backing storage?

  • Deduplication
  • Transparent compression
  • Automatic snapshots with essentially 0% performance penalty

I didn't use iSCSI on top of a ZVOL, mostly to simplify sysadmin operations for others on the team.

The absence of TRIM/Discard/UNMAP means that blocks are allocated but never freed.
Yes, you could log into every single machine, run SDelete on each of them, and at the bottom of the stack ZFS would then recognise the zeroed space and free up those chunks, but IMHO this is a really ugly hack.
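Spelled out, the manual workaround looks roughly like this (drive letter and binary name are examples; SDelete is Sysinternals' free-space wiper):

```shell
# Inside each Windows guest: zero the free space.
sdelete64.exe -z C:
# Afterwards, ZFS compression on the host collapses the zeroed records,
# reclaiming the space that TRIM would have freed automatically.
```

It works, but it has to be scheduled per guest, it rewrites the whole free space on every run, and it hammers the ARC in the process.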


Hope this explains better why TRIM/Discard/UNMAP support is important on filesystems supporting FALLOC_FL_PUNCH_HOLE.
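For anyone curious what FALLOC_FL_PUNCH_HOLE actually does, here is a minimal Linux-only sketch via ctypes (flag values are the standard ones from linux/falloc.h; it needs a filesystem with hole-punch support, e.g. ext4, XFS, ZFS or tmpfs):

```python
import ctypes
import ctypes.util
import os
import tempfile

# Flag values from <linux/falloc.h>
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02  # must be OR'd with KEEP_SIZE

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
# fallocate(int fd, int mode, off_t offset, off_t len)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_int64, ctypes.c_int64]

with tempfile.NamedTemporaryFile() as f:
    f.write(b"\xaa" * (128 * 1024))  # 128 KiB of allocated, non-zero data
    f.flush()
    # Punch out the first 64 KiB: the file length stays the same, but the
    # underlying blocks are deallocated -- exactly what a guest TRIM,
    # propagated down the stack, lets the host filesystem do.
    ret = libc.fallocate(f.fileno(),
                         FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         0, 64 * 1024)
    assert ret == 0, os.strerror(ctypes.get_errno())
    f.seek(0)
    hole_is_zero = f.read(64 * 1024) == b"\x00" * (64 * 1024)
    print(hole_is_zero)  # punched range reads back as zeros
```

Without the discard chain from the guest, nothing ever calls this on the backing file, so ZFS never sees the freed ranges.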
