<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>LevaEng Tracker</title>
    <link>https://communities.vmware.com/wbsdv95928/tracker</link>
    <description>LevaEng Tracker</description>
    <pubDate>Fri, 17 Nov 2023 17:37:28 GMT</pubDate>
    <dc:date>2023-11-17T17:37:28Z</dc:date>
    <item>
      <title>Re: Trim/Unmap support on GuestOS using FALLOC_FL_PUNCH_HOLE on HostOS (Thick provisioned vmdk)</title>
      <link>https://communities.vmware.com/t5/Workstation-2023-Tech-Preview/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/idc-p/2988298#M31</link>
      <description>&lt;P&gt;The host machine cannot know on its own which data in the guest disk is actually in use and which is not.&lt;BR /&gt;For a full explanation of why,&amp;nbsp;&lt;A href="https://communities.vmware.com/t5/VMware-Workstation-Pro/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/m-p/2985948/highlight/true#M182697" target="_blank" rel="noopener"&gt;see my answer in the other thread&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;While it is true that the Trim command (an ATA command) is mostly associated with SSDs, the SCSI variant UNMAP (which does the same thing) exists precisely for this purpose: virtualisation and disk provisioning.&lt;/P&gt;</description>
      <pubDate>Tue, 26 Sep 2023 09:44:05 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Workstation-2023-Tech-Preview/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/idc-p/2988298#M31</guid>
      <dc:creator>LevaEng</dc:creator>
      <dc:date>2023-09-26T09:44:05Z</dc:date>
    </item>
    <item>
      <title>Re: Trim/Unmap support on GuestOS using FALLOC_FL_PUNCH_HOLE on HostOS (Thick provisioned vmdk)</title>
      <link>https://communities.vmware.com/t5/VMware-Workstation-Pro/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/m-p/2985948#M182697</link>
      <description>&lt;P&gt;Hi&lt;BR /&gt;&lt;BR /&gt;Thank you for your answers&lt;BR /&gt;I will try to expand on and clarify why thick storage is better in this case&lt;BR /&gt;&lt;BR /&gt;As for support in the wild, I can't verify right now, but I'm 99% sure KVM/QEMU supports TRIM in the GUEST and propagates it to the HOST (I have used it before) (&lt;A href="https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms" target="_blank" rel="noopener"&gt;docs: driver -&amp;gt; discard&lt;/A&gt;)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Why not a thin-provisioned disk?&lt;BR /&gt;&lt;BR /&gt;My architecture stack is:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;VMs: Windows 11 with NTFS, 64K cluster size&lt;/LI&gt;&lt;LI&gt;Host: Workstation 17 on Linux&lt;/LI&gt;&lt;LI&gt;NFS over a dedicated 10GbE link&lt;/LI&gt;&lt;LI&gt;NAS: ZFS dataset with 64K record size &amp;amp; deduplication&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;With this setup I get the best performance from my system and very good disk savings&lt;BR /&gt;Why: I found a 64K record size is the sweet spot for my usage; it gives a good balance so that, after deduplication and compression, all VM disks stay nicely hot in the ARC and L2ARC (RAM and NVMe cache respectively)&lt;BR /&gt;&lt;BR /&gt;The problem with thin provisioning is that I have no control over the allocation, granularity and, most importantly, alignment of its internal chunks&lt;BR /&gt;If I put a thin disk on top of ZFS, I'm putting a de-facto sparse data structure on top of a CoW filesystem; this creates very large write amplification and also completely nullifies deduplication, since there is no longer any "sector/chunk" alignment from Windows/NTFS down to the actual storage file&lt;BR /&gt;&lt;BR /&gt;Why ZFS as backing storage?&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Deduplication&lt;/LI&gt;&lt;LI&gt;Transparent compression&lt;/LI&gt;&lt;LI&gt;Snapshots with effectively zero performance penalty&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I didn't use iSCSI on top of a ZVOL mostly to simplify sysadmin operations for others on the team&lt;BR /&gt;&lt;BR /&gt;The absence of&amp;nbsp;TRIM/Discard/UNMAP means that blocks are allocated but never freed&lt;BR /&gt;Yes, you could log into every single machine, run SDelete on each of them, and at the bottom of the stack ZFS would then recognise and free up those chunks, but IMHO this is a really ugly hack&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I hope this better explains why&amp;nbsp;TRIM/Discard/UNMAP support is important on filesystems supporting&amp;nbsp;FALLOC_FL_PUNCH_HOLE&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 09 Sep 2023 11:53:08 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-Workstation-Pro/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/m-p/2985948#M182697</guid>
      <dc:creator>LevaEng</dc:creator>
      <dc:date>2023-09-09T11:53:08Z</dc:date>
    </item>
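The alignment point in the post above can be sketched with a toy block-level dedup model, assuming 64 KiB records (matching the NTFS cluster size and ZFS recordsize the post describes): hashing fixed-size records, the way record-level dedup identifies duplicates, shows that two aligned copies share every record hash, while a sub-record shift of the same bytes changes every hash and defeats dedup entirely. The record size and payload here are illustrative assumptions, not values taken from the post.

```python
import hashlib
import random

RECORD = 64 * 1024  # assumed 64 KiB records, matching the NTFS cluster size / ZFS recordsize above

def record_hashes(data: bytes, record: int = RECORD) -> list:
    """Hash each fixed-size record, the way block-level dedup identifies duplicates."""
    return [hashlib.sha256(data[i:i + record]).digest()
            for i in range(0, len(data), record)]

# 8 records of reproducible pseudo-random "guest data".
payload = random.Random(0).randbytes(8 * RECORD)

aligned = record_hashes(payload)                   # written on a record boundary
shifted = record_hashes(b"\x00" * 512 + payload)   # same data, shifted by 512 bytes

# Two aligned copies dedup perfectly: identical hash lists.
assert record_hashes(payload) == aligned
# A sub-record shift leaves no record hash in common: the dedup ratio drops to zero.
assert not set(aligned).intersection(shifted)
```

This is why the post insists on thick provisioning over a thin vmdk: only a layout with stable, record-aligned chunks lets identical guest clusters land on identical ZFS records.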
    <item>
      <title>Trim/Unmap support on GuestOS using FALLOC_FL_PUNCH_HOLE on HostOS (Thick provisioned vmdk)</title>
      <link>https://communities.vmware.com/t5/Workstation-2023-Tech-Preview/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/idi-p/2979516</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I'm running multiple VMs on top of a ZFS filesystem dataset&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Each VM has its disk fully allocated (thick provisioned), but since ZFS supports native compression (and deduplication too), the actual on-disk size of each vmdk file is only as big as the written data (or even 1/N of it with dedup)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Generally speaking, this is what should happen on any modern filesystem that supports sparse files / fallocate / hole punching&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;(This setup is better than a sparse vmdk because it generally has superior performance and far better storage efficiency when combined with ZFS native compression and online data deduplication)&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;This works well while the GuestOS writes data, but when that data is later deleted it is never freed on the Host side, because the HostOS can never know (unless hinted) that a given range of the vmdk file has been "released" by the GuestOS above&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;For this, at least on Linux-based OSes, there is&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://man7.org/linux/man-pages/man2/fallocate.2.html" target="_blank" rel="nofollow noopener noreferrer"&gt;fallocate( FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE )&lt;/A&gt;&amp;nbsp;&lt;SPAN&gt;, which can "punch holes" in the file and free the unused space &lt;EM&gt;(the logical size of the file stays the same, but the physical on-disk size shrinks as needed)&lt;/EM&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;But when configuring a VM on Workstation 17 with:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;- Thick provisioned disk (single file)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;- I/O controller : LSI Logic SAS&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;- Disk type : NVMe&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;the GuestOS (Windows 11) reports that the disk does not have Trim support&lt;BR /&gt;But Trim should be supported in this case and then translated into a fallocate call on the vmdk file by the hypervisor (VMware Workstation)&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;How can I enable this behaviour? I already tried adding the following to the .vmx configuration file:&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;- nvme0:0.virtualSSD = "TRUE"&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;-&amp;nbsp;disk.scsiUnmapAllowed = "TRUE"&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;But the GuestOS still reports no Trim support&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Thanks,&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Luca&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://communities.vmware.com/t5/VMware-Workstation-Pro/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/td-p/2979492" target="_blank" rel="noopener"&gt;Also posted here&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jul 2023 14:00:44 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Workstation-2023-Tech-Preview/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/idi-p/2979516</guid>
      <dc:creator>LevaEng</dc:creator>
      <dc:date>2023-07-27T14:00:44Z</dc:date>
    </item>
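The hole-punching behaviour the post asks the hypervisor to perform can be exercised directly on Linux. Python's standard library does not expose FALLOC_FL_PUNCH_HOLE, so this sketch goes through glibc's fallocate(2) via ctypes; the file name and sizes are hypothetical, the flag values come from linux/falloc.h, and the underlying filesystem must support hole punching (ext4, XFS, ZFS, tmpfs, ...).

```python
import ctypes
import ctypes.util
import os
import tempfile

# fallocate(2) mode flags from linux/falloc.h; Linux-only, 64-bit off_t assumed.
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6", use_errno=True)
libc.fallocate.argtypes = (ctypes.c_int, ctypes.c_int, ctypes.c_long, ctypes.c_long)

def punch_hole(fd: int, offset: int, length: int) -> None:
    """Deallocate [offset, offset+length) while keeping the logical file size."""
    if libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, length):
        errno = ctypes.get_errno()
        raise OSError(errno, os.strerror(errno))

path = os.path.join(tempfile.gettempdir(), "demo.vmdk")  # hypothetical disk image
fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"\xab" * (1024 * 1024))     # "guest" writes 1 MiB of data

before = os.fstat(fd)
punch_hole(fd, 256 * 1024, 512 * 1024)    # "guest trimmed 512 KiB at offset 256 KiB"
after = os.fstat(fd)

# Logical size is unchanged; allocated blocks can only shrink.
assert after.st_size == before.st_size == 1024 * 1024
assert before.st_blocks >= after.st_blocks

os.close(fd)
os.unlink(path)
```

This is the host-side half of the picture; what the post is missing is the guest-side half, i.e. the virtual disk advertising TRIM/UNMAP so the guest ever issues the discard in the first place.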
    <item>
      <title>Trim/Unmap support on GuestOS using FALLOC_FL_PUNCH_HOLE on HostOS (Thick provisioned vmdk)</title>
      <link>https://communities.vmware.com/t5/VMware-Workstation-Pro/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/m-p/2979492#M182146</link>
      <description>&lt;P&gt;Hi&lt;BR /&gt;&lt;BR /&gt;I'm running multiple VMs on top of a ZFS filesystem dataset&lt;BR /&gt;&lt;BR /&gt;Each VM has its disk fully allocated (thick provisioned), but since ZFS supports native compression (and deduplication too), the actual on-disk size of each vmdk file is only as big as the written data (or even 1/N of it with dedup)&lt;BR /&gt;Generally speaking, this is what should happen on any modern filesystem that supports sparse files / fallocate / hole punching&lt;BR /&gt;(This setup is better than a sparse vmdk because it generally has superior performance and far better storage efficiency when combined with ZFS native compression and online data deduplication)&lt;BR /&gt;&lt;BR /&gt;This works well while the GuestOS writes data, but when that data is later deleted it is never freed on the Host side, because the HostOS can never know (unless hinted) that a given range of the vmdk file has been "released" by the GuestOS above&lt;BR /&gt;&lt;BR /&gt;For this, at least on Linux-based OSes, there is &lt;A href="https://man7.org/linux/man-pages/man2/fallocate.2.html" target="_blank" rel="noopener"&gt;fallocate( FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE )&lt;/A&gt;&amp;nbsp;, which can "punch holes" in the file and free the unused space (the logical size of the file stays the same, but the physical on-disk size shrinks as needed)&lt;BR /&gt;&lt;BR /&gt;But when configuring a VM on Workstation 17 with:&lt;BR /&gt;- Thick provisioned disk (single file)&lt;BR /&gt;- I/O controller : LSI Logic SAS&lt;BR /&gt;- Disk type : NVMe&lt;BR /&gt;&lt;BR /&gt;the GuestOS (Windows 11) reports that the disk does not have Trim support&lt;BR /&gt;But Trim should be supported in this case and then translated into a fallocate call on the vmdk file by the hypervisor (VMware Workstation)&lt;BR /&gt;&lt;BR /&gt;How can I enable this behaviour? I already tried adding the following to the .vmx configuration file:&lt;BR /&gt;- nvme0:0.virtualSSD = "TRUE"&lt;BR /&gt;-&amp;nbsp;disk.scsiUnmapAllowed = "TRUE"&lt;BR /&gt;&lt;BR /&gt;But the GuestOS still reports no Trim support&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Luca&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://communities.vmware.com/t5/Workstation-2023-Tech-Preview/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/idi-p/2979516#M12" target="_blank" rel="noopener"&gt;Also posted here&lt;/A&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jul 2023 14:04:30 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-Workstation-Pro/Trim-Unmap-support-on-GuestOS-using-FALLOC-FL-PUNCH-HOLE-on/m-p/2979492#M182146</guid>
      <dc:creator>LevaEng</dc:creator>
      <dc:date>2023-07-27T14:04:30Z</dc:date>
    </item>
  </channel>
</rss>

