VMware Cloud Community
SayNo2HyperV
Enthusiast

vSAN 6.7 U1 UNMAP issue

Hi.  I've been trying to get the new UNMAP feature working in my lab today, without success.

Environment

2-node direct-connect ROBO - witness over VPN

vSphere 6.7 U1 + ESXi 6.7 U1 on both data hosts & the witness

On-disk format version 7.0

vSAN cluster UNMAP enabled - vsan.unmap_support MYCluster -e

Server 2016 UNMAP enabled - fsutil behavior query DisableDeleteNotify returns 0

The tested VMs' VMX files do not have disk.scsiUnmapAllowed set

VMs are hardware version 14 - shut down / powered on several times after enabling UNMAP on the vSAN cluster.

So, I add/delete a bunch of data, then run Optimize-Volume -DriveLetter C -ReTrim -Verbose or defrag C: /L - the VMDK size remains as-is (the verification sequence I'm using is sketched below).

ESXTOP MBDEL/s stays at 0 MB/s.
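
For reference, here's the verification sequence I've been running, as a rough sketch - it assumes the guest is Server 2016 with the data on C:, and the last (optional) check assumes a connected PowerCLI session, with "MyVM" standing in for the actual VM name:

# Inside the guest (elevated PowerShell) - a result of 0 means TRIM/UNMAP
# delete notifications are enabled for NTFS:
fsutil behavior query DisableDeleteNotify

# Re-send TRIM for all free space on C: and watch the retrim progress:
Optimize-Volume -DriveLetter C -ReTrim -Verbose

# Optional, from PowerCLI - confirm UNMAP hasn't been explicitly disabled
# in the VMX ("MyVM" is a placeholder; no output means the setting isn't present):
Get-AdvancedSetting -Entity (Get-VM -Name "MyVM") -Name "disk.scsiUnmapAllowed"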

In addition to the UNMAP issue, vSphere 6.7 U1 appears to have broken the vSAN host performance statistics (where UNMAP is shown) - the only functional category is host network.  All other categories show "No data available for the selected period."  This happens in both the HTML5 and Flash clients.  The vSAN performance statistics at the cluster level are all working just fine.

Any suggestions on either the UNMAP issue or the vSAN host statistics?

Thank you.

2 Replies
SayNo2HyperV
Enthusiast

Update

Turned off vSAN performance monitoring.

Ran services.sh restart on the hosts.

Re-enabled performance monitoring.

Host vSAN statistics/graphs are now functioning.

In addition, I've run some UNMAP tests on a fresh Win10 VM.  UNMAP is indeed working, just not on some larger pre-existing Server 2016 VMDKs.

I hope VMware has plans to provide better reporting of vSAN disk usage.  I really dislike how the provisioned size / vSAN file browser is based on the RAID/storage policy.  I'm searching now for PowerShell scripts for better disk reporting.
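
In the meantime, here's the kind of thing I've been putting together, as a rough sketch - it assumes an already connected PowerCLI session (Connect-VIServer), and note that ProvisionedSpaceGB / UsedSpaceGB are whatever vCenter reports, so on vSAN they may already include the storage-policy overhead I'm complaining about:

# Per-VM summary: provisioned vs. used space as vCenter reports it.
Get-VM | Sort-Object Name |
    Select-Object Name,
        @{N='ProvisionedGB'; E={[math]::Round($_.ProvisionedSpaceGB, 1)}},
        @{N='UsedGB'; E={[math]::Round($_.UsedSpaceGB, 1)}} |
    Format-Table -AutoSize

# Per-VMDK breakdown: the configured virtual disk sizes, with no vSAN policy math.
Get-VM | Get-HardDisk |
    Select-Object @{N='VM'; E={$_.Parent.Name}}, Name, CapacityGB, StorageFormat |
    Format-Table -AutoSize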

The VMDK shouldn't show the combined size - for me, FTT=1 means 2x the space - it should be the used size without the vSAN RAID calculation.  Maybe I'm just not in tune with newer software-defined storage views and am still old-school HW RAID, where the overhead is accounted for ahead of time.

I envision a vSAN capacity tab where all of the following is easily obtained:

Per VM - each VMDK listed with the following: written size (used), configured/provisioned VMDK size (no vSAN calc), total vSAN used (vSAN RAID policy calc), and total vSAN used once the disks are fully hydrated.  Then at the top level, show those same metrics totaled across all VMDKs belonging to that VM.

 

It would also be nice to get an overall view of vSAN that simulates all configured thin disks becoming fully hydrated.  How over-committed am I?  How much vSAN space remains if Server XYZ gets full?  Yes, I can add it up myself, but PCs do it better. :)
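
Something like this rough sketch gets me partway there - it assumes the vSAN datastore is named "vsanDatastore" (a placeholder) and, as a simplification, that every object uses an FTT=1 mirror policy, hence the x2 multiplier:

$ds = Get-Datastore -Name "vsanDatastore"          # placeholder datastore name
$provisioned = (Get-VM -Datastore $ds |
    Measure-Object -Property ProvisionedSpaceGB -Sum).Sum
$fttMultiplier = 2                                  # assumption: RAID-1 / FTT=1 everywhere
"Datastore capacity  : {0:N0} GB" -f $ds.CapacityGB
"Fully hydrated need : {0:N0} GB ({1:N0} GB provisioned x {2})" -f ($provisioned * $fttMultiplier), $provisioned, $fttMultiplier
"Overcommit ratio    : {0:N2}" -f (($provisioned * $fttMultiplier) / $ds.CapacityGB)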

Regardless of wishes... vSAN is awesome.

TTFN

SayNo2HyperV
Enthusiast

Last update

The 2 VMDKs have pre-existed for some time.  Not sure why they are not shrinking.  Many zero-punch runs were done on them in the past - maybe that's related, I don't know.  I'll simply build new ones.

However, the new UNMAP is working fantastically.

Here's a quick example of UNMAP in action on Server 2016.  Both deletes / space reclaims happened automatically within 1-2 minutes.  Very fast.

Also, FYI, a very unsupported config: 1x Dell T610, 1x R710 - both with PERC H730P - hybrid - 1 disk group - Intel S3710 cache, 5x WD RE4 2TB capacity - 10Gb SFP direct connect.

Fresh 40GB disk - NTFS, defaults

Freshly formatted vmdk size = 233,472 KB

Copied 30.1GB of data

vmdk = 63,967,232 KB (FTT=1)

Deleted 10.6GB of data

vmdk = 41,529,344 KB

Deleted 19.4GB of data

vmdk = 233,472 KB
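
For what it's worth, those numbers seem to line up with the FTT=1 mirror: 63,967,232 KB is roughly 61 GB, i.e. about 2 x 30.1 GB of copied data plus a little NTFS/metadata overhead, and after both deletes the object drops right back to the freshly formatted 233,472 KB.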

-----

Edit: Just ran another quick test with a larger file count, for good measure.  UNMAP again completed in under a minute.

Copied 24,428 files - 30.8GB of data

vmdk = 65,613,824 KB

Deleted 19,861 files - 16.1GB of data

vmdk = 31,272,960 KB

Deleted 4,567 files - 14.6GB of data

vmdk = 282,624 KB (slightly larger than the fresh disk)

TTFN
