"The main doubt is: is this officially supported? I haven't found any statement regarding"
Not to be a buzzkill, but it is literally the first thing mentioned in that article:
"Disclaimer: The technology tested here is not currently supported by VMware and may never be. This is not recommended for use in any production environment, especially where VMware support is required.."
"and also I have some doubt about the durability of a NVME drive for cache, because the spare space (all the space over the 600GB limit)"
The way NVMe/SSDs achieve as high a TBW as they do is by wear-levelling dynamically across ALL of the device over time (plus the extra % of capacity hidden from the user and reserved solely for this purpose).
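To put rough numbers on that reasoning (all values hypothetical - a 1.8TB drive rated at roughly 1 DWPD over 5 years), a simplified linear wear model shows why endurance doesn't suffer just because vSAN only addresses 600GB of the device:

```python
# Hedged sketch with hypothetical numbers: why wear-levelling across the WHOLE
# device preserves endurance even when only 600GB of it is actively addressed.

def effective_endurance(rated_tbw, device_tb, active_tb):
    """Endurance (in TB written) if wear were confined to `active_tb`
    instead of being levelled across the full `device_tb`.
    Simplified linear model - real controllers are more nuanced."""
    concentration = device_tb / active_tb  # how much harder the worn cells work
    return rated_tbw / concentration

# Hypothetical 1.8TB NVMe rated at ~1 DWPD for 5 years:
rated_tbw = 1.8 * 1 * 365 * 5  # 3285 TB written

# With dynamic wear-levelling the controller spreads writes over all 1.8TB,
# so the full rated TBW applies even though vSAN only "uses" 600GB:
full = effective_endurance(rated_tbw, 1.8, 1.8)    # 3285.0

# If wear were (hypothetically) pinned to a static 600GB region, those cells
# would wear 3x faster and effective endurance would drop accordingly:
pinned = effective_endurance(rated_tbw, 1.8, 0.6)  # ~1095.0
```

The gap between the two figures is exactly why the space above the 600GB limit still earns its keep: the controller rotates writes through it.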
Using Cache-tier devices far larger than 600GB isn't that uncommon for the above reason - a good example being VMConAWS nodes (which likely accounts for thousands of nodes) using 1.8TB NVMe devices as Cache-tier.
While the article you have referenced is a cool example of what is theoretically possible, I would think a change to how much of a single Cache-tier device vSAN can actively use is the more feasible option, as that doesn't rely on non-VMware protocols and potentially vendor-specific implementations of NVMe namespaces etc.
And what about this?
This is an official VMware vSAN node with NVMe namespaces. OK, its hardware is fully certified and configured by VMware, but this opens up a possibility...
Thanks for that link - oddly I didn't see that in any of my feeds; I've seemingly been too focused on 7.0 U1 stuff to look at where the VMC branch currently is.
Fair enough, that is potential progress toward such implementations being supported on vSAN in general, but I wouldn't be so hasty to conclude that this feature is a certainty for regular on-prem vSAN (or that it will take the same form). While VMConAWS uses vSAN, it isn't 'vanilla' vSAN (as currently released publicly), and features can be both added to it and removed from it - so time will tell whether this persists.
Also, one has to consider that from a support/repair/redundancy perspective VMConAWS is a lot different from your average cluster (on-prem or otherwise hosted, that I am aware of) in that a failed node is replaced rapidly. So while splitting a single NVMe into 4 Cache-tier devices looks great from a performance/utilisation perspective, it comes with an obvious trade-off: if that one device fails, you lose 4 Disk-Groups.
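The blast-radius difference is easy to quantify (hypothetical layout: a node with 4 Disk-Groups, each needing one cache device):

```python
# Hedged sketch, hypothetical layout: failure impact of one cache device,
# dedicated-per-Disk-Group vs. one NVMe carved into 4 namespaces.

disk_groups = 4  # node with 4 Disk-Groups, each backed by one cache device

# Layout A: a dedicated cache NVMe per Disk-Group -
# one failed device takes down exactly one Disk-Group.
dgs_lost_dedicated = 1

# Layout B: a single NVMe split into 4 namespaces, one per Disk-Group -
# that one device failing takes down ALL the Disk-Groups it backs.
dgs_lost_shared = disk_groups

fraction_lost_dedicated = dgs_lost_dedicated / disk_groups  # 0.25
fraction_lost_shared = dgs_lost_shared / disk_groups        # 1.0
```

Going from 25% to 100% of a node's Disk-Groups offline per device failure is exactly the kind of gap that rapid node replacement papers over on VMConAWS but that a small on-prem cluster would feel.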
While this is fine for VMConAWS, where this sort of thing is reacted to rapidly, I would have concerns for smaller clusters, and more specifically those not designed with N+1 nodes (relative to their Storage Policy node-count requirements). Who knows - maybe such a design will be a support requirement for implementing this (should it ever be part of a future release).