continuum Very cool. How does one know if they have this condition? Is it in cases where a VMFS datastore has been spanned across more than one extent and one or more of the extents are missing/dead? What symptoms would indicate that this method could help? I'm curious myself, hence the questions.
If an extent is missing, the datastore may still be mountable, but trying to start a VM or cloning a vmdk via vmkfstools will fail.
You should see a message in the vmkernel.log that will have lines like:
extent of datastore went offline
or can't mount datastore - reason: extent XY is missing
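If you want to check for this yourself, grep the vmkernel log. The snippet below is a sketch that simulates it against a sample file so it can be run anywhere - on a real ESXi host the log lives at /var/log/vmkernel.log, and the sample message wording here is illustrative, not a verbatim log line:

```shell
# On a live ESXi host you would run:
#   grep -i "extent" /var/log/vmkernel.log
# Simulated with a sample file so the command can be demonstrated:
printf 'WARNING: LVM: extent of datastore went offline\n' > /tmp/vmkernel.sample
grep -i "extent" /tmp/vmkernel.sample
```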
Basically, this is not an issue you can overlook.
If you have a missing or lost extent, you will hear about it very soon.
In this recent case the parent extent - which looks like a regular VMFS volume and has the usual hidden *.sf files if you check with ls -lah - had a single I/O error, and one of the 3 extents had a bad VMFS header.
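For reference, those hidden *.sf system files can be listed with ls -lah on the datastore's mount point. The sketch below only creates empty stand-ins in a temp directory so the listing can be demonstrated - on a real host the files (for example .vh.sf and .fdc.sf) sit under /vmfs/volumes/ in the datastore's directory and are anything but empty:

```shell
# On a live ESXi host:  ls -lah /vmfs/volumes/<datastore>/
# A healthy head extent carries hidden VMFS system files such as
# .vh.sf (volume header) and .fdc.sf (file descriptors).
# Empty stand-ins, for demonstration only:
mkdir -p /tmp/demo-datastore
touch /tmp/demo-datastore/.vh.sf /tmp/demo-datastore/.fdc.sf
ls -lah /tmp/demo-datastore
```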
Generally speaking, extents are the worst option for enlarging a VMFS volume.
If just a single one fails, most users would lose everything.
So that's worse than a RAID 0.