The devices in /dev/vsan seem to be very fragile - when they are missing on one host of a cluster, the webclient can no longer list the contents of directories in the vsanDatastore.
How can we recreate them manually?
These links appear dynamically as the object gets opened on the host. Not having a VM namespace object listed under /dev/vsan means that particular namespace object is not mounted on the host. Namespace directories (VMFS) are mounted on demand. So either a `cd` or `ls` on the namespace folder under /vmfs/volumes/vsanDatastore will result in the contained VMFS being mounted on the host, and you will see the /dev/vsan/ node for the object.
Based on this, a browse call from the webclient should cause the namespace directory to be mounted (if not already mounted) and the files should be visible.
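The on-demand behavior described above can be checked from an ESXi shell. This is a sketch only: the paths assume an ESXi host with vSAN enabled, and "myvm" is a hypothetical VM namespace directory name.

```shell
# Sketch: paths assume an ESXi host with vSAN enabled;
# "myvm" is a hypothetical VM namespace directory name.
VSAN_DS="/vmfs/volumes/vsanDatastore"

if [ -d "$VSAN_DS" ]; then
    ls "$VSAN_DS/myvm"    # browsing triggers an on-demand mount of the namespace VMFS
    ls -l /dev/vsan/      # the object's device node should now be listed
else
    echo "Not an ESXi host with vSAN; commands are for illustration only."
fi
```

If the device node appears after the `ls`, the mount-on-demand path is working as described.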
cd /vmfs/volumes/vsanDatastore = mounts a VMFS container and displays it as the content of a directory
ls /vmfs/volumes/vsanDatastore = does the same
hexdump -C /dev/vsan/UUID = unmounts it?
Is that by design?
Once you exit hexdump, it triggers a device close for the object and the /dev/vsan node is removed (since hexdump is the last active client for the object). Because this happens outside the control of osfs, which manages the /vmfs/volumes/vsanDatastore interactions, the folder under the vSAN datastore gets orphaned. After a while (15 secs) osfsd will remove inactive mounts from the vsanDatastore folder and this orphaned mount will be gone. A later `ls` on the vsanDatastore will mount them back again and you will see the /dev/vsan node coming back.
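The close-then-purge sequence above can be sketched as the following ESXi shell session. This is illustrative only: `UUID` is a hypothetical placeholder for a real namespace object UUID, and the timings reflect the ~15 second purge interval mentioned above.

```shell
# Sketch: UUID is a hypothetical placeholder for a real namespace object UUID.
UUID="<namespace-object-uuid>"
DEV="/dev/vsan/$UUID"

if [ -e "$DEV" ]; then
    hexdump -C "$DEV" | head -n 1    # exiting hexdump closes the object...
    ls /dev/vsan/                    # ...and its device node disappears
    sleep 20                         # osfsd purges the orphaned mount after ~15 s
    ls /vmfs/volumes/vsanDatastore   # browsing mounts the namespace again
fi
```

In other words, the node does not need to be recreated manually; simply browsing the datastore again brings it back.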
Ok - that confirms my observations.
But the question remains - why does the use of a diagnostic tool have such unexpected effects?
@vmware: any news on that one?
joergriether wrote:
@vmware: any news on that one?
You should contact VMware support when this happens.
As long as osfsd is managing the /vsanDatastore mounts, the links will not get orphaned. As I mentioned earlier, exiting hexdump causes the particular object to be closed (as it is the only client which has it open) and the device node to be removed. The mounts can be accessed again after osfsd runs its purge of inactive mounts (15 secs).