I am on vSphere 5.5. I recently used Storage vMotion to move several datastores from one LUN to another. My process was: I created a LUN on the storage, created a temp datastore on the LUN, SvMotion'ed several VMs to the temporary datastore, deleted the original LUN and datastore, re-created the LUN and datastore, then SvMotion'ed the VMs back to the original storage (basically a temporary swap so I could re-size the datastores smaller). Somewhere during the process, I noticed that I now have folders named ".sdd.sf" at the root of every datastore I used SvMotion on. Datastores that I did not touch have no such folder. The folder is visible in the datastore browser. From some research I see that ESXi has system files named .sdd.sf, but I see nothing about folders, and I am not sure why they showed up after a SvMotion. Has anyone else seen this? Can I delete these folders?
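For anyone who wants to see which hidden system entries exist at a datastore root, a quick check over SSH might look like this (a minimal sketch; the datastore path in the comment is just an example placeholder, substitute your own mount point):

```shell
# List the hidden VMFS system files/folders (names ending in .sf)
# at a given datastore root, e.g. .fbb.sf, .fdc.sf, .sdd.sf, ...
list_sf_entries() {
  ls -a "$1" | grep '\.sf$'
}

# On an ESXi host this would be invoked like:
#   list_sf_entries /vmfs/volumes/datastore1
```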
Those files ending in .sf are system files used by the kernel to aid in several storage-related operations. They are small files that do not consume much space, and therefore should not be deleted.
Anyone else? I have this exact problem, and the folders these files are in belong to View desktops that are no longer registered. The filesystem is littered with folders containing .sdd.sf items that cannot be deleted. I prefer to keep a tidy house when it comes to filesystems, and this post about not deleting them is not helpful.
Hi MClark,
Just shy of 9 months after your question...
I recently ran into the same problem running a datastore on VMFS 5.60. The datastore was completely locked with the presence of the .sdd.sf (system file) folder. I couldn't get rid of the folder, couldn't detach or delete the datastore...
Mount Point                                        Volume Name       UUID                                 Mounted  Type    Size          Free
-------------------------------------------------  ----------------  -----------------------------------  -------  ------  ------------  ------------
/vmfs/volumes/569b7d51-079eee98                    cloudfix-NFS01    569b7d51-079eee98                    true     NFS     528444817408  511500533760
/vmfs/volumes/c682ffb0-d50d7200                    cloudfix-NFS02    c682ffb0-d50d7200                    true     NFS     528444817408  528235053056
/vmfs/volumes/53acaa7a-4513f7d1-dc33-c03fd562b260  cloudfix-iSCSI02  53acaa7a-4513f7d1-dc33-c03fd562b260  true     VMFS-5  536602476544  443464810496
/vmfs/volumes/53acaa66-18560593-0430-c03fd562b260  cloudfix-iSCSI01  53acaa66-18560593-0430-c03fd562b260  true     VMFS-5  536602476544  506740080640
/vmfs/volumes/53b2d740-e6afee39-352b-c03fd562b259  CF-ESX3-temp      53b2d740-e6afee39-352b-c03fd562b259  true     VMFS-5  99857989632   98797879296
/vmfs/volumes/fff486e5-a6899ffe-f315-2498cb5d36da                    fff486e5-a6899ffe-f315-2498cb5d36da  true     vfat    261853184     85823488
/vmfs/volumes/27c55be7-06b25928-aa05-8556e0e720dc                    27c55be7-06b25928-aa05-8556e0e720dc  true     vfat    261853184     85831680
/vmfs/volumes/53ab399f-0f5dc05a-faa6-c03fd562b259                    53ab399f-0f5dc05a-faa6-c03fd562b259  true     vfat    299712512     97648640
vi-admin@localhost:/opt/vmware> esxcli -s 10.100.0.12 storage filesystem unmount -l CF-ESX3-temp
Enter username: root
Enter password:
Volume 'CF-ESX3-temp' cannot be unmounted. Reason: Busy
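When an unmount fails with "Busy" like this, one thing worth trying is filtering the host's open-file listing for the datastore name to see what is holding it (a minimal sketch, run over SSH on the ESXi host; the volume name below is just this thread's example):

```shell
# Filter a process/open-file listing on stdin for lines mentioning a
# given datastore name, to help spot what is keeping the volume busy.
filter_open_files() {
  grep -F "$1"
}

# On the ESXi host this would be used like:
#   lsof | filter_open_files CF-ESX3-temp
```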
I had been playing a bit with SIOC and vFRC (this is my homelab), but I could not pinpoint what was locking my datastore.
After a while I decided to reboot the host. After the reboot, the datastore was listed as inactive and the .sdd.sf folder was gone...
No problem deleting the datastore now, but the initial issue remains a mystery.
Just to add to the discussion: I too am having this problem, and the only fix I have found so far is to reboot the host. I have a ticket open with VMware, and they haven't been able to figure it out yet either.
For every new top-level directory I create on the vSAN, the total of these files consumes over 700 MB — not so small.
Has anyone received any response from VMware about this?
These are filesystem resource files that represent certain areas of the datastore metadata. They exist on VMFS and VSAN 1.0 datastores, and the number of files varies by filesystem version.
The .sdd.sf is new to VMFS5 version 5.60. The acronym stands for System Data Directory.
It is reserved for use by VMFS5 enhancements planned in a future release.
I ran into a similar situation, but in my case it turned out to be a powered-off VM whose DVD device pointed to an ISO located on the datastore. Once I reconfigured the VM's DVD to use an ISO from a different location, I was able to delete the datastore.
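One way to hunt for that kind of lock is to search the .vmx files for references to the datastore in question, e.g. a CD/DVD device backed by an ISO (a hedged sketch; the search root and datastore name are placeholders, not from this thread):

```shell
# Print .vmx files under a directory tree that reference a given
# datastore name (e.g. an ISO path on a datastore you want to delete).
find_vmx_refs() {
  find "$1" -name '*.vmx' -exec grep -l "$2" {} +
}

# On an ESXi host, something like:
#   find_vmx_refs /vmfs/volumes old-datastore
```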
I found and verified at least two sources of this issue.
If you have a template stored on a datastore and use the datastore browser's move-folder function to relocate that template's folder to another datastore, you will get this issue. The solution is to use REMOVE FROM INVENTORY on the template to break the association with the original datastore location, and then re-add the template to inventory from its new location.
I also had a vMotion (move compute and storage) leave the swap file behind. After powering off the VM and deleting the VM's folder, the .sdd.sf was deletable. I had originally used ADD TO INVENTORY to get this VM into vCenter after recovering from some SAN issues that required a re-signature.
I think the root issue is that this can arise when you use ADD TO INVENTORY from a location.
Please let me know if you had used ADD TO INVENTORY prior to your issues.
Thx
Lee
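For the record, the unregister/re-register step described above can also be done from the ESXi shell with vim-cmd for regular VMs (a sketch for illustration only; the VM ID and .vmx path are placeholders, and the VIMCMD variable is a hypothetical override added here so the wrapper can be exercised outside ESXi — templates are easier to handle through the vSphere client as described above):

```shell
# Unregister a VM by ID and re-register it from its new .vmx path,
# breaking the stale association with the old datastore location.
# VIMCMD defaults to ESXi's vim-cmd; it is overridable for illustration.
reregister_vm() {
  vmid="$1"; vmx_path="$2"
  "${VIMCMD:-vim-cmd}" vmsvc/unregister "$vmid"
  "${VIMCMD:-vim-cmd}" solo/registervm "$vmx_path"
}

# On an ESXi host:
#   vim-cmd vmsvc/getallvms          # find the VM ID first
#   reregister_vm 42 /vmfs/volumes/newds/myvm/myvm.vmx
```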
Moved a template from a VMFS 5 datastore to a new VMFS 6 datastore and couldn't get rid of the old template's datastore; the VM template still showed the old datastore on it. Unregistering and re-registering it let me delete the datastore. Thanks for your post weblee, it did the trick.
I had the same problem, starting with the .sdd.sf folder, and couldn't figure out what on the host was blocking the datastore from unmounting...
After checking that no VMs were running on the datastore, that it had no SIOC enabled, that it was not used as a heartbeat datastore, etc., etc...
So I decided to run a "tail -f /var/log/vmkernel.log" while trying to unmount the datastore and came up with a 'vm_name'-ctk.vmdk file being in use, although I never saw that file on the datastore while browsing it.
Anyway... I vMotioned the VM to another host, unmounted the datastore successfully, and that was it.
Hope it helps the next man.
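The log-tailing approach above can be narrowed down a bit by pulling the .vmdk file names out of the log lines (a minimal sketch; the log format in the example is illustrative, not an exact vmkernel.log excerpt):

```shell
# Extract .vmdk file names from log lines on stdin, so you can see
# which files (e.g. a '-ctk.vmdk') are reported while an unmount fails.
extract_vmdk_names() {
  grep -o '[^ "]*\.vmdk' | sort -u
}

# On the ESXi host:
#   tail -f /var/log/vmkernel.log | extract_vmdk_names
```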