We have a SATA FC LUN presented to 3 x ESX 4.1 hosts and 1 x ESXi 4.1 host (all fully patched) from an IBM DS4800 SAN. We wanted to provision some new test VMs and noticed that the provisioned space didn't add up to the total of the VMs (see attached images):
The 'Performance' view is the most revealing. It shows 266GB of 'other' data on the datastore; the question is, what is it?
Querying the datastore via the vSphere Client, ESX console and PowerCLI reveals nothing. The totals match with the total of the provisioned VMs.
Any ideas what this could be? There are no snapshots, or large data dumps (ISOs etc) on the datastore as far as I can see.
PS I forgot to mention that some of the VMs have raw mappings to other LUNs (i.e. the last VM on the list has 71GB provisioned on the datastore in question. The rest of the 5.05TB is made up of raw mappings to other LUNs).
Having read your post, I went straight to the 'Performance' view of our datastores, and I seem to have the same problem. Although not as extreme as yours, the "Other Files" in my case make up about 120GB against 550GB of virtual disks and 70GB of snapshots. At first I assumed "Other Files" would consist of log files, but 120GB of logs for around 20 VMs would be insane.
There are no ISO files or other non-VM-related files on this datastore, and there never have been.
Thanks in advance!
Hello, I have the same problem.
"Other" files increases very fast. Yesterday I migrated some VM's from storage to free space, but this night "other" files fill all the available space.
My storage is NetApp, mounted via NFS.
When I check disk space from the ESXi console, "df" shows 2TB used (100%), but when I run "du" on the whole mount point, I see only about 660GB used. This started in the last 3 days; I had no such problem earlier.
Please help me figure out how to delete these "other" files.
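One way to quantify the invisible usage is to diff what the filesystem reports against what the files actually add up to. A rough sketch (on ESXi the path would be something under /vmfs/volumes/; DS_PATH here defaults to /tmp purely for illustration):

```shell
# Compare filesystem-level usage (df) with per-file usage (du); a large gap
# between the two is the invisible "other" space. DS_PATH is a placeholder,
# not a real datastore path.
DS_PATH=${DS_PATH:-/tmp}

# -P forces the POSIX single-line output format so awk can parse it reliably
used_kb=$(df -kP "$DS_PATH" | awk 'NR==2 {print $3}')
visible_kb=$(du -sk "$DS_PATH" 2>/dev/null | awk '{print $1}')

echo "df: ${used_kb} KB used; du: ${visible_kb} KB visible"
echo "invisible: $(( used_kb - visible_kb )) KB"
```

Note that df counts the whole filesystem while du only counts files it can see, so on a healthy datastore the gap should be small; a gap of hundreds of GB points at the "other" problem described above.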
If you take a look into the documentation (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=200309...) you can see that "other" files are:
"All other non-managed files placed on the datastore, such as documentation, backups, and ISO or Floppy images. Includes all virtual machine files which are not associated with a registered virtual machine. On ESX, includes the Service Console's virtual disk file (
Note: Information stored on a Datastore by Lab Manager and not associated with a registered virtual machine in vCenter Server, including virtual machine Captures, will be reflected in the Other category."
Have you stored the esxconsole.vmdk on the concerned datastore?
No esxconsole.vmdk on the affected datastore.
I'm also unclear on what the 'other' file types could be, but no matter how I query the datastore, I cannot see ANY files that could be classed as 'other'. This is consistent whether I query the datastore via vSphere or through the command line - we now have 266GB of 'other' files that I cannot actually see on the datastore!
As a matter of interest, is anyone using VMware Data Recovery in their affected environment?
Hm, is it possible that you have such big system files (.sf files)?
In my case the size of the "other" files on the LUNs matches the size of the system files. Have you already tried taking a look at your LUNs with WinSCP instead of using the ESX console?
We were also affected by this over the weekend. In our case the mystery "Other" file type consumed all available disk space and took every VM on the datastore offline. It is still showing 500+ GB of "unknown" files.
ESXi 4.1.0 988178
NetApp 7mode 8.0.3
Snap Manager for VI 4 (backing up datastores)
We started to experience this issue after the latest ONTAP upgrade (8.1.2) on our NetApp filers. In our case, the issue was caused by a dedupe bug.
The previous version of ONTAP had a bug where the metadata used to track duplicate data becomes stale. That metadata is supposed to be cleared out during a dedupe run, but the bug failed to detect the stale entries and left them behind, reported as "Other" on the volume and completely invisible to every method of browsing the datastore. The upgrade included a fix for the bug, but it did not clean up the existing stale data, which was then counted as usage after being left orphaned on the volume. That messed with all sorts of things, including snapshot deltas, and created a ton of "other" space - up to 1.4TB on one volume!
A manual cleanup of the de-dupe data was required.
We have this similar problem. NetApp confirmed to us this was reported as:
Title: Stale metadata not automatically removed during deduplication operations on volume
We had a similar issue.
(storage: netapp with NFS datastores).
Check if fractional reserve is enabled on the netapp:
If the VMs in that volume have thick provisioned eager-zeroed disks and fractional reserve is at 100%, the volume will consume 2x the space of the VMDKs, and the extra shows up as "other" space in vSphere.
Disabling fractional reserve helped for us; fixing the dedupe bug afterwards reclaimed some extra space as well.
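For reference, a sketch of how this could be checked and changed on a 7-Mode filer console (volume name `ds_test` is a placeholder; verify the exact syntax against your ONTAP release before running it):

```
vol options ds_test                        # look for fractional_reserve=100
vol options ds_test fractional_reserve 0   # disable the reserve
df -r ds_test                              # -r includes the reserved-space column
```

Dropping the reserve frees the duplicate space immediately, but make sure you understand the overwrite-guarantee implications for thick disks on that volume before disabling it.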
I encountered a similar issue with our datastore. The "Other" file type accounted for over 1TB of a 4TB datastore. I added up all of the files on the datastore and they never matched the space consumed. The worst part was that the "other" files kept growing at a rate of around 50GB per week! After going back and forth with NetApp and VMware, I finally got an ace of a NetApp technician. In our case, dedupe had apparently been turned on at some point, then turned off; we later upgraded ONTAP, and the dedupe process was now creating metadata without cleaning it up properly. To confirm this, we needed to run "priv set diag" and then "sis check -c /vol/(volume name)". Since we didn't want dedupe on anyway, we issued the command "sis reset /vol/(volume name)". The files are currently purging (3 hours in) and we are getting our free space back. The article is at https://kb.netapp.com/support/index?page=content&id=7010056.
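For anyone else hitting this, here is the sequence from the post above laid out as a 7-Mode console session (substitute your own volume name; these are the commands as given above, so double-check them against your ONTAP version, and note that "sis reset" discards the dedupe metadata, so only use it if you do not want dedupe on that volume):

```
priv set diag                      # enter diagnostic privilege level
sis check -c /vol/(volume name)    # report stale dedupe metadata
sis reset /vol/(volume name)       # purge the metadata; dedupe state is lost
priv set                           # back to normal privilege level
```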
I would like to explain the meaning of "other" on Netapp environments.
Assuming you have already verified there are no ISO images or zombie VMDKs:
1) Create a new DS located on Aggr0
Aggregate          total     used    avail  capacity
aggr0             6245GB   4488GB   1756GB       72%
aggr0/.snapshot      0TB      0TB      0TB        0%
vol create ds_test -s none aggr0 2048g
2) Move one VM onto it to populate the 'Virtual Disk' category
DS = 2048 GB
VM = 20 GB
Avail = 2028 GB
However, vSphere's space utilization shows 1757.16 GB available.
Is there something wrong here?
There isn't; this is expected behavior on an overcommitted aggregate. The available capacity on the aggregate is 1756 GB, so you can only write up to the aggregate limit, not the 2048 GB datastore limit.
vSphere therefore accounts for the difference as occupied "other" space so that the numbers make sense:
FreeSpace = TotalSpace - Other - (VM space)
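In other words, a quick sketch with the numbers from the example above (integer GB, so the 1757.16 GB reading rounds to 1756 here):

```shell
# "Other" as vSphere computes it on an overcommitted aggregate:
# free space is capped at what the aggregate can actually deliver,
# and the shortfall is booked as "other".
ds_total=2048; vm_space=20; aggr_avail=1756   # GB, from the example above

free=$(( ds_total - vm_space ))               # what the DS alone would report
[ "$free" -gt "$aggr_avail" ] && free=$aggr_avail   # cap at the aggregate limit

other=$(( ds_total - free - vm_space ))       # FreeSpace = Total - Other - VMs
echo "free=${free}GB other=${other}GB"        # free=1756GB other=272GB
```

So the ~272 GB of "other" isn't data at all; it's the space the thin-provisioned datastore promises but the aggregate cannot actually back.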