I'm running ESXi 6.0 Update 2 on a 250 GB SSD as the main datastore.
On the side I created a second 120 GB datastore, to be added in its entirety as a secondary HDD to a VM.
Here is my basic setup:
VM1: 100 GB
VM2: 60 GB
VM3: 25 GB
VM4: 25 GB
Roughly 26 GB free. Disks are thick provisioned, eager zeroed.
All VMs have run successfully for extended periods of time.
What happened twice now is the following, in two different scenarios:
When editing a VM (like changing some memory or boot setting) I get the following error:
"The disk capacity specified is greater than the amount available in the datastore". This happens both in the web UI and in the vSphere client.
Looking at the HDD page in the VM properties, it shows the HDD size I originally picked (100 GB for VM1, 60 GB for VM2), but below the size field it says "Max ~26 GB", the amount of free space in the datastore.
So for whatever reason ESXi isn't recognizing that these disks already exist. The free-space check shouldn't be performed at all; it should realize these are previously created HDDs and just boot.
This is a hard stop: you cannot make any other configuration changes while that message is displayed.
I've gotten past it by flipping between configuration pages and swapping between the vSphere client and the web UI a few times until the machines booted again, but I fully expect to hit it again the next time I have to take the VMs down.
The other VMs with 25 GB HDDs boot just fine, because the free space on the datastore is slightly larger than their 25 GB disks, so the check passes.
I got the same error when I added datastore 2 to a VM as a dedicated disk (one of the VMs with a 25 GB HDD1 on datastore 1). I created datastore 2 on the second physical disk, allocated the full disk to it, and then allocated all (or close to all) of the available space to an HDD2 on one of the VMs. The exact same error occurred: it checked free space on datastore 2 and complained that the 120 GB disk was far bigger than the free space on the datastore, due to the same symptom as above (erroneously checking against free space instead of the existing disk).
Has anyone experienced this before?
I am not sure if this is the problem, but you might check it out.
When you power on a virtual machine with no memory reservation, a swap file the size of the configured memory is created (by default on the same datastore as the VM's files).
So if your VMs are powered on, the insufficient space may be a real problem. If this is the problem, you can work around it by configuring a memory reservation to shrink the swap file, or by moving your swap files to another datastore with more free space.
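If that turns out to be the cause, both workarounds boil down to a couple of per-VM config entries. A minimal .vmx sketch (the sizes and datastore path below are examples, not values from this thread):

```
memsize = "6144"                              # configured memory, in MB
sched.mem.min = "6144"                        # reservation; equal to memsize -> .vswp shrinks to zero
sched.swap.dir = "/vmfs/volumes/datastore2/"  # alternatively, place the .vswp on another datastore
```

The swap file size is configured memory minus reservation, so a full reservation eliminates it entirely; setting `sched.swap.dir` instead just moves the space pressure to a datastore that can absorb it.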
Thank you for the tip.
While I did see some errors related to this in the past as I was playing with VMs, I don't think that explains it, for two reasons:
1) The free space still available on the datastore is greater than the size required for the swap file (e.g. 6 GB of memory assigned to the VM and roughly 25 GB free).
2) The error also occurs if you add a second physical hard disk, create a new datastore consuming the full disk, and then add that entire datastore to a VM as a secondary hard disk. I would not expect a swap file to be created on this secondary disk, yet you run into the same problem.
Some additional info: if you run into this error, you can't edit the virtual machine settings in the web UI, but you can in the full client. I find myself going back and forth between the fat client and the UI frequently, depending on what I need to do and which issue I run into. In my opinion the UI definitely can't fully replace the vSphere client at this stage.
I have the same problem with my 200 GB VM vdisk on a 290 GB array (90 GB free). The problem is a bug in the web UI's edit-VM-settings dialog: it seems to check whether there is enough free space for the vdisk to be added, even though the vdisk already exists and is not being expanded or even changed.
My solution, which I don't necessarily recommend but had to resort to since I can't install the vSphere client on my Chromebook and I was desperate, was to:
1) make sure the VM was powered down
2) SSH into the ESXi host
3) manually change the memory allocation from 4096 MB to 8192 MB (what I needed changed) in the VM's config file using vi
4) then use the UI to re-register the VM
Tricky stuff, but it worked.
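For what it's worth, step 3 is just a one-line edit of the .vmx file, which on the host lives under /vmfs/volumes/&lt;datastore&gt;/&lt;vm&gt;/. A minimal sketch of that edit using sed instead of vi, run here against a made-up sample file (the path, VM name, and memory values are illustrative, not from this thread):

```shell
# Create a throwaway .vmx fragment to edit (contents are illustrative)
cat > /tmp/demo.vmx <<'EOF'
displayName = "demo"
memsize = "4096"
EOF

# Step 3 of the workaround: bump the memory allocation in place
sed -i 's/^memsize = .*/memsize = "8192"/' /tmp/demo.vmx

# Confirm the change took
grep '^memsize' /tmp/demo.vmx
```

On the actual host, after editing you can also reload the config from the shell with `vim-cmd vmsvc/reload <vmid>` (find the id via `vim-cmd vmsvc/getallvms`) instead of re-registering through the UI.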
I can confirm that I have the exact same issue. Using the new web UI, I'm not able to change any settings on my VM due to the error: "The disk capacity specified is greater than the amount available in the datastore. Disk space overcommitment can consume all space in the virtual disk and block the virtual machine. Increase the datastore capacity before proceeding or enter a smaller disk size."
The check seems to think I'm trying to add a new drive to my virtual machine, even though I'm only trying to change a network setting. I'm running 6.0 Update 2 and the VM is hardware version 11. Others in the forums have recommended installing the Windows client and making changes from there, so I am downloading that now. Glad the Windows client hasn't been completely killed off yet!
I also had the same issue, with the out-of-datastore-space error when modifying VM settings on a new ESXi 6.0U2 install. Upgrading to the latest version of the host client fling corrected this for me: ESXi Embedded Host Client.
I too have the same problem.
My steps to reproduce:
1.- create a new datastore on an empty 500 GB disk, full disk dedicated (465.5 GB total capacity reported in ESXi).
2.- create a VM: 378.78 GB used, 86.72 GB free.
3.- edit the VM settings: 86.72 GB is reported as the max disk capacity instead of the true 465.5 GB.
4.- try to save the settings -> fail.
My "about esxi data reads as follows":
Build number: 3617585
Build type: release
That worked for me too.
The question remains whether "Maximum Size" (which still displays the remaining free space, not the maximum datastore capacity) should be renamed to "Remaining Space".