VMware Cloud Community
fmdj
Contributor

VMotion - free datastore space requirement

Hi,

Since I updated to vSphere (vCenter Server and ESX 4) I can no longer VMotion some VMs, especially those that have only a little free space left on their datastore. VMotion fails with a message that there is not enough free space on the datastore. This worked fine with VC 2.5 and ESX 3.5. Is there a new space requirement for VMotion?

It seems that I can only VMotion VMs whose datastore has at least as much free space available as the size of their RAM.

Thx

18 Replies
VirtualKenneth
Virtuoso

Hmmmm, so let me get this right. You have, for example, two VMs:

VM1: 4 GB Memory

VM2: 8 GB Memory

Datastore1: 6 GB Free

So you can only VMotion VM1 and not VM2?

dburgess
VMware Employee

Are your source systems using local disk for swap? VMotion might be trying to create the vswap on the datastore. Normally in this config (with the local swap option set) it will try to find a local VMFS on the target host and relocate the local swap file there. If it can't do that, I believe it then tries to locate the swap file with the .vmx file.
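If it helps to check whether a particular VM overrides its swapfile location in its .vmx, a rough sketch like this could be used (the path is just an example, and relying on the sched.swap.dir option here is my assumption):

```python
# Rough sketch (not an official VMware tool): look for a swapfile-location
# override in a VM's .vmx file. The path below is a hypothetical example.
VMX_PATH = "/vmfs/volumes/Datastore1/VM1/VM1.vmx"

def read_vmx(path):
    """Return the .vmx file as a dict of option -> value."""
    options = {}
    vmx = open(path)
    for line in vmx:
        if "=" in line:
            key, value = line.split("=", 1)
            options[key.strip()] = value.strip().strip('"')
    vmx.close()
    return options

opts = read_vmx(VMX_PATH)
# Assumption: sched.swap.dir (if present) points the swapfile at a specific
# directory/datastore; if it is absent, the host/cluster swapfile policy applies.
print("swapfile dir override: %s"
      % opts.get("sched.swap.dir", "<none - host/cluster policy applies>"))
```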

dB

VirtualKenneth
Virtuoso

See http://www.vmware.com/pdf/vsphere4/r40/vsp_40_admin_guide.pdf, page 96, "Swapfile Location Compatibility".

fmdj
Contributor

I have analyzed my problem, and you are right, it has something to do with the swap file. The cluster is configured to store the swap file in the same place as the VM. While VMotion is in progress, ESX creates a second swap file. So if your VM has 4GB of RAM, you will need 8GB of space left on the datastore - 4GB for the actual swap file when the VM is powered on and another 4GB while VMotion is in progress.

Is that correct? Did ESX always create a second swap file during VMotion?

VirtualKenneth
Virtuoso

I'm experiencing this same behaviour in my vSphere environment (with both priority levels), and I'm not seeing it in my ESX 3.5 environment. So this must indeed be a change ;-)

dburgess
VMware Employee

We have tried to reproduce it here and did not see a new file created. It could be a special condition that is causing this. Are you monitoring the datastore while the VMotion is taking place? I'd be interested to see what you see; presumably there are two vswap files present for a period? Could you post a couple of screenshots?
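For reference, a quick way to watch this would be something along these lines (a minimal sketch, assuming Python is available and adjusting the example path; the glob pattern assumes the temporary file also contains "vswp" in its name):

```python
# Watch-style sketch: print the VM's swap-related files every 2 seconds so
# you can see whether a second vswap file appears during the VMotion.
# Stop with Ctrl-C. The VM directory below is a hypothetical example.
import glob
import os
import time

VM_DIR = "/vmfs/volumes/Datastore1/VM1"   # adjust to your datastore/VM

while True:
    names = sorted(glob.glob(os.path.join(VM_DIR, "*.vswp*")))
    stamp = time.strftime("%H:%M:%S")
    for name in names:
        size_mb = os.path.getsize(name) // (1024 * 1024)   # apparent size
        print("%s  %-40s  %6d MB" % (stamp, os.path.basename(name), size_mb))
    if not names:
        print("%s  (no vswap files present)" % stamp)
    time.sleep(2)
```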

dB

VirtualKenneth
Virtuoso

Hi,

See the screenshots attached... 1 is before, 2 is while the VMotion is taking place.

Kenneth

depping
Leadership

That's weird... but interesting.

Duncan

VMware Communities User Moderator | VCP | VCDX

dburgess
VMware Employee

Thanks - we have spotted it on our systems now, so we can reproduce it.

VirtualKenneth
Virtuoso

Maybe this has something to do with the changed behaviour of the .vswp file under vSphere?

I.e. it's created on power-on and deleted when powered down.

dburgess
VMware Employee

So - speaking with engineering: we now always create this file, so that's new. It is only used if the target is under memory pressure. It is thin provisioned, so even though it looks the size of the memory, it should have very little impact on the free space of the VMFS. There could be some other underlying problem with the storage. How much free space do you actually have available, and what is the exact error you get from VC? We may need a vm-support dump to diagnose this completely, so it might be best to raise an SR with support.
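To see the thin provisioning for yourself, a check along these lines compares the apparent file size with the blocks actually allocated on the VMFS, much like ls versus du (the .vswp path below is a hypothetical example):

```python
# Sketch of the "looks big, uses (almost) no blocks" check for a .vswp file.
import os

VSWP = "/vmfs/volumes/Datastore1/VM1/VM1.vswp"   # hypothetical path

st = os.stat(VSWP)
apparent_mb = st.st_size // (1024 * 1024)            # what ls reports
allocated_mb = st.st_blocks * 512 // (1024 * 1024)   # what du reports (thin: near zero)
print("apparent size : %d MB" % apparent_mb)
print("allocated     : %d MB" % allocated_mb)
```

If the file really is thin, the allocated figure should stay close to zero unless the host starts swapping.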

VirtualKenneth
Virtuoso

Hi,

Thanks for the information. But what do you mean by the target host being under pressure?

Does it deny a VMotion if an ESX host is already overcommitted? (And is the purpose of creating this file perhaps to check some things?)

I want the technical details ;-)

Next to that, you are saying it's thin provisioned. But imagine the following scenario:

I've got an ESX cluster with many VMs. A situation could occur where 8 VMs are being VMotioned at the same time (all on the same datastore).

Now these VMs all have 8 GB of memory, which is fully used inside the guest. In that case I would need 64 GB of very temporary space available on my datastore, right?

If so, this could all result in new math for calculating the datastore size.
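A back-of-the-envelope version of that worst case, assuming the swapfile size is the configured memory minus the reservation and that every temporary file inflates fully (which, as noted above, should not normally happen because the file is thin provisioned):

```python
# Worst-case sketch for the scenario above: 8 VMs VMotioned concurrently,
# all with their (temporary) swapfiles on the same datastore. Assumes
# swapfile size = configured memory - memory reservation, and that every
# temporary file is fully inflated - the pessimistic case.
CONCURRENT_VMOTIONS = 8
VM_MEMORY_GB = 8
MEMORY_RESERVATION_GB = 0   # default reservation

per_vm_gb = VM_MEMORY_GB - MEMORY_RESERVATION_GB
worst_case_gb = CONCURRENT_VMOTIONS * per_vm_gb
print("temporary swap per VM  : %d GB" % per_vm_gb)
print("worst case on datastore: %d GB" % worst_case_gb)   # 64 GB
```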

Thanks so far!

Kenneth van Ditmarsch

dburgess
VMware Employee

Here is a screenshot showing the vswp file using 0 blocks. If you are short of memory on the target, this file may be used, so that could be the reason for the failure.

dburgess
VMware Employee

So as far as I know you can only have two concurrent VMotions in flight at any one time. The other thing is that the temp swap will only be used for activity as the machine transitions, so it should not grow to the size of the memory. If you du the file systems you should see the disk blocks being consumed. Engineering thinks this should be 400M tops, even if it is used at all. By pressured we mean the amount of free memory is low. That will not prevent the VM from VMotioning unless we can't allocate enough reserved memory (this is zero by default). Once the transition is complete the VM reverts to the original swap file and the temp is deleted.

VirtualKenneth
Virtuoso

Two concurrent per host, that is; it's 8 per datastore.

Aaah okay, I see, so I don't have to adjust my calculations.

Thanks for the effort in clearing this up!

fmdj
Contributor

So what's the conclusion? Is it correct that I need space for a second, temporary swap file while VMotion is in progress?

dburgess
VMware Employee

Apologies, I've been away - yes, VMotion does create a second swap file, but this file should actually consume zero or a very small amount of actual disk space unless the target host is very short of memory.

Cheers,

dB

kefalak
Contributor

Related to the problem described in this thread, we have discovered the following while setting up a vSphere environment:

We are modifying the .vmx file in order to store the vswap files in different data stores than the virtual machines (for data replication purposes).

During vmotion migration, we observe that the temporary swap file is created in the designated data store (not in the VM data store) and then deleted.

However: There seems to be a free space check taking place only for the VM data store and not for the data store where the temporary swap file is actually created. This means that the file system checked for free space and the file system where the temporary swap file is stored are different. Therefore, for a machine with 4GB RAM and no reservation, the following condition must be met:

  • 4GB free space must be available on the VM data store (even though it is not used and no file is created there)

The 4GB temporary swap file is allocated on the swap file data store even if there is not enough free space available there.

If the condition above is not met, VMotion fails, even though the VM can be powered on without a problem on the target server (without using VMotion).

This behaviour can waste disk space, as free space is required on data stores without actually being used...
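Given that behaviour, a manual pre-check along these lines could verify free space on both data stores before a migration (the paths and the 4 GB figure are example values, and this assumes Python with os.statvfs available):

```python
# Sketch: check free space on both the VM datastore (the one the automatic
# check appears to look at) and the datastore where the vswap is actually
# created. Paths and the memory figure below are hypothetical examples.
import os

VM_DATASTORE   = "/vmfs/volumes/VM_Datastore"     # where the .vmx lives
SWAP_DATASTORE = "/vmfs/volumes/Swap_Datastore"   # where the vswap is created
VM_UNRESERVED_RAM_GB = 4                          # configured memory minus reservation

def free_gb(path):
    """Free space in GB, as reported by the filesystem."""
    stat = os.statvfs(path)
    return stat.f_bavail * stat.f_frsize / float(1024 ** 3)

for label, path in (("VM datastore (checked by VC)", VM_DATASTORE),
                    ("swap datastore (actually written)", SWAP_DATASTORE)):
    avail = free_gb(path)
    if avail >= VM_UNRESERVED_RAM_GB:
        verdict = "OK"
    else:
        verdict = "LOW"
    print("%-35s %6.1f GB free  [%s]" % (label, avail, verdict))
```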
