VMware Cloud Community
rrosenkoetter
Enthusiast

Memory Resource Allocation - Serious Bug?

So I'm looking over Memory Resource Allocation, and I notice most VMs have the Limit set to the memory assigned to the VM (e.g. 1024 MB or 2048 MB). A few apparently random VMs have the limit set to Unlimited, though. Anyone know what causes this? And whether it would cause any problems?

Second thing I notice, and this could be a real problem... There are a couple of VMs that I built with 1024 MB of memory and later increased to 2048 MB. Well, the Resource Allocation number didn't change! So the limit is still set to 1024 MB...

What happens when a VM that has been assigned 2048 MB (and that's what the VM's OS thinks it has) has a Resource Allocation limit of 1024 MB? I'm thinking some serious swapping is going on (which could explain some performance issues I've been having). Anyone aware of this bug? I'm using VirtualCenter 2.0.2 build 50618.
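For anyone hitting this on later releases, a quick way to spot affected VMs is a PowerCLI report along these lines. This is only a sketch: it assumes an existing Connect-VIServer session and the Get-VMResourceConfiguration cmdlet from later PowerCLI releases, so it won't apply to VC 2.0.2 as-is.

# List VMs that have any memory limit set (-1 means Unlimited),
# alongside their configured memory, so mismatches stand out
Get-VM | Get-VMResourceConfiguration |
    Where-Object { $_.MemLimitMB -ne -1 } |
    Select-Object @{N="VM";E={$_.VM.Name}},
                  @{N="ConfiguredMB";E={$_.VM.MemoryMB}},
                  MemLimitMB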

33 Replies
allencrawford
Enthusiast

Sorry for the delay, just now getting caught up on some old emails and found the one for this thread.  The script below is meant to be run interactively after you've already connected to vCenter and it expects you to provide a cluster name to it.  I did that because our environment is pretty large and I didn't want to hit it all at once, though it really shouldn't be an issue (other than the amount of time it takes).  You can easily tweak it to just run on every VM.  The script not only sets the memory limit on all VMs to the default (unlimited) but also resets the shares and reservations back to default as well as the same three values for CPUs.  The script isn't as optimized as it could be for speed (this is one of my first PowerCLI scripts, so it is old) and could probably have some disk I/O resource/share items added to it.  Regardless, here it is:

Param([string]$strClusterName)

if (! $strClusterName) {
    Write-Output ("")
    Write-Output ("Hey, you need to give me a parameter!")
    Write-Output ("  Example: " + $MyInvocation.MyCommand.Name + " <ClusterName>")
    Write-Output ("    - where <ClusterName> is a string that matches the name of your VM cluster")
    Write-Output ("")
}
else {
    Write-Output ("")
    Write-Output ("Thinking...")
    # Build a reconfig spec that resets CPU and memory shares, reservations and limits to the defaults
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.cpuAllocation = New-Object VMware.Vim.ResourceAllocationInfo
    $spec.cpuAllocation.Shares = New-Object VMware.Vim.SharesInfo
    $spec.memoryAllocation = New-Object VMware.Vim.ResourceAllocationInfo
    $spec.memoryAllocation.Shares = New-Object VMware.Vim.SharesInfo
    $spec.cpuAllocation.Shares.Level = "normal"
    $spec.cpuAllocation.Reservation = 0
    $spec.cpuAllocation.Limit = -1          # -1 = Unlimited
    $spec.memoryAllocation.Shares.Level = "normal"
    $spec.memoryAllocation.Reservation = 0
    $spec.memoryAllocation.Limit = -1       # -1 = Unlimited
    # The first (commented) pipeline applies the spec to the named cluster only; the second hits every VM in the connected vCenter (see EDIT below)
    #Get-Cluster $strClusterName | Get-VM | % {Get-View $_.ID} | % {Get-View($_.ReconfigVM_Task($spec))}
    Get-VM | % {Get-View $_.ID} | % {Get-View($_.ReconfigVM_Task($spec))}
}

EDIT:

I just now noticed that this is my modified version, which was set to run on ALL VMs in the vCenter you are connected to. The second-to-last line (Get-VM ...) really should be commented out and the one above it uncommented to run as I originally intended.
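If you save it and run it the way it was originally intended, the invocation is just the script plus a cluster name. A hedged example; the file name, vCenter address and cluster name below are made up, so substitute your own:

Connect-VIServer vcenter.example.com
# assumes the script above was saved as Reset-VMResourceDefaults.ps1
.\Reset-VMResourceDefaults.ps1 "Prod-Cluster-01"

Just remember to swap which of the Get-Cluster / Get-VM lines is commented out first if you only want to touch that one cluster.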

MattG
Expert

Any update on this?

I am seeing this at a lot of clients, and it appears to be upgrade-related. The memory limits get set to the size of the memory granted to the VM. That isn't a problem until someone grants the VM more memory and the limit stays at the old value.

-MattG

-MattG If you find this information useful, please award points for "correct" or "helpful".
vGuy
Expert

I too have seen this behaviour earlier, in my environment running vSphere 4.1. Some of the VMs had a memory limit set to 4 GB, causing severe performance issues. I have not been able to reproduce the issue, so I refrained from opening a support ticket. It wasn't an upgrade but a fresh 4.1 install, and I still have no clue what caused that behaviour.

shishir08
Hot Shot

Just to make things clearer, there are two cases here:
1) The memory limit is less than the memory size --> The memory limit is a hard cap enforced by the VMkernel: the VM will not be backed by more physical memory than the limit. Since the guest OS and the applications running in it don't know about the limit, they assume the full configured memory size is at their disposal. Once the VM's allocation hits the limit, the VMkernel will not grant it any more physical memory.
Suppose your VM's memory size is 2 GB and you have set the memory limit to 1 GB. In that case the VM will only be backed by 1 GB of physical RAM, and anything it uses beyond that has to be reclaimed, i.e. ballooned or swapped out.
The guest simply doesn't know that it is being constrained; as far as it is concerned, it still sees another 1 GB sitting there unused. This is naturally going to cause a performance hit to your applications, as they are being deprived of memory resources they think they have.
The VMkernel simply will not allocate any more physical memory because of the hard limit. That's when the memory reclamation techniques, ballooning and host-level swapping, come into the picture to keep the VM within the limit set at the VMkernel level.

2) The memory limit is set to Unlimited, or to a value higher than the memory size --> This is the default; memory limits are Unlimited for new VMs. Here the effective limit is simply the VM's configured memory size, so the VMkernel never constrains the VM below what the guest expects.
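If you want to see the reclamation happening on a VM you suspect is being held under a limit, the balloon and swap counters will show it. A rough PowerCLI check, offered as a sketch only: it assumes an existing Connect-VIServer session, the VM name "MyVM" is a placeholder, and the real-time interval is used so it doesn't depend on your historical statistics levels.

# Pull recent real-time samples of ballooned and swapped memory (KB) for one VM;
# sustained non-zero values on a host that isn't overcommitted point to a per-VM limit
Get-Stat -Entity (Get-VM "MyVM") -Stat mem.vmmemctl.average, mem.swapped.average -Realtime -MaxSamples 12 |
    Sort-Object Timestamp |
    Select-Object Timestamp, MetricId, Value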

MattG
Expert

The problem is that the memory limit is being set by some bug in vSphere. I am pretty sure it has to do with the upgrade process.

This is a serious bug, as it is affecting a lot of the clients I have seen, and they did not change the memory limit settings on their own.

VMware needs to address what is causing this, because it can cause guest OS memory paging even though memory is not under pressure on the host.

-MattG

-MattG If you find this information useful, please award points for "correct" or "helpful".
mcowger
Immortal

The only way VMware is going to address it is if you file an SR.  A forum post (while helpful) is not something VMware uses to build bug reports.

--Matt VCDX #52 blog.cowger.us
craigamason
Contributor

Cases have been opened and subsequently closed when the issue could not be easily reproduced (case 11049431803). I'm not so sure the issue has gained the attention of the correct people in support, i.e. an escalation engineer.

In my case, the problem completely went away after upgrading to 4.1U1.

MattG
Expert

Unfortunately, this is one of those bugs that is hard to reproduce (you don't know when it happened). Calling support to say "somewhere in the past year a VMware related process changed my memory limits from unlimited to hard set" is probably not going to get me very far.

If enough people chime in on this thread maybe VMware can look into why this is happening.

-MattG

-MattG If you find this information useful, please award points for "correct" or "helpful".
MattG
Expert

That's weird. Upgrading to 4.1 U1 removes the limits? It would either need to remove all hard memory limits (bad) or somehow know which VMs had been mistakenly set to a hard limit and change only those.

-MattG

-MattG If you find this information useful, please award points for "correct" or "helpful".
craigamason
Contributor

Upgrading did not remove the limits. We wrote a script to check for the limit and remove it (set it back to Unlimited). We just noticed that the problem stopped happening after the upgrade.
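For reference, in later PowerCLI releases this kind of check-and-remove can be done in a single pipeline. A sketch of the sort of script being described here; the cluster name is made up, and passing $null should translate to Unlimited, but verify on a test VM before letting it loose:

# Find VMs in one cluster with any memory limit set and put them back to Unlimited
Get-Cluster "Prod-Cluster-01" | Get-VM | Get-VMResourceConfiguration |
    Where-Object { $_.MemLimitMB -ne -1 } |
    Set-VMResourceConfiguration -MemLimitMB $null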

murphyslaw1978b
Contributor

All,

I've seen this issue from ESX 3.5 all the way up through, and including, the version I'm on today (5.0.0). I will open a case to see where VMware is with this. I assume this is a bug that's been around for a long time and that VMware admins just have to deal with it. That said, it did cause an issue recently where I had a VM ballooning after RAM increases...

jklick
Enthusiast

For additional emphasis: I have worked for a vendor for four years, and every month I look at dozens of different customer sites. After looking at hundreds of virtual environments, I count on seeing these "stealth" memory limits plaguing at least a few environments every week, and it's a part of my regular demos. Admins are regularly stunned and confused when I show this to them, and it is often a source of performance issues.

I'm seriously surprised that there haven't been enough support tickets to warrant a witch hunt at any point in the past four years.

@JonathanKlick | www.vkernel.com
MattG
Expert

I saw hard memory limits yesterday at a client who is running 5.1.

If VMware won't fix this bug, they should at least change vCenter to put a warning icon next to VMs with limits (like it does for hosts with SSH enabled) so that you know something is wrong.

-MattG

-MattG If you find this information useful, please award points for "correct" or "helpful".
jesse_gardner
Enthusiast

Chiming in: I've seen this in two environments and found it to be the source of performance issues. The VM's memory size gets grown, the limit stays at the old size, and swapping (etc.) kicks in.

I first noticed this years ago, probably in the 3.x versions. At the time, from the research I did, I came to the conclusion that these limits were set by DRS during a DRS-initiated VMotion operation, though that could be incorrect.
