VMware Cloud Community
bradley4681
Expert

High System Memory Usage 37GB of 96GB

I have 3 new servers on which I've installed the latest release of 3.5, and they're showing unusually high system memory usage; maybe I'm just not used to seeing it that high. The servers are quad-core Intels (4 x 4 = 16 cores) with 96GB of RAM. There are no guests on the hosts, yet 34.5GB of physical memory shows as in use. The install was basically all default with no custom changes. Is it supposed to be this high, or am I missing something?

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
30 Replies
RParker
Immortal

No, it should be under a gig with NO VMs running. Did you install any 3rd-party apps on that host? Is the firmware up to date on that box?

0 Kudos
RParker
Immortal

And by some strange coincidence, are these Dell machines?

We just ordered 2 R900s (16 cores at 2.4GHz, 96GB RAM, 8 x 146GB 2.5" drives) with Fibre Channel. We haven't installed ESX yet, but we will soon, so maybe there is a bug...

0 Kudos
bradley4681
Expert

No third-party apps have been installed; these are fresh installs and nothing else has been run on them. I've delayed deploying anything to them until I could figure out what's going on; it didn't seem right to me either. They came out of the box about a week ago. I suppose the BIOS could be updated, so I'll give that a try, but out of the 60+ installs I've done I've never seen this before. These are the first quad-cores I've done and the largest memory configuration I've worked with, which is why I'm stumped.

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
bradley4681
Expert

No, they are HP DL580 G5s, 4 x 2.4GHz.

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
bradley4681
Expert

Well, I updated to the latest BIOS and I'm still having the same issue...

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
RParker
Immortal

Just for kicks, install a 64-bit guest OS and max out the RAM inside the VM, just to see what happens...

Then reserve ALL the RAM in the VM and see what value shows up for free memory.

0 Kudos
spex
Expert

Did you reserve fixed amounts of memory for resource pools or VMs?

Regards Spex

0 Kudos
bradley4681
Expert

I have done neither yet, as these are fresh installs; once I saw the system memory usage was so high, I didn't proceed with configuring anything for production. I have only configured the networking, DNS, and license.

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
Dave_Mishchenko
Immortal

Could you post a screenshot of the output from esxtop (press m after you start it to display memory statistics)?
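If it's easier than a screenshot, esxtop can also dump the same counters to a file; a minimal sketch, assuming the batch-mode flags (-b, -n) behave on 3.5 as they do on other 3.x builds:

# On the ESX 3.5 service console:
esxtop                      # interactive; press 'm' to switch to the memory view
# Batch-mode snapshot written to a file instead of the screen:
esxtop -b -n 1 > /tmp/esxtop-mem.csv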

0 Kudos
bradley4681
Expert

done

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
RParker
Immortal

That's odd; it's like it's not recognizing the memory above 64GB, and it's reserved for the kernel... interesting.

0 Kudos
williamarrata
Expert

Starting with ESX Server 3.5 and VirtualCenter 2.5, VMware DRS applies a cap to the memory overhead of virtual machines to control the growth rate of this memory. This cap is reset to a virtual machine specific computed value after VMotion migrates the virtual machine. Afterwards, if the virtual machine monitor indicates that the virtual machine requires more overhead memory, VMware DRS raises this cap at a controlled rate (1MB per minute, by default) to grant the required memory until the virtual machine overhead memory reaches a steady-state and as long as there are sufficient resources available on the host.

For VirtualCenter 2.5, this cap is not increased to satisfy the virtual machine's steady-state demand as expected. Thus, the virtual machine operates with an overhead memory that is less than its desired size, which in turn may lead to higher observed virtual machine CPU usage and lower virtual machine performance in a VMware DRS-enabled cluster.

Diagnosing the Issue

To diagnose the issue:

1. Log in to VirtualCenter with the Virtual Infrastructure Client as an administrator.

2. Right-click your cluster in the inventory.

3. Click Edit Settings.

4. Disable VMware DRS.

5. Click OK and wait for 1 minute.

6. In the Virtual Infrastructure Client, note the virtual machine's CPU usage from the Performance tab and the virtual machine's memory overhead from the Summary tab.

7. Right-click your cluster in the inventory.

8. Click Edit Settings.

9. Re-enable VMware DRS.

10. Use VMotion to migrate a problematic virtual machine to another host.

11. Note the virtual machine CPU usage and memory overhead on the new host.

12. Disable VMware DRS on the cluster again, as noted above, and wait for 1 minute.

13. Note the virtual machine CPU usage and memory overhead on the new host.

If the CPU usage of the virtual machine increases in step 11 in comparison to step 6, and decreases back to the original state (similar to the behavior in step 6) in step 13 with an observable increase in the overhead memory, this indicates the issue discussed in this article.

You do not need to disable DRS to work around this issue.

Working around the issue

To work around this issue:

1. Log in to VirtualCenter with the Virtual Infrastructure Client as an administrator.

2. Right-click your cluster in the inventory.

3. Click Edit Settings.

4. Ensure that VMware DRS is shown as enabled. If it is not enabled, check the box to enable VMware DRS.

5. Click OK.

6. Click an ESX Server host in the Inventory.

7. Click the Configuration tab.

8. Click Advanced Settings.

9. Click the Mem option.

10. Locate the Mem.VMOverheadGrowthLimit parameter.

11. Change the value of this parameter to 5 and click OK. (A service console equivalent is sketched after this list.)

Note: By default this setting is set to -1.
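As a sketch, the same change can typically be made from the service console; the option path /Mem/VMOverheadGrowthLimit and the esxcfg-advcfg get/set flags below are assumptions based on how other ESX 3.x advanced options are exposed:

# Read the current value (expected to be -1 by default):
esxcfg-advcfg -g /Mem/VMOverheadGrowthLimit
# Set the growth limit to 5 MB, mirroring step 11 above:
esxcfg-advcfg -s 5 /Mem/VMOverheadGrowthLimit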

Verifying the workaround

To verify the setting has taken effect:

1. Log in to your ESX Server service console as root, either from an SSH session or directly from the console of the server.

2. Type less /var/log/vmkernel.

A successfully changed setting displays a message similar to the following and no further action is required:

vmkernel: 1:16:23:57.956 cpu3:1036)Config: 414: "VMOverheadGrowthLimit" = 5, Old Value: -1, (Status: 0x0)

If changing the setting was unsuccessful a message similar to the following is displayed:

vmkernel: 1:08:05:22.537 cpu2:1036)Config: 414: "VMOverheadGrowthLimit" = 0, Old Value: -1, (Status: 0x0)

Note: If you see a message changing the limit to 5 and then another changing it back to -1, the fix has not been applied successfully.
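A quick way to pull just those entries out of the log mentioned above (standard grep/tail, nothing ESX-specific):

# Show the most recent VMOverheadGrowthLimit changes recorded by the vmkernel:
grep VMOverheadGrowthLimit /var/log/vmkernel | tail -5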

If the fix is unsuccessful, try the following:

1. Create a new cluster and move the ESX Server hosts to this cluster.

2. Check to see if the fix has been implemented successfully.

To fix multiple ESX Server hosts

If this parameter needs to be changed on several hosts (or if the workaround fails for an individual host), use the following procedure instead of changing every server individually:

1. Log on to the VirtualCenter Server console as an administrator.

2. Make a backup copy of the vpxd.cfg file (typically located in C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg).

3. In the vpxd.cfg file, add the following configuration after the <vpxd> tag (the surrounding context is sketched after this list):

<cluster>
<VMOverheadGrowthLimit>5</VMOverheadGrowthLimit>
</cluster>

This configuration provides an initial growth margin, in MB, to virtual machine overhead memory. You can increase this amount to larger values if doing so further improves virtual machine performance.

4. Restart the VMware VirtualCenter Server service.

Note: When you restart the VMware VirtualCenter Server service, the new value for the overhead limit should be pushed down to all the clusters in VirtualCenter.
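For clarity, here is a rough sketch of how the edited vpxd.cfg would nest; only the <cluster> block comes from the steps above, while the <config>/<vpxd> wrapper and the comment are assumed context:

<config>
  <vpxd>
    <!-- existing vpxd settings remain unchanged -->
    <cluster>
      <VMOverheadGrowthLimit>5</VMOverheadGrowthLimit>
    </cluster>
  </vpxd>
</config>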

This issue will be addressed in a future VMware VirtualCenter update release. The workarounds will not be needed in the update release and in any subsequent releases of VirtualCenter.

Hope that helped. 🙂
0 Kudos
bradley4681
Expert

Well, there are no guests on any of these hosts yet; however, I did try the above and the system memory remains the same. The attribute always changes back to 0, which seems to be its default here, not -1. For reference, I tried all the above steps, minus the guest-specific parts...

Feb 25 12:20:11 gsccvm03 vmkernel: 0:01:14:01.858 cpu3:1048)Config: 414: "VMOverheadGrowthLimit" = 0, Old Value: 5, (Status: 0x0)

Feb 25 12:24:12 gsccvm03 vmkernel: 0:01:18:02.640 cpu3:1048)Config: 414: "VMOverheadGrowthLimit" = 5, Old Value: 0, (Status: 0x0)

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
Funtoosh
Enthusiast

What you have described is covered in the following KB article, and I don't feel it has anything to do with the current problem being discussed.

0 Kudos
bradley4681
Expert

This is definitely odd, since it's occurring across 3 identically configured machines. Support only got back to me at the expected initial commit time to tell me that they are working on my ticket.

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
jhanekom
Virtuoso

If I may hazard a guess... the amount of free memory is suspiciously close to 64GB. Memory above that is reported as allocated to the vmkernel. The maximum amount of memory addressable by a 32-bit process is 64GB (using PAE).

Suppose that portions of the vmkernel (currently reported to be 32-bit) have been updated to allow use of memory beyond 64GB (either by porting some code to 64-bit or by some other trickery using VT or similar features). Also suppose that not all the code (in this case, esxtop) has been updated yet to deal with the additional free memory.

Pure speculation, but I think this is just an artifact of the ongoing move to a 64-bit vmkernel. I'm hazarding a guess that we'll see many more such inconsistencies in VMware and other operating systems in the next few years.

I wouldn't consider it a bug unless it actually prevents me from using all the memory. In other words, if I wasn't able to power up 4x20GB VMs, I'd be concerned.

For interest's sake, it would be worth seeing how memory utilisation changes when you power up a VM. Does the amount of memory utilised increase? Does available memory decrease? If nothing else, this will make for some interesting reporting challenges.

0 Kudos
bradley4681
Expert

Yes, I was also thinking something along those lines. I know that when upgrading from 2.0 to 2.5 there were a few display issues in VC. The amount of memory in use increases as you power up guests. I'm building a 64-bit Windows template now to test what happens if I power on 3 guests, each with 20GB. I don't plan on having guests with that much memory, but I do intend to use more than 64GB in production, so I want to see if anything weird happens when what's displayed is technically above what's actually there, i.e. 3 x 20GB + the 34GB shown in use by the system = 94GB.

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
bradley4681
Expert

Solved by support: it was a setting in the vmkernel that needed to be unchecked, followed by a reboot of the server.

Cheers! If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
0 Kudos
williamarrata
Expert

Can you let me know exactly what they changed and where?

Hope that helped. 🙂
0 Kudos