Try to restart management agents on your ESXi server.
I have rebooted the host already.
I have the same problem, but mine was doing this on 6.0 as well. Let me know if you find a solution.
Interesting that you had the same issue on 6.0.
I'm guessing this is a bug in the memory usage alerting in vCenter. Turning off the alert for that particular VM is the only workaround at present.
I'm seeing something similar after upgrading to 6.5. Active memory for the guest has gone up on a lot of my VMs (various OS versions, including Linux). What is interesting is that if I vMotion to another host, it appears to drop back to its pre-6.5 levels, but then starts to creep back up.
Did you ever find out the cause for yours?
See attached. 6.5 upgrade on 2/9. Increased memory on 2/13 in an attempt to silence the alarm. vMotioned on 2/21.
(Attachment: Capture.JPG)
Someone might have already chimed in on this, but since we performed an in-place upgrade from ESXi 6.0 to 6.5, we started seeing VmMemoryUsageAlarm on quite a few VMs. While we usually don't rush to upgrade VMware Tools on the VMs, I did see that a number of VMs with version 10249 (10.0.9) were those noted with the alarm, so I upgraded them to latest version 10272 (10.1.0) and saw a dramatic reduction in vRAM utilization. This seemed most prominent in our older, lighter VMs with 2GB or less vRAM assigned. Has anyone seen anything similar in their environment?
I've attached the chart for one VM that shows "normal utilization" up to our 6.5 upgrade (03/06), then usage spiked to about 95%. I upgraded VMware Tools yesterday and usage dropped back down to "normal utilization".
(Attachment: Memory.jpg)
We are in the same boat. We migrated to vSphere 6.5 by clean installing the vCenter server and then the ESXi hosts one by one.
Some VMs with lower allocated RAM immediately started spitting out errors in the web client regarding active memory, which oscillated between 75 and 98%. One particular VM, which has a PCI device passed through to it (which in turn reserves all of the VM's allocated RAM from the start), shows an active memory counter at 100% all the time. On vSphere 6.0 we never saw this behavior: allocated and reserved memory were at 100%, but active memory was fine.
I'm seeing the same problem with VMs that are at Tools version 10249 after upgrading my hosts to 6.5. I will try to schedule a Tools update on the VMs.
I'm seeing the same issue after an upgrade from 5.5. VMware Tools has been updated to 10272, but the problem persists. It's only happening on a few VMs.
Same issue as well. vCenter was a brand new 6.5 appliance deployment. Hosts were upgraded from 6 and then rejoined to the new vCenter cluster.
Random VMs are showing high memory usage in vCenter, but the Windows/Linux guest OS shows nothing out of the ordinary.
VMware Tools is at the latest version. Host builds are at 6.5.0 build 4882521, so not the latest. vCenter is also a few patches behind (build 5178943). We have two vCenters with separate clusters using a shared PSC, and we see the fault in both DCs on either vCenter. I'd be more inclined to think this is a host issue, like the incorrect memory usage alerts that existed back on version 4.
Same behavior here.
This occurs when memory is heavily used on a VM. For example, after a full McAfee scan on a VM, the memory does not seem to be released correctly by the ESXi server. If I do not do anything, the reported memory usage only increases. If I vMotion the VM, everything returns to normal.
Has anyone opened a support incident? It's harder for me, as I speak French.
(Attachment: memory.jpg)
I just came across this KB Article for everybody's viewing pleasure.
Snippet from the KB article, so people don't have to click the link:
VM Memory Usage heuristic over-reporting on ESXi 6.5
- Virtual Machines are triggering the "Virtual machine memory usage" alarm
- The VM Memory Usage / Active performance metric is higher compared to running on ESXi 6.0 or earlier
This issue is more prevalent in:
- Virtual Machines with less configured memory
- Virtual Machines that are highly utilized for at least short periods during the day
Cause: A change in the large page promotion path in ESXi 6.5 causes the activity sampling to over-report and stay high despite decreasing workload activity.
Resolution: VMware is aware of this issue and is working to resolve it in a future release. This is a display problem only and has no performance impact on the Virtual Machine or the host it is running on. It is recommended to disable the "Virtual machine memory usage" alarm to avoid false positives. To disable the alarm definition:
- Select the vCenter object in the navigation pane of the vSphere Web Client
- Click the "Monitor" tab, then the "Issues" sub-tab
- Here, select "Alarm Definitions" and search for "Virtual machine memory usage"
- Highlight the alarm and click "Edit", then un-check "Enable this alarm"
- Click "Finish"
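The "activity sampling" the KB's cause refers to can be pictured with a toy model. ESXi estimates a guest's active memory statistically, by probing a small random sample of its pages each interval rather than tracking every page. The sketch below is purely illustrative (not ESXi's actual algorithm; page counts, sample size, and seed are made up) and shows how a healthy sampled estimate falls when the working set shrinks; the 6.5 bug behaves as if the estimate stopped falling.

```python
import random

# Toy model of statistical working-set ("active memory") sampling.
# Page counts, sample size, and seed are made up for illustration.
def estimate_active_pct(touched_pages, total_pages, samples=100, seed=1):
    """Estimate percent-active by probing a random sample of pages."""
    rng = random.Random(seed)
    probes = rng.sample(range(total_pages), samples)
    hits = sum(1 for page in probes if page in touched_pages)
    return 100.0 * hits / samples

total = 10_000
busy = set(range(2_000))   # guest recently touched 20% of its pages
idle = set(range(200))     # the same guest after going idle (2% touched)

# A healthy sampler's estimate drops as the working set shrinks; the 6.5
# bug described above is as if the earlier, higher estimate kept being
# reported after the workload quieted down.
print("busy estimate: %.0f%%" % estimate_active_pct(busy, total))
print("idle estimate: %.0f%%" % estimate_active_pct(idle, total))
```

Since the idle set here is a subset of the busy set, the sampled estimate can only fall (or stay equal) as activity drops, which is what the pre-6.5 graphs in this thread show.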
Thanks for the link to the article - it certainly has the hallmarks of the problem.
I can narrow the problem down to VMs configured for PCI passthrough, which reserve all memory.
At that point, the Active memory graph tracks the Granted memory and never falls like it does on 6.0.
I wish the KB had highlighted PCI passthrough, or rather 'Reserve all Guest Memory', in the 'more prevalent' list.
Also, the solution provided (unless I am missing something) is not as straightforward as suggested.
The alarm that is triggered is defined at the vCenter root level by default; unless it is removed there and then individually defined at child container levels, it cannot simply be removed from the one VM that is triggering it.
Please let me know if there is an easier way to remove the alarm definition from a VM other than removing it from the root parent and consequently preventing all other VMs from triggering this alarm.