I am evaluating Veeam ONE, and it raised a warning that a VM and a host were swapping.
After logging into the host over SSH and starting esxtop, I got the following information:
12:30:26pm up 17 days 1:13, 630 worlds, 7 VMs, 17 vCPUs; MEM overcommit avg: 0.00, 0.00, 0.00
PMEM /MB: 262109 total: 3238 vmk,117338 other, 141532 free
VMKMEM/MB: 261723 managed: 3231 minfree, 9892 rsvd, 251831 ursvd, high state
NUMA /MB: 131035 (106123), 131071 (35025)
PSHARE/MB: 3676 shared, 130 common: 3546 saving
SWAP /MB: 97 curr, 4 rclmtgt: 0.00 r/s, 0.00 w/s
ZIP /MB: 120 zipped, 78 saved
MEMCTL/MB: 3906 curr, 3906 target, 104671 max
GID NAME MEMSZ GRANT SZTGT TCHD TCHD_W MCTL? MCTLSZ MCTLTGT MCTLMAX SWCUR SWTGT SWR/s SWW/s LLSWR/s LLSWW/s OVHDUW OVHD OVHDMAX
4412749 pevwsas1 131072.00 92154.00 101857.09 18350.08 18350.08 Y 0.00 0.00 83980.53 0.00 0.00 0.00 0.00 0.00 0.00 12.24 485.51 882.42
49839 pevwinfdb1 16384.00 16382.98 16485.60 1146.88 1146.88 Y 0.00 0.00 10649.31 0.00 0.00 0.00 0.00 0.00 0.00 10.89 99.76 129.80
7648063 pevwtvdb1 6144.00 2165.61 2118.30 1167.36 675.84 Y 3906.20 3906.20 3993.31 0.00 0.00 0.00 0.00 0.00 0.00 10.58 68.02 64.64
48041 pevwzarmta2 6144.00 5731.09 2721.25 0.00 0.00 Y 0.00 0.00 3880.12 0.00 0.00 0.00 0.00 0.00 0.00 6.64 53.93 60.19
884841 pevwdoor2 2048.00 2048.00 2087.21 225.28 204.80 Y 0.00 0.00 1228.36 0.00 0.00 0.00 0.00 0.00 0.00 10.12 36.95 34.89
6772752 pecloud2 1024.00 1024.00 1049.29 174.08 112.64 Y 0.00 0.00 646.39 0.00 0.00 0.00 0.00 0.00 0.00 6.09 25.33 24.34
6646 pevwvma1 512.00 396.00 426.58 51.20 35.84 Y 0.00 0.00 293.00 0.00 0.00 0.00 0.00 0.00 0.00 6.07 18.71 20.60
I am no expert at esxtop, but from some KBs I gather that the output above shows the host has 141 GB of free memory (141532 free).
That makes sense: the box has 256 GB installed and is only about half committed, judging by the combined memory sizes of the VMs.
I can also see the host has swapped some memory, 97 MB currently, but it is not reading or writing it at this time (0.00 r/s and 0.00 w/s).
More alarming to me is that the hypervisor is forcing one of the VMs (pevwtvdb1) to balloon, and the balloon has grown to almost 4 GB (3906.20 MB MCTLSZ).
Why is it doing this ?
For the VM in question, open the vSphere Client and navigate to:
Edit Settings > Resources > Memory and check the 'Unlimited' box.
Note: the above setting is the default and is considered best practice. It allows the VM to use all of the memory assigned to it; despite the name 'Unlimited', the VM will never take more than its assigned memory. Also make the change on your templates (convert to VM and check), as they are a common source of this issue.
For just one VM, the GUI technique is fine. However, to fix this at scale you can use the API. Here's the PowerCLI way:
#List VMs with a memory limit configured (MemLimitMB is -1 when unlimited):
Get-VM | Get-VMResourceConfiguration | Where-Object {$_.MemLimitMB -ne -1} | Select-Object VM,MemLimitMB
#Remove the memory limit from all VMs that have one:
Get-VM | Get-VMResourceConfiguration | Where-Object {$_.MemLimitMB -ne -1} | Set-VMResourceConfiguration -MemLimitMB $null
#Remove the memory limit from a single VM:
Get-VM myVM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemLimitMB $null
PS - The same concept applies to CpuLimitMhz so consider reviewing that as well.
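Since only the memory variant is shown above, here is a sketch of the CPU equivalent. It assumes the same PowerCLI cmdlets (`Get-VMResourceConfiguration` / `Set-VMResourceConfiguration`), which also expose a CpuLimitMhz property; verify against your PowerCLI version before running it in bulk.

```powershell
# List VMs with a CPU limit configured (CpuLimitMhz is -1 when unlimited):
Get-VM | Get-VMResourceConfiguration |
    Where-Object {$_.CpuLimitMhz -ne -1} |
    Select-Object VM,CpuLimitMhz

# Remove the CPU limit from all VMs that have one ($null clears the limit):
Get-VM | Get-VMResourceConfiguration |
    Where-Object {$_.CpuLimitMhz -ne -1} |
    Set-VMResourceConfiguration -CpuLimitMhz $null
```

As with memory limits, run the listing command first so you know which VMs will be changed before clearing anything.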
Hi!
What is the H/W version running on your VM? Have you got current version of VMTools running on the VM?
Hi there,
Could you please update VMware Tools and the virtual hardware version to the latest available for your ESXi build.
Thanks
Spot on, Grashopper, thank you!
I actually found that solution after taking a break from looking at this issue all day.
Trying to wrap my head around it just made me run in circles.
When I took a moment to relax it hit me: the hypervisor is not claiming the memory because it needs it, it's taking it because it is ordered to do so...
Then I started looking at why it would have such orders and found the 'Unlimited' tick box.
I should have reported back here, but I was too eager to start my weekend.
Thanks again Grashopper and the rest of this great community!