Erwin_Zavala
Contributor

Need to understand how a VM uses host memory

I have a VM that has been allocated 4096 MB of RAM. The memory profile is as follows:

Private 3.56 GB

Shared 193 MB

Swapped 0

Ballooned 0

Unaccessed 256 MB

Active 368 MB

On the host, however, the VM consumes 3.65 GB of RAM. My question is: why would a VM that has only 368 MB of active RAM consume 3.65 GB of RAM on the host? Could someone explain the interaction between guest active memory and consumed memory at the host?
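For what it's worth, the numbers in the profile roughly account for each other once you convert units. A quick arithmetic sketch (figures copied from the post above, nothing VMware-specific assumed beyond how the counters are commonly described):

```python
# Memory profile from this post, in GB (treating 1 GB = 1024 MB).
allocated  = 4096 / 1024   # 4.0 GB configured for the VM
private    = 3.56          # host pages backing this VM alone
shared     = 193 / 1024    # pages shared with other VMs via page sharing
unaccessed = 256 / 1024    # never touched by the guest, so never backed
active     = 368 / 1024    # hypervisor's estimate of recently used memory
consumed   = 3.65          # host RAM actually charged to this VM

# Everything the guest has ever touched is still backed by host RAM:
touched = private + shared
print(f"touched ≈ {touched:.2f} GB, consumed = {consumed} GB")

# Consumed sits between `private` and `private + shared` because each
# shared page is charged to the VM only fractionally.
print(private < consumed < touched)  # True for these figures
```

So the host is holding ~3.7 GB because the guest touched that much at some point (boot, caching, etc.), even though only ~0.36 GB is active right now.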

0 Kudos
3 Replies
cebomholt
Enthusiast

The difference you are seeing is the difference between what is active (currently in use) and what the VM has been granted by the host. If you were to experience memory contention on your host, the difference is what would end up being ballooned.

Someone can correct me if I'm wrong, but I believe the host won't typically back memory until the guest has touched it. A lot of the time a wide margin between active and consumed can be explained by the Windows boot process, where a large amount of memory goes active (memory diagnostic?).

I would think it's safe to say that if your active memory stays substantially lower than the granted memory, you have given the VM more memory than it needs.
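That rule of thumb could be sketched as a toy check. The 50% threshold below is an arbitrary illustration, not a VMware recommendation; what matters is comparing *peak* active against what the VM has been granted:

```python
def looks_oversized(active_gb: float, granted_gb: float,
                    threshold: float = 0.5) -> bool:
    """Rule-of-thumb check: if active memory stays well below granted
    memory (here, below `threshold` of it), the VM probably has more
    RAM configured than it needs."""
    return active_gb < threshold * granted_gb

# Figures from this thread: ~0.36 GB active vs ~3.75 GB granted.
print(looks_oversized(0.36, 3.75))  # True
```

In practice you'd want to run this against active memory sampled over days or weeks, not a single point-in-time reading, since active is only an instantaneous working-set estimate.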

Erwin_Zavala
Contributor

So the hypervisor has not deallocated the memory the guest at one time requested and is no longer using... to the tune of GBs. Seems highly inefficient... When and how is the inactive memory deallocated? In terms of Admission Control, what are the implications of having all this RAM allocated that the guest VM is not actively using? Is it fair to infer that when the gap between active and consumed memory widens, there is very little memory resource contention on the host?

0 Kudos
cebomholt
Enthusiast

The de-allocation mechanism is the ballooning driver, which won't kick in until there is memory contention on the host. In terms of implications on HA admission control, I'd highly recommend going through Duncan's deepdive on HA; he goes into good detail on AC and slot sizes...

http://www.yellow-bricks.com/vmware-high-availability-deepdiv/

Your last question encroaches on the same thing I have been struggling with lately: which is the better solution, ballooning, or allocating less memory to the VM in the beginning? So far my thoughts are that a VM that is resource-constrained from the beginning isolates impact to just that VM, whereas ballooning has the potential to impact an entire host. On the flip side, what is the actual impact of ballooning? Has it proven to be problematic when there is a large gap between active memory and granted memory? Unfortunately I haven't been able to answer that one...

Maybe someone else will chime in?
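The core idea can be sketched as a toy model (this is not ESXi's actual reclamation algorithm; it only illustrates that the idle gap between consumed and active is what ballooning can reclaim relatively painlessly):

```python
def balloon_reclaim(consumed_gb: float, active_gb: float,
                    host_pressure_gb: float) -> float:
    """Toy model of balloon reclamation. Under host memory pressure,
    the balloon driver inflates inside the guest, letting the host
    reclaim up to the idle portion (consumed - active) before the
    guest would have to start swapping its own working set."""
    idle = max(consumed_gb - active_gb, 0.0)
    reclaimed = min(idle, host_pressure_gb)
    return consumed_gb - reclaimed

# Figures from this thread: 3.65 GB consumed, ~0.36 GB active.
# Under 2 GB of host pressure, consumed could drop toward 1.65 GB.
print(balloon_reclaim(3.65, 0.36, 2.0))
```

In this simplified view, a *large* gap between active and granted memory is actually the benign case for ballooning: there is plenty of idle memory to give back before the guest feels any pain.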

0 Kudos