So I'm looking at Task Manager on this server and noticed that Windows reports using only about 6GB of the 32GB of RAM available, while the VMware web management interface shows nearly all of the 32GB assigned to that virtual machine as in use or reserved. Just wondering if there's a way to get the host to release memory that the OS isn't using a little more actively.
Look at the cached memory: Windows will cache as much memory as possible, but leaves it available for use. You can try clearing this, but Windows just takes it back. Here is a sample discussion:
I was hoping that VMware had a little more dynamic control of the resources. You see, this VM is running on nice new spiffy NVMe drives and is pretty fast, so caching is a little pointless at this point. The time it takes to wipe the cache and reload the application I run would likely be just as long as loading it straight from disk. Maybe that's a feature for the next upgrade?
It would be nice if a memory range could be provided for each server, and VMware could use performance counters to establish peak performance or balanced overall performance of the VMs and dynamically distribute memory within the given ranges.
"It would be nice if there was a memory range that could be provided to each sever and VMware can use performance counters to establish peak performance or balanced overall performance of the VMs and dynamically distribute within given ranges."
By default, DRS uses active memory to balance resources rather than consumed, which is what you're looking at there. If you also have vROps you can make use of proactive DRS... it looks at VM utilisation over time and preemptively moves VMs when it knows they're likely to require additional resources.
Unfortunately it's just the way Windows works... if you have Linux VMs you will probably see that their consumed memory isn't as high.
The ESXi host does NOT interfere with how the guest OS uses its virtual memory or frees idle pages! The host actually has no right to perform that operation. The hypervisor only maps the available Machine Page Numbers (MPN) of the ESXi host to the VM's memory, the Physical Page Numbers (PPN), and the guest OS of the VM has the duty of mapping PPNs to Virtual Page Numbers (VPN) for OS/application usage inside that VM. For a better understanding of the relationship between physical and virtual memory, please look at my recent post on my personal blog:
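To make the two-level mapping concrete, here is a toy sketch (not real ESXi code; all page numbers are made up) showing how a guest virtual page resolves through the guest's VPN-to-PPN table and then the hypervisor's PPN-to-MPN table:

```python
# Hypervisor's table: guest "physical" page number -> host machine page number
ppn_to_mpn = {0: 512, 1: 513, 2: 900, 3: 901}

# Guest OS's table: virtual page number -> guest "physical" page number
vpn_to_ppn = {0x10: 2, 0x11: 3, 0x20: 0}

def translate(vpn):
    """Resolve a guest virtual page to a host machine page in two hops:
    the guest OS owns the first hop, the hypervisor owns the second."""
    ppn = vpn_to_ppn[vpn]   # guest OS mapping (VPN -> PPN)
    return ppn_to_mpn[ppn]  # hypervisor mapping (PPN -> MPN)

print(translate(0x10))  # -> 900
```

The point of the sketch is that the hypervisor only sees the second table; it has no idea which VPNs the guest considers idle, which is why it can't "release" memory on the guest's behalf.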
Honestly, I don't think it has to be so complicated as to get involved with the OS memory management or anything. I'm thinking of something like using hot-add CPU and memory on VMs: start at a low amount and slowly give a little more until performance really stops improving. But from what you guys are saying, VMware already has something that can optimize like that, though I'm guessing it's not in the free hypervisor package? Sorry, I don't recognize any of those acronyms.
The only time VMware gets involved with guest memory management is when the host runs low on memory; then it tries to reclaim some by inflating the balloon driver (installed with VMware Tools) in VMs that have a high amount of inactive memory pages. There isn't any other feature in any version I'm aware of that does what you want; the product isn't set up that way. Features like DRS look at active and consumed memory: when deciding how VMs are placed across hosts, it considers the active memory plus 25% of the consumed memory above active. There is a new option in 6.5 that weights consumed memory more heavily and balances on that, if that's what you need. Outside of this, it's like was mentioned before: VMware just manages guest memory by mapping it to the physical memory in the host.
Remember, active memory is probably going to be closer to what you see in the guest, but not always. For Windows guests in general, consumed will usually be close to what you've granted the VM.
You can use a tool like vRealize Operations to find the optimal size for the VM, or do what I've done: monitor the resources in both VMware and the guest, and set the minimum amount the VM needs to get the performance you want without sitting idle. Things like hot-add are nice if you do need to increase it, but if you let it grow automatically you run into issues: once VMs get over a certain size, performance drops because you've crossed the NUMA boundary, and accessing memory across processors is slower than staying local.
This is a