My setup:
3 HP DL360 G6s
2 quad-core Intel X5570s in each
64GB RAM in each
28 VMs all running Windows 2008 x64
In vCenter, looking at the RAM on ALL of the VMs, I notice that the Host Memory (Memory Consumed) is almost the same as the actual allocated VM RAM. The Active RAM in each VM is very low, as is the Guest Memory %.
Any idea why the Consumed Memory is almost matching the memory that the VM has assigned to it?
Anyone? Or should I open a ticket with VMware?
Can you provide a screenshot of
Host Resource Allocation -> View -> Memory
and
Guest/VM Resource Allocation?
There is nothing wrong with what the stats are reporting. Windows usually zeroes all pages during boot-up, and hence the memory consumed is close to the RAM allocated to the VM. After boot-up, the Active RAM indicates the guest memory utilization.
Hope this helps.
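If it helps, here is a quick sketch of my own (not anything official) that pulls those numbers per VM with pyVmomi, so you can eyeball allocated vs. consumed vs. active outside the vSphere Client. The vCenter hostname and credentials are placeholders.

# Minimal sketch: list allocated / consumed / active memory for every VM.
# Hostname, user and password below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    qs = vm.summary.quickStats
    allocated = vm.summary.config.memorySizeMB   # configured VM RAM
    consumed  = qs.hostMemoryUsage               # host memory backing the VM (what vCenter calls Consumed)
    active    = qs.guestMemoryUsage              # what the guest is actively touching
    print(f"{vm.name}: allocated={allocated}MB consumed={consumed}MB active={active}MB")

Disconnect(si)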
I am receiving alerts about Host memory consumption, and I also would like to do some capacity planning. So, the Host consumed memory will never go down while running Server 2008 VMs?
The consumed memory will come down gradually once Page Sharing kicks in. You should observe the "Zero" and "Shared" graphs scaling upward while "Consumed" comes down on the Performance tab (Memory), provided no memory-intensive application is running inside the VM.
Also remember that Windows 2008 will actually cache everything it can into memory when the apps/services start up, provided there is enough memory.
This is different from Windows 2003, where only parts of every app were cached as they loaded (hence always using the page file even when there was obviously enough memory available).
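If you want to watch that happen over time instead of reading the graph, here is a rough sketch that queries the same mem.zero / mem.shared / mem.consumed counters the Performance tab (Memory) plots. It assumes the pyVmomi session "si" and a "vm" object like the ones in the earlier snippet.

# Rough sketch: pull the last ~5 minutes of zero/shared/consumed memory samples for one VM.
from pyVmomi import vim

content = si.RetrieveContent()
perf = content.perfManager

# Build a "group.name.rollup" -> counter id map from the host's counter catalog.
counter_ids = {
    f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
    for c in perf.perfCounter
}
wanted = ["mem.zero.average", "mem.shared.average", "mem.consumed.average"]

metric_ids = [vim.PerformanceManager.MetricId(counterId=counter_ids[name], instance="")
              for name in wanted]
spec = vim.PerformanceManager.QuerySpec(entity=vm,
                                        metricId=metric_ids,
                                        intervalId=20,   # 20-second "real-time" samples
                                        maxSample=15)    # last ~5 minutes

for entity_metric in perf.QueryPerf(querySpec=[spec]):
    for series in entity_metric.value:
        name = [n for n, cid in counter_ids.items() if cid == series.id.counterId][0]
        print(name, [v / 1024 for v in series.value], "MB")  # values come back in KB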
All my 2008 VMs show consumed memory almost matching the memory allocated to the VMs, and that has rarely changed. Attached is a screenshot of one of my VMs: Server 2008 x64, 4 vCPUs, 16GB RAM, running MySQL. The consumed memory is almost 16GB and doesn't change very often.
Won't this hurt capacity planning if the 2008 VMs show as consuming so much Host memory? I have 2008 VMs that are relatively dormant, with just the base 2008 OS installed, and they still show Host memory consumption almost matching the allocated VM RAM.
This seems very Hyper-V-like to me.
From the graph, it can be seen that consumed memory did come down as page sharing increased.
It seems that later some application inside the VM caused the page sharing to be broken and consumed memory. It could be memory cached by a running application even though the pages are not active.
Here is an article that was released today talking about Nehalem and memory virtualization:
http://searchdatacenter.techtarget.com/news/article/0,289142,sid80_gci1405795,00.html
"in the chips with memory management features, ESX is freed for other
tasks. It can use larger memory pages, which boost application
performance, particularly database apps such as Oracle and SQL Server.
(If the application doesn't have to access as many memory pages, it can
perform faster.)
But if a server isn't running enough virtual machines to consume all
the system's memory -- a state often called being "undercommitted" --
and ESX is using larger memory pages, TPS won't be as effective because
there are fewer memory pages it can dedupe.... that as a server's memory gets closer to being fully committed, ESX
switches back to smaller memory pages so that TPS can dedupe more
effectively and end users can then pack more VMs into the host"
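To put some rough numbers on the "fewer pages to dedupe" point (illustrative arithmetic only, not from the article):

# How many candidate pages TPS has to scan for a 16GB VM, small vs. large pages.
vm_ram = 16 * 1024**3     # 16 GB, matching the MySQL VM in this thread
small  = 4 * 1024         # 4 KB small pages
large  = 2 * 1024**2      # 2 MB large pages used with hardware-assist MMU

print(vm_ram // small)    # 4,194,304 pages TPS could potentially share
print(vm_ram // large)    # 8,192 pages, and an entire 2 MB region has to be
                          # identical before it can be shared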
"ProCare circumvented the problem by following suggestions on a VMware community forum until the company issued the patch in Update 1."
I am running Update 1 and am still seeing the issue.
So, once I start packing more VMs into each Host, it will settle down? How does the high Host memory consumption false positive work for forecasting and building new VMs if it looks like most of the Host's RAM is being used?
Here is a workaround you can try. You can actually disable the virtualized MMU.
Set the CPU/MMU Virtualization setting within a particular VM (Edit Settings -> Options tab -> CPU/MMU Virtualization) to "Intel VT-x/AMD-V for instruction set virtualization and software for MMU virtualization", which changes the VM to only use hardware-based virtualization for the CPU and not the memory. After changing that setting you have to power the machine off and turn it back on (a restart won't cut the mustard), and TPS will then work as it should with smaller pages.
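For what it's worth, the same change can be scripted through the API instead of the Edit Settings dialog. This is just a sketch, assuming a pyVmomi session and "vm" object as in the earlier snippets; as far as I know the virtualExecUsage/virtualMmuUsage flags are the API-side equivalents of that dialog option, but treat that mapping as my assumption and test on a lab VM first.

# Sketch: switch one VM to hardware CPU virtualization with software MMU virtualization.
from pyVmomi import vim

flags = vim.vm.FlagInfo()
flags.virtualExecUsage = "hvOn"   # hardware (VT-x/AMD-V) for instruction-set virtualization
flags.virtualMmuUsage  = "off"    # software MMU virtualization, i.e. no EPT/RVI large pages

spec = vim.vm.ConfigSpec(flags=flags)
task = vm.ReconfigVM_Task(spec=spec)

# Remember: the VM has to be powered off and back on for this to take effect;
# a guest OS restart is not enough.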
"certain processors that have hardware-assist memory management, such as
the Nehalems, ESX can use large memory pages for machines that are
undercommitted memory-wise, which improves performance. Because of
this, TPS isn't as effective because there are fewer large memory pages
that are identical. That causes it to appear that the VMs are using
more physical memory than they actually are. But when the machine gets
closer to being fully committed or over committed memory-wise, then ESX
switches it back to smaller memory pages, and TPS works more
effectively." a quote from the author of that article
I can't tell you how to forecast for new VMs if it looks like you are running low on resources when you aren't. This is just explaining why Nehalems use more memory for better performance of the VM.
Don't worry, I'm in the same boat as you. I was asking myself the exact same questions for the past month; it just took me a lot of research and digging around. Let's hope IBM and VMware come up with something that uses all these new features without putting a strain on the box's resources.
Also, shouldn't DRS be moving VMs for you if your hosts are being overcommitted?
Yeah, DRS would, so I guess it's a good thing that it hasn't been. I have mine set to apply priority 1-4 recommendations and haven't seen an automated vMotion in a while.
It's my 4-vCPU, 16GB 2008 x64 VMs that are showing up as hogs.