VMware Cloud Community
BRiley1
Enthusiast

Reservations and their effect on memory in 4.1

All:

I've been reading Frank Denneman's excellent series on oversized VMs and their impact. I then dug into the more in-depth VMware documentation on this, and I came up with a question I can't find the answer to yet. Maybe someone here knows the answer.

If I have a Windows VM with a large reservation, then since Windows zeroes all memory at boot, it's going to take the entire reservation up front. My question is: how do TPS and zero-page sharing affect that allocated memory? I understand that once a machine has been allocated its full reservation, that memory won't be ballooned out, but will these other features help reduce the impact on my physical memory?

Thanks!


Brandon

6 Replies
frankdenneman
Expert

Hi Brandon,

Thanks for the compliment.

Comments inline:

If I have a Windows VM with a large reservation, since Windows zeroes all memory at boot, it's going to take the entire reservation up front.

A good thing to know is that reserved memory does not always equal allocated memory. If a reserved page is shared between two VMs, each VM is charged only 0.5 pages of allocated machine memory, yet each VM still has a full page counted against its reservation.
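The reservation-versus-allocation accounting above can be sketched in a few lines. This is a toy back-of-the-envelope model; the variable names and numbers are mine, not from any VMware interface:

```python
# Toy sketch: two VMs each reserve one 4 KB page holding identical
# (zeroed) content. TPS collapses both onto one physical backing page.
PAGE_KB = 4
NUM_VMS = 2

# Reservation accounting is unchanged by sharing: each VM's full page
# still counts against the host's unreserved memory.
reserved_per_vm_kb = PAGE_KB

# Allocated machine memory: one shared backing page, charged half to
# each VM -- i.e. 0.5 pages (2 KB) per VM.
allocated_physical_kb = PAGE_KB
allocated_per_vm_kb = allocated_physical_kb / NUM_VMS

print(reserved_per_vm_kb, allocated_per_vm_kb)  # 4 2.0
```

The point of the sketch is simply that the two numbers diverge: sharing halves the allocated charge while the reservation stays at a full page.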

Yes, it will reduce the amount of unreserved memory in the host/cluster. The available unreserved memory is used by admission control to determine whether it can power on a virtual machine with a memory reservation; it has nothing to do with actual active memory or allocated memory.
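That admission-control decision boils down to a single comparison. A minimal sketch, assuming a hypothetical `can_power_on` helper and made-up MB figures (real admission control also accounts for per-VM virtualization overhead, modeled here as a simple parameter):

```python
def can_power_on(host_unreserved_mb, vm_reservation_mb, vm_overhead_mb=0):
    """Simplified admission-control check: the VM powers on only if its
    memory reservation (plus a small virtualization overhead) fits in
    the host's remaining unreserved memory. Active and allocated memory
    play no part in the decision."""
    return vm_reservation_mb + vm_overhead_mb <= host_unreserved_mb

# A 4 GB reservation fits in 8 GB of unreserved host memory...
print(can_power_on(8192, 4096))  # True
# ...but not when only 2 GB remains unreserved.
print(can_power_on(2048, 4096))  # False
```

Note the check never looks at how much of the reservation is actually backed by allocated or shared pages.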

My question is: how do TPS and zero-page sharing affect that allocated memory?

Normally TPS will collapse and share every identical page on the host. Unfortunately, if the virtual machine is using hardware MMU virtualization (AMD RVI or Intel EPT), the VMkernel backs its memory with large pages, and a large page (2 MB) is not shared by TPS. vSphere 4.1 does use zero-page sharing; Duncan will soon publish a very interesting post about it, so I don't want to steal his thunder.

All small pages backed by large pages have precomputed hints for TPS, so when real memory contention occurs (less than 6% of host memory free) the VMkernel might have to swap out pages. But the VMkernel follows a share-before-swap and compress-before-swap policy: ESX breaks up the large pages and, using the precomputed hints, shares all identical small pages before any of them are swapped out.
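The ordering of that policy can be sketched as a toy model. The function name, the action labels, and the default threshold are my own illustration (the ~6% free-memory figure comes from the description above, not from a documented tunable):

```python
def reclaim_actions(free_pct, threshold_pct=6.0):
    """Toy model of the share-before-swap / compress-before-swap policy:
    nothing is reclaimed while host free memory stays above the
    threshold; below it, large pages are broken up, the precomputed TPS
    hints are used to share identical small pages, remaining pages are
    compressed, and only as a last resort are pages swapped out."""
    if free_pct >= threshold_pct:
        return []
    return ["break_large_pages", "share_via_tps_hints",
            "compress", "swap_out"]

print(reclaim_actions(20.0))  # [] -- plenty of free memory, no action
print(reclaim_actions(4.0))   # sharing and compression come before swap
```

The key takeaway is the ordering: swapping sits at the end of the list, after sharing and compression have had their chance.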

Due to this policy, it is very likely you will see very little page sharing in vSphere 4.1 on systems using AMD RVI or Intel EPT while host memory is undercommitted. Only when memory pressure is high does mass page sharing occur. Contrary to popular belief, ESX's ability to reclaim memory through page sharing is not degraded; it is just postponed until free memory is really low.

I understand that once a machine has been allocated its full reservation, that memory won't be ballooned out, but will these other features help reduce the impact on my physical memory?

Correct, and that is exactly what you want. A reservation is a guarantee of physical resources, whether or not the system is experiencing contention.

Please read http://frankdenneman.nl/2009/12/impact-of-memory-reservation/ for more info about memory reservations.

Blogging: frankdenneman.nl Twitter: @frankdenneman Co-author: vSphere 4.1 HA and DRS technical Deepdive, vSphere 5x Clustering Deepdive series
BRiley1
Enthusiast

Thank you Frank.

Do you think TPS will work on those 2MB pages in a future release?

J1mbo
Virtuoso

The issue is that it is very unlikely to get a match across a full 2 MB page, hence why they are split into small pages for TPS.

As an aside, bear in mind that overcommitting physical RAM can place an enormous strain on storage by generating large amounts of 4 KB random I/O for both vSwap and ballooning. Ballooning is usually kinder because the guest OS has more visibility into the activity of its own workloads (and its own non-pageable areas). In my testing, RAM compression reduced vSwap storage load considerably (up to half, with some tweaks to the default configuration values), presumably by effectively acting as a victim buffer (more here).

HTH

frankdenneman
Expert

As J1mbo mentioned, it's very unlikely to find identical 2 MB pages to match. A large page can itself consist of small-on-large, i.e. small guest pages combined into a 2 MB block at the VMkernel level, or large-on-large, where the guest OS or application uses a large page that is backed by a large page at the VMkernel level.

I don't see how ballooning generates load on the SAN, as those pages are pinned by the guest OS and their physical counterparts are made available to other virtual machines.

Blogging: frankdenneman.nl Twitter: @frankdenneman Co-author: vSphere 4.1 HA and DRS technical Deepdive, vSphere 5x Clustering Deepdive series
J1mbo
Virtuoso

Apologies if I've misunderstood the comment, but inflating the balloon generates memory pressure within the guest, which is then serviced by the guest's own swapping mechanisms if necessary.

BRiley1
Enthusiast

Got it. I found the KB article that explains what happens to large pages when there is low memory. This explains a lot.

Transparent Page Sharing (TPS) in hardware MMU systems

One question I'm left with is whether we should manually configure the MMU to prevent it from using EPT on VMs that aren't performance-sensitive. Based on this document, EPT speeds up MMU-intensive workloads pretty significantly, while it makes little difference for workloads that aren't hammering the MMU.

If that is the case, could one consolidate more efficiently by using EPT only on workloads where performance is paramount? Or does that add more administrative overhead than it's worth? Maybe it would help if I understood how the "Automatic" setting works under the MMU Virtualization properties.

Thanks again.
