We now have two ESX4 clusters in our environment: one with 5 hosts, one with 4. As we add VMs to them, I've noticed that shared memory usage is very low. In fact, it's tiny compared to the memory sharing I'm seeing on our ESX 3.5 hosts. We are running HP G6 servers, fully patched with the latest fix for the Nehalem processor issue.
Is there a configuration change or some optimization that can be done for this? We really seem to be losing memory-sharing efficiency when we move a VM to an ESX4 cluster.
If you are using HP G6 servers, you are using Nehalem CPUs. See http://kb.vmware.com/kb/1014019 for details.
JAMES WOOD | SYSTEMS ADMINISTRATOR | ARIZONA DEPARTMENT OF TRANSPORTATION
Phoenix, Arizona
You're correct; that's why I mentioned that we are running G6s, since they are Nehalem procs, and we are patched with the referenced KB patch to correct this issue. I'm just wondering if there is another lingering issue with ESX4 (whether related to Nehalem or not) that might be causing it to not share memory as effectively as 3.5 did (or at all).
We do not have Mem.AllocGuestLargePage set to 0 on any hosts.
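For anyone else checking this: the setting can be read (and, for testing, changed) per host with the standard esxcfg-advcfg tool on the service console. A sketch; verify the option path against your ESX build before relying on it:

```shell
# Check the current value (1 = large-page backing enabled, the ESX4 default)
esxcfg-advcfg -g /Mem/AllocGuestLargePage

# For testing only: disable large-page backing so transparent page sharing
# behaves more like ESX 3.5. VMs must be power-cycled or vMotioned off and
# back before they are re-backed with small pages.
esxcfg-advcfg -s 0 /Mem/AllocGuestLargePage
```

Disabling large pages trades some TLB/performance benefit for higher sharing, so it is usually only worth doing to confirm that large pages are what's suppressing your shared-memory numbers.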
Okay, I see... I wish I could be of more assistance. The shared memory issues I had went away with the patches.
So by default, VMs on Nehalem procs will use large pages until the host is forced to split them into small pages.
Once the pages are split, memory sharing will start to work as it finds matches.
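To make the mechanism concrete: once 2 MB large pages are broken into 4 KB small pages, the hypervisor can find identical pages by hashing their contents and collapsing duplicates to a single copy. A toy sketch of that idea in Python (an illustration only, not VMware's implementation):

```python
import hashlib

def share_pages(pages):
    """Collapse identical pages to one copy, keyed by content hash."""
    shared = {}    # content hash -> single canonical copy of the page
    mapping = []   # per-page reference into the shared store
    for page in pages:
        h = hashlib.sha1(page).hexdigest()
        if h not in shared:
            shared[h] = page   # first time we've seen this content
        mapping.append(h)      # every duplicate just points at it
    return shared, mapping

# Four 4 KiB pages; two are identical zero-filled pages.
pages = [b"\x00" * 4096, b"A" * 4096, b"\x00" * 4096, b"B" * 4096]
shared, mapping = share_pages(pages)
print(len(pages) - len(shared))   # pages saved by sharing: 1
```

With 2 MB pages, two pages only match if all 2 MB are identical, which almost never happens, so sharing stays near zero until the pages are split.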
So in theory, as these hosts get more VMs, they will begin to share more. Do you know of any documentation on this? I'm not seeing anything on VMware's site.
I had this problem and reported it to VMware support. Support told me that this is normal. I ran a test: I created VMs with large memory settings, and when memory use approached the limit of the host, it disabled large pages and shared memory started going up.