VMware Cloud Community
ukp732
Contributor

VMware ESXi 6.7 VMkernel memory usage

Hi,

I have seen this topic brought up in various posts online, but have never seen a real answer on how to address it.  In older versions of ESXi the System Resource Reservation was apparently tunable, but newer versions removed that capability.  A 16GB reservation might be fine for large memory configurations, but it isn't ideal for smaller lab environments.

I have a home lab with 3 small servers running VMware ESXi 6.7.0, build 15160138.  Each server has 32GB of memory.  With 0 VMs running on a given host, ~17GB of memory is reserved by the VMkernel, basically about 50% of the memory.

I have a 6.0 installation on a host with numerous VMs running, and its VMkernel has only ~3GB of memory reserved.

Here's what vCenter shows for "System Resource Reservation" on a host:

pastedImage_0.png

Here's what the vCenter performance charts show for VMkernel consumed memory on a host with 0 VMs running.  It stays flatlined around 16GB:

pastedImage_1.png

Here is what "esxtop" shows on the host.  You can see vmk using ~16GB of memory with 0 VMs running:

pastedImage_2.png

So I'm trying to figure out why ESXi 6.7 is reserving so much memory for the VMkernel in this case.  I thought these small servers would be good for a home lab, but they aren't so cost effective if 50% of the memory is lost to the VMkernel.

ESXi 6.0 uses less than 3GB of memory for the VMkernel on another installation I have with numerous VMs running.

The major difference between the ESXi 6.7 and 6.0 installations in my case is that vSAN is being used on the ESXi 6.7 cluster, whereas the 6.0 cluster uses a shared FC VMFS datastore.

I would like to understand whether this is simply a hard-coded setting in ESXi 6.7, so that every host reserves a minimum of 16GB of memory, or whether ESXi is calculating the reservation incorrectly for some reason.  Then, obviously, I want to know if there is a way to reduce the VMkernel resource reservation to something more reasonable like 3GB.

I have seen numerous posts on this with no real answer.

Thanks,

Steve....


3 Replies
scott28tt
VMware Employee

From what I’ve read it’s vSAN: VMware Knowledge Base


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
ccopol
Contributor

Hi Steve,

I don't have all the information to tell if this is exactly your issue, but my guess is that it's related to your 6.7 vSAN configuration.

Did you have a chance to take a look at this KB? VMware Knowledge Base

Based on it, calculating the amount of memory required by the vSAN VMkernel requires the following information:

  • Is it a hybrid or all-flash configuration?
  • How many disk groups per host have been created?
  • Is the number of disk groups per host homogeneous?
  • How many capacity disks per host?
  • What are the sizes of the disks (cache AND capacity)?
  • Are deduplication/encryption activated?

With that information we should be able to get a good estimate of the vSAN memory overhead, which might be the reason your hosts are consuming so much memory.
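To make the KB formula concrete, here is a small sketch of the estimate in Python for an all-flash configuration. The constant values (fixed host footprint, per-disk-group footprint, 0.5% scalable fraction, per-GB cache cost, per-capacity-disk cost) are taken as I read them from the KB for this ESXi version; treat them as assumptions and check them against the KB for your build.

```python
# Rough sketch of the vSAN VMkernel memory-overhead estimate (all-flash).
# Constants are assumed from the KB for this ESXi version; verify for your build.

HOST_FOOTPRINT_MB = 7100     # fixed per-host vSAN overhead
DISKGROUP_FIXED_MB = 1360    # fixed cost per disk group (all-flash)
SCALABLE_FRACTION = 0.005    # 0.5% of host memory, per disk group
CACHE_MB_PER_GB = 20         # per GB of cache device (all-flash)
CAPACITY_DISK_MB = 160       # per capacity disk (all-flash)

def vsan_footprint_mb(host_mem_gb, num_disk_groups, cache_gb, capacity_disks):
    """Estimate vSAN VMkernel memory overhead in MB for an all-flash host."""
    per_group = (DISKGROUP_FIXED_MB
                 + SCALABLE_FRACTION * host_mem_gb * 1024
                 + cache_gb * CACHE_MB_PER_GB
                 + capacity_disks * CAPACITY_DISK_MB)
    return HOST_FOOTPRINT_MB + num_disk_groups * per_group
```

For example, a 32GB host with one disk group, a 500GB cache device, and one capacity disk comes out to roughly 18GB of overhead, which matches the behaviour described in the question.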

Cheers,

Cedric

ukp732
Contributor

Hi,

Thanks for the responses.  I suspected it might be vSAN, but I couldn't find anything that broke the VMkernel resource reservation down in any detail.

My servers have 1 x 500GB NVMe M.2 drive used as the vSAN cache drive and 1 x 2TB SATA SSD as the capacity drive.  I based my configuration on an example I found, and wasn't aware of the impact of vSAN on host memory.  That example built the home server with a 1TB capacity drive and a 128GB cache drive.  Now that I know about the vSAN calculations, that configuration would still use ~10GB of memory, which in my opinion is still too much for a 32GB host.

Each host has 1 disk group containing 1 x 500GB cache drive and 1 x 2TB capacity drive.  I knew about the 10% cache-to-capacity guideline for vSAN, and thought, why not go larger for the cache drive?  I should have stuck with the 10% ratio and used a 256GB cache drive, which would have saved me ~5GB of memory.

Based on the KB article, this makes sense:

vSANFootprint = 7100MB HOST_FOOTPRINT + (1 NumDiskGroups * 11684MB DiskGroupFootprint) = ~18GB

DiskGroupFootprint = 1360MB ALL_FLASH_FIXED + (0.5% DISKGROUP_SCALABLE_FOOTPRINT * 32GB host memory) + (500GB cache size * 20MB CACHE_DISK_FOOTPRINT) + (1 NumCapacityDisks * 160MB ALL_FLASH)

DiskGroupFootprint = 1360MB + 164MB + 10000MB + 160MB = 11684MB
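The arithmetic can be sanity-checked with a few lines of Python, using the KB constants quoted above (assumed values for this ESXi version), and the savings from a hypothetical 256GB cache drive fall out of the same formula:

```python
GB = 1024  # MB per GB

HOST_FOOTPRINT = 7100  # MB, fixed per-host vSAN overhead (assumed KB value)

def disk_group_mb(cache_gb):
    # fixed all-flash cost + 0.5% of 32GB host memory
    # + 20MB per GB of cache + 160MB for the single capacity disk
    return 1360 + 0.005 * 32 * GB + cache_gb * 20 + 1 * 160

total = HOST_FOOTPRINT + disk_group_mb(500)    # current 500GB cache drive
smaller = HOST_FOOTPRINT + disk_group_mb(256)  # hypothetical 256GB cache drive

print(round(total / GB, 1))              # 18.3 (GB total overhead)
print(round((total - smaller) / GB, 1))  # 4.8 (GB saved by the smaller cache)
```

Since each GB of cache device costs ~20MB of host memory, going from 500GB to 256GB of cache saves (500 - 256) * 20MB, roughly 4.8GB, which lines up with the ~5GB estimate above.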

So I don't know how useful vSAN is for a small home lab if it is going to cause 20-30% of the memory on each host to be consumed.  I would rather have that memory available for VMs.

Given this information, I guess I will have to decide whether to use a smaller NVMe cache drive to reduce the memory footprint of vSAN, or to eliminate vSAN altogether.  Eliminating it is an option, since most of my VMs are clustered and I could run the members on different hosts to handle host fault tolerance.  I just can't see 10-18GB of memory allocated to vSAN on 32GB hosts being practical.

Live and learn, I guess.  Back to the drawing board.

Steve...