VMware Cloud Community

Host Capacity


I am trying to determine the best way to balance VM allocation without running short on host memory. My setup is 3 blades w/ 65GB of memory each.

Currently there are 56 VMs (Linux guests) using this memory. VMs are configured with memory from 1GB to 7GB, depending on their needs.

Since my max memory amount is 191GB (as seen from the Summary tab at the cluster level), I am operating under the theory that I should only allocate a max of 126GB of memory to VMs, leaving a 65GB cushion in the event that I lose a blade or when I need to perform blade maintenance (upgrades/patches/etc.).
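The cushion math above can be sketched quickly (numbers taken from this post; the cluster Summary total of 191GB is a bit below 3 × 65GB because of per-host overhead):

```shell
# N+1 allocation budget: how much memory can be allocated to VMs
# while still tolerating the loss of one blade.
CLUSTER_GB=191       # total as reported on the cluster Summary tab
PER_HOST_GB=65       # capacity of a single blade

N1_BUDGET=$(( CLUSTER_GB - PER_HOST_GB ))
echo "N+1 allocation budget: ${N1_BUDGET}GB"   # 126GB, matching the figure above
```

The same formula generalizes to reserving capacity for more than one host failure by subtracting `PER_HOST_GB` once per tolerated failure.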

Allocation on each blade is currently:

blade1: 45GB of 65GB used - 20GB free

blade2: 40GB of 65GB used - 25GB free

blade3: 45GB of 65GB used - 20GB free

So, staying at this level will allow me to put a blade in maintenance mode and successfully migrate all VMs on that blade over to the other 2 active blades.

On the downside, I am leaving a lot of memory unused. Is this just a tradeoff I have to make, or am I missing something?

I understand that there are memory overcommit options and such, but I am not in a position to trial-and-error with those when the result may be performance hits on the guest VMs.

These Linux guests run JBoss instances, so the memory allocated to each VM is based on how many instances of JBoss will run and their heap sizes.

So a guest with 2 JVMs @ 1GB each will be allocated 2.5GB of VM memory (500MB for the OS)...
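That sizing rule can be written down as a one-liner (a sketch; the 512MB OS overhead is the figure from this post, not a universal constant):

```shell
# VM memory = (JVM count * heap per JVM) + OS overhead
JVM_COUNT=2
HEAP_MB=1024          # 1GB heap per JVM
OS_OVERHEAD_MB=512    # ~500MB for the guest OS, per the post

VM_MEM_MB=$(( JVM_COUNT * HEAP_MB + OS_OVERHEAD_MB ))
echo "Allocate ${VM_MEM_MB}MB"   # 2560MB, i.e. 2.5GB
```

Note that JVM processes also use native (non-heap) memory, so treating heap + a fixed OS cushion as the full footprint is a lower bound rather than an exact figure.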

I have read through the Memory resource guide and I am still not clear....

Any insight is appreciated. Thanks!

1 Reply

> Is this just a tradeoff I have to make or am I missing something?

This is a tradeoff you have to consider, as you obviously can't magically transfer the 65GB lost to a blade failure over to the other 2 hosts.

What you can do to maximize memory efficiency is set Mem.AllocGuestLargePage to 0, which forces ESX to back guest memory with small pages as opposed to large pages; large pages almost nullify TPS (transparent page sharing). Normally, ESX breaks down large pages only when host memory usage reaches its limit, and it then takes some time again for TPS to kick in.

This option is said to carry a small performance penalty, though (which we don't see for our workloads). You need to VMotion VMs off and back again (or power them off/on) for the setting to become effective.
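On classic ESX 4.x, the advanced option mentioned above can be changed from the service console with `esxcfg-advcfg` (a sketch; verify the exact option path and behavior on your build, or set it via the vSphere Client under Advanced Settings > Mem):

```shell
# Disable large-page backing for guest memory so TPS can work immediately
esxcfg-advcfg -s 0 /Mem/AllocGuestLargePage

# Read the value back to confirm the change
esxcfg-advcfg -g /Mem/AllocGuestLargePage
```

Remember this is a per-host setting, so it needs to be applied on each blade in the cluster, and existing VMs need a VMotion round-trip (or a power cycle) before it applies to them.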

TPS gains efficiency the more VMs run on a host, so you probably wouldn't really need to reserve the full 65GB of memory on your hosts.

If you overcommit and have VMs that are less important, you could assign them lower memory shares through resource pools, so they take the swapping penalty first.

Also, are you using ESX 4.1 and the new memory compression yet?

-- http://alpacapowered.wordpress.com