I've recently heard an argument to the effect that it doesn't matter if you overallocate guest memory: as long as it isn't reserved, the VM won't actually use it, so it should have minimal effect on the environment as a whole. I didn't have an immediate response, so I'm trying to form one now. I see how one might come to that conclusion, but it doesn't sit right with me. I've always attempted to right-size VMs as a best practice and only increase resources when there is a justifiable, demonstrated need. I have to believe there is a more significant downside than I'm able to articulate.
I just don't want to overstate my position on right-sizing memory, so I'm looking for real-world bullet points against overallocating guest memory to present in the argument (or a pro-overallocation opinion, if someone feels that way).
Overallocating either CPU or memory increases the amount of memory overhead that the VMkernel needs to run the VM. So with 4 GB of memory configured for your VM, the overhead will be lower than with 8 GB.
See these examples for a VM with 4 GB and one with 8 GB.
These extra 30 MB of memory might not mean much on their own, but over-provision 100 VMs like this and you lose roughly 3 GB of memory to overhead! Also, suppose that for whatever reason a few of these VMs start to burst memory. They use up all of their configured memory, which all together is more than you have in your host or cluster. Before you know it, you'll be ballooning, compressing, or even swapping memory, and that is a bad thing!
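To put a number on the overhead argument, here is a minimal sketch of the arithmetic above. The 30 MB delta and the 100-VM count come straight from the example; nothing here is a VMware-published formula.

```python
# Illustrative arithmetic only: values follow the forum example above,
# not an official VMware overhead calculation.
overhead_delta_mb = 30   # extra per-VM overhead when going from 4 GB to 8 GB
vm_count = 100           # number of similarly over-provisioned VMs

wasted_mb = overhead_delta_mb * vm_count
print(f"Memory lost to overhead: {wasted_mb} MB (~{wasted_mb / 1024:.1f} GB)")
```

The point is that a delta that looks trivial per VM compounds linearly across the fleet, and that memory is consumed whether or not the guests ever touch their configured allocation.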
Your thinking here matches what I would do. Keeping the two main reasons above in mind, I would definitely not overallocate memory if it isn't needed.
Yup. Additional overhead was the one I had thought of, but this is much better detail than what was in my head. Thanks. If anyone can think of any other potential downsides, I'd be interested in hearing about intangible supportability or capacity-planning complications that I may not be considering.
By the way, what is the calculation used to determine "memory overhead"? I couldn't tell from the example provided. The suggested increases in my current environment would produce even greater deltas.
I don't think VMware has released an exact formula for this. It's determined by the number of vCPUs and the amount of memory allocated to the VM. Also, a bit of memory overprovisioning isn't necessarily a bad thing (when it's actually needed), but you need to watch and monitor this type of configuration more closely to avoid things like ballooning, compression, and swapping, and the last one (swapping) is really bad for performance.
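To make the "it's a function of vCPUs and configured memory" point concrete, here is a hypothetical model of how such an overhead estimate could be structured. The coefficients are made up for illustration; actual per-VM overhead values come from VMware's documentation, not from this formula.

```python
# Hypothetical model only: base_mb, per_vcpu_mb, and per_gb_mb are invented
# illustrative coefficients, NOT VMware's real numbers.
def estimated_overhead_mb(vcpus: int, mem_gb: int,
                          base_mb: float = 100.0,
                          per_vcpu_mb: float = 30.0,
                          per_gb_mb: float = 8.0) -> float:
    """Rough per-VM overhead estimate: grows with vCPUs and configured RAM."""
    return base_mb + vcpus * per_vcpu_mb + mem_gb * per_gb_mb

# Overhead rises with configured memory even if the guest never uses it:
print(estimated_overhead_mb(vcpus=2, mem_gb=4))
print(estimated_overhead_mb(vcpus=2, mem_gb=8))
```

Whatever the real coefficients are, the shape is the same: doubling configured memory raises the VMkernel's fixed cost for that VM regardless of guest demand.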
Agreed. Each VM should have what it needs. It was the suggestion that any amount over that is irrelevant which I wanted to address. The potential for unplanned bursts is the one I see as most critical. Memory leaks, bugs, poorly written applications, boot storms, etc. could quickly overwhelm an environment with uncontrolled overallocation.
Overcommitting memory always creates overhead, as well as extra work scheduling memory resources. In many environments, for example a host with 8 GB of physical memory, the total configured guest memory is more than the system actually has. In those cases, swap activity increases, which degrades performance.
If we know what each application requires to run, it's better to stay within that limit, since the more we overcommit, the more swap activity will increase.
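The overcommit risk described in the last two posts can be sketched as a simple capacity check: compare total configured guest memory against host physical memory. The host size and per-VM allocations below are hypothetical examples, not from any real inventory.

```python
# Minimal sketch, assuming a hypothetical host inventory: flag a host whose
# total *configured* (not consumed) guest memory exceeds its physical RAM.
host_physical_gb = 64
vm_configured_gb = [8, 8, 16, 16, 24]  # configured memory per VM

total_configured_gb = sum(vm_configured_gb)
overcommit_ratio = total_configured_gb / host_physical_gb

print(f"Configured: {total_configured_gb} GB on a {host_physical_gb} GB host "
      f"(ratio {overcommit_ratio:.2f})")
if overcommit_ratio > 1.0:
    print("Overcommitted: a simultaneous burst could force ballooning, "
          "compression, or swapping")
```

A ratio above 1.0 isn't automatically fatal, but it marks the point where worst-case simultaneous demand can no longer be satisfied from physical RAM.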