VMware Cloud Community
esnmb
Enthusiast

At what point should I stop adding VMs?

Hey all.

We have five blade servers, each with four dual-core CPUs and 32 GB of RAM. CPU utilization is very low, but all blades sit at around 50-60% memory utilization.

We have another blade with the same specs, but there's no physical room in the BladeCenter chassis until December, when an older ESX 2.5 blade will be retired.

So when should I say NO MORE VM requests?

16 Replies
esiebert7625
Immortal

Keep adding until the server starts smoking. :)

You may see low memory utilization, but you need to watch out for other bottlenecks: CPU, disk, and network. Typically disk will be the first bottleneck you hit. You don't mention whether you are using a SAN, but with blades you most likely are. The general rule of thumb is about 4 VMs per CPU core, and you can go up to about 8 VMs per core for lightly utilized VMs. Monitor all your resources to see if you are running into constraints that are impacting VM performance; if you are, stop adding VMs. It also helps to balance your VMs across ESX hosts so you don't group disk-intensive or network-intensive servers together.
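To put rough numbers on that rule of thumb for hardware like yours, here is a minimal back-of-envelope sketch in Python. It is purely illustrative; the 4 and 8 VMs-per-core densities are just the guideline above, not measured limits:

# Rule-of-thumb VM capacity: 4 VMs/core typical, up to 8 VMs/core for light loads.
HOSTS = 5
CORES_PER_HOST = 4 * 2   # four dual-core CPUs per blade, per the original post

def capacity(hosts: int, cores_per_host: int, vms_per_core: int) -> int:
    """Total VM count the cluster could hold at a given VMs-per-core density."""
    return hosts * cores_per_host * vms_per_core

print(capacity(HOSTS, CORES_PER_HOST, 4))   # 160 VMs at typical density
print(capacity(HOSTS, CORES_PER_HOST, 8))   # 320 VMs if every VM is lightly used

Remember this only counts CPU; in practice memory or disk I/O will usually cap you well before these numbers.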

Read through these guides and they should help you.

ESX Workload Analysis: Lessons Learned - http://download3.vmware.com/vmworld/2006/adc9398.pdf

Getting the Right Fit: VMware ESX Workload Analysis - http://download3.vmware.com/vmworld/2005/sln056.pdf

FYI: if you find this post helpful, please award points using the Helpful/Correct buttons.

-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-

Thanks, Eric

Visit my website: http://vmware-land.com

-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-

esnmb
Enthusiast

The blades are attached to fibre-attached storage, yes.

In total we are running about 75 VMs. We do have some high-utilization VMs in place, like Documentum and SAP, against my judgment, but they were there before I came onboard.

kreischl
Enthusiast

What is your Documentum environment? We are about to install that within a few months.

Thanks.

MR-T
Immortal

I would suggest you leave enough headroom across the blades to ensure you could afford to lose one and still maintain the same level of performance.

So in your case, if you've got 5 blades and 75 VMs, that's 15 per blade. If a blade goes bad, its 15 VMs have to be split between the remaining 4 hosts, which works out to 3.75 extra VMs each, taking every host from 15 to roughly 19.
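A minimal N+1 check along those lines, assuming identical hosts (Python sketch; the per-host ceiling uses the 4 VMs/core rule of thumb from earlier and ignores memory, which may well bind first):

# Can the surviving hosts absorb a failed blade's VMs?
HOSTS = 5
CORES_PER_HOST = 8                       # four dual-core CPUs
TOTAL_VMS = 75
MAX_VMS_PER_HOST = CORES_PER_HOST * 4    # rule-of-thumb ceiling: 32 VMs

normal = TOTAL_VMS / HOSTS               # 15.00 VMs per host day to day
after_failure = TOTAL_VMS / (HOSTS - 1)  # 18.75 VMs per host with one blade down

print(f"per host now: {normal:.2f}, after one failure: {after_failure:.2f}")
if after_failure > MAX_VMS_PER_HOST:
    print("No N+1 headroom left - stop taking VM requests.")
else:
    print("Cluster can still absorb a single blade failure.")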

So before filling up the cluster, keep that failure scenario in mind.

Hope this helps.

esnmb
Enthusiast

That is why I am asking since I am getting concerned at this point.

I got a quote for 64 GB of RAM for each blade; oh my god, it is crazy. I am really hoping to get that 6th blade in place...

esnmb
Enthusiast

I didn't do any of the install and am not a Documentum guy, so I am sorry to say I don't have a good answer for you.

We have a DB server, an app server, and two others...

MR-T
Immortal

Going to 64 GB of RAM might not help, as the CPUs could become constrained before then.

Adding another blade is your best bet (and more financially viable).

kreischl
Enthusiast

Do you know how big your DB server is and how often it gets hit?

I would think it is cheaper to get another blade than to purchase 64 GB per blade. :)

esnmb
Enthusiast

Yeah. IBM would be over $300K. Kingston is over $144K...

We currently have an identical blade and the licenses to support it.

esnmb
Enthusiast

The only time I see a CPU alert come in is when the indexing service kicks off for Documentum. Of course, we are currently qualifying the system, so there is no production data or users hitting it. For all I know it may all fall apart once it hits production.

If that happens we'll be going physical.

kreischl
Enthusiast

$144K will buy a lot of nice dual-socket/dual-core 3650 servers. :)

What blades are you using, and what chassis? We have the original chassis and older HS20 blades. I'm not sure if I want to stick with blades - space isn't a problem anymore.

kreischl
Enthusiast

> The only time I see a CPU alert come in is when the indexing service kicks off for Documentum. Of course, we are currently qualifying the system, so there is no production data or users hitting it. For all I know it may all fall apart once it hits production. If that happens we'll be going physical.

Thanks for your feedback.

dfgl
Hot Shot

It all comes down to your own comfort zone. If you are seeing performance issues in any of the four core resources (CPU, memory, disk, network), stop and see if you can rebalance the workloads across the hosts; if you can't improve the "mix" on each host, then stop.

Also, look at what the servers are being used for. We created an IIS farm of two load-balanced servers for internally developed apps and managed to get rid of 8 other guests (and are looking at moving other physical boxes onto this farm).

Don't forget to take into account why you are using VMware. We have two environments: one for development servers, which I am happy to ramp close to 100% since the guests are scrap and burn, and a production environment that needs failover capability for DR situations, so we only consume 50% of its resources so that we can flip between sites. A quick sketch of that 50% rule follows below.
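A minimal sketch of that 50% DR rule, assuming two identical sites that must each be able to absorb the other's full production load (Python; the utilization figures are hypothetical):

# Two-site DR check: each site must stay at or below ~50% so the surviving
# site can carry both sites' loads after a flip.
SITE_A_UTIL = 0.45   # hypothetical current utilization of site A
SITE_B_UTIL = 0.50   # hypothetical current utilization of site B

combined = SITE_A_UTIL + SITE_B_UTIL     # load on whichever site survives
if combined > 1.0:
    print(f"Over capacity after failover ({combined:.0%}) - stop adding VMs.")
else:
    print(f"Surviving site would run at {combined:.0%} - failover still fits.")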

Hope this helps - but don't let the powers that be give you grief - you are the VMware expert!!!!

esnmb
Enthusiast

We have an H chassis with LS41s. I would rather we had some nice HP DL585s or something, but the person who was here before me insisted on spending a ton of money on IBM equipment.

esnmb
Enthusiast

This is our production environment so it is critical. I am also in the process of planning DR for the company, so that may bring along some changes.

I think my best bet here is to try and push to get the old ESX 2.5 blade physically out of the chassis so I can add that 6th blade...

adolopo
Enthusiast

My preference is to allocate VM memory 1-to-1 (no overcommitment), with enough resources left on the host to handle VMs that will VMotion over if the need arises. I don't think it necessarily needs to be ironclad, just something to consider when provisioning VMs; a rough admission check in that spirit is sketched below.
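A minimal sketch of that policy, assuming 1-to-1 memory allocation and a reserved VMotion buffer (Python; the buffer size and VM sizes are hypothetical, not from this thread):

# Only place a new VM on a host if its full memory allocation fits while
# keeping a buffer free for VMs that may VMotion in.
HOST_RAM_GB = 32          # per the blade spec in the original post
VMOTION_BUFFER_GB = 8     # hypothetical reserve for incoming VMotions

def can_place(allocated_gb: float, new_vm_gb: float) -> bool:
    """True if the new VM fits without touching the VMotion buffer."""
    return allocated_gb + new_vm_gb <= HOST_RAM_GB - VMOTION_BUFFER_GB

print(can_place(allocated_gb=18.0, new_vm_gb=4.0))  # True: 22 GB <= 24 GB usable
print(can_place(allocated_gb=22.0, new_vm_gb=4.0))  # False: 26 GB > 24 GB usable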
