Hey all.
We have five blade servers, each with four dual-core CPUs and 32 GB of RAM. CPU utilization is very low, but all blades are around 50-60% memory utilized.
We have another blade with the same specs, but there is no physical room in the BladeCenter chassis until December, when an older ESX 2.5 blade will be retired.
So when should I say NO MORE VM requests?
Keep adding until the server starts smoking
You may see low memory utilization, but you need to watch out for other bottlenecks, i.e., CPU, disk, and network. Typically disk will be the first bottleneck you hit. You don't mention whether you are using a SAN, though with blades you most likely are. The general rule of thumb is about 4 VMs per CPU core; you could go up to about 8 VMs per core for lightly utilized VMs. Monitor all your resources to see if you are running into constraints that impact VM performance. If you do start running into constraints, then you should not add more VMs. It also helps to balance your VMs across ESX hosts so you do not group disk-intensive or network-intensive servers together.
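As a rough sketch of that rule of thumb applied to this thread's own hardware (five blades, four dual-core CPUs each), the math works out like this:

```python
# Rough capacity sketch using the 4-8 VMs-per-core rule of thumb above.
# The hardware numbers are the thread's own: 5 blades, 4 dual-core CPUs each.

blades = 5
sockets_per_blade = 4
cores_per_socket = 2

total_cores = blades * sockets_per_blade * cores_per_socket  # 40 cores

# Conservative bound for typical workloads vs. aggressive bound for
# lightly utilized VMs, per the rule of thumb quoted above.
conservative = total_cores * 4
aggressive = total_cores * 8

print(f"{total_cores} cores -> roughly {conservative} to {aggressive} VMs")
```

This is only a starting envelope; actual headroom depends on the disk and network monitoring described above, not just core counts.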
Read through these guides and they should help you.
ESX Workload Analysis: Lessons Learned - http://download3.vmware.com/vmworld/2006/adc9398.pdf
Getting the Right Fit: VMware ESX Workload Analysis - http://download3.vmware.com/vmworld/2005/sln056.pdf
FYI, if you find this post helpful, please award points using the Helpful/Correct buttons.
-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-
Thanks, Eric
Visit my website: http://vmware-land.com
-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-
The blades are attached to a fibre attached storage, yes.
In total we are running about 75 VMs. We do have some high-utilization VMs in place, like Documentum and SAP, against my judgment, but they were there before I came onboard.
What is your Documentum environment? We are about to install that within a few months.
thanks
I would suggest you leave enough headroom across the blades to ensure you could afford to lose one and still maintain the same level of performance.
So in your case, if you've got 5 blades and 75 VMs, that's 15 per blade. If a blade goes bad, you've got 15 VMs to redistribute, roughly 3.75 extra on each of the remaining 4 hosts.
So before filling up, consider these points.
Hope this helps.
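That N+1 headroom check can be sketched quickly; the function name here is just for illustration, using the thread's numbers of 5 blades and 75 VMs:

```python
# Sketch of the N+1 headroom arithmetic above: after losing one host,
# how many VMs does each surviving host carry?

def vms_per_host_after_failure(hosts: int, total_vms: int) -> float:
    """Average VMs per host after one host fails, assuming an even spread."""
    survivors = hosts - 1
    return total_vms / survivors

# Thread's numbers: 75 VMs on 5 blades is 15 per blade normally;
# after one blade fails, each survivor carries 15 + 3.75 = 18.75 VMs.
normal = 75 / 5
degraded = vms_per_host_after_failure(5, 75)
print(f"normal: {normal} VMs/host, after failure: {degraded} VMs/host")
```

The point of the sketch is that sizing should be done against the degraded figure, not the normal one.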
That is why I am asking since I am getting concerned at this point.
I got a quote for 64 GB of RAM for each blade; oh my god, it is crazy. I am really hoping to get that 6th blade in place...
I didn't do any of the install and am not a Documentum guy, so I am sorry to say I don't have a good answer for you.
We have a DB server, an app server, and two others...
Going to 64 GB of RAM might not help, as the CPUs could become constrained before you ever use it.
Adding another blade is your best bet (and the more financially viable one).
Do you know how big your DB server is and how often it gets hit?
I would think it is cheaper to get another blade than to purchase 64GB per blade.
Yeah. IBM would be over $300K. Kingston is over $144K...
We currently have an identical blade and the licenses to support it.
The only time I see a CPU alert come in is when the indexing service kicks off for Documentum. Of course, we are currently qualifying the system, so there is no production data and no users hitting it. For all I know it may all fall apart once it hits production.
If that happens we'll be going physical.
$144K will buy a lot of nice dual-socket/dual-core 3650 servers.
What blades are you using, and what chassis? We have the original chassis and older HS20 blades. I'm not sure if I want to stick with blades - space isn't a problem anymore.
Thanks for your feedback.
It all comes down to your own comfort zone. If you are seeing performance issues in any of the four core resources, stop and see if you can rebalance the workloads across the hosts; if you can't improve the "mix" on each host, then stop adding VMs. Also, look at what the servers are being used for: we created an IIS farm of two load-balanced servers for internally developed apps and managed to get rid of 8 other guests (and are looking at moving other physical boxes onto this farm). Don't forget to take into account why you are using VMware. We have two environments: one for development servers, which I am happy to ramp close to 100% since the guests are scrap-and-burn, and a production environment where we need failover capability for DR situations, so we only consume 50% of resources so that we can flip between sites.
Hope this helps - but don't let the powers that be give you grief - you are the VMware expert!!!!
We have an H chassis with LS41's. I would have said we have some nice HP DL585's or something, but the person who was here before me insisted on spending a ton of money on IBM equipment.
This is our production environment so it is critical. I am also in the process of planning DR for the company, so that may bring along some changes.
I think my best bet here is to try and push to get the old ESX 2.5 blade physically out of the chassis so I can add that 6th blade...
My preference is to allocate VMs with 1-to-1 memory, leaving enough resources on the host to handle sessions that will VMotion over if the need arises. I don't think it necessarily needs to be ironclad, just something to consider when provisioning sessions.
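That provisioning rule can be sketched as a simple admission check. The function name and the 25% reserve fraction below are illustrative assumptions, not VMware settings:

```python
# Hypothetical sketch of the 1-to-1 memory provisioning idea above:
# admit a new VM only if its full allocation fits in host RAM while
# holding back a reserve for VMs that may VMotion over. The 25% reserve
# is an assumed value for illustration only.

def can_place_vm(host_ram_gb: float, allocated_gb: float,
                 new_vm_gb: float, vmotion_reserve: float = 0.25) -> bool:
    """True if the new VM fits without eating into the VMotion reserve."""
    usable = host_ram_gb * (1 - vmotion_reserve)
    return allocated_gb + new_vm_gb <= usable

# A 32 GB blade with 25% held back leaves 24 GB usable:
print(can_place_vm(32, 18, 4))  # 22 GB <= 24 GB -> True
print(can_place_vm(32, 18, 8))  # 26 GB >  24 GB -> False
```

As the post says, this doesn't need to be ironclad; the reserve is just a planning margin, not a hard ESX limit.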