VMware Cloud Community
jacquesdp
Contributor

Physical Server Configurations

Hi all,

We are in the planning phase of virtualizing our infrastructure, and are currently stuck on the question of whether we should invest in a couple of really powerful machines or in a blade solution with a larger number of less powerful blades. I am sure other companies have faced this choice, and I would be grateful for any comments.

Thanks!

Jacques

36 Replies
gary1012
Expert

You'll get a lot of different opinions on this question. If you have current issues with power and with available core data center network/SAN ports, go with blades that have switch options. Most of the hardware vendors have now released blades that have either reduced or eliminated concerns surrounding I/O expansion.

Community Supported, Community Rewarded - Please consider marking questions answered and awarding points to the correct post. It helps us all.
khughes
Virtuoso

Well, I had a nice little post typed out and, yay, it errored out while posting, so round 2.

Like Gary said, it can go either way, with lots of different opinions. There are a couple of factors to weigh, like whether you're going to be using ESXi or ESX, because of the purchasing. If you're using free ESXi, then obviously having lots of blades isn't going to cost you; but if you're going to use ESX and pay for licenses, having a lot of tiny blades might not make the most sense. Also, when you think about the hardware you're going to be virtualizing, do you have any really big boxes that might eat up a lot of resources and could swamp a blade?

In the end it's all about the resources delivered, and how you go about delivering them. A VM doesn't care where it gets its resources from, as long as it gets them.

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
Ken_Cline
Champion

How large is your environment? Keep in mind that you're going to want to be able to take at least one host offline for patching / testing / failures. If you scale up (bigger boxes), you may wind up provisioning a lot of extra capacity for a small environment - and lose some flexibility. Remember that you're going to want to upgrade to the "next" version of ESX at some point in time. You'll want to do this as a rolling upgrade - again, a good reason to scale out rather than up.

Also, if you plan to use VMware HA, consider how long it will take to restart the VMs from a failed host. If you've got 20 VMs on a host, it will take "time X"; if you've got 40 VMs per host, it may take "time X*2" or longer. Same thing if you want to take a host offline for maintenance. When you put it into maintenance mode, it will begin migrating VMs to other hosts - with 20 VMs, that will take a while. With 40 VMs, it will take a long while.
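To put rough numbers on that restart/evacuation point, here is a small back-of-the-envelope Python sketch. It is not from the thread; the per-VM restart and migration times and the concurrency limit are assumptions purely for illustration.

```python
# Back-of-the-envelope model of why high consolidation ratios stretch out
# HA restarts and maintenance-mode evacuations. The timing constants and
# concurrency limit below are assumptions for illustration only.

AVG_RESTART_SEC = 90        # assumed time to power on one VM after an HA event
AVG_VMOTION_SEC = 45        # assumed time to migrate one VM off a host
CONCURRENT_VMOTIONS = 2     # assumed number of simultaneous migrations

def ha_restart_minutes(vms_per_host: int) -> float:
    """Worst-case serial restart time for every VM on a failed host."""
    return vms_per_host * AVG_RESTART_SEC / 60

def evacuation_minutes(vms_per_host: int) -> float:
    """Time to drain a host into maintenance mode with limited concurrency."""
    return vms_per_host * AVG_VMOTION_SEC / CONCURRENT_VMOTIONS / 60

for vms in (20, 40):
    print(f"{vms} VMs/host: HA restart ~{ha_restart_minutes(vms):.0f} min, "
          f"evacuation ~{evacuation_minutes(vms):.0f} min")
```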

Ken Cline, VMware vExpert 2009
Technical Director, Virtualization
Wells Landers / TVAR Solutions, A Wells Landers Group Company
VMware Communities User Moderator
Blogging at: http://KensVirtualReality.wordpress.com/
azn2kew
Champion

What are your long-term strategies for data center consolidation? If you want less rack space, lower power consumption, and the flexibility of a modular blade system, then try the new Dell PE M600 series; these are capable of running ESX 3.5 hosts with a max of 64GB RAM if you wish, and they have all the things you need to virtualize your systems. If you want more powerful, high-end rack servers, then use a PE 2950, 6950, or R900 maxed out with 256GB RAM - plenty of power for any solution.

No matter what type of server, you must have all networking, storage, security, and implementation planned out thoroughly so you aren't exposed to performance and disk I/O issues. Try to price out which type is cheaper and more reasonable, then use it; otherwise, either solution is perfectly fine. New blade systems are no longer limited in NIC/HBA expansion or CPU cores.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen, VMware vExpert 2009
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
VCP 3 & 4, VSP, VTSP, CCA, CCEA, CCNA, MCSA, EMCSE, EMCISA
jacquesdp
Contributor

Hi Gary, well we are not too concerned about that, since we will be eliminating (virtualizing) quite a few physical machines. So it sounds like, if you had the choice, you would go for a blade solution?

Thanks

Jacques

jacquesdp
Contributor

Hi Kyle,

The blades we are looking at are quad quad-core IBM machines, so there will be 16 cores available on each machine. The question really is whether we should consider buying one or two machines with even more processors available, or splitting the load across a couple of blades. With VMotion and HA it makes for a pretty 'available' solution. We are running ESXi on a few machines currently, but will move to ESX when the time comes.

Jacques

jacquesdp
Contributor

Hi Ken,

We have about 150 servers. You are making a good point that having big boxes puts all your eggs in one basket. I think that is also what I am leaning towards: to have blades, but powerful ones (16 cores each), and use VMotion between them. Just one question: is it possible for ESX on blade A to use resources on ESX on blade B (processor, memory, etc.)?

Thanks

Jacques

TomHowarth
Leadership

My personal view on blades is that they just add a level of complexity and, in the majority of cases, a reduction in resilience.

In my experience, clients who have gone for blade technology feel that they are getting more bang for the buck; however, they fail to see that by packing 8 servers into a blade chassis they are compounding a resilience risk.

For example:

A client requires 8 servers to virtualise their environment. Now, I have not yet found a client who will purchase 2 blade chassis and put 4 blades in each; they will all buy one. So what happens if your blade chassis goes south? Bang - no environment. OK, suddenly they want to buy two chassis. That is better, but again a blade chassis goes bang and half your farm is gone. But hey, that's OK, HA and DRS will sort us out.

However, now instead of having 40 VMs restarting, you have 160 VMs restarting on only four blades. So much for your N+1 strategy. Can you survive a 50% failure on your farm? No, so buy 3 or 4 chassis to minimise your risk.

My argument is this: unless you are going to have a very big farm, purchasing blades can put your company in a very precarious predicament.
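A rough sketch of the failure-domain arithmetic Tom describes (the 8-blade / 160-VM figures are his; the even-spread assumption and the helper itself are illustrative only, not anything from the thread):

```python
# A rough sketch of the chassis-failure scenario above: 8 ESX blades hosting
# ~160 VMs, spread across a varying number of blade chassis. If one chassis
# dies, its blades (and their share of the VMs) go with it.
# The 8-blade / 160-VM figures come from the example; the rest is assumed.

TOTAL_BLADES = 8
TOTAL_VMS = 160

def after_chassis_failure(num_chassis: int):
    """VM load per surviving blade after losing one whole chassis,
    assuming blades and VMs are spread evenly across chassis."""
    blades_lost = TOTAL_BLADES // num_chassis
    surviving = TOTAL_BLADES - blades_lost
    if surviving == 0:
        return 0, None          # single chassis: nothing left to restart onto
    return surviving, TOTAL_VMS / surviving

for chassis in (1, 2, 4):
    surviving, load = after_chassis_failure(chassis)
    if surviving == 0:
        print(f"{chassis} chassis: total outage, no hosts left for HA")
    else:
        print(f"{chassis} chassis: {surviving} blades left, "
              f"~{load:.0f} VMs per surviving blade")
```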

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on vSphere 4 Study Guide: Exam VCP-410
Ken_Cline
Champion

We have about 150 servers. You are making a good point that having big boxes puts all your eggs in one basket. I think that is also what I am leaning towards.

I would recommend at least four hosts. That way, in the event of a host failure, you're looking at an average of 50 VMs per host. This, of course, assumes "reasonable" workloads. The "average" loading is about four vCPUs per core, so with 16 cores, that would put you in pretty good shape (assuming you have enough RAM - general rule of thumb: 4GB/core)
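For what it's worth, here is a minimal sketch of that sizing math using the rules of thumb above (the 150-VM count and 16-core hosts come from this thread; the average vCPUs per VM is an assumed figure you would replace with your own):

```python
import math

# Rough sizing check using the rules of thumb quoted above:
# ~4 vCPUs per physical core and ~4 GB RAM per core.
# AVG_VCPUS_PER_VM is an assumed average, not a figure from the thread.

TOTAL_VMS = 150
AVG_VCPUS_PER_VM = 1.5      # assumption; adjust to your actual workload mix
VCPUS_PER_CORE = 4          # rule-of-thumb consolidation ratio
RAM_GB_PER_CORE = 4         # rule-of-thumb memory sizing
CORES_PER_HOST = 16         # quad quad-core blades, as discussed above

cores_needed = TOTAL_VMS * AVG_VCPUS_PER_VM / VCPUS_PER_CORE
hosts_needed = math.ceil(cores_needed / CORES_PER_HOST)
hosts_with_spare = hosts_needed + 1              # N+1 for maintenance / HA
ram_per_host_gb = CORES_PER_HOST * RAM_GB_PER_CORE

print(f"~{cores_needed:.0f} cores needed -> {hosts_needed} hosts "
      f"({hosts_with_spare} with an N+1 spare), "
      f"{ram_per_host_gb} GB RAM per host")
```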

To have blades, but powerful ones (16 cores each), and use VMotion between them.

Sounds like a plan.

Just one question, is it possible for ESX on blade A to use resources on ESX on blade B (processor, memory etc.)

No. An ESX box is an island unto itself. When a VM is on that host, it can use the resources of that host only. A VMotion is like getting into a boat and going to another island - once you get there, you're limited to the resources on the other island (host)...

Ken Cline, VMware vExpert 2009
Technical Director, Virtualization
Wells Landers / TVAR Solutions, A Wells Landers Group Company
VMware Communities User Moderator
Blogging at: http://KensVirtualReality.wordpress.com/
Ken_Cline
Champion

My personal view on blades is that they just add a level of complexity and, in the majority of cases, a reduction in resilience.

In my experience, clients who have gone for blade technology feel that they are getting more bang for the buck; however, they fail to see that by packing 8 servers into a blade chassis they are compounding a resilience risk.

Ah, Tom...I'm going to disagree with you on this one. A blade chassis full of blades, in most cases, is actually more reliable than a bunch of discrete servers. (I'm going to refer to the HP C-7000 chassis in this narrative, but most other vendors are comparable.)

Now you ask, "But Ken, how can that be?" - well, the chassis itself is a passive device. There are no moving parts - it's just a hunk of metal. And you say, "Yes, that's true...but what about the fact that with eight discrete servers I have 16 power supplies?" - hmm...good question! Well, you have 16 power supplies, but only two per server. You could lose capacity with the failure of only two power supplies, whereas in the blade chassis you have six power supplies and it is possible to run the whole shebang off of just one - so you would have to lose six power supplies before you lose capacity.

Basically, there is no single point of failure in a blade chassis. By removing a bunch of moving parts from the "server" and putting them into the "infrastructure", you're improving the MTBF of an individual server. You're improving your MTTR, because all you have to do is swap a blade to fix it - no plugging & unplugging cables (a major cause of outages). And you're simplifying your cable plant.

Blades of old were problematic and did have some significant issues. I like the new blades and have no problem recommending them...

My argument is this: unless you are going to have a very big farm, purchasing blades can put your company in a very precarious predicament.

I agree that you do need a "reasonable" number of VMs - if you're looking at HP blades, you could go with the c3000, which can hold 8 half-height or 4 full-height blades and get good ROI with as few as 100-150 or so VMs.

Ken Cline, VMware vExpert 2009
Technical Director, Virtualization
Wells Landers / TVAR Solutions, A Wells Landers Group Company
VMware Communities User Moderator
Blogging at: http://KensVirtualReality.wordpress.com/
gary1012
Expert

We're primarily a blade shop due to density, power, and cooling reasons. Plus we get to say we're "green-friendly." If single points of failure cannot be tolerated, then using multiple enclosures is a must. That being said, we've had good luck with the enclosures and have not had one go down. At some point you'll have to ask yourself what's good enough. As for the blade types, we use HP 480s and are considering the 495s and perhaps the Dell 905s. To my knowledge, there isn't a NUMA joined-bus blade design like the IBM x3950s, but I could be wrong. You still have software options to provide resilience and pooled resources through HA and DRS...

As for the wide or high argument, each has pros and cons.

4 processor/multi-core hosts

Pros: usually has multiple PCI buses, more I/O slots, more memory slots with memory RAID capabilities, more VMs per host, fewer hosts/licenses to manage

Cons: higher cost per unit, RAM kits above 8GB are expensive, less resilience/higher pain threshold when a single host fails

2 processor/multi-core hosts

Pros: cheaper cost per unit, hardware is more affordable and commodity-like (providing the ability to hot-spare servers), lower pain threshold when a single host fails

Cons: more hosts/licenses to manage, fewer VMs per host

I'm sure I've left something out and I'm sure some will disagree...

Community Supported, Community Rewarded - Please consider marking questions answered and awarding points to the correct post. It helps us all.
jacquesdp
Contributor

Hi Gary,

The thing is that we do not want to end up with one VM per blade. Some machines we want to virtualize already need 2 quad cores (but, having said that, they are probably overspecified by the vendor). Still, we need to provide them with their stated requirements. So I guess by getting powerful blades we will sort of be providing the best of both worlds.

Jacques

mreferre
Champion

Ken,

do you remember the good old scale-up vs. scale-out discussions? I love them .... ;)

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
gary1012
Expert

You'll get far more than one VM per blade. On a BL480c, we're getting ~10-12 VMs per blade. As for those monster apps that require 2 quad cores, you'd be better off leaving those as physical machines. If I remember right, you cannot create a VM with more than 4 vCPUs. Even if you could create an 8-way vCPU VM, your ROI argument might not be as attractive or, more than likely, won't work at all.

Community Supported, Community Rewarded - Please consider marking questions answered and awarding points to the correct post. It helps us all.
jacquesdp
Contributor

Hi Ken,

I have not really given thought to putting 50 VMs per blade. I am assuming we will need a good disk subsystem for that as well. We are planning to either upgrade our existing HP MSA1500 or replace it with an MSA2000. The reason for replacing it is that it is EOL and the U320 drives are pretty much being phased out.

Can you tell me how you would allocate disk to the blades? Do you think we should rather allocate a mirrored partition per blade, or RAID5 shared across a couple of blades?

Thanks!

Jacques

Ken_Cline
Champion

Ken,

do you remember the old good scale up Vs scale out discussions? I love them .... Smiley Wink

Massimo.

I remember them well...I wonder whatever happened to our friend vmwareman???

Ken Cline, VMware vExpert 2009
Technical Director, Virtualization
Wells Landers / TVAR Solutions, A Wells Landers Group Company
VMware Communities User Moderator
Blogging at: http://KensVirtualReality.wordpress.com/
Ken_Cline
Champion

I'm assuming you're talking storage for VMs here, rather than boot volumes.

Typically, you're going to want to create a few "larger" LUNs that will be seen by all of your ESX hosts. Plan on sizing each LUN to support on the order of 10-15 VMs, so in your case with 150 VMs, you should expect to have 10 to 20 LUNs provisioned that all of your ESX hosts can see. This will provide you with the ability to do VMotion (thus DRS) and HA. These are just general "rules of thumb" and your situation will need to be looked at individually, of course.
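A minimal sketch of that LUN-sizing rule of thumb (the 150-VM count and the 10-15 VMs-per-LUN target are from the post above; the per-VM disk footprint and headroom factor are assumptions for illustration):

```python
import math

# Sketch of the shared-LUN rule of thumb above: size VMFS LUNs for roughly
# 10-15 VMs each, visible to every ESX host so VMotion / DRS / HA work.
# The per-VM disk footprint and headroom factor are assumptions.

TOTAL_VMS = 150
VMS_PER_LUN = 12            # middle of the 10-15 range suggested above
AVG_VM_DISK_GB = 40         # assumed average footprint per VM
HEADROOM = 1.25             # assumed allowance for snapshots, swap, growth

lun_count = math.ceil(TOTAL_VMS / VMS_PER_LUN)
lun_size_gb = VMS_PER_LUN * AVG_VM_DISK_GB * HEADROOM

print(f"{lun_count} shared LUNs of ~{lun_size_gb:.0f} GB each")
```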

And yes, you will need some pretty decent storage to back 150 VMs spread across an assortment of hosts. I find that storage is the area (if any) where most people get into trouble with under-provisioning. Everyone's concerned about CPU, which is usually the least problematic...

Ken Cline, VMware vExpert 2009
Technical Director, Virtualization
Wells Landers / TVAR Solutions, A Wells Landers Group Company
VMware Communities User Moderator
Blogging at: http://KensVirtualReality.wordpress.com/
mreferre
Champion

He is still probably fu&%$&ng his DL585G1 under the desk ..... 😄

Massimo.

P.S. for those new to the forums...... it was a funny post: http://communities.vmware.com/thread/11034?tstart=0&start=0

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
msemon1
Expert

We use HP BL460 blades and they have worked very well for us. It has been easier for us to scale and to take servers down for maintenance or upgrades than if we had a small number of powerful servers. The newer-generation blades are also better on I/O, and you can add more NICs to them (ours have 6).

We're getting ready to upgrade all of the hosts to 32GB of memory. Make sure to order as much memory as you can afford.

Mike
