VMware Cloud Community
ufo8mydog
Enthusiast

HP vs IBM blades for ESX/vSphere

Hi all,

We currently run ESX on Dell rackmount servers, but VM growth is out of control! Therefore, we are looking at a new platform to handle it. Now that Intel Nehalem is out, we are looking at the blade scenario. My questions:

1) Blades or Single servers? We have plenty of room and power, so I am looking at things like ease of deployment, cabling, management, performance, price and redundancy.

The one thing I have concerns about is that by going with a blade chassis you are consolidating risk - is it possible for a current-generation blade chassis to "fail", as it were, taking out everything? Or is there enough redundancy in every component to ensure this does not happen? Which option is more cost effective? Budget and value are big considerations.

2) IBM or HP? Does anyone have experience using both chassis in an enterprise environment, and which would be more suited for a vSphere deployment?

3) Should I consider something slightly cheaper, e.g. Dell?

Thanks very much for your help

26 Replies
runclear
Expert

Okay, so this is where I've spent some time lately -

Before I dive in too much here: we currently have three generations of the Dell blade chassis where I'm currently employed (1855/1955 and M600s). It's hard to argue with the value/performance that Dell delivers... the new M1000e chassis that Dell has is pretty good... not the best, but good. We use the integrated Cisco switches plus pass-through Ethernet and pass-through fibre in the M1000e's.

We are averaging 25-30 VMs per dual quad-core blade with around 32 GB per host, on the M600 blades.
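For what it's worth, the back-of-envelope memory math behind that kind of density looks roughly like the sketch below (the overhead, per-VM allocation, and overcommit figures are assumptions for illustration, not measurements from our hosts):

```python
# Rough consolidation-ratio sketch based on host RAM only (illustrative numbers,
# not a sizing tool): how many "average" VMs fit on a 32 GB dual quad-core blade?
host_ram_gb = 32
hypervisor_overhead_gb = 2      # assumed reserve for ESX / service console
avg_vm_ram_gb = 1.0             # assumed average allocation per VM
overcommit_ratio = 1.2          # assumed modest gain from page sharing / ballooning

usable_gb = (host_ram_gb - hypervisor_overhead_gb) * overcommit_ratio
vms_per_host = int(usable_gb // avg_vm_ram_gb)
print(f"~{vms_per_host} VMs per host with these assumptions")
# ~36 here; running 25-30 in practice leaves headroom for HA failover
```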

We spent some time looking at HP/Dell/IBM blades and the big iron (the IBM x3950 M2s) in particular... Since we are in Houston, we were lucky enough to be able to visit HP's corporate office, scope out their "blade war room" and compare all the vendors side by side... Minus the vendor hype, it's hard to deny the superior design that HP has over Dell/IBM... I personally love IBM hardware, but really think that IBM lacks features and capacity that HP has (on the blades). Unless Dell or IBM has changed something very recently, HP is the only vendor that keeps POWER and SIGNAL (data) isolated on the blade enclosure backplane... Is this a huge deal? To most, probably not... it just shows the superior design HP has...

I REALLY like the HP Virtual Connect stuff that they have with their blade enclosures... pretty pimp if you ask me...

We have around 25 nodes across our blade enclosures; I always spread the nodes across different chassis, i.e.:

Cluster(s) -> ESXClusterA | ESXClusterB

BladeCenter Chassis 1: Node 1a / Node 1b

BladeCenter Chassis 2: Node 2a / Node 2b

BladeCenter Chassis 3: Node 3a / Node 3b

In this case, you could lose an entire chassis and still be okay.....
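If it helps, here's a rough sanity check of that layout (host/chassis names mirror the example above; the minimum-hosts-per-cluster figure is an assumption you'd set for your own workload):

```python
# Minimal sketch: verify that losing any one chassis still leaves each cluster
# with enough hosts to carry the load (simple N+1 style check).
chassis = {
    "chassis1": ["node1a", "node1b"],
    "chassis2": ["node2a", "node2b"],
    "chassis3": ["node3a", "node3b"],
}
clusters = {
    "ESXClusterA": ["node1a", "node2a", "node3a"],
    "ESXClusterB": ["node1b", "node2b", "node3b"],
}
min_hosts_needed = 2  # assumed minimum hosts per cluster to run the workload

for lost, lost_hosts in chassis.items():
    for cluster, hosts in clusters.items():
        survivors = [h for h in hosts if h not in lost_hosts]
        status = "OK" if len(survivors) >= min_hosts_needed else "AT RISK"
        print(f"lose {lost}: {cluster} keeps {len(survivors)} hosts -> {status}")
```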

Obviously Dell will typically win the "hard upfront capital cost" argument with the bean counters in your office... your mileage may vary :) This also comes down to the expertise of the team that will be supporting the hardware... If you have a strong team (knowledgeable with hardware), why spend more for the HP? Obviously this is an oversimplification, but something to consider...

I don't know about you guys/gals, but I HATE dealing with IBM/HP support... Dell has its shortcomings... but when it comes down to getting parts replaced... my experience by far has been the best with Dell...

-


-------------------- What the f* is the cloud?!
AntonVZhbankov
Immortal

If you're going to use HA clusters with blades from different enclosures, do not include more than 4 blades from any one enclosure in a cluster, since HA elects only 5 primary agents - that way at least one primary always sits outside the enclosure that fails.
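A quick sanity-check sketch for that rule of thumb (host and enclosure names are made up):

```python
# Sketch of the "no more than 4 hosts per enclosure" rule of thumb for classic HA:
# with only 5 primary agents, 5+ cluster hosts in one enclosure means all primaries
# could end up in the same failure domain. Names and counts below are hypothetical.
from collections import Counter

MAX_PER_ENCLOSURE = 4  # keeps at least one HA primary outside any single enclosure

cluster_hosts = {          # host -> enclosure
    "esx01": "enc1", "esx02": "enc1", "esx03": "enc1", "esx04": "enc1",
    "esx05": "enc2", "esx06": "enc2",
}

per_enclosure = Counter(cluster_hosts.values())
for enclosure, count in per_enclosure.items():
    if count > MAX_PER_ENCLOSURE:
        print(f"WARNING: {enclosure} holds {count} cluster hosts (> {MAX_PER_ENCLOSURE})")
    else:
        print(f"{enclosure}: {count} hosts - fine")
```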


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
ufo8mydog
Enthusiast

Is it possible for an entire blade chassis to fall over/fail? Or are the current models pretty good at having engineered out the single points of failure?

As the budget only has room for one chassis, we would need to be totally confident that the chassis at least had 100% availability, and let HA/FT deal with blade failures.

runclear
Expert

Eh, in that case... if I were you, I'd go with physical servers... If the backplane fails in the chassis, you're going down... and all hardware will need maintenance cycles and updates... so a single blade chassis is probably not for you.

-


-------------------- What the f* is the cloud?!
alex555550
Enthusiast

Hi,

The right thing, in my opinion: :)

An x3650 M2 with 2x Nehalem and two Intel SSDs in RAID 1 for the datacenter OS.

AntonVZhbankov
Immortal

A whole enclosure can fail, or somebody can make a mistake and power off all the servers.

The situation where a whole enclosure goes down is very rare, but it can happen.


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
AntonVZhbankov
Immortal

I'd prefer 4 mid-size servers instead of 2 monsters. One server goes down = 25% of resources gone instead of 50%.
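Rough numbers only:

```python
# Back-of-envelope: the share of cluster capacity lost when one host fails
# (which is also the spare capacity you must keep free for N+1).
for hosts in (2, 4, 8):
    print(f"{hosts} hosts: one failure removes {1 / hosts:.1%} of resources")
# 2 hosts -> 50.0%, 4 hosts -> 25.0%, 8 hosts -> 12.5%
```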


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
runclear
Expert

Ahhh... thus starts the "scale out vs. scale up" argument... :) - we are constantly having that "discussion" here in our office.

-------------------- What the f* is the cloud?!
gboskin
Enthusiast

We use BL460c blades in a c7000 enclosure. We have 2 enclosures for redundancy, with the ESX hosts spread across the two so that we don't have all the HA primary nodes on one enclosure. With the introduction of the Intel Nehalem 5500 series we are looking at getting the BL460c G6, because according to VMware, with vSphere and the Intel 5500 series any system can be virtualised.

AsherN
Enthusiast

It's always an interesting discussion. Ease of support is the main argument.

Just for giggles, I was looking at the Dell offering. The enclosure is 10U and can hold 16 blades. Assuming that servers and switches would all be 1U, you gain at least 6U per enclosure. Blades are marginally cheaper than physical servers, but you also have to factor in the cost of the enclosure.
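Quick check of that math (1U servers assumed; the external switch count is just a guess):

```python
# Rack-space comparison: 16 blades in a 10U enclosure vs 16 x 1U rack servers.
blades_per_enclosure = 16
enclosure_height_u = 10
rack_server_height_u = 1

servers_only_u = blades_per_enclosure * rack_server_height_u   # 16U
saved_u = servers_only_u - enclosure_height_u                  # 6U, as quoted above
print(f"{servers_only_u}U of 1U servers vs {enclosure_height_u}U enclosure -> save {saved_u}U")

# Any external switches the enclosure's integrated interconnects replace
# (assume a couple of 1U units) only widen the gap.
external_switch_u = 2
print(f"Counting {external_switch_u}U of switches: save {saved_u + external_switch_u}U")
```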

As others have said, while unlikely, it is possible to kill an entire enclosure.

My biggest gripe with blades is that I am at the mercy of a single vendor for that enclosure. If another vendor comes up with a new whizbang technology, it won't fit in my blades.

OTOH, a rack full of blades has a fairly high 'cool factor' :smileycool:

runclear
Expert

So the whole "Scale out/Scale Up"... this has also been a big "point" on those talks .....

"So if we are paying for HA/DRS, Clustering etc,.... why are we spending more money on "more hosts and thus more licensing cost".. when we could be taking advantage of the Redundancy features....... and using less power, less space, less $$$$ etc....

I'd like to hear some responses to that question...
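To put rough numbers on it, this is the kind of back-of-envelope comparison I mean (all prices completely made up, per-socket vSphere licensing and an N+1 HA reserve assumed):

```python
# Hypothetical framing of "fewer big hosts vs more small hosts".
def cluster_cost(hosts, sockets_per_host, host_price, license_per_socket):
    return hosts * (host_price + sockets_per_host * license_per_socket)

license_per_socket = 3000          # assumed figure, illustration only
scale_up = cluster_cost(hosts=3, sockets_per_host=4, host_price=25000,
                        license_per_socket=license_per_socket)
scale_out = cluster_cost(hosts=6, sockets_per_host=2, host_price=10000,
                         license_per_socket=license_per_socket)

# Usable capacity after reserving one host for HA: (hosts - 1) / hosts
print(f"scale-up : ${scale_up:,}  usable after N+1: {2/3:.0%}")
print(f"scale-out: ${scale_out:,}  usable after N+1: {5/6:.0%}")
```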

-


-------------------- What the f* is the cloud?!
kjb007
Immortal

Personally, I would use standalone rackmount servers, for ease of management. All of the blades come with their idiosyncrasies regarding configuration and setup. While they are nice and neat, they can be a pain to set up. Granted, once they're set up, you usually don't have those issues, but initial setup and future modifications can be a pain. Our rackmount servers took a couple of hours to get up and running. The blades - and I've used several different vendors, including HP, IBM, and others - took a bit longer.

-KjB

VMware vExpert

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
ufo8mydog
Enthusiast

I'm sensing the consensus here is that blades are not a good idea unless you have two chassis for redundancy. Although the risk of an enclosure failing is small, the risk is still there.

runclear
Expert

Yes, Grasshopper..... you are on the path to enlightenment :)

For a smaller budget I'd stay away from blades and stick with a few standalone hosts...

-


-------------------- What the f* is the cloud?!
ufo8mydog
Enthusiast

Fair enough runclear!

If I can take the discussion on a slight tangent - what would the recommendation be in terms of server vendor? - I'm sure a fair few people might be wrestling with these decisions as we speak.

I guess that everyone here has a favourite, but is there any clear front-runner among the current 5500-series servers? Does anyone have any (unbiased or otherwise) views on the capabilities of each of the three majors with respect to reliability, VM density, performance, and value? I guess we all know that Dell probably has the best bang for buck, but cheap does not always equal good, especially if dozens of VMs depend on that server - even with HA, a server-down event is a pain. I guess it would be too early for any concrete comparison reviews to have surfaced?

While I'm at it, what do people think about the state of SAS vs SSD drives in the datacenter now? Does the slight improvement in reliability of the storage component justify the price premium of SSD (assuming, of course, that the VMs are all safely tucked away on shared storage and the local drives are just for vSphere/ESX)?

And finally, Intel have released the 5500s at various speeds. It looks like the 550x processors are duds (no HT or turbo boost). Would I be right in assuming that the sweet spot lies in the 5520 or 5530? Given the dramatic improvements for virtualisation workloads in Nehalem, IMO opting for the top of the range wouldn't make a lick of difference for anything but the heaviest users. Similarly, 1066 MHz DDR3 seems to be more than adequate for the task. Heck, even the 800 MHz parts are significantly faster than the previous generation of DDR2.

Cheers guys

kjb007
Immortal

As you pointed out, Dell has the best bang for the buck. They are efficient and work well. HP makes a solid server as well, and so does IBM. I haven't had too many hardware issues with either, so I find them relatively similar in terms of reliability. The IBMs definitely come in at the top tier of price, and if you read around, you'll find plenty of opinions on the performance of each. Check out VROOM!, the VMware performance blog.

I don't think SSD is currently worth it. Most virtualization candidates are not your highest users of I/O, and so the price definitely does not justify the amount of storage and "reliability" that you're getting. How many drives have you had to replace lately? In the past six months, I'm not sure I've replaced more than a drive or two, if that, out of hundreds in the datacenter.

I've rarely seen the processor as a bottleneck. These days, it's not even the speed of memory, but rather the amount of memory that you have available. Again, we're talking about 70-80% of virtualization candidates here, so it's the amount that really counts, whether you oversubscribe or not, as opposed to the speed.
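A rough illustration of why the amount wins (VM count, per-VM allocation and the overcommit figure are made-up assumptions):

```python
# Sketch: hosts needed for a given VM count, by host RAM size.
import math

vm_count = 100
avg_vm_ram_gb = 2
overcommit = 1.25          # assumed gain from transparent page sharing / ballooning

for host_ram_gb in (32, 64, 96):
    vms_per_host = (host_ram_gb * overcommit) // avg_vm_ram_gb
    hosts_needed = math.ceil(vm_count / vms_per_host)
    print(f"{host_ram_gb} GB hosts: ~{int(vms_per_host)} VMs each, {hosts_needed} hosts needed")
```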

-KjB

VMware vExpert

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
TomHowarth
Leadership

From a personal design perspective, the scale-up side of the scale-up vs. scale-out argument only holds water where you are using large SMP VM guests, as you would have more cores available to service multi-processor guests. As Anton says, scale-out builds in more resilience by design.

As a blade sweet spot I like the BL680c/685c, as they are quad-socket, quad-core, 64 GB beasts. The IBM x3850 M2 is a beast of a box too. I have no knowledge of recent Dell servers, but I have heard their build quality and reliability have improved over earlier models.

But if all you can afford in this budget cycle is a single blade chassis, then blades are not a sensible option for your design. If you lose your backplane, you are goosed. If an admin has a dumb moment and powers off the blade chassis rather than a single blade - bang, no systems.

I would look at Rack mounts in your particular case.

If you found this or any other answer useful, please consider using the Helpful or Correct buttons to award points.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
AsherN
Enthusiast

I'm going through that right now. Looking at Dell 2950 and IBM 3550. There is no clear price winner. The downside of IBM is their RAM prices. I'm pricing the 3550 with Kingston RAM and the prices come down to just about equal.

As far as processors are concerned, I looked at the 5500s, but simply could not justify the price. I'm going to go with the 5440 or 5450 - most likely the 5440. Again, price. The price increment between the two is rather high for the 0.17 GHz increase.

I'm also scaling out. Going with 2 single proc servers rather than one dual, for redundancy.

If you're virtualizing mostly MS servers, take a look at the MS capacity planning tool. You may be surprised at how little power you'll need.

alex555550
Enthusiast

Hi,

When you buy an IBM, be aware that you should call a sales rep and ask for a discount. He will give you a contract number which you can quote to your local shop.
