VMware Cloud Community
DSeaman
Enthusiast

Best blade servers for ESX? IBM or HP?

We are in the planning stages of an ESX consolidation project for several dozen VMs, possibly with VDI down the road. We are currently an HP shop and like the ProLiant servers. Management is having us look at IBM BladeCenter as part of the design process, which is fine; it's good to review the market when doing a major deployment to make sure you have the best technology. After extensive research on product specs, the clear leader seems to be HP. Between Virtual Connect Flex-10, the BL495c's 16 DIMM slots, Virtual Connect Fibre Channel, and more flexible I/O card options, I didn't see anything on the IBM side that could even compare. From what I've read, AMD also seems to be the best CPU for ESX.

Is there anything compelling on the IBM side that I'm missing? Given their market share decline over the years, their somewhat limited product line, and IDC rating HP the #1 vendor for virtualization servers, it seems like a no-brainer to use HP c-Class blades. IBM just doesn't seem to have the flexibility, not to mention the large cost difference when populating servers with 64GB of memory.

On the NIC side, using the Flex-10 NICs, the 495c's two 10Gb NICs can be carved into eight, which would allow for robust console, VMotion, and production networks without any additional I/O cards needed.
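As a rough sketch (not a tested HP reference configuration; the vmnic numbering and vSwitch layout are my assumptions), presenting four of those eight FlexNICs to classic ESX as separate networks might look like:

```shell
# Hypothetical layout for four of the eight FlexNICs seen by ESX.
# Adjust vmnic numbering to match how Virtual Connect enumerates them.
esxcfg-vswitch -a vSwitch0              # service console + VMotion
esxcfg-vswitch -L vmnic0 vSwitch0       # FlexNIC carved for console
esxcfg-vswitch -L vmnic1 vSwitch0       # FlexNIC carved for VMotion
esxcfg-vswitch -a vSwitch1              # production VM traffic
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
```

The remaining four FlexNICs would be free for iSCSI, FT, or DMZ traffic as needed.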

Thoughts?

Derek Seaman
0 Kudos
24 Replies
java_cat33
Virtuoso

Hi - I can't comment on the IBM blades as I'm not familiar with them, but I've had significant experience with the HP c-Class blades. I haven't had any problems with them; as you mentioned, they're very scalable. I've also found them very reliable and easy to manage.

0 Kudos
benma
Hot Shot

We are using the c7000 enclosures for virtualization.

BL495c is THE Blade. I can only recommend it.

We don't use Virtual Connect because our storage is iSCSI: 2-port HW iSCSI + a quad-port NIC.

Flex-10 NICs are a good investment for the future.

0 Kudos
renegadeZA
Enthusiast

Both blade environments have benefits, but the most cost-effective of all the blade environments is Dell: cheaper, fast, and reliable.

HP and IBM price their equipment far beyond most small companies' budgets, but IBM, HP, and Dell are the market leaders in server technology.

Look, if you have the money and you are happy with IBM/HP, then cool; but if you have doubts, contact them, let them propose a consolidation of your environment, and have a look at what they come up with.

HP and IBM make amazing servers, but if I had to choose, even with no cost constraint, I would opt for Dell.

Just some Info.

Have fun guys - May the VMware be with you. :smileycool:

Comptia A+ | Comptia N+ | CCNA |NES |HP ACT | EE N3 | MCP | MCTSx1 | MCITPx1 | SCS | VCP

Kind Regards, Gareth Ray Smith
0 Kudos
Ken_Cline
Champion

We are currently a HP shop, and like the Proliant servers.

From my perspective, that's about 75-80% of the decision-making process. The technological differences between the platforms are "relatively" minor (that's likely to attract a flame or two!) in the broad view of things. What's more important (to me) is the skillset that is already present in your shop and the relationship you already have with your vendor/VAR.

If you look strictly at technology, you'll be changing vendors every six to 12 months, because they constantly leapfrog each other. While HP has the "best" technology today (hypothetically), tomorrow IBM will come out with something "better". Wait another six to 12 months and HP will be back on top. That's the way this industry is. I can tell you that BOTH companies make GREAT products - I know from personal experience as well as from looking at market share data. You don't wind up with 50% (HP), 25% (IBM), or even 10+% (Dell) market share by volume by selling junk Smiley Happy

Ken Cline

Technical Director, Virtualization

Wells Landers

TVAR Solutions, A Wells Landers Group Company

VMware Communities User Moderator

0 Kudos
khughes
Virtuoso

I agree with Ken; it's like asking what's better for you, Coke or Pepsi. The most important factor, I believe, is to go with something YOU are comfortable with. God only knows how things will turn out if you go with one of our recommendations and you aren't 100% behind that choice, sitting back with doubts. Another spin into the mix: how long are you looking to keep these servers? Granted, we don't use blades, but we used to be a 100% HP/Compaq shop until we realized we couldn't get a warranty past 5 years with them. We then turned to IBM, who will support things 10 years old if they can, and that was a big factor for us: not HAVING to replace hardware just because it was out of manufacturer warranty. As far as I know (HP could have changed their policy), HP and Dell won't renew warranties past 5 years of life without jumping through more hoops than a circus.

Just another angle to consider. In the end just make sure you stay consistent, with relatively same hardware and same processors and it should work out fine.

  • Kyle

0 Kudos
azn2kew
Champion

I've dealt with both IBM and HP blades for multiple clients, and they all seem to be pretty reliable and very well architected. I enjoyed working with the IBM HS21 on my previous project and haven't experienced any issues lately; just make sure you have the latest drivers/firmware updated. I did have some minor issues with the IBM x3650 servers themselves and had to replace motherboards on 2 ESX hosts Smiley Sad . When you work with the HP DL585 series, you'll probably experience the blue screen of death as well; it all depends, "god knows", nothing is perfect. I feel these two vendors can be trusted on their blade series, and here's a quick comparison of both vendors' blade series; it depends what you decide, but both are doing just great.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

0 Kudos
kbassham
Contributor

SUN X86!

0 Kudos
AntonVZhbankov
Immortal

I use HP BL460c and can't complain.

All major vendors have good blades, so the best blade for ESXi is the blade you already use for something else. The fewer vendors and server models in your datacenter, the fewer support problems you have.

I believe HP allows more blades in a chassis of the same height.
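For a rough sense of what that density claim means, here's a back-of-the-envelope sketch; the chassis heights and bay counts below are from memory of the era's spec sheets, so treat them as assumptions to verify:

```python
# Assumed figures (verify against current spec sheets):
#   HP c7000:          10U chassis, 16 half-height blade bays
#   IBM BladeCenter H:  9U chassis, 14 blade bays
def blades_per_rack(rack_u, chassis_u, bays):
    """How many blades fit in one rack, counting whole chassis only."""
    return (rack_u // chassis_u) * bays

hp  = blades_per_rack(42, 10, 16)   # 4 chassis x 16 bays = 64 blades
ibm = blades_per_rack(42,  9, 14)   # 4 chassis x 14 bays = 56 blades
print(hp, ibm)
```

Interestingly, with a standard 42U rack both vendors fit four chassis, so the per-chassis bay count is what decides the density race.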


---

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
0 Kudos
SunnyC
Contributor

Hi DSeaman,

I work for IBM and was wondering if you came across the LS42 blades.

It's 4 socket, 16 DIMM slots (goes up to 128GB of RAM).

I was wondering if you would be interested in having a conversation over email; my address is sunnyc@ca.ibm.com.

0 Kudos
DSeaman
Enthusiast

After doing a thorough comparison, for us the clear winner was HP. On virtually every aspect important to our project, the BL495c and the c-Class chassis were far better choices. Management agrees, so I think that hurdle has been cleared.

Derek Seaman
0 Kudos
meistermn
Expert

The backplane of the chassis is key. HP says a chassis cannot fail. The reality is, it can. I talked with a guy from a big insurance company, and he told me that the backplane of their HP chassis had gone, and so more than 400 server VMs were down. What do you think the CIO said?

I stay with 4-socket and 8-socket rack servers: many PCI slots, many DIMM slots, and an even better fit for SMP fault tolerance in VMware ESX in 2010.

I prefer IBM and Sun rack servers now.

If blades, then IBM only: they have two backplanes, so the backplane is redundant.

Look at the IBM blade vs. HP blade video:

0 Kudos
Peter_Grant
Enthusiast

Coke or Pepsi (Coke)

HP or IBM (HP)

Let's face it, HP has the market share, and for good reason. I'm sure IBM's work fine, but my general experience with IBM has not been that good in many respects. Also, with HP buying up companies like LeftHand Networks, at least you know you won't be caught out in the cold later.

---

Remember, keep low, move fast, trust no one!

(If you found this helpful then please award points Smiley Happy )

------------------------------------------------------------------------------------------------------------------- Peter Grant CTO Xtravirt.com
0 Kudos
DSeaman
Enthusiast

Well, to be honest, any company that has 400 VMs in a single rack with no redundancy doesn't know how to plan properly. If you really have high-availability requirements, you should split your VMs between physical racks. If one rack goes up in flames, you are still protected. Putting all of your eggs in one physical basket, no matter who makes it, is silly if the potential outage will cost major $$.

High availability requires an end-to-end datacenter design covering power, hardware placement in racks, HVAC, networking gear, storage, etc.
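The "split between physical racks" point can be sketched as simple round-robin placement across failure domains; the helper and names below are illustrative, not any VMware API:

```python
# Toy sketch: distribute VMs round-robin across failure domains (racks),
# so losing one rack takes down only its share of the VMs, never all of them.
from collections import defaultdict

def spread(vms, racks):
    """Assign each VM to a rack in round-robin order."""
    placement = defaultdict(list)
    for i, vm in enumerate(vms):
        placement[racks[i % len(racks)]].append(vm)
    return dict(placement)

vms = [f"vm{n:03d}" for n in range(400)]
layout = spread(vms, ["rack-A", "rack-B", "rack-C", "rack-D"])
# With four racks, a single rack failure affects 100 VMs, not all 400.
print({rack: len(hosted) for rack, hosted in layout.items()})
```

In practice the same idea applies at every layer: chassis, power feeds, and storage fabrics, not just racks.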

Derek Seaman
0 Kudos
guam58
Contributor

IBM's touting of a redundant midplane is not that big of a deal. In fact, the way IBM does their "redundant midplane" is actually worse (less reliable) than HP's midplane!

HP's midplane is really just a bunch of cables in the form of a PCB with connectors on it. How often do high-quality cables go bad? Think about it.

IBM's midplane has tons of active components (things like processors, capacitors, etc.) that heat up, cool down, and might break, potentially taking part of the enclosure down with them. What IBM touts as "redundant" about their midplane is that each blade has 2 connectors to the midplane, which they claim makes it redundant.

Sadly, this is not the case: if you lose one connector to the midplane, you could lose half of your connections to your I/O equipment, or lose enough power to shut your server down!

In this particular instance (a midplane), more parts is not necessarily better!

I have to agree: why would you put 400 VMs on one backplane? The biggest fault is that HP should never have told the customer that the backplane will never fail; there is no such thing as "never fails". I have found about a 2% failure rate on the HP backplane.

IBM is a strong company; the biggest problem I have had with them is that they are too FUD-happy. And, going back to the gentleman who wants a 10-year service contract: who keeps their equipment for 10 years nowadays, when semiconductor companies come out with new processors every 6-9 months? Even with the best engineering, there is no way to drop in a processor that increases in performance every 6-9 months and have all the other parts keep pace; the I/O alone would bog down the system due to the performance jumps. IBM does the 10-year service because it is a way to keep customers. It is just smart business: it costs more to gain new customers than to keep them.

As for the HP blade, it is a great choice because the management software is by far superior. Virtual Connect makes Cisco nervous because it cuts your network hardware cost by about 50% on average: you don't have to purchase more switches.

0 Kudos
mreferre
Champion

I don't usually jump into threads like this (which usually have fair comments) as I don't want to act as the "IBM salesman" (which I am not, by the way), but when there are HP sales pitches I feel I need to set the record straight.

Your analysis of redundancy is pretty weak.

>Sadly this is not the case as if you lose one connector to the midplane you could lose half of your connections to your I/O equipment or lose enough power to shut your server down!

I am not sure what you are referring to. The closest thing I can think of was a situation on the BladeCenter E where if you were using specific off-limits configurations there were some end-user configurable parameters that had an option of shutting down one or more blades in particular conditions. Notice this is like allowing a user to configure an array in RAID 0. It's an option and you know what you get with that option. Having this said this was the case on the BC-E. This is NOT the case on the latest BC-H. Now, if you talk C-Class I'd talk BC-H. If you talk BC-E I'd tend to talk P-Class. Glad to do that if you want...

The fact is that the BC-H has a FULLY redundant backplane vs. HP's single backplane. Now, I think/hope you meant a 0.00002% backplane failure rate, because if it really were 2% as you said... I can't believe so many customers are buying such a crappy piece of hardware: a SPOF that fails 2 times out of 100. You must have been too pessimistic.

BTW, it's not so much the backplane that could fail but what's running on top of it. If you put a number of redundant parts into the same pipe and one of the parts misbehaves, there is a chance it will bring your box down. I am not saying this; these are HP's words:

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01519680&dimid=1012424238&di...

>HP has identified a potential, yet extremely rare issue with HP BladeSystem c7000 Enclosure 2250W Hot-Plug Power Supplies manufactured prior to

>March 20, 2008. This issue is extremely rare; however, if it does occur, the power supply may fail and this may result in the unplanned shutdown of the

>enclosure, despite redundancy, and the enclosure may become inoperable.

>Let's face it HP have the market share and for good reason.

Peter, I don't agree. The reason for which they have the lead is because 1) they are very much focused on the platform and 2) they have a very strong channel.

This has little to do with the technology. Proof of this is that 3 or 4 years ago we had a more or less 50+% market share whereas HP had something like 30-40%. That was the time of the BC-E and the P-Class. Have you ever seen a P-Class? Jeez... I can tell you that if we had an offering like the P-Class and HP had an offering like the BC-E, we would have had 0% market share and HP would have had close to 100%.

The fact is that both are very good products and each has their own plus and minus. I just feel the need to set the record straight when I see (too much) FUD.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
0 Kudos
guam58
Contributor

Well, to you it is weak, but it happened during the proof of concept, where the entire BladeCenter came down twice. We were a mix of Sun and IBM, but your support costs, which we were told from the forefront would not increase, rose by 4x. Then the Sun roadmaps and the SunSpot demonstration really made us rethink our overall strategy. The biggest SNAFU your sales team came up with was z/Linux on the mainframe. We went off the mainframe because your team stated, "oh yeah, we can run your entire app stack on z/Linux virtualized." After we saw the overall cost, we asked how that was helping us save money.

It actually came down to Egenera and HP, but the overall architecture balance was just too great with HP. Egenera's actual management is far superior, but the HP blades, from performance and management to power/cooling savings (the biggest selling point), were by far the best.

Yes, you are probably an IBM sales rep, and really the backplane piece is minimal compared to the overall go-forward product strategy.

0 Kudos
enDemand
Enthusiast

Reading all the responses, I agree that you have to factor in all considerations, not just the servers themselves. In many cases, the blade chassis and its environment are an extension of the core data center, and flexibility and compliance with data center standards are a must. Although HP does have the larger market share (with IBM coming in 2nd), we tend to see HP blade servers in SMB implementations and IBM blade servers in large enterprises, even in shops that had previously used HP ProLiant rackmounts. There are several reasons for the latter, but the one I've seen come up the most is that IBM has the widest selection and variety of chassis I/O options, including the newly released Cisco Nexus 4001I for shops that want end-to-end Nexus technology. This is huge if you're planning on deploying the Cisco Nexus 7000 at the core and running the Cisco Nexus 1000V. Though not required, it's nice to define port profiles at the core for both physical and virtual and have them propagate all the way down to the 1000V (but only if you have Nexus in between). IBM does facilitate MAC and WWN assignments with Open Fabric Manager (OFM), which is similar to HP's Virtual Connect Enterprise Manager (VCEM); both give up to 8 interfaces for connections to 8 different fabrics. Our VMware vSphere reference architecture based on IBM BladeCenter comprises the BladeCenter H and the IBM HS22 (with the Xeon 5500s) configured with 2 x 1GbE, 4 x 10GbE (for data and NFS datastore connectivity), and 2 x 8Gb FC connections: more than flexible enough for a robust vSphere deployment.

On another note, I'm not sure if I agree with AMD being the best selection. Intel's Nehalem microarchitecture is far more advanced for virtualization. Although limited to 2-socket right now, the 4+ socket Nehalem-EX will be out early 2010 and will further spread the divide between Intel and AMD.

If you find this or any other answer useful, please consider awarding points by marking the answer "correct" or "helpful".

0 Kudos
williambishop
Expert

Going to throw some info into the mix here. We're an IBM shop, so we did the HS chassis, and some things I've discovered are: 1) it's super reliable; I haven't heard of ANY backplane or chassis failures, let alone 2%. That doesn't mean it can't happen, it very well could; luckily, I build redundancy in. 2) The chassis change rate is very long, one of the reasons we chose it (HP chassis changes are fairly frequent). Personally, I'd put the HP and IBM BladeCenters as equals; either is a good option, but the best option of all is to plan your environment carefully with the assumption that at some point it WILL fail. Knocking on wood that mine stays up forever.

Ita feri ut se mori sentiat

--"Non Temetis Messor."
0 Kudos
enDemand
Enthusiast

You may find this of interest...[IDC x86 and Blade Server market analysis|http://www.idc.com/getdoc.jsp?pid=23571113&containerId=prUS22100809], published on 12/2/2009.

If you find this or any other answer useful, please consider awarding points by marking the answer "correct" or "helpful".

0 Kudos