VMware Cloud Community
ufo8mydog
Enthusiast

HP vs IBM blades for ESX/vSphere

Hi all,

We currently run ESX on Dell rackmount servers, but VM growth is out of control! Therefore, we are looking at a new platform to handle it. Now that Intel Nehalem is out, we are looking at the blade option. My questions:

1) Blades or Single servers? We have plenty of room and power, so I am looking at things like ease of deployment, cabling, management, performance, price and redundancy.

The one thing I have concerns about is that by going with a blade chassis you are consolidating risk - is it possible for a current-generation blade chassis to "fail", as it were, taking out everything? Or is there enough redundancy in every component to ensure this does not happen? Which option is more cost-effective - budget and value are big considerations.

2) IBM or HP? Does anyone have experience using both chassis in an enterprise environment, and which would be more suited for a vSphere deployment?

3) Should I consider something slightly cheaper, e.g., Dell?

Thanks very much for your help

26 Replies
AsherN
Enthusiast

I deal with a CDW corporate rep for the IBM stuff, and a Dell corporate rep for Dell.

Tell them you are seriously looking at the other guy. You would be surprised at the deals you can get.

Also, don't assume a relationship between vendors means lower prices. I had Dell and CDW quote me on an AX4-5F, exact same config. As a matter of fact, I sent CDW the Dell quote to make sure everything was the same. CDW came back $5K cheaper.

ufo8mydog
Enthusiast

Great points, everyone -

Asher N said:

"I'm also scaling out. Going with 2 single proc servers rather than one dual, for redundancy."

Do people here generally like to go dual quad-core or single quad-core? E.g.:

Scenario 1:

3 x servers comprising:

Dual Quad

48 or 64 GB per server

Advantages: less cabling, less power consumption, easier to remediate 3 hosts rather than 6, smaller datacenter footprint (ports, power outlets, rack units). ~50% cheaper than Scenario 2?

Disadvantages: more vulnerable to host failure, more "eggs in one basket" so to speak, higher I/O utilization on the network links?

Scenario 2:

6 x servers comprising:

Single Quad

24 or 32 GB per server

Advantages: the risk is spread across more hosts. Can potentially have N+2 instead of N+1.

Disadvantages: more to manage, bigger footprint. The opposite of Scenario 1's advantages really.

(I do apologize if this has been discussed at length elsewhere.)

Edit - I should mention that in both scenarios 99% of the VMs are single-vCPU with up to 2 GB of RAM; due to growth there could be a hundred or more of these in the not-too-distant future.

Edit #2 - While googling the topic I found this presentation which is most relevant to the discussion -
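
Edit #3 - To make the trade-off a bit more concrete for myself, here is a rough, memory-only sizing sketch in Python. The host counts and RAM sizes are the two scenarios above; the 64-VM workload is just a placeholder, and CPU, hypervisor overhead and HA admission control are all ignored, so treat it as back-of-the-envelope only:

# Back-of-the-envelope check: can the surviving hosts still hold all of the
# VM RAM after some number of host failures? Memory only; CPU, ESX overhead
# and HA admission control are deliberately ignored.

def survives_host_failures(n_hosts, gb_per_host, n_vms, gb_per_vm, failures=1):
    """True if the remaining hosts can still fit every VM's RAM after 'failures' hosts die."""
    usable_gb = (n_hosts - failures) * gb_per_host
    return usable_gb >= n_vms * gb_per_vm

# Placeholder workload: 64 single-vCPU VMs at 2 GB each (adjust to taste).
print(survives_host_failures(3, 64, 64, 2))              # Scenario 1, N+1: 2 x 64 GB = 128 GB >= 128 GB -> True
print(survives_host_failures(6, 32, 64, 2, failures=2))  # Scenario 2, N+2: 4 x 32 GB = 128 GB >= 128 GB -> True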

VirtuAdmin
Contributor

Regarding blades: we have both HP and IBM (both small and large chassis). At the moment we are running IBM BladeCenter E, BladeCenter S and BladeCenter H. On the BladeCenter S, iSCSI on separate adapters works nicely. IBM is cheaper to expand than HP, but HP has some additional SNMP software for ESX. On the IBM blades we use a 16 GB SSD as the boot disk. The HS21 XM blades have enough memory capacity (unlike the non-XM HS21 blades), but room for only one internal drive (so no mirroring for the ESX OS; I think this is irrelevant with VMware or with an SSD).

Blades are too expensive to run with only one processor. At the moment the most cost-effective CPUs are the 2.5 GHz quad-cores; the speed gains from higher clock rates are too expensive.

AsherN
Enthusiast

I would go with the smallest number of servers that gives me redundancy. The farm must be sized so that, in the event of the loss of a server, the rest of the farm has enough resources to take over that workload.

My scenario may be atypical. I can run all my servers on a single quad. I may be able to run them all on a single dual :). I just want the cheapest redundancy, with 'scale-up' possibility. I'm buying 2-socket servers because, if I start to require more resources, I'll add CPUs to my hosts.
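
To put a number on that sizing rule (just my own illustration, nothing official): with N hosts and one tolerated failure, each host can only be loaded to about (N-1)/N of its capacity on average, otherwise the survivors can't absorb the failed host's workload. A tiny Python sketch:

# Highest average per-host utilization that still leaves headroom for failed hosts.
def max_safe_utilization(n_hosts, failures=1):
    return (n_hosts - failures) / n_hosts

for n in (2, 3, 6):
    print(f"{n} hosts: keep average utilization under {max_safe_utilization(n):.0%}")
# 2 hosts -> 50%, 3 hosts -> 67%, 6 hosts -> 83%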

SunnyC
Contributor

If budget and value are a concern, and power and floor space are not a factor, I would probably go with the x3650 M2 instead of blades.

If there's anything I can help with, feel free to message me - I work for IBM on the hardware side.

devzero
Expert

>We currently run ESX on Dell rackmount servers but VM growth is out of control! Therefore, we are looking at a new platform to handle it.

If VM growth is out of control, I'd set up a process to handle VM management first instead of throwing hardware at the problem.

cdc1
Expert

Check into boot from SAN. If you have the infrastructure to support it, you can get rid of the local disk, which will lower the price of anything that you purchase.

Also, like others have mentioned, if you contact a sales rep from whichever vendor you choose, you can usually get much better deals than you see on their websites (i.e.: "sticker" price).

I've seen environments that have both HP blades and IBM blades. When those customers were asked which one they prefer, they typically said "IBM", mainly for these reasons:

1. When HP sold them their blades and EVA8000 storage, they were assured by HP that the EVAs could do mirroring for DR purposes. This was not true, as the customer found out a few months after they purchased the hardware. Not sure if this has changed since then ... it was about a year and a half ago.

1a. Their internal HP drives tended to have more failures than the IBM ones (see the last paragraph about internal component layout in a blade).

2. When they purchased their EVA8000's, they also purchased a mixture of SAS and SATA drives to go with them (tiered storage planning). However, they are unable to mix the two together in certain configurations. HP has told them it's a known issue, and one they are working on. Last I heard (about a month ago), HP was still working on a fix.

3. They were given a very attractive price at the start, which HP basically did to get their foot in the door at a new customer site. Some of them that were in this situation are now finding that they can no longer get the deals they were able to get before.

Most of them, however, do like the management interface for the HP blades more than the IBM one. Personal preference? Maybe. I don't have a preference.

So, what I'm saying is, do your homework. Shop around. Post in more forums besides the VMware ones. Read up on the products until you start dreaming about this stuff.

As an aside, I'm not sure about all of the IBM BladeCenter chassis out there, but the BladeCenter H does have redundant backplanes. Everything in them is fully redundant.

One final note: HP blades have the RAM laid out so that it's behind the internal drives/CPUs, if I remember correctly. The IBM HS22s have the RAM laid out down the center: fans in the front force air down the blade from front to back, so the RAM gets as much airflow as the CPUs. Which reminds me, decreased heat from having no internal drives is one of the reasons why I suggest a boot-from-SAN environment (decreased heat = less cooling required = longer life for internal components ... and also = less power consumed per blade). Just something to think about while you're researching and trying to decide what to go with.
