VMware Cloud Community
rossb2b
Hot Shot

Blade survey

Are you using HP, Dell, or IBM blades for your ESX environment? What do you like or dislike about them, and if you could do it all over again, would you choose the same solution?

A little over a year ago my Dell shop decided not to go with the Dell blades because we couldn't get enough NICs / Fibre Channel cards in there. The chassis have changed a lot in the last year, so I don't think it will be an issue any longer. So, I would like to hear others' experiences.

Thanks,

-rick

0 Kudos
24 Replies
polysulfide
Expert

I use HP blades at home. The design allows for plenty of NICs and HBAs; they're pretty much just virtual backplane connections to the interconnect modules.

It's tough to get the ROI on a blade infrastructure and on the VM licenses, since blades don't have as much bang per core. So if you have a lot of money and density is the only factor, great; otherwise look at standard servers, in my opinion.

If it was useful, give me credit

http://communities.vmware.com/blogs/polysulfide

VI From Concept to Implementation

0 Kudos
dkfbp
Expert

We have two HA/DRS clusters at our location. One consists of 10 IBM x3850s and the other of 5 HP BL480c blades. I really love the blades due to power consumption, cooling, rackspace savings, and ease of cabling. When we want to expand our blade farm, it is as simple as plugging in the blade and starting to install ESX. On a normal rack server I have to use half a day on labeling cables, finding switch ports, and patching the cables.

I am really fond of blades, and I hope someday we can switch our other cluster to blades.

Best regards Frank Brix Pedersen blog: http://www.vfrank.org
0 Kudos
Hairyman
Enthusiast

We use Dell 1955 blades with a Dell-branded EMC CX3-20f. Yes, the 1955 blades only have 2 NICs, but the only downside I have is that I can't purchase more of them at the moment. Oh, we are an all-Dell shop by the way (shameless plug..........)

They were bought for much the same reasons as the last post: power savings, cooling, server consolidation.

0 Kudos
Rodos
Expert

Our company does a lot of blade installs with VMware, more blades than standalone servers. The Dell blades were of little use; you could not get enough I/O ports on them. Ours have been mostly HP P-Class and then C-Class.

The thing you want to put effort into on your blades is the interconnects; that part needs to be designed and implemented right. Is it best to go with pass-through devices or integrated FC/network switches? How will you do the teaming off the interconnects into the datacenter switches? Do you need to pass the FC through another FC switch which may already be in place, or are you going to connect directly to the SPs? Understand the redundancy of the add-on I/O modules and how they interface to your interconnects, so you really do have the redundancy you thought you had. It's really a case of knowing your hardware.

If you ask me, blades are a great platform to run VMware on.

Consider awarding points if this is of use.

Rodos. Consider the use of the Helpful or Correct buttons to award points. Blog: http://rodos.haywood.org/
0 Kudos
williambishop
Expert

We went the blade route when we discovered it would be far cheaper to stick in a few chassis of blades than to fill half a datacenter with 4- or 8-way boxes, and we still ended up with more power. I don't understand the user who complained about bang per core... These are dual quad-core Xeons with (now) 12 MB cache and 24 GB of memory (we went cheap, as 32 GB was a bit higher). We'll be near 100 blades next month, and I have yet to be less than thrilled. We've had 1 blade go out in the last year; otherwise, rock solid.

Cabling-wise: 4 fiber connections (dual path and dual fabric) and 8 connections for Ethernet, for up to 580 VMs. That would be equivalent to about a rack of "singles", which would have substantially more connections and MIGHT have the same processor and memory power as that single chassis. Talk about your savings in HBAs, connectivity, and ports!
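The consolidation arithmetic above can be sketched quickly. The chassis figures (8 Ethernet plus 4 fiber uplinks) come from the post; the per-host counts for the standalone rack servers, and the `cable_count` helper itself, are illustrative assumptions rather than anything from the thread:

```python
# Rough, illustrative comparison of cable counts: one blade chassis
# vs. a rack of standalone ESX hosts. Per-host figures for the
# standalone servers are assumed for the sake of the arithmetic.

def cable_count(hosts, nics_per_host, hbas_per_host):
    """Total Ethernet + fiber runs leaving a group of hosts."""
    return hosts * (nics_per_host + hbas_per_host)

# One chassis: shared interconnects, so only the uplinks leave the rack
# (8 Ethernet + 4 fiber, per the post).
chassis_uplinks = cable_count(hosts=1, nics_per_host=8, hbas_per_host=4)

# ~14 standalone 2U hosts with 4 NICs and 2 HBAs each (assumed).
rack_uplinks = cable_count(hosts=14, nics_per_host=4, hbas_per_host=2)

print(chassis_uplinks)  # 12 cables for the whole chassis
print(rack_uplinks)     # 84 cables for comparable standalone capacity
```

The exact standalone numbers matter less than the shape of the result: per-host cabling scales linearly, while a chassis amortizes its uplinks across every blade inside it.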

Don't even get me started on power savings! Anyone who doesn't see the merit in blades hasn't done it enough, or properly. Nothing personal, but fiscally, blades are very attractive, and they don't have ANY performance impact that I've ever seen.

--"Non Temetis Messor."
dmorgan
Hot Shot

We also use Dell 1955s. I am not sure about the complaints about the number of ports or bang per buck... Each blade has dual quad-core Xeons, and with the Fibre passthrough we get enough Fibre ports, plus dual NICs for each blade. Since we use Fibre for just about everything and very little iSCSI at all, dual NICs work fine for us. My only complaint, I guess, is that they max out at 32 GB of RAM per blade. RAM is typically the most limiting factor in the number of VMs per server anyway, so it would be nice if these could take more.

If you found this or any other post helpful, please consider the use of the Helpful/Correct buttons to award points
0 Kudos
rossb2b
Hot Shot

Thanks for the info, William. What manufacturer did you go with for your blades?

-rick

0 Kudos
williambishop
Expert

We're IBM, but I like the HP as well. I don't want to dismiss Dell, because it's been a while, but I've never had an abundance of luck with Dell server lines...

--"Non Temetis Messor."
0 Kudos
williambishop
Expert

You can generally get more memory (I know IBM has an add-on that lets you hit 64 GB on one blade, but it increases the blade's profile), but with blades being as cheap as they are, you're better off adding more blades and decreasing your density if you need to give VMs more memory each.
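The more-blades-versus-more-memory trade-off is ultimately RAM budgeting. A minimal sketch, assuming a fixed per-VM allocation and a hypothetical 2 GB set-aside for the service console and virtualization overhead (it deliberately ignores ESX memory overcommitment and page sharing, which would raise both counts):

```python
def vms_per_host(host_ram_gb, reserved_gb, vm_ram_gb):
    """How many fixed-size VMs fit in a host's remaining RAM."""
    return (host_ram_gb - reserved_gb) // vm_ram_gb

# A 24 GB blade (as in the earlier post) vs. a 64 GB expanded blade,
# with 2 GB VMs and an assumed 2 GB reserved for ESX itself.
print(vms_per_host(24, 2, 2))   # 11 VMs per blade
print(vms_per_host(64, 2, 2))   # 31 VMs per blade
```

Whether the memory add-on beats simply buying more 24 GB blades then comes down to the price of the add-on versus the price of an extra blade and chassis slot.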

--"Non Temetis Messor."
0 Kudos
mikepodoherty
Expert

IBM blades - LS20s and HS20s

Almost 300 blades - a few physical machines (MS SQL, Exchange), otherwise VMware

Reliability issues - most caused, I believe, by IBM not being able to keep up with demand in 2006

Pros - IBM Management Module, density

Cons - reliability

0 Kudos
williambishop
Expert

What kind of reliability issues are you seeing? Are you updated on firmware, etc.?

Are you seeing the issues on the LS (AMD) or the HS (Intel) side of the house? Every experience differs, but we've had such good luck with our blades....

--"Non Temetis Messor."
0 Kudos
murph182
Contributor

What version of ESX are you running on your LS20s? We have some LS20 hosts and can't upgrade to 3.5. I don't know what the exact issue is, but the LS20s are officially NOT compatible with 3.5, though they are with 3.0.2. Does anyone know why?

Our new blades are IBM HS21's, on which 3.5 runs just fine.

I had a chance to play with some Egenera stuff the other day. Wow.

0 Kudos
mreferre
Champion

I don't know of any "compatibility issue" by design. In this respect, 3.5 is just a big functional patch to 3.0.x (to include additional tools / utilities, etc.). Sounds like a testing statement to me (but I'll try to find out more).

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
0 Kudos
mikepodoherty
Expert

ESX 2.5.x and 3.0.1

Planning to upgrade to 3.0.2

Can't upgrade to 3.5 unless/until officially supported

Firmware updated as needed - most of the issues haven't been directly related to firmware. Example: DIMM problems - only 1 LS20 DIMM problem that my team handled was resolved by a firmware upgrade; the rest required replacing DIMMs. Not sure about DIMM problems for blades handled by other teams, but the sense I get is that a lot of DIMMs have been replaced.

Most problematic are part of a batch from October 2006 (JS21s in the same delivery have been flawless) - LS20s from the June 2006 order have had far fewer problems, although more than we would like

Since the most problematic were manufactured within a couple of days of each other, probably a bad batch of components, but ...

IBM support has been excellent, but we are now on a first-name basis with the tech

0 Kudos
mreferre
Champion

I have got confirmation from engineering that it is a testing statement. They don't foresee any problem with running 3.5 on the LS20's.

My suggestion is that you touch base with your local IBM rep and ask him/her to have the lab run a "SPORE" (he/she knows) on that config.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
0 Kudos
Dave_Mishchenko
Immortal

Do you know if there are any plans to support 3.5 or 3i on the HS20s?

0 Kudos
mreferre
Champion

Dave,

3i embedded no way.

The discussion I had regarding 3.5 / 3i installable would lead me to think there will not be further tests on xSx0 models of blades. BUT THIS IS NOT AN OFFICIAL ANSWER. I suggest that those that have these blades and are interested in upgrading them get in touch with their local IBM rep and talk about it.

Please everybody ... don't quote me on "IBM said they will not support older blades with newer ESX versions" .....

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
0 Kudos
langonej
Enthusiast

I've used HP P-Class, HP C-Class, IBM H and Fujitsu blades.

My preference: HP C-Class blades. When people I talk to have a bad taste in their mouth about blades, I always ask them if they've used the HP C-Class and they always answer, "no." I believe they may be more expensive than some of the others, but they are solid.

P-Class = Not sure they even sell these anymore. You can get a lot more density out of the C-Class.

C-Class = The Virtual Connect modules for Ethernet and FC are very nice. The active cooling modules use fewer watts than competitors' chassis.

IBM = Very poor remote management. Single concurrent session (at least in the datacenter I was in), which resulted in "console jacking." Can only have two Ethernet switches per chassis, as opposed to the four switches per chassis in the C-Class. (I like having 4 NICs in my blade, but I want those 4 NICs to connect to four separate switches. When they connect to only two switches, I'm essentially running with two NICs.)
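The four-switches-versus-two point is easy to see with a little failure-domain counting. This sketch simply assumes NICs are spread evenly across the chassis switches and that a switch failure takes down every NIC attached to it:

```python
def surviving_nics(nics, switches, failed_switches=1):
    """NICs still up after switch failures, with NICs spread evenly
    across the chassis switches."""
    per_switch = nics // switches
    return nics - failed_switches * per_switch

# 4 NICs across 4 switches: one switch failure costs a single NIC.
print(surviving_nics(4, 4))  # 3

# 4 NICs across 2 switches: one switch failure takes out half of them.
print(surviving_nics(4, 2))  # 2
```

So with only two switches per chassis, a four-NIC blade degrades to two NICs on a single switch failure, which is the poster's point: the extra NICs add bandwidth but not an extra failure domain.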

Fujitsu = Limited use thus far. Performance and uptime have been poor on first impression. The remote console leaves much to be desired as well.

0 Kudos
williambishop
Expert

So, you're basically seeing bad memory parts when they ship in, or did back in 2006? Were they 2 GB parts? I hate to bother you, just trying to get a feel for the problem... Are they reliable after you get the replacement memory?

We're around 96 blades at the end of this month, and we've been lucky thus far. We did have to send two sticks of memory back out of one batch, but I didn't really count that as an IBM blade problem, since the occasional bad memory stick is pretty much universal. I had the same experience with HP once, but I didn't hold it against them. It wasn't an HP issue, it was a memory problem. I don't think any of them make their own memory.

I do understand what the previous poster says about IBM's remote management; only 1 user being able to access it is frustrating at times. But since we're running primarily ESX on ours, we don't often need to get into a blade physically. It is really convenient, though, when you do need to get into the blade; I've not seen a better mechanism yet (though I do like HP's nearly as much, for different reasons).

--"Non Temetis Messor."
0 Kudos