VMware Cloud Community
rossb2b
Hot Shot

Blade survey

Are you using HP, Dell, or IBM blades for your ESX environment? What do you like or dislike about them, and if you could do it all over again, would you choose the same solution?

A little over a year ago, my Dell shop decided not to go with the Dell blades because we couldn't get enough NICs / fiber cards in them. The chassis have changed a lot in the last year, so I don't think that will be an issue any longer. So, I would like to hear about others' experiences.

Thanks,

-rick

24 Replies
mreferre
Champion

I hate to do this, but I want to, at least, set the record straight. I really think the c-class is a great product, but there is a fair amount of misinformation in this post that I feel needs to be clarified.

Virtual Connects for Ethernet and FC are very nice

They are indeed. If you like the concept, I suggest you (also) have a look at this: http://www-03.ibm.com/systems/bladecenter/hardware/openfabric/openfabricmanager.html

The active cooling modules use less watts than competitors' chassis.

This is the never-ending battle. HP publishes results saying they are the best. IBM publishes results saying they are the best. Dell publishes, etc. etc. etc. I must admit that "active cooling" is a cool name .... but it really boils down to which marketing machine you trust more.

IBM = Very poor remote management

Well I can't say much ... that's your opinion and I respect it.

Single concurrent session (at least in the datacenter I was in)

Well, other datacenters can do multiple concurrent sessions... :) Joking aside, the currently shipping blades with the currently shipping chassis do support multiple concurrent sessions.

Can only have two ethernet switches per chassis as opposed to the four switches per chassis in the C-Class

? The BC-H supports up to 4 standard switches + 4 high-speed switches. Alternatively, you can have up to 8 standard switches in the chassis in order to support up to 8 I/O ports per single-wide blade (for example, 6 NICs + 2 FC). What you describe is a two-year-old limitation bound to the BC-E chassis, which we announced back in 2002 and still sell in volume (not all customers need to go to the moon with I/O).

Again, I am not here to try to sell IBM kit ...... but I think a "repositioning" of what has been discussed was due.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
langonej
Enthusiast

My apologies. I am at the mercy of another contractor and contract company that handles all of the IBM kit. They are onsite as "the IBM blade experts," so I essentially had to send an RFP to them, and they came back and told me that four Ethernet switches and two FC switches in a chassis is not possible. They also told me that concurrent remote management sessions are not possible.

The rest I'll agree with you on and chalk up to propaganda from the various vendors. However, it's easy to see why I think these blades are rubbish if my two statements above are true. If they are not (and it's quite possible), then our contractors need better education.

My completely hands-on experience with the C-Class has been nothing but positive.

mreferre
Champion

No need to apologize. As I said, my post was more about helping others get a clear picture of what's available in the industry than about entering into a vendor discussion with you.

Put like that, I can't say whether these people are blade experts or not. The only thing I am sure of is that, with the currently shipping technology (and for a while now, actually), what I have outlined is technically possible.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
mikepodoherty
Expert

For us, we still get occasional memory problems - from both batches - but the frequency has decreased over time.

The HBA problem on one batch is probably a bad batch of HBAs but it is frustrating.

kingsfan01
Enthusiast

I'll toss my opinion into the ring as well... I am using an IBM BladeCenter H chassis with HS21 blades at our DR site. It is my first blade system, and overall I am pleased with it. I had a problem on initial implementation with a bad management module (shipping damage courtesy of UPS) and a nasty time trying to get the Nortel L2/L3 switches configured and working correctly. I ended up pulling them and returning them in favor of the Cisco switches, which work perfectly. The other issue I had was the QLogic iSCSI daughter card for the blades... the thing kept locking up VMware on boot, so I pulled it and installed the I/O expansion module with a QLogic 4052C instead, which works beautifully. (BTW - the iSCSI daughter card works fine on Windows Server 2003; I updated the firmware but had already set up the 4052 and didn't feel like switching back.)

Our current setup is the H chassis (8852) with 5 HS21 blades (8853-AC1), each with 2x quad-core Intel ULV @ 1.86GHz. 3GB RAM on 5 blades, 16GB on the ESX server, and 2x73GB 15K SAS in RAID 1 on each blade for local boot. 4 NICs per blade (dual on-board with TOE, teamed for iSCSI, plus the Ethernet expansion card teamed for LAN), with an additional IBM 4-port GbE NIC on the I/O expansion module as well as the 4052C. All blades are connected to a dual-module LeftHand iSCSI SAN (NSM2060 @ 3TB each).
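For anyone wanting to lay out a similar split of iSCSI and LAN traffic, a setup like the one above can be sketched with ESX 3.x-era service-console commands. This is a minimal sketch, not the poster's actual configuration: the vSwitch names, port-group names, and vmnic numbering are assumptions, and vmnic ordering varies by hardware.

```shell
# List the physical NICs the blade presents, to confirm vmnic numbering
esxcfg-nics -l

# vSwitch0: on-board TOE pair teamed for iSCSI traffic (names assumed)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "iSCSI" vSwitch0

# vSwitch1: Ethernet expansion card pair teamed for VM/LAN traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "LAN" vSwitch1

# Verify the resulting vSwitch/uplink layout
esxcfg-vswitch -l
```

Teaming policy (load balancing, failover order) would then be set per vSwitch or port group in the VI Client.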

I attached a couple of images of the chassis in its original iteration (first configured with the copper pass-through module, since replaced with the Cisco switches)... sorry for the poor quality; it's all my BlackBerry could muster, but you get the idea.

Tyler