VMware Cloud Community
JonRoderick
Hot Shot

Experiences of ESX on IBM BladeCenter

I'm looking for feedback on people's experience running ESX 3.x.x on IBM BladeCenter (specifically the BladeCenter S, but any model will do).

Specifically: what hardware configurations did you use, did you run into any issues or limitations, and what storage virtualisation solution did you use?

Thanks

Jon

3 Replies
kingsfan01
Enthusiast

Hi Jon,

I am running a BladeCenter H chassis with 3 vHosts out at my DR site and I love the system. Quick rundown of my system:

IBM BladeCenter H (8852-HC1) with 4x IBM Server Connectivity Modules, 4x power modules, 2x chassis cooling modules and a standby management module.

vHost 1 & 2 - HS21 (8853-AC1) - 2x 1.86GHz quad-core Intel Xeon, 24GB RAM, Memory & I/O expansion blade & 2 IBM CFFv Ethernet expansion cards (8 NICs per host)

vHost 3 - HS21XM (7995-HVU) - 2x 3GHz quad-core Intel Xeon, 32GB RAM, PCI expansion board w/ 2 dual-port NICs & 1 IBM CFFv Ethernet expansion card (8 NICs total)

For shared storage, I am using 2 LeftHand NSM2060s @ 3TB (SATA) each, and I am using the VMware software iSCSI initiator.
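In case it helps anyone else, here is a rough sketch of how the software iSCSI piece can be wired up from the ESX 3.x service console. The vSwitch name, vmnic numbers and IP addresses below are placeholders rather than my actual values, so treat it as an outline only:

    # Dedicated vSwitch and VMkernel port group for iSCSI traffic
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "iSCSI"

    # ESX 3.x also wants a service console interface on the iSCSI network
    esxcfg-vswitch -A "SC-iSCSI" vSwitch1
    esxcfg-vswif -a vswif1 -p "SC-iSCSI" -i 192.168.10.12 -n 255.255.255.0

    # Enable the software iSCSI initiator, then add the LeftHand cluster
    # virtual IP as a dynamic discovery target in the VI Client
    esxcfg-swiscsi -e

After a rescan of the software iSCSI adapter, the LeftHand volumes should show up and can be formatted as VMFS.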

Thus far, everything has been great. My 3 major complaints when I implemented the system:

1) At the time, IBM only offered Ethernet expansion cards with 2 NICs, which meant I had to get an expansion module and additional daughtercards or NICs if I wanted more NICs per host. They have since released a quad-port NIC expansion card, which will get you to six NICs in a single blade slot and would be ideal for me. I may go back in the future and put in the quad-port NIC daughtercard if I need to free up some additional blade slots.

2) I did the initial deployment using 4x Nortel Layer 2/3 gigabit switch modules, which were a massive pain to work with. The firmware was old and finicky to update, and when I finally got everything working, I needed additional ports for all of my other hardware. I was still within my return window, so I returned the Nortel switches and swapped them for 4 IBM Server Connectivity Modules. The connectivity modules (while somewhat limited for what some others may be doing) were perfect for my implementation. They allowed me to set up a few aggregated links per module to my various switches (iSCSI, LAN, etc.) and then dictate which internal NIC went to which uplink. As they support VLANs and link aggregation, they were perfect for what I needed to do. (I've put a rough sketch of the matching ESX-side port group configuration right after this list.)

3) The first management module that came with the chassis was DOA, which made it impossible to manage the system (obviously). I had a 4-hour response window for new parts and was able to get up and running quickly after the new part came in. Moral of the story... the standby management module is a necessity (in my opinion).
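For what it's worth, here is the ESX-side sketch I mentioned in point 2. Once the I/O modules carry the VLANs through to the blades, the port groups just get tagged on the vSwitch; the vmnic numbers, VLAN IDs and port group names here are made up for the example and aren't my actual values:

    # Two teamed uplinks into one vSwitch, port groups tagged per VLAN
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -A "VM Network - Prod" vSwitch0
    esxcfg-vswitch -v 100 -p "VM Network - Prod" vSwitch0
    esxcfg-vswitch -A "VM Network - DMZ" vSwitch0
    esxcfg-vswitch -v 200 -p "VM Network - DMZ" vSwitch0

Whether you trunk VLANs to the host like this or map each internal NIC to a single uplink/VLAN on the module really comes down to how many port groups you need versus how many NICs you have.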

At the time I implemented the system, I looked at HP & Dell as well, but I passed on HP because they had just released their new chassis and caught a lot of flak that old blades wouldn't work with it. I liked that IBM was committed to compatibility in future products. I also passed on Dell because of negative past experiences with them (primarily relating to customer support, not hardware). The primary reason I went with the BladeCenter (over large purpose-built boxes) was density. As these servers were going into a single cabinet at a CoLo, I needed to ensure I wasn't taking too much space/power (trying to minimize costs). I now have 10 blades in the chassis, my redundant LeftHand SAN, switches, firewalls, tape backup drive and various other support equipment in a little over half a cabinet, mirroring my entire infrastructure at my datacenter, which is distributed across 3 cabinets.

I don't have any experience with the S other than what I have read, but if you have the space and power capabilities to put in an H, I think you should do so. The S is a nice chassis, and the shared, partitionable storage is nice as well, but with only 6 blade slots available and 4 I/O modules usable, you quickly limit yourself and your future expandability options. Obviously, the big benefit of the S (aside from cost) is its ability to use 110V power.

Let me know if you have any questions or would like any additional information pertaining to my setup.

Tyler

dafortune123
Contributor

Hey Jon / Tyler

I was wondering how you configured the service console, VMkernel, virtual machine network and secondary service console on a BladeCenter S, for a setup of 2 blades with ESX 4.0 clustered for HA.

Also, if I add a 2-port Ethernet expansion card to each of the blades, how would I configure that?

@Tyler: I had a terrible experience with the 2 Server Connectivity Modules present in the chassis now. Well, you can blame my lack of expertise for that.

Any help on this would be greatly appreciated.

Regards,

RK

mreferre
Champion

Have a look at this if interested: http://it20.info/blogs/main/archive/2008/11/14/162.aspx

As you can see from one of the pictures, you can "only" have 4 NICs and 2 SAS ports per blade on the S. The way you configure those 4 NICs depends entirely on what your goals/concerns/requirements/limitations are.
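Just to make that concrete, one fairly common way to lay out 4 NICs on classic ESX is sketched below. This is only an illustration, not a recommendation: the vmnic numbers, port group names and addresses are placeholders, and the right teaming/failover policy depends on the requirements above.

    # vSwitch0: service console + VMkernel (vMotion), vmnic0/vmnic1 teamed
    # (vSwitch0, the Service Console port group and vswif0 are normally
    #  created by the installer; typically only the second uplink and the
    #  VMkernel port group are added by hand)
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -A "VMkernel" vSwitch0
    esxcfg-vmknic -a -i 10.0.1.21 -n 255.255.255.0 "VMkernel"

    # vSwitch1: virtual machine traffic, vmnic2/vmnic3 teamed
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VM Network" vSwitch1

On the HA question: teaming two NICs under the service console vSwitch already gives HA the management network redundancy it checks for; if you prefer a secondary service console instead, you can add another Service Console port group on vSwitch1 (on a different subnet) rather than burning an extra physical NIC.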

Ken Cline has a wonderful eight-part series on NIC configuration for ESX hosts (http://kensvirtualreality.wordpress.com/?s=switch+debate) that is more than worth a read.

As Tyler suggested, if 4 is too low for you, the H is your best option. Consider that the S was originally meant for remote sites / small shops, most of which would do well with even 2 NICs.

Massimo-

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info