VMware Cloud Community
vincedoepker
Contributor

Dell Blade Hardware

Our shop is in the process of refreshing our ESX hardware, and one of the options on the table is a refresh using the Dell M1000e enclosure with a mix of M905 and M605 blade servers.

Just wanted to ping the community and get some feedback on this hardware - the good, the bad, and the gotchas.

After playing with hardware configurations a couple of questions came up that I didn't have ready answers for:

  1. Are the Opteron processors designated "HE" (for example, the Opteron 8425 HE) on the VMware HCL? The Opteron 8425 is listed as supported for this hardware, but there's no mention of the HE models.

  2. If we mix and match blade models, the same Opteron model isn't always available. Would vMotion work between an Opteron 2425 and an Opteron 8425?

  3. Looking at the configuration options for the chassis, what have everyone's experiences been with the PowerConnect switching? We had some PowerConnect 6000-series Layer 3 switches on our network before, and they were easily the most unreliable/unstable switching platform I've worked with. Is the Cisco switching option worth the extra money? (We'd make use of some QoS features to tag packets from certain applications.)

Thank you!

awliste
Enthusiast

Man oh man. You're in for a treat.

I run M905s in our clusters. VERY good blades. I don't think you'll have any trouble with your Opterons and vMotion - the only heartache I ever had with vMotion on these blades was when we added an Intel-based R900 to the cluster. We knew it wouldn't work, but some knucklehead tried anyway and made a scene out of it.
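On the 2425 vs. 8425 question - if I remember right, those are both the same Istanbul core (the 2xxx/8xxx split is about socket count, not architecture), so vMotion between them should be fine, and enabling EVC on the cluster is the safety net either way. If you want to script a sanity check, here's a rough pyVmomi sketch - the vCenter address, credentials, and cluster name are all placeholders for your own:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip cert validation. Use a real SSL context in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name != "BladeCluster":  # hypothetical cluster name
            continue
        evc = cluster.EvcManager()
        print("Current EVC mode:", evc.evcState.currentEVCModeKey or "disabled")
        # Baselines every host currently in the cluster can accept:
        for mode in evc.evcState.supportedEVCMode:
            print("  supported:", mode.key)
    view.DestroyView()
finally:
    Disconnect(si)

If both blade models report the same supported AMD baseline, you're clear to mix them in one EVC-enabled cluster.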

A couple of bumpy bits: the Ethernet pass-through module addressing scheme takes a little decoding when you're running full-height blades (the 905s). Oh, and keep an eye on your iDRAC revisions. When you get the servers in, the first thing you should do is upgrade the iDRAC firmware. We had a problem where the iDRAC would not pass a capital 'S' through to the console; a firmware update fixed it.
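If you want to take stock of firmware across the whole chassis first, you can shell out to racadm against the CMC. Rough sketch only - the CMC address and credentials are made up, and getversion support/output varies by CMC firmware, so treat it as a starting point:

import subprocess

# Remote racadm against the chassis CMC; address and credentials are placeholders.
CMC = "192.168.0.120"
cmd = ["racadm", "-r", CMC, "-u", "root", "-p", "changeme", "getversion"]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)  # per-slot firmware versions; format depends on CMC revision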

Other than those bumps, it's been nothing but butter smooth. And be prepared when you first power on that M1000e - it'll sound like it's about to achieve liftoff. We had some... anxious moments when we turned it on. (I remember saying - or actually yelling - "Dude, the sales guy didn't say anything about this feature!") It'll calm down after a while.

Good luck!

- abe

Integritas! Abe Lister | Just some guy that loves to virtualize. Ain't gonna lie, I like points - if what I'm saying is useful to you, consider sliding me some points for it!
sketchy00
Hot Shot

I second that. I have an M1000e enclosure with M600 blades and pass-throughs, and it was pretty pain-free. It took me a bit to figure out how to get started configuring the unit, but once I had the electrician wire in the 208V feed, I was good to go. I will tell you that I remotely powered it up and heard the thing crank up like a jet from 100+ feet away, through a very sound-resistant server room. The initial noise was unreal! Then it spins down and you're good.

I did have a CPU go out on one of my four blades. Everything vMotioned off of it fine, and Dell had a replacement chip delivered to me in two hours. I put it back in the blade, brought it up, and everything balanced itself out. It was beautiful.
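If you ever want to drain a blade yourself before pulling it, a maintenance-mode request kicks off the same evacuation. A rough pyVmomi sketch, with the vCenter address, credentials, and host name as placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs for real
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "blade05.example.com")
    # With DRS in fully automated mode, this vMotions everything off the host.
    task = host.EnterMaintenanceMode_Task(timeout=0)
    print("Evacuating", host.name, "- task state:", task.info.state)
    view.DestroyView()
finally:
    Disconnect(si)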

vincedoepker
Contributor

Good info here. I am feeling better about the Dell blade solution after the positive comments.

Which switching solution did everyone go with - the PowerConnect or the Cisco?

sketchy00
Hot Shot

For connections to my SAN, I have just a couple of Dell PowerConnect 5424 switches hooked up to an EqualLogic PS5000 array. The switches are not stackable, but that's not a big deal; you just dedicate 3 or 4 ports on each switch to interconnects. They're super affordable, they're endorsed by EqualLogic for use with their arrays, and the other nice part is that EqualLogic publishes instructions that spoon-feed you the settings needed to prep these switches for an iSCSI array (disable iSCSI optimization, enable PortFast, enable flow control, disable storm control, etc.).

On the LAN-facing side, I have 3 stacked managed D-Link Gig-E switches that are actually pretty darn good. But for iSCSI, stick to switches that are approved and endorsed by the SAN manufacturer.
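If you end up with several of these to set up, you can push those settings with a script instead of pasting them by hand. A netmiko sketch - the address, credentials, port range, and even the exact command spellings are assumptions on my part, so check them against the EqualLogic doc for your firmware:

from netmiko import ConnectHandler

# PowerConnect 54xx over SSH; host and credentials are placeholders.
switch = ConnectHandler(device_type="dell_powerconnect",
                        host="192.168.10.2", username="admin",
                        password="changeme")
commands = [
    "no iscsi enable",                   # turn off iSCSI optimization
    "interface range ethernet g(1-16)",  # hypothetical iSCSI-facing ports
    "spanning-tree portfast",
    "flowcontrol on",
    "no port storm-control broadcast enable",
]
print(switch.send_config_set(commands))  # enters config mode, applies, exits
switch.disconnect()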

awliste
Enthusiast

We ran with Cisco. Work fine, last long time.
