VMware Cloud Community
ir1shm1ike
Contributor

VMware Infrastructure - Blade Servers or Single Servers? Best Option?

Hoping someone can share their experience or thoughts on this. We currently have 9 ESX servers in our environment. We haven't had any problems; they are all Dell and seem overall pretty decent. We have an HP EVA 6100 for storage and together everything just fits. We are looking to consolidate the rest of our physical servers, and we have also expanded into replacing desktops with thin client boxes. As I said, 9 ESX boxes all running well. We are looking to get 3 or 4 more servers. What I was looking at is the PowerEdge R900, which seems to be a decent server. It was recommended that we go the blade route instead of single servers. Has anyone had any experience with blades? Dell, HP, thoughts on them? Is it worth it to buy the enclosure even if you only pop in 3 or 4 blades to start, or should you fill it in order to get your money's worth? How well does ESX handle itself on blades? Any thoughts would be appreciated.

15 Replies
Dave_Mishchenko
Immortal

Here's a recent discussion on blades - http://communities.vmware.com/message/856896. A common concern is the number of NIC ports available (early blade models had just 2), but that's been rectified with current offerings. The last time I did a cost analysis (and it's been a while), we had to add 7 blades to a chassis before we started saving on hardware costs, so if you plan to add more, starting with 3 or 4 isn't too bad.
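
If you want to redo that break-even check against your own quotes, a back-of-envelope sketch like this works; every price in it is a made-up placeholder, not real Dell/HP pricing:

    # Back-of-envelope blade vs. rack break-even (hypothetical prices).
    CHASSIS_COST = 8000      # enclosure + interconnect modules (placeholder)
    BLADE_COST = 5500        # per blade server (placeholder)
    RACK_SERVER_COST = 6500  # comparable rack server incl. its share of switch ports (placeholder)

    def blade_total(n):
        return CHASSIS_COST + n * BLADE_COST

    def rack_total(n):
        return n * RACK_SERVER_COST

    # First server count at which the blade route becomes the cheaper one.
    breakeven = next(n for n in range(1, 17) if blade_total(n) <= rack_total(n))
    print("break-even at", breakeven, "servers")  # 8 with these placeholder numbers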

Ken_Cline
Champion

In my "prior life" at HP, I had quite a bit of experience with ESX on HP blades. In all honesty...it's a great fit. As Dave said, early blades had I/O problems, but with the new blades, you can get extreme configs with up to 16 pNICs. A reasonable config with six or eight pNICs and two HBAs is very doable. Also, as Dave mentioned, you normally need to fill an enclosure to better than half full to realize hard savings.

I wouldn't recommend blades "just because" - where I normally ran into them was in large datacenters. The "secondary" benefits of reduced cable clutter, fewer core network switch ports, virtual connect, etc. were normally what drove the decision to blades.

With the new c3000 enclosure, I'm not sure where the break-even point is, but it would be fewer blades. This could be a very attractive solution for the other end of the spectrum (the small shops who don't have all the datacenter infrastructure issues), but I've not had hands-on experience with them.

Good luck!

KLC

Ken Cline

Technical Director, Virtualization

Wells Landers

VMware Communities User Moderator

VMware vExpert 2009 - Blogging at: http://KensVirtualReality.wordpress.com/
hugop
Hot Shot

I've had the pleasure of designing and installing ESX onto Dell, IBM and HP blades and rack mount servers. The main advantages are the number of cores per rack that you can achieve by using blades vs. normal servers, and less cable clutter, as Ken said. Think about it: let's take HP as an example. Using a DL360 with 2 x quad core and 6 x NICs, this server is 1U, so a 42U rack (assuming it's just for servers) would get you 42 servers with 336 cores, and a spaghetti of 42 x 6 network cables plus 42 iLO cables, for a total of 294 cables out of one 42U rack! Consider the cost per port for fiber and network.

Now let's use the HP c7000 chassis filled with BL460c blades. Each chassis is 10U and can accommodate 16 half-height blades (BL460c), so this configuration achieves a density of 4 chassis and 64 BL460c servers per rack, with a total of 512 cores. By using Cisco, Brocade or VirtualConnect switches, it is possible to minimise cabling to as low as 72 cables for the entire rack, based on 2 fiber cables for each fiber switch module and 2 network cables for each network switch module. That's roughly 25% of the cables vs. using 1U rack servers.

This is an extreme example, but the advantages are obvious.
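
If you want to rerun these numbers with your own blade and NIC counts, it's just this sort of arithmetic (the 72-cable figure is the one quoted above, not derived here):

    # Per-rack density and cabling: 1U rack servers vs. c7000 blade chassis.
    RACK_U = 42

    # DL360-style 1U servers: 2 sockets x 4 cores, 6 NICs + 1 iLO each.
    servers_1u = RACK_U               # 42 servers
    cores_1u = servers_1u * 2 * 4     # 336 cores
    cables_1u = servers_1u * (6 + 1)  # 294 cables

    # c7000: 10U per chassis, 16 half-height BL460c blades, 2 sockets x 4 cores each.
    chassis = RACK_U // 10            # 4 chassis per rack
    blades = chassis * 16             # 64 servers
    cores_blades = blades * 2 * 4     # 512 cores
    cables_blades = 72                # 2 uplinks per switch module across the rack (figure above)

    print(cores_1u, cables_1u)          # 336 294
    print(cores_blades, cables_blades)  # 512 72
    print(f"{cables_blades / cables_1u:.0%} of the cabling")  # 24% of the cabling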

Hugo's 0.02c

meistermn
Expert

I would not agree that blades are the best solution for ESX hosts. If you have a cooling problem in the whole chassis, there is a much greater risk that several ESX blades fail at once.

Cabling is also not really an argument for blades, because with rack servers and a 3-leaf solution I can likewise reduce cabling.

I prefer a 4U box that is stable and has no heating problems.

dkfbp
Expert

I prefer blade servers whenever possible. The reduced installation time, the reduced power consumption, the reduced cable management and the flexibility are enormous. I can move one blade from one datacenter to another in a matter of minutes. If I had to move a typical rack server, we are talking hours.

It really comes down to your own particular situation. In our environment we have 10 x IBM x3850 3U servers. Whenever I have to deploy one of them I have to spend a LOT of time labeling cables, checking for available fiber and Ethernet ports, etc. We also have an HP c3000 blade chassis with 7 HP BL460c blades. From that entire chassis, which can hold 16 ESX servers, we have 10 network cables and 2 fiber channel cables running. Provisioning a blade is just a matter of minutes; especially once ESXi takes off and the server comes preinstalled, it is pretty much just put it in, do the zoning, and you have added an extra server to your ESX farm.

Best regards

Frank Brix Pedersen

Blog: http://www.vfrank.org
jhanekom
Virtuoso

The single biggest benefit of blades is, in my view, also a detractor for virtualisation. The fact that they're optimised for high-density computing often means that they can't take as much memory.

You can generally fit almost twice as much memory into an HP DL580/DL585, Dell R900/R905 or IBM x3850 as into an equivalent blade from those vendors, allowing you to get more bang for your buck in terms of VMware licensing.

meistermn
Expert

If you use 2 x 10 Gb Ethernet cards per rack mounted server, you have three cables (one for iLO) per server.

In my opinion, 2008 is not a good investment year for big consolidation projects.

In 2009 the blades will have double the DIMM slots of today, the 10 Gb cards will be cheaper, and the FCIP protocol will have a huge impact on datacenters.

gdesmo
Enthusiast

I have been quite happy with 8 DL580s. Very stable, with lots of copper and fiber cables.

We have purchased c7000 enclosures with lots of Ethernet and HBA Virtual Connects, loaded with quad-core BL680c's with 32 GB. I am getting a lot more VMs per host, and I can see I will run out of memory before CPU. A scary situation happened last week in dev: I updated the Onboard Administrator firmware to 2.20. The update completed successfully on both OAs, but it caused all blades to become "not configured for virtual connect", so I could not ping any of the blades. The OAs were somehow reset to factory defaults and needed to be re-configured from scratch with the factory default password. All of the VC info was still there after the OAs were re-configured, but I had to power down each blade to get them to connect back to their Virtual Connects. I have an open case with HP.

This was scary, as it demonstrated how a single update could bring down an entire enclosure full of blades, which could have been hundreds and hundreds of production VMs.

williambishop
Expert

Unless someone shows me a reason to warrant it, I'll never go back from blades. They're great for virtualization, providing density while saving power and supplemental costs. Anyone who's ever priced a SAN switch knows that if you can get more hosts on a single switch, your costs will be a LOT less. Same with Ethernet. There are a ton of threads here about the joys of blading; a quick search should net you MORE than enough research. I could buy a chassis and a few extra blades for the cost difference of SAN-attaching 14 servers - those upstream switches are EXPENSIVE. I get 2 quad-core Xeons and 24 gigs of RAM per blade; no way that is not enough for anything short of a big beast. Literally, even if I only get one VM on a blade (say it needs that 24 gigs of RAM), it's still cheaper than getting a 4- or 8-way box with 64 gigs just so I can get a few more VMs on it...

--"Non Temetis Messor."
jhanekom
Virtuoso

Valid. As long as you keep in mind that those interconnects aren't free (and, in fact, often carry a premium - I know we pay for them in the HP world.)

Also, there are 2-way servers with 64GB RAM (HP ML370, Dell R805) and numerous 4-way servers with 128GB RAM. Depending on your load profile, that's double the capacity of most vendors' 2-way and 4-way blades...with half as many ESX licenses.
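
To put rough numbers on that licensing point (the 4 GB per VM, the 32 GB blade and the per-license framing are all just assumptions for illustration):

    # VMs hosted per 2-socket ESX license when memory is the bottleneck.
    GB_PER_VM = 4  # assumed average VM memory size
    hosts = {
        "2-way rack, 64 GB (e.g. ML370/R805 class)": 64,
        "2-way blade, 32 GB (assumed config)": 32,
    }

    for name, ram_gb in hosts.items():
        print(name, "->", ram_gb // GB_PER_VM, "VMs per 2-socket license")
    # 16 VMs vs. 8 VMs for the same number of ESX licenses.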

I do agree that there are soft costs to consider, however, and that these will vary from environment to environment. This should definitely influence which hardware choice you make - there's no one-size-fits-all (otherwise I'd be out of a job!)

chandlm
Expert

As you can see from the answers you already have, I'd say it really warrants a serious look at your environment before you decide. I don't currently use blades, but I have in the past, back when the NIC ports were a problem (and NIC teaming was nowhere near as easy as in 3.x - I couldn't be more thankful that process was made easier). I wish we did use them in my current environment, because I consistently have issues getting servers physically installed once they arrive onsite (my team does not manage the server installations, so I'm at the mercy of others on that). If I had blades, I could go through the pain of getting a chassis installed once and then have room to grow before going through that pain again. I guess the roadblocks would probably be bigger for an entire chassis than for an individual server, but I'd rather take capacity in large chunks so that small increases are much easier.

On the other hand, if your environment doesn't grow quickly and you can't fill a good portion of the chassis right off the bat, blades may not be a good fit...

One last comment: it may well just come down to what the operations teams in your environment are comfortable with. If you don't currently use blades, sometimes that's a sales/marketing job you'll have to take on, and as with virtualization, anything that goes wrong will be blamed on the new technology.

williambishop
Expert

But your 4-way server is going to cost a LOT more than that blade with 24 or 32 gigs of RAM. And all your license cost just got wiped out by the savings from not having to connect that one extra box to a SAN switch. Sure, you can do a two-way with twice the RAM, but RAM is not going to be as big a limiter as CPU anyway in most cases.

Now add in energy savings, footprint, and redundancy (because you can add another blade a LOT more cheaply). Add in a redundant 4-way, and your costs are probably 3x more than my blade center scenario. I can pay 7k and get a dual-proc, quad-core Xeon blade with 24 gigs of RAM, dual fiber and dual Ethernet...

When it comes to performance, as long as you don't need more CPU or memory than the blade allows, there are few instances where the 4-way makes more sense. Trust me, we calculated it seven ways from Sunday when IBM tried to sell us on that path; it came out better with blades any way they, or we, could slice it.

I should add, for the pedantic, that this scenario assumes a near-capacity blade center and an equal number of servers. Since we don't deal with really small sites, we worked out our costs and savings based on 8 or more blades, and 4 or more 4-ways.
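
A sketch of how that kind of comparison totals up (every price below is a placeholder chosen just to show the shape of the math, not a real quote, and the configs are only illustrative):

    # Blades behind shared interconnects vs. standalone 4-ways (hypothetical prices).
    BLADE = {"cost": 7000, "sockets": 2}     # dual quad-core, 24 GB (placeholder price)
    FOURWAY = {"cost": 25000, "sockets": 4}  # 4-socket, 64 GB (placeholder price)
    SAN_PORT = 1500                          # per upstream FC switch port (placeholder)
    LICENSE_PER_2SKT = 3000                  # per 2-socket ESX license (placeholder)

    def total(server, count, fabric_ports):
        hw = count * server["cost"]
        san = fabric_ports * SAN_PORT  # upstream FC ports for the whole group
        lic = count * (server["sockets"] // 2) * LICENSE_PER_2SKT
        return hw + san + lic

    # 8 blades share a handful of chassis uplinks; each 4-way needs its own pair of FC ports.
    print("8 blades :", total(BLADE, 8, fabric_ports=4))    # 86000 with these placeholders
    print("4 4-ways :", total(FOURWAY, 4, fabric_ports=8))  # 136000 with these placeholders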

--"Non Temetis Messor."
jhanekom
Virtuoso

Yep. But just to clarify, I think my points were:

- In the example, I'm not using 4-way systems with 64GB RAM. I'm using 2-way systems with 64GB RAM, or 4 way with 128GB RAM

- In the workloads I deal with, memory is the limiting factor (sorry, I think we had a communication mismatch on where our respective bottlenecks were)

- In the HP world, the blade VirtualConnect or fiber switch interconnects cost as much as or more than the equivalent standalone product. From my experience with IBM in the past, I'm fairly confident that this is the case there too. (We recently priced a 10-port 4Gbit IBM BladeCenter Fibre Channel switch, and we would have sold it for just under US$5000.)

Since your environment is different (CPU is your limiting factor), I appreciate that the benefit of additional RAM you get on freestanding servers is lost completely.

(Also, I fully agree that 4-way technology is more expensive than 2-way technology.)

williambishop
Expert

Ah, I see. In IBM BladeCenters, my internal SAN switches are not very expensive... however, ports on my 9509s very much are. Come to think of it, the internal network switches aren't particularly expensive either.

I can't fathom a workload that abuses the CPU so little but eats up that much memory, but in that case, could you not always get a higher-density blade? I can cheaply buy a 32 gig IBM blade, so I don't see why I would not just as easily buy an extra blade and divide the workload in half. Sure, it's an extra license (which isn't that much), but I also buy redundancy, and I'm not buying any additional Ethernet or SAN connections... so it's pretty much a wash at best.

--"Non Temetis Messor."
ToddMuirhead
VMware Employee

I know that Exchange 2007 is an example workload that uses tons of RAM and not that much CPU.

The cost of memory and the number of memory slots in single servers (or should we say non-blade servers?) have improved, so you now have more memory slots and less expensive memory. Blade servers also benefit from the reduced cost, but don't have as many slots. With the Dell R805 (2U, 2-socket) there are 16 memory slots and you can get a 64 GB RAM configuration for under 10K. The Dell R905 (4U, 4-socket) has 32 memory slots and you can get a 128 GB RAM config for under 20K. (Check out dell.com for specific pricing.)
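
Using those ballpark figures, the cost per GB of RAM (with an assumed per-2-socket ESX license price added in; the blade config is also assumed) comes out something like this:

    # Rough $ per GB of RAM, including an assumed ESX license cost per 2 sockets.
    LICENSE_PER_2SKT = 3000  # placeholder, not VMware list pricing
    configs = {
        "Dell R805 (2-socket, 64 GB)":  {"price": 10000, "ram_gb": 64,  "sockets": 2},
        "Dell R905 (4-socket, 128 GB)": {"price": 20000, "ram_gb": 128, "sockets": 4},
        "2-socket blade (32 GB)":       {"price": 7000,  "ram_gb": 32,  "sockets": 2},  # assumed config
    }

    for name, c in configs.items():
        total = c["price"] + (c["sockets"] // 2) * LICENSE_PER_2SKT
        print(f"{name}: about {total / c['ram_gb']:.0f} $/GB")
    # The memory-dense rack boxes land around 200 $/GB; the low-memory blade is noticeably higher.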

So if RAM is the key concern - it may be better to go with non-blade servers.

I agree that there are tons of potential cost savings in terms of upstream ports with blades - so that may outweigh all other costs.

Todd
