VMware Cloud Community
bhirst
Contributor

ESX Hardware Choice FTW – Blades vs. Boxes, Intel vs. AMD

Guys – wanted to get your opinions on hardware. We currently have 5 DL380 G5s (they're turning out to be dogs, BTW) and I need to prepare for 2009-2010 growth. Here are the points:

  • I only have about 10U left in my current cab at the co-lo, and for recurring-cost reasons I don't want to go to another cab in the near future.

  • The goal is to fill that 10U with the best performance per U within reasonable cost.

  • Everything is NFS, so don't worry about HBAs, iSCSI, etc.

  • Blade architecture seems very attractive because of its density. I could put in an IBM BladeCenter that would have all the redundancy, management & switching in a tight 9U package – room for a dozen blades, I believe. But what about performance?

  • What about these mega-big Dell PowerEdge R905 rack servers with AMD Opterons & 48GB RAM? Are they a better option for VMware than blades?

  • How much more performant are AMD chips at running ESX than Intel? I'm a big AMD guy, I run 'em at home, and we all know AMD's architecture is cleaner than Intel's FSB.

P.S. The DL380s are configured thus: 2x quad-core Xeon @ 2.33GHz, 32GB RAM.
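For reference, here's a rough back-of-napkin density comparison of the options I'm weighing. The blade count and per-blade specs are guesses for illustration only; the R905 and DL380 figures are the configs quoted above.

```python
# Rough cores-per-U and GB-per-U comparison. Blade count and per-blade
# specs are assumptions for illustration, not vendor-confirmed figures.

options = {
    # label: (rack units, cores, RAM in GB)
    "IBM BladeCenter, ~12 blades (assumed 2x quad-core, 32GB each)": (9, 12 * 8, 12 * 32),
    "Dell PowerEdge R905 (4x quad-core Opteron, 48GB)":              (4, 16, 48),
    "HP DL380 G5 (2x quad-core Xeon, 32GB) - current":               (2, 8, 32),
}

for label, (units, cores, ram_gb) in options.items():
    print(f"{label}: {cores / units:.1f} cores/U, {ram_gb / units:.1f} GB/U")
```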

Weigh in with your server religion!

Cheers,

Bart

1 Solution

Accepted solution: see MrBiscuit's reply below.
13 Replies
khughes
Virtuoso

Honestly, it comes down a lot to how you want your virtual network set up. Blades sound like a good option since you're running out of rack space, and seeing how you're running NFS, you could probably make a bladecenter work just fine. The big choice between your blades idea and the monster rack servers is obviously how many VMs run on an ESX host. If you put in a giant rack server, you're going to get a lot of VMs on that host, but also know that if you have to do maintenance on that server, or if that server goes down, you're going to have to find a home for all those VMs. Contrast that with the blades, where you can't put as many VMs per host (blade), but it would be easier to do maintenance on a blade, and if a blade fails, HA should be able to find other available blades to balance the load.
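A rough way to see that failure-domain tradeoff in numbers (the host counts and VM counts below are hypothetical examples, not figures from this thread):

```python
# Back-of-napkin failover math: when one host dies, how many VMs get
# displaced, and how much extra load does each surviving host absorb?
# The two scenarios below are made-up illustrations.

def failover_impact(hosts, vms_per_host):
    displaced = vms_per_host              # everything on the failed host
    survivors = hosts - 1
    return displaced, displaced / survivors

for label, hosts, vms in [("2 big rack boxes", 2, 60),
                          ("12 blades", 12, 10)]:
    displaced, extra = failover_impact(hosts, vms)
    print(f"{label}: {displaced} VMs displaced, "
          f"~{extra:.1f} extra VMs per surviving host")
```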

There is some sort of Intel vs. AMD comparison that I found at one point, but it would take me an hour to sift through all my replied messages to find it. I used to be a big AMD fan as well, for the FSB architecture reasons you mentioned, but Intel's quad-core Xeons are pretty beefy and run clean. You'll get different opinions, because it's like asking what's better, Coke or Pepsi?

In the end you'll have to weigh your pros and cons and how your choice will fit into your current environment. Hope some of those notes helped.

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
AndrewSt
Enthusiast

You say you have 10U of space left - well, the other consideration when talking blade chassis is power.

How many amps do you have left?
How many network ports do you have left?

As for the IBM BladeCenter, they are nice. Fill one with HS21 blades and you get a lot of performance. However, be aware you are now sharing uplinks unless you spend money on built-in switches and the like.

Now, we always end up running out of power in a rack, before we run out of space.

Intel vs. AMD - not going there - both work well.

-Andrew Stueve

-Remember, if you found this or any other answer useful, please consider the use of the Helpful or Correct buttons to award points
bhirst
Contributor

AndrewSt - I have 2x 30A circuits in the cab, using about 10A each. If we put in a blade enclosure, I'll undoubtedly run new copper directly to the chassis. As for networking: built-in switches all the way.

P.S. I will be awarding points after one week.

AndrewSt
Enthusiast

If you have the power and bandwidth, then go with blades. Sticking a bunch of 1U servers in the space you have would work, but you would have to deal with all the additional network ports required, and you probably wouldn't get as big a bang for the space. Note I said space - money still comes into play, and sometimes blades do not price as well as individual servers.

Specifically regarding the IBM BladeCenter: my experience with the management functionality - console access and hardware sharing - has been pretty good. I think for this solution it would be better than going with an HP c3000-series chassis. My only problem with the H-series chassis has been that the HS21 blades (Intel) have two options: 2 sockets and 32GB of memory, or 1 socket and 64GB of memory. I would love to have 2 sockets and 64GB of memory. Talk to your vendor and find out if this is still the case. Alternatively, go with the LS22 blades (Opteron) and you can get up to 64GB of memory.

-Andrew Stueve

-Remember, if you found this or any other answer useful, please consider the use of the Helpful or Correct buttons to award points
meistermn
Expert

1. For performance comparisons, look at the VMmark benchmark.

2. For planning 2009 and 2010 server purchases, look at the Intel and AMD roadmaps.

3. When making the decision between an AMD or Intel platform, keep in mind that some application software is only supported on one CPU vendor.

4. Blades or boxes (see the rough sketch after this list):

Boxes will beat blades every time on DIMMs per socket, as boxes have more room.

Blades will beat boxes every time on CPU density.

So for VMs that need a lot of RAM, boxes should be the better choice.

If a lot of CPU per VM is needed, blades are the better choice. So a mix of both boxes and blades is the best I have found in big enterprise environments.

5. Look at Intel network cards which support VMDq and VT-c.

6. Look at the IOMMU from AMD coming with Istanbul and Intel VT-d coming with the Nehalem platform.

7. Mix SSDs and classic physical SAS disks (hybrid storage) to reduce latency and get higher IOPS.

8. In 2010, graphics virtualization and PCI virtualization are coming.
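A rough sketch of the RAM-versus-CPU-density tradeoff in point 4. All figures below are made-up illustrations, not vendor specifications:

```python
# Compare GB-per-core (what RAM-hungry VMs care about) and cores-per-U
# (what CPU-heavy consolidation cares about). Figures are hypothetical.

configs = {
    # label: (rack units, cores, RAM in GB)
    "4U rack box":      (4, 16, 128),
    "9U blade chassis": (9, 128, 768),   # e.g. 8 blades x 16 cores x 96GB
}

for label, (units, cores, ram_gb) in configs.items():
    print(f"{label}: {ram_gb / cores:.1f} GB/core, "
          f"{cores / units:.1f} cores/U")
```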

williambishop
Expert

For some concise, good write-ups on blades vs. boxes, try Aaron Delp's posts at Scott's site. There are three or four focusing on IBM and HP blades, and the math on savings. Personally, the minimum I save a year is $160k by going with blades over normal servers.

http://blog.scottlowe.org/

--"Non Temetis Messor."
JoeTowner
Contributor

Hmm, well, do you need to use all 10U? I'm tempted to say use 7U with an HP c3000 and fill it with blades. HP's BL495c will take all the RAM you can buy (16 slots x 8GB).

One thing to keep in mind is that since blade platforms shift every ~2 years, it's better to fill the chassis, just for consistency across the platform. I had a client with a half-full p-Class blade setup who had to replace it because they hadn't filled it when the "last call" was made on that series.

Then consider breaking your load between two different technologies, like an FC SAN for your I/O-heavy loads and NFS/iSCSI for your space-heavy loads.

HP just came out with the MSA2000 G2 (2U 2.5" SAS-to-FC SAN).

Keep a U free for cooling, maybe even a few fans to shove all that heat out the back.

williambishop
Expert

You can indeed get whiplash buying HP blade chassis, because they do change frequently (every 2-5 years)... IBM, however, does not; the IBM chassis rarely ever changes, in fact. So if that's your hang-up, try the IBM. Personally, I think inventory should change every 3-4 years, because you've jumped a category further in tech by then, if not two. The first blades in my environment were dual-core with 8GB of RAM, and they were the big dogs of their day. The newest ones are quad-core with up to 64GB of RAM, and they still fit in the same chassis, but are WAY faster. Do I need to keep my old ones when I can put 2x the workload on a single newer blade? It's not financially sound, so why would I? Prices on blades are so low that it becomes a no-brainer to exchange them every few years to get better density.

--"Non Temetis Messor."
tailwindALWAYS
Contributor

Another department where I work is running ESXi on a Dell R905. I believe 4 CPUs and 64GB RAM is what he has. He is running 27 VMs right now and said he could easily add a bunch more, but he ran out of physical servers to destroy! :) He said the major limitation for him will be the internal storage he uses.

I would for sure check out Silicon Mechanics: http://www.siliconmechanics.com. I spent some serious time on their site today and was blown away by their prices. I'm looking into some SAN/NAS devices, possibly for an ESXi setup.

They have everything from blade chassis to 1U twin servers. I haven't read up on these much, but when I was poring through the site, I did find some blade servers running 4 CPUs - too hot? Also, the 1U twins really impressed me, though again, I would think they would run hot. Basically, they are two 2-CPU servers crammed side by side in a 1U form factor. CRAZY! And they aren't too badly priced either. I've seen half-depth servers, but not the side-by-side kind. That way you could fit 20 servers in the space you have left.

For fun I just built a 1U twin server. Remember, this is ALL in one 1U server. So: two side-by-side servers, EACH with 2x quad-core AMD 2GHz, 64GB RAM, two 160GB SATA drives (which is fine for the OS running ESX/ESXi, right?), two Gigabit NICs, and a 3-year warranty, all for $7739. Seems pricey for a 1U box, but seems really cheap for how much you get!!!! Here's the server I configured: http://www.siliconmechanics.com/i19537/1U-twin-server.php. This, as long as you can keep it cool, seems unbeatable to me!
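To put that quote in per-resource terms (using only the numbers quoted above, with the per-node split as described):

```python
# Quick cost-per-resource math for the quoted 1U twin configuration.
# Price and specs come from the post above: 2 nodes, each with
# 2x quad-core AMD CPUs and 64GB RAM, for $7739 total.

price_usd = 7739
nodes     = 2
cores     = nodes * 2 * 4       # 2 sockets x 4 cores per node
ram_gb    = nodes * 64          # 64GB per node

print(f"${price_usd / cores:.0f} per core")        # ~$484/core
print(f"${price_usd / ram_gb:.0f} per GB of RAM")  # ~$60/GB
print(f"{cores} cores and {ram_gb} GB of RAM in 1U")
```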

I would also check out their blade chassis. I don't know much about the HPs or IBMs anymore, but with these you can fit 14 blades in only 7U (9100 series). I suppose that's equivalent to the 1U twins. However, the series that can hold 14 can only take up to 12GB RAM with dual procs! :( Their other blade chassis (the 8100 series) allows only 10 blades in 7U, but one of the blades you can throw in there can hold 4 CPUs and 64GB RAM (~$6000/blade with that config).

Whew... so, in summary, if I were in your shoes, I would either check out the 8100-series blades (due to less complexity and shared resources) or the 1U twin servers. I would just be a little worried about keeping them cool! Best of luck. I'm curious to see what you end up doing.

BTW, if you don't mind, what are you using for your NFS storage? What brand/model? I'm debating between building my own from hardware I have or possibly purchasing something from Silicon Mechanics.

meistermn
Expert

In 2010 I would agree with a 1U server with one socket and 12 cores.

bhirst
Contributor

tailwindALWAYS - thanks! I will definitely check those twin 1Us out. For storage I'm using an EMC NS20. I like it, but nowadays I would also check out Compellent, NetApp, etc.

MrBiscuit
Enthusiast

I'm working on a site with a range of hardware: the oldest installs are on PE6950s, there's a VDI installation on R905s, and a new migration environment is being built on HP c7000s with BL685c G5s (full memory, Cisco/Brocade). All the hosts are AMD, either 82xx or 83xx (dual- or quad-core). We're working on migrating the server VMs from the 6950s onto the newly built (and lightning fast) HP blades.

VDI on the R905s is easily capping out on vCPU-per-core scheduling without user performance complaints, giving us 128 VDI sessions per R905 (quad quad-cores, 64GB). We're moving to a connection broker shortly.

Regarding servers, we're averaging 40 guests per 6950 and are memory-limited (8 cores, 32GB RAM).

We're hoping to achieve 90 guests per BL685c, but we're only in the build-and-test stage, with no complaints so far - especially with the minimal cabling required. My math suggests that 96GB will be the sweet spot for memory installed in the BL685cs, although they can support 128GB at greater expense. This results in a theoretical capacity of 600-700 VMs per chassis running across 128 cores, which is just staggering for such a small bit of rack space.
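A quick sanity check of that per-chassis math (the eight full-height blades per c7000 is my assumption; the per-blade cores, RAM, and guest counts come from the figures above):

```python
# Per-chassis totals for full-height BL685c blades in a c7000 chassis.
# 8 blades per chassis is an assumption; per-blade figures are from the post.

blades_per_chassis = 8
cores_per_blade    = 16      # 4 sockets x quad-core
ram_per_blade_gb   = 96      # the suggested memory sweet spot

print(f"{blades_per_chassis * cores_per_blade} cores per chassis")    # 128
print(f"{blades_per_chassis * ram_per_blade_gb} GB RAM per chassis")  # 768

for guests_per_blade in (75, 90):                 # brackets the quoted 600-700 range
    total = blades_per_chassis * guests_per_blade
    print(f"~{total} guests per chassis at {guests_per_blade} per blade")
```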

We went with AMD because, at the time of the decision (many moons ago), their roadmap showed that they would be stable on the new Barcelona socket for some time, whereas Intel were yet to release a new instruction set which would likely break VMotion compatibility. The wisdom here: definitely check the roadmaps of both your CPU and server vendors before committing.

Also, don't overlook the benefit of the integrated remote management you get with blades; it's yet another additional expense on individual servers.

richardmcmahon
Contributor

My only word of warning when going down the blade route would be to ensure your colo facility can supply enough cooling to the front of your chassis. It's great that you have dual 30A feeds, but if it is a standard DC, the rack is likely rated for 2-4kW of cooling, which you are in all likelihood already using with your 2x10A commit.
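Rough math behind that warning, assuming a ~208V single-phase feed (the thread doesn't say what voltage the circuits are, so adjust for your facility):

```python
# Convert the quoted amp figures into kW of draw (which is also heat the
# colo has to remove). The 208V figure is an assumption, not from the thread.

volts        = 208
draw_amps    = 2 * 10            # ~10A on each of the two circuits today
breaker_amps = 2 * 30
derate       = 0.8               # typical continuous-load derating per circuit

draw_kw   = volts * draw_amps / 1000
usable_kw = volts * breaker_amps * derate / 1000

print(f"Current draw: ~{draw_kw:.1f} kW")    # ~4.2 kW, already near a 2-4 kW rack rating
print(f"Usable power: ~{usable_kw:.1f} kW")  # electrical headroom, if the cooling keeps up
```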

Thanks,

Richard
