VMware Cloud Community
kamesh_a
Contributor

Deploy ESX 3.5 on Blade Servers

I am required to purchase and deploy 4 ESX 3.5 hosts on mid-sized hardware such as HP DL380 G5 series servers, and I can see another 4 hosts being added in the next few months. I am unsure whether to buy individual rack servers (DL380s, DL560s, or similar) or to introduce blade servers instead.

I can see that the HP BL45p (2-processor) and HP BL680c G5 (4-processor) are supported by ESX 3.5. However, I am not sure about the advantages and disadvantages of using blade servers with VMware.

Could anyone advise me, or point me to a document that covers this?

Thanks,

Kamesh

18 Replies
LarsLiljeroth
Expert

I would go the blade way! We just installed an HP blade enclosure with 5 BL460c G1 blades and we have just bought 2 more.

The installation time is one big argument for going the blade way.

If we had to install 5 individual servers we would have needed 5 x 7 = 35 cables. In the blade enclosure we have 4 Ethernet switches with 4 cables each and 2 FC switches with 2 cables each, plus 2 cables for enclosure management: 22 cables in all. :) And now we can install 11 more blades without having to add any more cables! That's one hell of a time saver!
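For anyone who wants to play with those numbers, here is a minimal sketch of the cable arithmetic; the 7-cables-per-rack-server figure and the switch counts are taken from the example above and are assumptions you would adjust for your own design.

```python
# Rough cable-count comparison between standalone rack servers and a blade
# enclosure, using the numbers from the post above (assumptions, not a rule).

CABLES_PER_RACK_SERVER = 7  # NICs + HBAs + power + management, per the example above


def rack_cables(num_servers: int) -> int:
    """Every rack server brings its own full set of cables."""
    return num_servers * CABLES_PER_RACK_SERVER


def blade_cables(eth_switches: int = 4, cables_per_eth: int = 4,
                 fc_switches: int = 2, cables_per_fc: int = 2,
                 mgmt_cables: int = 2) -> int:
    """Blade cabling is per enclosure: only the interconnects and management
    modules are cabled, so adding blades adds no cables."""
    return eth_switches * cables_per_eth + fc_switches * cables_per_fc + mgmt_cables


print("5 rack servers :", rack_cables(5), "cables")   # 35
print("blade enclosure:", blade_cables(), "cables")   # 22, fixed up to 16 blades
print("16 rack servers:", rack_cables(16), "cables")  # 112, vs. still 22 for blades
```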

I only see advantages:

- Easy installation
- Good management
- Super-fast provisioning of new VI3 hosts

Disadvantages:

- All eggs in one basket, i.e. if you only have one blade enclosure ;)

// Lars Liljeroth
Luis_F
Enthusiast

I have 9 BL480c blades in two datacenters (4+9), with 8 NICs and two HBAs each. It works very well, with fewer cables than rack servers, less power consumption, etc.

I love HP blades. I know the IBM blades and they are a step (or two) below HP. I don't know the Dell ones.

Regards

LarsLiljeroth
Expert

I agree: if you are going blade, HP is the choice. IBM is not at the same level yet... less I/O per slot (size).

We have 8 IBM BladeCenters but none of them is used for VMs, and now 1 HP enclosure used only for VMs... and we want more HP enclosures...

mreferre
Champion

Kamesh,

this blades vs. rack question is one of the many religiously driven ones (like AMD vs. Intel or scale-up vs. scale-out). The reality is that you need to focus on the advantages and disadvantages for you (users will tend to tell you what to do based on their own experience, which is fine but may not be a fit for your own scenario).

Sure, I/O limitations used to be a limiting factor for VMware scenarios. I am not familiar with the Dell and Fujitsu blades, but I can confirm that both HP and IBM blades can support up to 8 I/O ports per blade, which is more than enough for many situations.

> IBM is not at the same level yet.... Less I/O pr slot ( size)

Lars, which IBM blades are you comparing the brand new HP blades with? The HS21 XM does support 8 I/O slots like the HP blades.

Back to the original question... you need to find your trade-off between reducing cables / footprint and introducing a brand new architecture in your datacenter. Size does matter in this case: if you need to buy 100 new physical hosts you can leverage all of the blade advantages; if you need to buy 4 hosts, the advantages might not be enough to compensate for the learning curve (and, let's admit it, the vendor lock-in).

I am not against blades; I would always go blades myself. But since all of the other posters underlined the positives, I felt I should underline the negatives so that you can weigh the trade-off.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
LarsLiljeroth
Expert

I agree that you can get the same number of I/O slots on an IBM blade, but then you need the I/O expansion blade "on top". This brings the number of blades in one BladeCenter down to 6.

Compared to HP's 16

So now I would only have 6 servers to share the Ethernet switch and FC switch investment, instead of HP's 16 on the same investment.

I know IBM has its XM blades with extra memory slots, which is cool, and here HP has a problem with its single-slot blade.

I also agree that choosing a blade system is a matter of finding out what YOU need; you can always find pros and cons for all blade chassis and blade servers.

IBM is known for using the same platform for many years, so your investment here might last longer, but who knows about the future other than... Hiro, Isaac Mendes and Sylar ;)

/lars

mreferre
Champion

Lars.... nooooooooooooo ;)

>I agree that you can get the same amount of IO slots on a IBM blade but then you need the IO blade "on top". This brings down the number of blade in 1 blade center to 6 blades.

>Compared to HP's 16

This is not correct. With the BladeCenter H you can have up to 8 standard switches in the back of the chassis, and you can configure a SINGLE-WIDE HS21 blade with up to 8 I/O ports. That makes 14 blades (with 8 ports each) in 9U vs. HP's 16 blades in 10U. Sure, if you have the BladeCenter E in mind that cannot be done: it was announced in 2002 (and will live through at least 2010) but it has those I/O limitations. Our strategy is to use both the BC-E and the BC-H and choose which one makes more sense for any given situation. If you only need 2 NICs and 2 HBAs there is no need to go BC-H, etc.
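Purely for illustration, a quick sketch of the density numbers quoted above (14 blades in 9U vs. 16 blades in 10U); it only compares blades per rack unit and ignores power, cooling and interconnect differences.

```python
# Blades-per-rack-unit for the two chassis discussed above.
chassis = {
    "IBM BladeCenter H": {"blades": 14, "rack_units": 9},
    "HP c7000 c-Class":  {"blades": 16, "rack_units": 10},
}

for name, spec in chassis.items():
    density = spec["blades"] / spec["rack_units"]
    print(f"{name}: {spec['blades']} blades in {spec['rack_units']}U "
          f"-> {density:.2f} blades per rack unit")
# Roughly 1.56 vs. 1.60 blades per U -- a difference of a few percent either way.
```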

That's the idea. Sorry to hijack this thread... I don't want to sell IBM kit, but on the other hand I want to give an accurate picture so users can choose.

Massimo.

epping
Expert

hi

I started out with HP blades but am now going back to DL servers for our 3rd generation virtual infrastructure.

The DL580 G5 is a very impressive box. Where it is better than blades, IMHO, is memory: you can fit 64GB using 2GB DIMMs, which no blade can do. So unless you go for the very expensive 4GB DIMMs you are going to have 32GB of RAM for 16 cores, i.e. 2GB of RAM per core, and I don't think that is enough.
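A small sketch of that RAM-per-core arithmetic; the blade configurations and the "enough RAM per core" target below are assumptions to replace with your own sizing figures.

```python
# RAM-per-core sanity check for the configurations discussed above. The blade
# figures and the target GB-per-core threshold are assumptions to adjust.

configs = [
    ("DL580 G5, 16 cores, 64 GB (2 GB DIMMs)", 16, 64),
    ("Blade,    16 cores, 32 GB (2 GB DIMMs)", 16, 32),
    ("Blade,    16 cores, 64 GB (4 GB DIMMs)", 16, 64),
]

TARGET_GB_PER_CORE = 4  # assumption: pick your own consolidation target

for name, cores, ram_gb in configs:
    per_core = ram_gb / cores
    verdict = "ok" if per_core >= TARGET_GB_PER_CORE else "memory-bound"
    print(f"{name}: {per_core:.1f} GB per core ({verdict})")
```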

I do agree that the cabling is great with blades. However, I have gone from the p-Class to the c-Class, and you have to buy new enclosures, Ethernet switches and fibre switches, which I feel is far too expensive a model.

If I had to design a big environment, i.e. 20+ hosts, I would consider blades; however, for anything smaller I would go for big rack servers and have far fewer of them. A 4-way quad-core with 64GB of RAM is the sweet spot right now.

dkfbp
Expert

Massimo,

We chose HP blades over IBM's HS21 XM blades because the IBM cannot have both 8 DIMM sockets and two internal 2.5" disks: the XM sacrifices one disk for 4 extra DIMM sockets. Furthermore, the disks in the HP are hot-swap, a really nice feature in a blade, and you get room for 16 blades versus 14 in the IBM BladeCenter H.

Best regards Frank Brix Pedersen blog: http://www.vfrank.org
Argyle
Enthusiast

We run ESX on HP BL460c blades but noticed a single point of failure in the HP c-Class chassis. We lost power to 5 of the 10 fans at once. The only way to fix it was to take down the entire chassis, split it in half and replace the midplane that controlled the power. So if you plan to go for 8 or 16 blades with ESX I would invest in a second chassis and divide the blades between the two. Losing power like that might be a rare problem, so running with one chassis could be a calculated risk; we have plenty of other HP chassis that never had this problem.

And as mentioned already, blades are a new infrastructure with new hardware and switches to learn. But as long as you have the manpower and time it shouldn't be a problem.

mreferre
Champion

> and you get room for 16 blades contra 14 in IBM bladecenter H

True, but the c-Class is 10U and the BC-H is 9U... I don't think people would choose one over the other for a +/- 0.23546 units of rack.

As for the disks... everything is a trade-off. For non-VMware environments we can debate whether it is better to have two internal hot-swap drives vs. alternative boot technologies (FC SAN, SAS SAN, sw iSCSI, hw iSCSI, solid state drives, PXE provisioning, etc.). The fact is that in the x86 space we are moving very fast across the board in terms of technology advancement, except for these legacy "2 x hot-swap mirrored hard drives", and I think there are much better ways to deal with that (both from an IT and from an environmental perspective). But as I said, for standard Linux / Windows environments we can go on forever with this discussion, and I appreciate that many people would still prefer to have the 2 hot-swap drives, which is OK. We do know that many customers were / are not ready to jump the river yet...

I think we have a slightly different idea of blades than HP. For HP, blades are sort of re-engineered DL servers; for us, blades are a different platform with different deployment techniques, etc. I am not saying we are right and they are not (or vice versa)... it's just two different ways of thinking about it.

But for virtualized environments... come on, you have to admit this is a no-brainer. Two hot-swap hard drives for a hypervisor with no local I/O and basically no persistent data is just a waste of everything. The HS21 XM comes with a flash option as well as a solid state drive option, which are ideal for hosting the hypervisor (3i, for example, is the obvious fit). These media consume between 1 and 2 watts versus the 25-30 watts of the two legacy hard drives... to support 32MB worth of read-only code? No way...
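To put that power argument in rough numbers, here is a sketch using the wattage figures quoted above; the midpoints, the 14-blade fill and the 24x7 duty cycle are assumptions.

```python
# Rough power delta between two mirrored boot drives per blade and an embedded
# flash/SSD boot device, using the wattage figures quoted in the post above.

BLADES_PER_CHASSIS = 14   # assumption: a fully populated BladeCenter H
HOURS_PER_YEAR = 24 * 365

watts_two_drives = 27.5   # midpoint of the quoted 25-30 W for two drives
watts_flash = 1.5         # midpoint of the quoted 1-2 W for flash/SSD

delta_per_blade = watts_two_drives - watts_flash
delta_per_chassis = delta_per_blade * BLADES_PER_CHASSIS
kwh_per_year = delta_per_chassis * HOURS_PER_YEAR / 1000

print(f"Saving per blade   : {delta_per_blade:.1f} W")
print(f"Saving per chassis : {delta_per_chassis:.0f} W")
print(f"Energy per year    : {kwh_per_year:.0f} kWh, before cooling overhead")
```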

Massimo.

kamesh_a
Contributor

Thank you very much for sharing the knowledge and experience you have put in here. Most of my doubts have been cleared up by your comments and recommendations.

After browsing the HP website for the blade servers and the VMware support list, I requested a quote for an HP BladeSystem c-Class (c7000 enclosure) with a combination of BL680c and BL480c blades, and an "EVA 4100 starter kit for HP BladeSystem" for storage.

However, I do have another question: can I provision 4 NICs (onboard + additional) plus an FC HBA for each blade?

I realise this is a VMware forum and not a blade forum, but if possible I would like to check the configuration I generated using the HP sizer here. Please suggest any additions or deletions; my requirement is maximum possible resilience.

Thanks once again.

Kamesh

Luis_F
Enthusiast

As I said, I have 9 HP ProLiant BL480c blades with 8 NICs and 2 HBAs each. They work fine.

For any blade you can buy mezzanine cards to expand its capabilities.

MZ11
Enthusiast

Hi kamesh,

You can use all 4 NICs of the blades and add an FC mezzanine card.

Use the switch options to build an environment that meets your requirements. There are Nortel (HP) switches and a Cisco switch for Ethernet, as well as InfiniBand and Virtual Connect modules.

For FC you need an Emulex or QLogic mezzanine card per server and two Cisco or Brocade FC switches (or the FC pass-through).
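Since the requirement is maximum resilience, here is an illustrative, vendor-neutral sketch of how the 4 NICs might be split across vSwitches so that no single adapter or interconnect module carries both uplinks of any network; the vmnic numbering and port-group layout are assumptions, not an HP or VMware prescription.

```python
# Illustrative uplink layout for a VI3 host with 4 NICs: 2 onboard plus 2 on an
# Ethernet mezzanine card. Each vSwitch gets one uplink from the onboard NICs
# and one from the mezzanine, so losing either adapter (or the interconnect
# module behind it) leaves every network running. The dual-port FC mezzanine
# is handled separately: its two ports go to two separate FC interconnects.

uplinks = {
    # vSwitch (port groups)   : (adapter, vmnic) pairs -- names are illustrative
    "vSwitch0 (SC + VMotion)": [("onboard", "vmnic0"), ("eth-mezzanine", "vmnic2")],
    "vSwitch1 (VM networks)":  [("onboard", "vmnic1"), ("eth-mezzanine", "vmnic3")],
}


def check_redundancy(layout):
    """Flag any vSwitch whose uplinks all hang off the same physical adapter."""
    for vswitch, ports in layout.items():
        adapters = {adapter for adapter, _ in ports}
        status = "redundant" if len(adapters) > 1 else "WARNING: single adapter"
        print(f"{vswitch}: {', '.join(nic for _, nic in ports)} -> {status}")


check_redundancy(uplinks)
```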

meistermn
Expert

Blades are, for now, not a good solution:

1.) With ESX 3i you no longer need hard disks.

2.) Blades do not have as many memory banks as rack servers.

3.) Cabling: with InfiniBand or 10 Gigabit Ethernet, cabling will also be reduced for rack servers.

4.) More fault tolerance will be needed for today's 4-socket and 8-socket servers and for octo-cores in 2009.

5.) Nice presentation: IBM Blade vs. HP Blade.

jhanekom
Virtuoso

Like epping I'd also like to add my vote for the bigger DL580 / DL585 boxes, especially if your estimate is that you're going to grow to 8x 2-way hosts. Memory capacity is almost always the determining factor of a virtualisation platform's true capacity.

Example: to get to 128GB of usable RAM with N+1 redundancy, you would need 5x 2-way hosts (except for the ML370, all HP 2-ways are realistically limited to 32GB). You can achieve that with only 2x DL58x hosts, meaning that you save one expensive ESX license (which will go a long way towards offsetting the additional cost of 4-way technology). Once you get to 8x 2-way hosts, this picture is even more dramatic, since the equivalent is 3x DL58x hosts (for 256GB of usable memory capacity).
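A small sketch of that N+1 arithmetic; the 32GB and 128GB per-host figures come from the example above, and the formula simply adds one spare host on top of the hosts needed for the target usable memory.

```python
import math


def hosts_needed(target_usable_gb: int, gb_per_host: int, spares: int = 1) -> int:
    """Hosts required to provide the target usable RAM with N+spares redundancy."""
    return math.ceil(target_usable_gb / gb_per_host) + spares


for target in (128, 256):
    two_way = hosts_needed(target, 32)    # 2-way hosts, realistically 32 GB each
    four_way = hosts_needed(target, 128)  # DL58x-class hosts with 128 GB each
    print(f"{target} GB usable with N+1: {two_way} x 2-way vs {four_way} x DL58x")
# 128 GB -> 5 x 2-way vs 2 x DL58x, matching the example in the post.
```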

Also remember to factor in the operational and hardware support contract costs of more, less expensive servers vs. fewer, more expensive servers. It may swing the decision one way or the other.

kingsfan01
Enthusiast

I have ESX 3.5 running on IBM BladeCenter HS21 (2 Quad-Core Xeon, 16GB, 73GB 15K R1) blades at my DR site and have been very happy with the performance and I/O options available. Granted I don't have many of the same requirements as others... but it works great for us. As we use an iSCSI SAN (LeftHand), our cabling/network infrastructure complexity & costs were not a huge factor. Once LeftHand releases support for 10 Gig-E, we will add a couple of 10 Gig-E modules and really boost our throughput.

I evaluated HP's and IBM's blades and chassis prior to implementing and decided to go with IBM because of their approach to upgrade paths and backward compatibility. From what I remember, past HP blades and chassis components could not be transferred into new models, which effectively limited their customers' upgrade paths. While this may have changed since... it was the deciding factor in our purchase.

Tyler
