VMware Cloud Community
JonRoderick
Hot Shot

HP BL460c blade - any good for virtualisation?

If so, what kind of config would you recommend for a PROD datacentre implementation?

New to HP (IBM shop mainly).

Cheers

Jon

18 Replies
dandeane
Enthusiast

I've had great success with HP BL480c blades: dual quad-core, 48GB of memory. I've also heard good results with the BL680c.

AntonVZhbankov
Immortal

I'm using the 460c. Just perfect.


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
jbruelasdgo
Virtuoso

HP BL460c, very good for virtualization

IBM has some very good ones too (especially the newer models)!

Jose

Jose B Ruelas http://aservir.wordpress.com
HIsgett
Enthusiast

I am running 4 BL460c blades and getting ready to add 2 more. Very, very good so far.

JonathanT
Contributor

How many NICs do the blades have in them, and how do you structure the VLANs on the BL460c? I'm looking at HP blades and Flex-10, but I want to make sure I'm not over-engineering things.

BUGCHK
Commander

I really like it. Just a ProLiant server in a blade form factor. ;-)

Put in what fits and what you need:

- CPU, memory
- BBWC (battery-backed write cache), if it's missing
- a pair of SAS disk drives
- an additional 2-port LAN adapter
- an additional 2-port FC adapter (choose the vendor you like most: Emulex/QLogic)

JonRoderick
Hot Shot

Thanks for the feedback - is it possible to have separate SC, VMotion and guest port groups (using separate, teamed NICs) on the 460c? In other words, are there 6 NIC ports available?

Ta

Jon

AntonVZhbankov
Immortal

There are two integrated 1Gb ports plus two mezzanine slots for expansion cards.

You can install a 4-port NIC in one of the mezzanine slots.
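
With that quad-port card in, you have six ports, which is enough to give the Service Console, VMotion and guest traffic their own teamed pair. Here is a rough sketch from the ESX service console, assuming the onboard ports show up as vmnic0/vmnic1 and the quad-port card as vmnic2-vmnic5 (the port group names and the IP are just examples, adjust to your environment):

esxcfg-nics -l                                   # confirm all six vmnics are visible

esxcfg-vswitch -L vmnic1 vSwitch0                # vSwitch0 already carries the Service Console; add the second onboard port as its team partner

esxcfg-vswitch -a vSwitch1                       # dedicated VMotion vSwitch on a teamed mezzanine pair
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"   # then enable VMotion on this vmknic in the VI Client

esxcfg-vswitch -a vSwitch2                       # guest traffic on the remaining teamed pair
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2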


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
depping
Leadership

I've used the complete range of HP blade servers by now, and I must say that so far I'm impressed with the quality, ease of installation, etc.

Duncan

VMware Communities User Moderator

If you find this information useful, please award points for "correct" or "helpful".

msemon1
Expert

We have had good luck with the HP BL460s. Just add the mezzanine card for your NICs, as the others recommended, along with a dual-port FC HBA. Ours have 32GB of memory. The only problem we've had was a few RAID controller batteries dying. When these need replacing, we will probably move to something like the BL495.

Mike

habibalby
Hot Shot

Hello,

I'm using two BL460c G1 hosts, each with 20GB of memory, a single dual-core CPU, a single dual-port HBA and 6 pNICs. The problem I'm facing is the limit on adding more pNICs, because of the limited number of mezzanine slots.

If you do want to add more pNICs through mezzanine cards, the problem is that you will have to buy more blade Gb Ethernet switches. If you are going to connect all six pNICs, you will have to buy four additional blade Gb Ethernet switches to wire them all up.

I've compared these hosts with DL380 G5s in the same cluster, with the same configuration, and found that performance-wise the DL380s are 25-30% better than the BL460 G1, and they are cheaper. For high availability they are also better than blades. Why? Because the blade enclosure is fully populated with blades, and if that enclosure goes down, bye-bye to your entire set of services. With stand-alone hosts such as the DL380 or DL585, if one server goes down, you can still bring your services up on the surviving hosts.

Best Regards,

Hussain Al Sayed

If you find this information useful, please consider awarding points for "correct" or "helpful".
jayp00001
Contributor

I have to agree with habibalby. The performance just isn't there, and then there are the limits on the number of NICs you can add to a chassis (without Flex-10, which is cool but expensive). I evaluated the BL460, BL480, DL360 and DL580. We ended up going with DL580s, because once you added Flex-10 into the chassis cost, any savings from using blades got eaten up. In addition, there were no hex-core, quad-socket blades available.

JonRoderick
Hot Shot

I see what you're saying, but that applies to blades and blade chassis in general, so if that were the case, would blades be as successful as they evidently are? True, virtualisation may compound the issue, but a disaster taking out both power supplies, both network routes and both FC paths would have to be pretty comprehensive, and I'd wager you'd have bigger things to worry about than your ESX blade chassis going down.

Thanks for the info though - much appreciated.

Jon

JonRoderick
Hot Shot

Can you tell me a bit more about Flex-10 and where it fits into the chassis (physically and architecturally!)? Is it required to provide the 6 pNICs I'm looking for?

What sort of cost is it? It's this kind of stuff HP don't tell you up front, which makes them look favourable in comparison to other vendors until you get down to the nitty-gritty.

Jon

thehyperadvisor
Enthusiast

Check out my blog for how I have it set up.

If you want 6 NICs in each blade, it can be done without Flex-10, but it will cost you. I can't give you pricing for the two solutions.

Pretty much, without Flex-10 you will need six network modules in interconnect bays 1, 2, 5, 6, 7 and 8 (these can be switches, pass-through or Virtual Connect modules) and two SAN fibre modules in bays 3 and 4 of your enclosure. This means all of the I/O bays will be full from the start, and you have to put the modules in the correct order. In each blade you will need one dual-port HBA and one quad-port NIC card, placed in the correct slots. My blog explains part of this config, but I only use 4 NICs, despite VMware's recommendation of 6, and have no issues.
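
As a rough sketch of that 4-NIC layout (assuming the ports you use come up as vmnic0-vmnic3; the VLAN IDs and IP are placeholders), Service Console and VMotion can share one teamed pair on VLAN-tagged port groups and guest traffic gets the other pair:

esxcfg-vswitch -L vmnic1 vSwitch0                # second uplink for the SC/VMotion team
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 21 -p "VMotion" vSwitch0       # tag the VMotion port group (VLAN 21 is an example)
esxcfg-vmknic -a -i 192.168.21.11 -n 255.255.255.0 "VMotion"

esxcfg-vswitch -a vSwitch1                       # guest traffic on the second teamed pair
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network VLAN30" vSwitch1
esxcfg-vswitch -v 30 -p "VM Network VLAN30" vSwitch1

The upstream interconnect ports just need to trunk the relevant VLANs down to the blade NICs.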

Now, with Flex-10 modules and an HP G6 blade, you would only need two Flex-10 modules in bays 1 & 2 and two SAN fibre modules in bays 3 & 4, because the G6 blades and the BL495c G5 have Flex-10-capable onboard NICs - that's it. Less equipment and 10Gb networking. So you would think it's cheaper, which in the long run it is, if configured properly.
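
To give a feel for why fewer modules are needed: each Flex-10 10Gb LOM port can be carved into up to four FlexNICs, each with its own bandwidth cap, and that is set in Virtual Connect rather than in ESX. The split below is only an illustration (the labels and speeds are my assumptions, not an HP-recommended profile):

# One possible carve-up of the two onboard 10Gb ports, done in Virtual Connect Manager:
#   LOM1-a / LOM2-a : 0.5 Gb each -> Service Console team
#   LOM1-b / LOM2-b : 2.0 Gb each -> VMotion team
#   LOM1-c / LOM2-c : 6.0 Gb each -> VM traffic team
#   LOM1-d / LOM2-d : 1.5 Gb each -> spare / future use
#
# ESX simply enumerates the eight FlexNICs as extra vmnics and reports the
# allocated bandwidth as the link speed:
esxcfg-nics -l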

hope this helps - thehyperadvisor.com

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

VCP3,4,5, VCAP4-DCA, vExpert
Kake
Contributor

We are using BL460s for both of our ESX servers, with two quad-core processors and 16GB of RAM in each. We also have an HP EVA 4000 SAN for storage, and I have to say it's working beautifully.

We have 9 virtual machines per ESX host, used for development and production (one host for each). They are mainly 32-bit and 64-bit Windows standard servers. Users are reporting good performance so far.

msemon1
Expert

I agree. A failure of a blade enclosure is pretty rare. If both power supplies, both network routes and both FC paths go down, then your environment most likely has bigger problems.

Chris_Lynch
Enthusiast

Our c-Class chassis midplane came from our NonStop division, which requires 24/7, 99.999% uptime. There are no active devices, and the power and signaling midplanes are separate.

As for Flex-10, it is one of the revolutionary technologies we have added to our G6 platforms: the BL280, BL460 and BL490 G6. Flex-10 was designed to reduce infrastructure costs while maximizing throughput in a granular manner, all in hardware. Regardless of the hypervisor, a server admin can allocate specific bandwidth to each connection in hardware, while at the same time reducing deployment time and cabling and becoming more self-sufficient.

I encourage everyone who is looking at our c-Class G6 platforms to read our Virtual Connect Flex-10 whitepaper.

In a simple TCO analysis I recently performed for a customer, purchasing (2) Flex-10 modules instead of (6) Cisco Catalyst 3120G, or even (2) Cisco Catalyst 3120X and (4) Cisco Catalyst 3120G, showed a 38-50% cost difference in favour of Flex-10, and could provide roughly a third more Ethernet connectivity (8 NICs vs. 6 NICs with half-height blades). I used only our web pricing, which is available in our Product Bulletin application.

With our BL460 G6, we have stepped up the game compared to the older generation. And when you look at the BL490 G6, you have an even better platform, where RAM density is key.
