VMware Cloud Community
N815ncgall
Contributor

What HP Blades Systems do you prefer?

Hi,

Running VMware on Dell R710s and R910s currently, but my company has dropped Dell and is now going with HP. HP is suggesting we discuss their blade systems for VMware going forward. Could I get some advice from those of you running ESXi 4.1 on HP blade systems? Which models give the best performance?

Thank you

9 Replies
idle-jam
Immortal

It all depends on your needs and requirements; at the end of the day, do you really need a blade system? Assuming each host can house 15 VMs, 12 hosts would give you 70+++ VMs. If you don't have that kind of demand, it's advisable to go rack-mounted, since the cost of the chassis and so on needs X number of blades to break even.
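To make that break-even point concrete, here is a rough back-of-envelope sketch in Python. All prices are placeholder assumptions (not real HP or Dell quotes), so plug in the numbers from your own quotes:

# Rough break-even sketch: blade chassis vs. standalone rack servers.
# All prices below are assumptions for illustration only.
CHASSIS_COST = 20000      # enclosure + interconnects + PSUs/fans (assumed)
BLADE_COST = 7000         # per blade server (assumed)
RACK_SERVER_COST = 9000   # comparable rack server incl. NICs/HBAs (assumed)

def blade_total(hosts):
    # One chassis plus one blade per host
    return CHASSIS_COST + hosts * BLADE_COST

def rack_total(hosts):
    # One standalone rack server per host
    return hosts * RACK_SERVER_COST

for hosts in range(1, 17):
    if blade_total(hosts) <= rack_total(hosts):
        print(f"Break-even at {hosts} hosts: blades ${blade_total(hosts):,} vs rack ${rack_total(hosts):,}")
        break
else:
    print("No break-even within one 16-slot chassis at these prices")

With these assumed prices the chassis pays for itself at around 10 hosts; with your own quotes the crossover will move, but the shape of the calculation stays the same.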

N815ncgall
Contributor

Space and power in the DC are of course issues I have to consider. Right now one cluster of 8 hosts is housing 165 VMs. My second one is reaching similar capacity, and the requests for VMs do not stop. I have been told that after the third blade purchase I will start to see the cost savings.

peetz
Leadership

We have recently purchased two c7000 enclosures with BL620c blades (8 of them fit into a c7000 enclosure).

You can equip these blades with two 8-core or 6-core Intel CPUs and plenty of RAM (we run them with 2x 8-core CPUs and 256 GB RAM).

Performance is very good, just what you can expect from the latest Intel CPUs and lots of RAM. And I like their FlexFabric modules, which allow you to implement a converged infrastructure (FCoE) inside the blade enclosure.

Check out my VMware Front Experience Blog where I will regularly post about my experiences with this hardware.

Andreas

Twitter: @VFrontDe, @ESXiPatches | https://esxi-patches.v-front.de | https://vibsdepot.v-front.de
mcowger
Immortal

Honestly, the HP gear and the Dell gear use identical CPUs and nearly identical memory architectures.

You can expect that similarly configured systems from both companies will perform similarly. I would recommend choosing your HP systems based on how you want to manage them rather than on minimal performance differences. Decide if FlexFabric is a good choice for you - that would be the strongest reason to go blade vs. rack.

--Matt VCDX #52 blog.cowger.us
pdeul
Contributor

I really like the BL490c, running ESXi off an SDHC card (no internal drives on the blade) with all the VMs running off the SAN...

biokovo
Enthusiast

We have two c7000 enclosures connected to FC storage, and two c3000 enclosures connected to SAS storage.

Many of the blade servers in those enclosures are running ESX/ESXi.

Three BL460 servers are running about 100 desktop VMs (VDI).

The other 13 blades are BL460/BL680 models, and they are running many, many Windows/Linux servers.

I can't say that blades are better than standalone servers for virtualization, but my experiences with blades are generally very positive.

Starting with blades is expensive, but after purchasing a few blades (maybe 4-5) you will see many benefits.

The most important benefits for me are maintenance and fast deployment.

CPU and memory are not the problem.

I think you have to concentrate on the network and storage design to avoid a mismatch between CPU/memory capacity and network and storage I/O.
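To put a number on that balance, a quick sketch like the one below shows how much network and storage I/O each VM actually gets as you pack more VMs onto a blade; the per-host uplink and IOPS figures are assumptions for illustration, not measurements from any of the systems above:

# Back-of-envelope check of per-VM network and storage headroom on one host.
# The per-host figures are assumptions -- replace them with your own sizing.
vms_per_host = 20              # e.g. roughly 165 VMs / 8 hosts as mentioned above
net_gbps_per_host = 20         # assumed: 2x 10GbE of shared uplink per blade
storage_iops_per_host = 15000  # assumed: this host's fair share of array IOPS

net_mbps_per_vm = net_gbps_per_host * 1000 / vms_per_host
iops_per_vm = storage_iops_per_host / vms_per_host

print(f"~{net_mbps_per_vm:.0f} Mbit/s and ~{iops_per_vm:.0f} IOPS per VM at {vms_per_host} VMs/host")

If those per-VM numbers look too thin for your workloads, the network and storage design needs to grow before the CPU/memory density does.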

ewilts
Enthusiast

We have a half-dozen C7000 enclosures with a mixture of old and new blades.  Our most recent purchases were for BL460c G7 with dual 6-core processors and 96GB of the fastest memory.

The big win with the C7000 is not the blade itself - it's the Virtual Connect modules.  With 4 10G connections, we can service 8 blades running many hundreds of VMs, accessing NFS storage, and not stress the network.  If we had 8 standalone servers we'd have to consume 16 10G ports and those 10G ports are not cheap on the switch end - just the cost of SFPs will hurt.
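For what it's worth, the port-count math is easy to sketch; the SFP+ cost below is an assumed placeholder, but the 4-uplink vs. 16-port comparison follows the numbers above:

# 10G switch ports consumed: 8 blades behind Virtual Connect vs. 8 rack servers.
# The SFP+ price is an assumption for illustration only.
HOSTS = 8
VC_UPLINKS = 4              # shared 10G uplinks out of the enclosure
PORTS_PER_RACK_SERVER = 2   # 2x 10G per standalone host
SFP_PLUS_COST = 800         # assumed cost per switch-side 10G SFP+ port

blade_ports = VC_UPLINKS
rack_ports = HOSTS * PORTS_PER_RACK_SERVER

print(f"Blade enclosure: {blade_ports} switch ports (~${blade_ports * SFP_PLUS_COST:,})")
print(f"Rack servers:    {rack_ports} switch ports (~${rack_ports * SFP_PLUS_COST:,})")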

We priced out a C3000 versus a C7000 and didn't find the cost savings (around 5%, I think) worthwhile - the actual slots themselves aren't that expensive, but it's the Virtual Connect, power supplies, fans, etc. that make up the bulk of the cost.

Going forward with blades also gets you a big win in time to implement.  Adding another server takes a long time by the time you find more rack space, run power and cables, etc.  Popping in another blade is easy once you've got the enclosure installed.  We provisioned 4 blades in under a day yesterday - most of the effort was done by scripts while we were in meetings.

Josh26
Virtuoso

I'll second what's been said.

Flex-10 is amazing and makes the blade system worth the purchase.

You'll be buying the same expensive infrastructure for both the C3000 and the C7000, so it makes no sense not to buy a C7000 and give yourself room to expand.

bulletprooffool
Champion

Personally, I like the DL380/DL580 for VMware rather than blades.

Hosting a cluster of ESX on a blade chassis still leaves that chassis as a single point of failure.

I'd find the equivalent HP servers to your current Dell ones and stick with the current approach.

One day I will virtualise myself . . .