VMware Cloud Community
meistermn
Expert

HP ProLiant BL495c G5 Virtualization Blade

A new blade, the BL495c, is coming from HP on Sep. 15.

What I like is that this HP blade has 16 DIMM slots, half of what a 585 G5 offers (32 DIMM slots).

It also has two non-hot-plug Solid State Drive (SSD) bays.

15 Replies
azn2kew
Champion

It sounds like a good box for ESX 3.x hosts, which max out at 128GB RAM, and it also includes dual 10GbE ports, which is great for a newly built 10Gb iSCSI infrastructure. It has two expansion slots for extra cards, which is good. Are quad-port 10GbE cards available yet? This is going to be a good consolidation piece of hardware, as slim and powerful as it can be.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

Kevin_Gao
Hot Shot

Wow, that's a nice-looking blade... wonder what the price would be. :smileyhappy:

Two non-hot-plug drives, but people are going with embedded USB 3i anyway. It seems that all new HP servers have embedded USB slots onboard now.

meistermn
Expert

This blade has integrated onboard 10 Gigabit ports. I expect they are from Broadcom. I don't know this for sure, but AMD uses Broadcom chipsets on its motherboards.

There was a rule of thumb for 1 Gigabit cards: driving a 1 Gigabit card at full rate used about 1 GHz of CPU and a little RAM.

Now what does that mean for 10 Gigabit cards, if the full 10 Gigabit is used?
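The question above can at least be roughed out from the old rule of thumb. A minimal back-of-envelope sketch, assuming the classic 1 GHz-per-1 Gbit/s figure and a 2.3 GHz core clock (both assumptions, not from the thread); linear scaling almost certainly overstates the real cost, since 10GbE NICs lean on TSO/checksum offload:

```python
# Back-of-envelope: extrapolate the old "1 GHz of CPU per 1 Gbit/s" rule
# of thumb to a 10 Gbit/s NIC. Linear scaling is a loud assumption --
# in practice TSO/LRO offload and jumbo frames reduce the per-bit cost.

GHZ_PER_GBIT = 1.0      # classic 1 GbE rule of thumb
CORE_CLOCK_GHZ = 2.3    # assumed clock of one Opteron-class core

def cores_for_line_rate(gbit_per_s, scale=1.0):
    """Cores needed to drive gbit_per_s at line rate.
    scale < 1.0 models offload features cutting the CPU cost."""
    ghz_needed = gbit_per_s * GHZ_PER_GBIT * scale
    return ghz_needed / CORE_CLOCK_GHZ

naive = cores_for_line_rate(10)                    # straight linear extrapolation
with_offload = cores_for_line_rate(10, scale=0.3)  # assume offload saves ~70%

print(f"naive: {naive:.1f} cores, with offload: {with_offload:.1f} cores")
```

Even with aggressive offload assumptions, a fully driven 10GbE port eats a noticeable slice of a 2P quad-core box, which is why the question matters for consolidation ratios.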

mreferre
Champion

Stefan,

just a minor correction:

> ESX 3.x hosts which max out at 128GB RAM

It actually maxes out at 256GB.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
java_cat33
Virtuoso

Not according to the specifications here from HP.....

I'm looking forward to implementing some of these!!

This blade will be available as of Sept 15th.... and according to TechTarget pricing will start at $2,500


mreferre
Champion

It depends on what Stefan's "max out" was referring to. ESX in general or the blade itself? I was referring to ESX.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
java_cat33
Virtuoso

Sorry, yes, you are correct with regard to ESX :smileygrin: - imagine a blade with 256GB of memory.... that's some serious hardware.

meistermn
Expert

How will server vendors fit more DIMM slots into blades in the future? In a two-socket blade with 16 DIMM slots there is no room left for more DIMMs.

If the hard drives are removed completely, that space could be used for more DIMMs.

So are rack-based servers the better choice in the future?

mreferre
Champion

Meistermn,

we need to focus on the fact that the goal is not to add DIMM slots for the sake of ..... having more DIMM slots. The final goal is to have a BALANCED system.

I wrote an article a while back where I said:

> Rule of thumb #2: Per every brand new Intel/AMD core configured you should have between 2 and 4 GB of RAM to obtain a "balanced system".

I think that is representative of what many (almost all?) VMware customers are doing. Can I ask you .... how much memory do you usually configure per core? Based on my understanding, eight DIMM slots should be enough to cover a 2P quad-core system (with 4GB DIMMs now hitting more than reasonable street prices).
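The rule of thumb above reduces to simple arithmetic. A quick sketch, using the 2P quad-core and 4GB DIMM figures from the post:

```python
# "Balanced system" rule of thumb from the post: 2-4 GB of RAM per core.

def balanced_ram_gb(sockets, cores_per_socket, gb_per_core=(2, 4)):
    """Return the (low, high) GB RAM range for a balanced ESX host."""
    cores = sockets * cores_per_socket
    return cores * gb_per_core[0], cores * gb_per_core[1]

low, high = balanced_ram_gb(sockets=2, cores_per_socket=4)
dimms = high // 4                      # populate with 4 GB DIMMs
print(f"2P quad-core: {low}-{high} GB -> {dimms} x 4GB DIMMs")
```

Eight slots filled with 4GB DIMMs land exactly at the top of the 2-4 GB/core range for eight cores, which is why eight slots can be "enough" for a 2P quad-core host.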

I am interested in your practical feedback on the matter.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
meistermn
Expert

We use 4GB of RAM per core. So for our 585 G5 and x3755 ESX hosts we configure 64 GB and 128 GB.

What we see in practice from the applications is that we get more and more .NET and Java applications that need a lot of RAM.

We also see that a 32-bit Windows OS with more than 2 GB is inefficient. If the application is 64-bit capable and needs a lot of RAM, we try to move it to a 64-bit Windows OS.

At the following URL is a good article from Mark Russinovich, "Pushing the Limits of Windows: Physical Memory":

http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx
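The 2 GB pain point in 32-bit Windows comes from virtual address space, not physical RAM. A small sketch of the arithmetic (default split, ignoring the /3GB boot switch and PAE):

```python
# A 32-bit process has 2**32 bytes of virtual address space (4 GB).
# By default Windows splits it 2 GB user / 2 GB kernel, so a process
# cannot use more than ~2 GB no matter how much physical RAM the box has.

ADDRESS_BITS = 32
total_va_gb = 2 ** ADDRESS_BITS / 2 ** 30   # 4.0 GB of virtual addresses
user_va_gb = total_va_gb / 2                # 2.0 GB with the default split

print(f"{total_va_gb:.0f} GB VA, {user_va_gb:.0f} GB usable per process")
```

A 64-bit OS removes that per-process ceiling, which is why moving RAM-hungry 64-bit-capable applications over pays off.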

mreferre
Champion

I think that kind of confirms my theory of an average of 4GB per core (even for you, who seem to be an "above-average" end user).

I will read Mark's article. Sounds interesting.

Thanks.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Ken_Cline
Champion

I'll chime in with Massimo...I've found that 4GB per core has been a good rule of thumb for sizing ESX servers. It seems to give a good balance between CPU and RAM. As users begin deploying more and more 64-bit guest OS instances, I may have to rethink the size of my thumb, but for now, 4GB it is.

Ken Cline

Technical Director, Virtualization

Wells Landers

VMware Communities User Moderator

lmonaco
Hot Shot

We try to stick with 4GB/core also. There's always an exception to the rule, but then life would be boring if there weren't :smileygrin:

steven_catania
Contributor

Is anyone seeing this 495c blade drop a connection on the switch or have the port stay in failure mode? We are seeing issues where the 10GB on-board connection will not drop to 1GB because the switch is not 10GB. It stays in failure mode.

Steve

rDale
Enthusiast

I thought the 10Gb card was 10Gb only, not 100/1000/10000.

You might want to check the specs and the BIOS for link speed detection.

R
