It sounds like a good box for ESX 3.x hosts, which max out at 128GB RAM, and it also includes a dual 10GbE card, which is great for a newly built 10Gb iSCSI infrastructure. It has two expansion slots for extra cards, which is good. Have quad-port 10GbE cards become available? This is going to be a good consolidation piece of hardware, as slim and powerful as it can be.
Regards,
Stefan Nguyen
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
wow that's a nice looking blade... wonder what the price would be.
2 non-hotplug drives, but people are going with embedded USB anyway. It seems that all new HP servers have embedded USB slots onboard now.
Does this blade have integrated onboard 10 Gigabit ports? I expect they are from Broadcom. I don't know this for certain, but AMD uses Broadcom chipsets on their motherboards.
There was a rule of thumb for 1 Gigabit cards: driving a 1 Gigabit card at full rate consumed about 1 GHz of CPU and a little RAM.
Now what does that mean for 10 Gigabit cards, if the 10 Gigabit link is fully used?
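Just to put rough numbers on that question: if you scale the old rule of thumb naively, the result is striking. A minimal sketch (the function name and the 1 GHz/1 Gb figure are just the rule of thumb quoted above, not measured values; real 10GbE overhead depends heavily on NIC/hypervisor offloads like TSO and LRO):

```python
# Naive scaling of the old "1 GHz of CPU per 1 Gb/s of NIC traffic" rule of thumb.
# Illustrative only -- modern NICs offload much of this work to hardware.
GHZ_PER_GBPS = 1.0  # rough rule of thumb for 1GbE without offloads

def cpu_ghz_needed(link_gbps: float) -> float:
    """Estimate CPU GHz consumed driving a link at the given full rate."""
    return link_gbps * GHZ_PER_GBPS

print(cpu_ghz_needed(1))   # 1 GbE fully used  -> 1.0 GHz
print(cpu_ghz_needed(10))  # 10 GbE fully used -> 10.0 GHz (several full cores!)
```

So without offloads, a fully loaded 10GbE link would in theory eat several entire cores, which is exactly why offload features matter at 10Gb.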
Stefan,
just a minor correction:
> ESX 3.x hosts which max out at 128GB RAM
It actually maxes out at 256GB.
Massimo.
It depends on what Stefan's "max out" was referring to. ESX in general or the blade itself? I was referring to ESX.
Massimo.
Sorry, yes, you are correct with regard to ESX :smileygrin: ... imagine a blade with 256GB of memory. That's some serious hardware.
How will server vendors fit more DIMM slots into blades in the future? In a two-socket blade with 16 DIMM slots there is no more room for additional DIMMs.
If the hard drives are moved out completely, that space could be used for more DIMMs.
So are rack-based servers the better choice in the future?
Meistermn,
we need to focus on the fact that the goal is not to add DIMM slots for the sake of ..... having more DIMM slots. The final goal is to have a BALANCED system.
I wrote this article a while back, where I said:
> Rule of thumb #2: Per every brand new Intel/AMD core configured you should have between 2 and 4 GB of RAM to obtain a "balanced system".
I think that is representative of what many (almost all?) VMware customers are doing. Can I ask you .... how much memory do you usually configure per core? Based on my understanding, 8 DIMM slots should be enough to cover a 2P quad-core system (with 4GB DIMMs now hitting more than reasonable street prices).
I am interested in your practical feedback on the matter.
Massimo.
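To put numbers on the rule of thumb above, here is a minimal sketch (the function name is mine, purely for illustration):

```python
# Balanced-system sizing per rule of thumb #2 above:
# 2 to 4 GB of RAM per configured core.
def balanced_ram_gb(sockets: int, cores_per_socket: int,
                    gb_per_core_low: int = 2, gb_per_core_high: int = 4):
    """Return the (low, high) GB range for a 'balanced' host."""
    cores = sockets * cores_per_socket
    return cores * gb_per_core_low, cores * gb_per_core_high

# A 2P quad-core box has 8 cores -> 16 to 32 GB of RAM.
low, high = balanced_ram_gb(sockets=2, cores_per_socket=4)
print(low, high)  # 16 32

# 8 DIMM slots filled with 4 GB DIMMs give 32 GB, the top of that range.
print(8 * 4)  # 32
```

Which is why 8 slots with 4GB DIMMs lands exactly at the top of the "balanced" range for a 2P quad-core host.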
We use 4 GB per core. So for our DL585 G5 and x3755 ESX hosts we configure 64 GB and 128 GB.
What we see in practice from the applications is that we get more and more .NET and Java applications which need a lot of RAM.
We also see that a 32-bit Windows OS with more than 2 GB is inefficient. If the application is 64-bit capable and needs a lot of RAM,
then we try to move it to a 64-bit Windows OS.
At the following URL is a good article from Mark Russinovich, "Pushing the Limits of Windows: Physical Memory":
http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx
I think that kind of confirms my theory of an average of 4GB per core (even for you, who seem to be an "above-average" end user).
I will read Mark's article. Sounds interesting.
Thanks.
Massimo.
I'll chime in with Massimo...I've found that 4GB per core has been a good rule of thumb for sizing ESX servers. It seems to give a good balance between CPU and RAM. As users begin deploying more and more 64-bit guest OS instances, I may have to rethink the size of my thumb, but for now, 4GB it is.
Ken Cline
Technical Director, Virtualization
VMware Communities User Moderator
We try and stick with 4GB/core also. There are always exceptions to the rule, but then life would be boring if there weren't :smileygrin:
Is anyone seeing this 495c blade drop a connection on the switch, or have the port stay in failure mode? We are seeing issues where the 10Gb onboard connection will not drop down to 1Gb because the switch is not 10Gb capable. It stays in failure mode.
Steve
I thought the 10Gb card was 10Gb only, not 100/1000/10000.
You might want to check the specs and the BIOS for link speed detection.
R
