meistermn
Expert

Intel's Nehalem in a Dell server reaches a VMmark result of 23.55@16 tiles.

Judging from the following Intel news, a two-socket Nehalem server is faster than a four-socket AMD or four-socket Intel server.

"Using the VMmark* benchmark, which measures virtualization performance, several Xeon 5500 series-based platforms shattered the previous record by as much as 150 percent versus the previous-generation Intel Xeon processor 5400 series, including a Dell PowerEdge R710 platform* score of 23.55@16 tiles.!

Compare this to the other published VMmark results:

Submitter | System Description | VMmark Version & Score | Processors | Published Date
Dell | Dell PowerEdge R710 | 23.55@16 tiles | 2 sockets, 8 total cores, 16 total threads (with Hyper-Threading) | 03/30/09
IBM | IBM System x3850 M2 | VMware ESX v3.5.0 Update 3; VMmark v1.1; 20.50@14 tiles | 4 sockets, 24 total cores, 24 total threads | 03/24/09
HP | HP ProLiant DL585 G5 | VMware ESX v3.5.0 Update 3; VMmark v1.1; 20.43@14 tiles | 4 sockets, 16 total cores, 16 total threads | 01/27/09
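To put the claim in per-socket terms, here is a quick Python sketch, just a simple normalization of the scores above, not an official VMmark metric:

    # Normalize the VMmark scores listed above by socket count to make
    # the 2-socket vs. 4-socket comparison explicit.
    results = {
        "Dell PowerEdge R710 (2-socket Nehalem)": (23.55, 2),
        "IBM System x3850 M2 (4-socket)": (20.50, 4),
        "HP ProLiant DL585 G5 (4-socket)": (20.43, 4),
    }
    for name, (score, sockets) in results.items():
        print(f"{name}: {score / sockets:.2f} points per socket")

By that crude measure, the two-socket box delivers more than twice the score per socket of either four-socket system.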

FredPeterson
Expert

Honestly that's frakin' incredible.

Too bad we have to wait another year for the Nehalem MP processors, so we can see what a quad-socket 16-core (or 32-core, as that'll probably become the new de facto standard) Nehalem box will do.

meistermn
Expert

Now there are results from four vendors listed:

Submitter | System Description | VMmark Version & Score | Processors | Published Date
HP | HP ProLiant DL370 G6 | VMware ESX Build #148783; VMmark v1.1; 23.96@16 tiles | 2 sockets, 8 total cores, 16 total threads | 03/30/09
Dell | Dell PowerEdge R710 | VMware ESX Build #150817; VMmark v1.1; 23.55@16 tiles | 2 sockets, 8 total cores, 16 total threads | 03/30/09
Inspur | Inspur NF5280 | VMware ESX Build #148592; VMmark v1.1; 23.45@17 tiles | 2 sockets, 8 total cores, 16 total threads | 03/30/09
Intel | Intel Supermicro 6026-NTR+ | VMware ESX v3.5.0 Update 4; VMmark v1.1; 14.22@10 tiles | 2 sockets, 8 total cores, 16 total threads | 03/30/09

meistermn
Expert

Microbenchmarks for Intel Nehalem EP:

meistermn
Expert

New multithreading with Intel and some slides:

Intel YouTube video:

Memory latency benchmark

One negative benchmark for Intel Nehalem.

MattG
Expert

One negative benchmark for Intel Nehalem.

This is a little confusing. That page explicitly states that it is an "unbuffered memory" test. The HP DL-380 G6 comes with a choice of unbuffered or registered memory. If that is what they are referring to, would the registered memory make a big difference?

-MattG

MattG
Expert

Honestly that's frakin' incredible.

Too bad we have to wait another year for the Nehalem MP processors, so we can see what a quad-socket 16-core (or 32-core, as that'll probably become the new de facto standard) Nehalem box will do.

More importantly, for those of us who are 2U form-factor fans, the 8-core Nehalem should be released around the same Q1 2010 timeframe.

-MattG

DanielMeyer
Expert

Hm, any idea why the Intel Supermicro 6026-NTR+ is so slow compared to the other Nehalem systems?

depping
Leadership

I really need a couple at home, and a rack... but I don't think my wife will agree.

Duncan

VMware Communities User Moderator


DanielMeyer
Expert

Yeah, those systems really look quite impressive. We were about to order four Dell R805s (which would make six R805s total), but I guess with the Dell R710 available we're going to get four of those instead and turn our existing R805s into a secondary ESX cluster for test and development...

aleph0
Hot Shot

Same need and same issue here in Italy, too :)

\aleph0

http://virtualaleph.blogspot.com/ (in Italian)
mreferre
Champion

I really need a couple at home, and a rack... but I don't think my wife will agree.

Tell your wife that these systems are so powerful that they can do laundry, housekeeping and homemade pasta... she will want to install a rack right in the middle of the living room.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
geekinabox
Contributor

the Supermicro 6026-NTR+ has 3 fundamental differences in its test:

  • though the DIMMs are 1066 MHz, they populated all of the slots and channels, which slows the speed to 800 MHz (Nehalem has strict rules regarding the number of slots/channels you populate and how that affects the resulting memory speed)

  • they loaded up 72 GB of memory, not 96 (as the others have)

  • they used ESX 3.5 U4 beta -- the others used vSphere

Unfortunately it's difficult to determine to what extent each of these contributed to the different results. I'd love to know!
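For what it's worth, here is a minimal Python sketch of the population rule described above; the 1/2/3-DIMMs-per-channel to 1333/1066/800 MHz mapping is my assumption based on this thread, not a vendor specification:

    # Rough illustration of the Nehalem-EP memory-speed derating rule
    # discussed above: the more DIMMs you put on a channel, the lower
    # the memory clock. The mapping below is an assumption.
    def memory_speed_mhz(dimms_per_channel, dimm_rated_mhz=1066):
        derating = {1: 1333, 2: 1066, 3: 800}
        if dimms_per_channel not in derating:
            raise ValueError("expected 1-3 DIMMs per channel")
        # The bus also cannot run faster than the DIMMs are rated for.
        return min(dimm_rated_mhz, derating[dimms_per_channel])

    # The Supermicro config above: 1066 MHz DIMMs in every slot
    # (3 per channel) drops the effective speed to 800 MHz.
    print(memory_speed_mhz(3, 1066))  # 800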

geekinabox
Contributor

I would love it if Dell would re-execute this same test using an R710 fully populated with 4 GB DIMMs. This would lower the resulting memory speed, but it would be interesting to compare.

Right now we're trying to price out the optimal R710 config for our infrastructure... loading an R710 with (12x) 8 GB DIMMs nearly doubles the price of the box over using (18x) 4 GB DIMMs! This price differential is something we're trying to balance against the performance differential between the 800/1066 memory speeds. I haven't seen any tests yet (VMware, SPEC, etc.) that help me understand the net performance difference between those speeds.

Another test tweaking that variable would be very helpful!
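As a rough way to frame that trade-off, here is a small Python sketch comparing the two R710 configurations mentioned above. The per-DIMM prices are left as parameters (no Dell prices are quoted here), and the resulting speeds assume the DIMMs-per-channel derating rule discussed earlier in the thread:

    # Cost/capacity/speed comparison for the two R710 memory configs
    # discussed above. Prices are parameters; the 800/1066 MHz figures
    # assume the 3-per-channel/2-per-channel derating rule.
    def compare_configs(price_4gb, price_8gb):
        configs = [
            ("18 x 4 GB (3 per channel)", 18, 4, price_4gb, 800),
            ("12 x 8 GB (2 per channel)", 12, 8, price_8gb, 1066),
        ]
        for label, count, gb, price, mhz in configs:
            total_gb, total = count * gb, count * price
            print(f"{label}: {total_gb} GB @ {mhz} MHz, "
                  f"${total:,.0f} total, ${total / total_gb:.2f}/GB")

    # Example with hypothetical prices:
    # compare_configs(price_4gb=200, price_8gb=990)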

MattG
Expert

I need to understand this as well. I plan on buying a DL-380 G6, and the price difference between DIMMs is pretty dramatic:

  • HP 4GB 2Rx4 PC3-10600R-9 Kit 500658-B21 - $200 per (x18 = $3,600 for 72GB)

  • HP 8GB 2Rx4 PC3-8500R-7 Kit 516423-B21 - $990 per (x12 = $11,880 for 96GB)

If memory speed and latency caused the low SuperMicro VMmark number, and I need to choose the 8GB DIMMs to get close to the HP numbers listed here, then I am not as apt to consider moving to the G6, as it would be cost prohibitive. I would either consider staying with the G5 or even consider moving to the DL-385 when AMD releases the six-core Istanbul.
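Plugging those HP list prices into the compare_configs sketch from earlier in the thread, the 18 x 4 GB option works out to $3,600 total (about $50/GB at 72 GB), while the 12 x 8 GB option works out to $11,880 total (about $124/GB at 96 GB), roughly 2.5x the cost per GB for the faster, larger configuration.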

-MattG

geekinabox
Contributor

Just got this slide from Intel ... (see attachment) ...

Seems to say that even the slowest (800 MHz) memory speed in the Nehalem line still has nearly 3x the memory bandwidth of the (previous-gen) Harpertown procs ...

Hopefully I'm reading this correctly, and it implies that even the slowest Nehalem memory configs are still packed with benefit.

(note that the slide is measuring based on the 5570 -- the highest-end Nehalem proc -- I'm not sure how the proc speed in turn affects the memory bandwidth)

Kaj_Laursen
Contributor

I would also really, really like to see how much influence the memory speeds have on performance, and how much influence it has that they can add more RAM and thus run more tiles. VMware says:

"If two different virtualization platforms achieve similar VMmark scores with a different number of tiles, the score with the lower tile count is generally preferred." I don't know if that means that comparing scores with 8, 10 and 16 tiles is difficult?

Those 8GB DIMMs really add to the price. From what I understand you can run 12 DIMMs and still run at 1066 (48GB), and I would really like to know how much performance you would lose going to 18 DIMMs.

My hope is that it's not much. I don't think the bandwidth will matter much, and don't think the latency will suffer much from the lower memory speed.

But then, the AMD systems have sort of the same problem, from what I understand? If you go above 4 DIMMs per CPU, the speed is lowered to 533 from 667 or 800?

Regards,

Kaj

meistermn
Expert

AMD systems from IBM do not have this memory speed downgrade.

Look at ibm-1.gif and ibm-2.gif.

HP, Dell and Sun have the memory speed downgrade.

Look at the file Kingston.gif.

meistermn
Expert

At the moment we run HP DL585 G2 and HP DL585 G5.

After coming across the following website, I think the new IBM X3755 M2 with AMD Istanbul CPU is a cool solution.

Although I would really like to see the new X3850/X3950 with Intel Nehalem EX, that comes next year, in 2010, with IBM Xcelerated Memory technology.

meistermn
Expert

Scott Lowe blogged something interesting about the HP DL380 G6:

What HP has done here to differentiate themselves from some of the other server vendors is spent extra engineering time on the signal integrity of the memory bus so that they can actually preserve the bus speed in some instances. For example, when populating a memory bus with 2 DIMMs, HP ProLiant G6 servers can continue to run at 1333MHz instead of having to drop back to 1066MHz. In a server with 18 DIMM slots (like the DL380 G6, the BL490c G6, or the ML370 G6), this means the server can be loaded with up to 96GB of RAM and the memory will still run at 1333MHz. This helps to maximize both the capacity gains and the bandwidth gains of QPI and the Xeon 5500 CPU. To take advantage of this functionality, there is a BIOS setting that must be enabled.

Which BIOS setting is this?

This also means we should ask the other server vendors (IBM, Dell and Sun) whether they have a feature like HP's.
