bbricker
Contributor

Mixed memory sizes in Dell R710 to achieve 1333MHz speeds versus 800MHz

We have some Dell R710s that were originally ordered with 48GB in a 12x4GB 1333MHz RDIMM config. We ordered 6 additional 4GB 1333MHz sticks to fill out all 18 banks for a total of 72GB, and the memory speed then dropped to 800MHz. We immediately had complaints about performance in some of our mission-critical VMs (terminal servers and SQL DBs), so we vMotioned them over to an R710 still on the original 12x4GB config running at 1333MHz and the complaints went away.

I have been talking to both Dell pre-sales and post-sales technical support. They confirm that filling all 18 banks drops the speed to 800MHz, but I'm getting conflicting reports that 12 banks (set up as 2 of the 3 available banks in each of the 6 channels) should only be running at 1066MHz right now, which it is not. Regardless, both are saying my best option is to return the recently purchased 4GB modules, rip out the original 12x4GB modules, and replace them all with 12x8GB, which is 96GB and should run at 1333MHz. That's really more memory than I need, and almost triple what I already budgeted and spent on buying the extra 4GB sticks.
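Just to lay out the slot math I'm working from, here's a rough back-of-the-envelope sketch of my own in Python. It only assumes the rule Dell support described (18 banks = 6 channels x 3 slots, and populating the 3rd slot in any channel forces 800MHz), so treat it as illustrative, not anything official from Dell:

# My own sketch of the configs under discussion, assuming the rule Dell support gave me:
# 6 channels x 3 slots; putting a 3rd DIMM in any channel forces the whole box to 800MHz.
CHANNELS = 6

def summarize(name, module_gb, count):
    total_gb = module_gb * count
    dpc = count // CHANNELS  # DIMMs per channel, spread evenly across channels
    speed = "800MHz" if dpc >= 3 else "1066MHz or 1333MHz (depends on CPU and DIMM rating)"
    print(f"{name}: {total_gb}GB, {dpc} DIMM(s) per channel -> {speed}")

summarize("Original 12x4GB", 4, 12)         # 48GB, 2 per channel
summarize("Current 18x4GB", 4, 18)          # 72GB, 3 per channel -> 800MHz
summarize("Dell's 12x8GB proposal", 8, 12)  # 96GB, 2 per channel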

The other option they say I could do, and the point of my question, is to mix 6x4GB and 6x8GB to get to 72GB. I also see this is an offered memory configuration on the Dell website when building an R710, and it lists 1333MHz as well. Pre- and post-sales tech support had these things to say, and it makes me a bit nervous because it's not an absolute "you'll be fine":

Pre-sales support guy:

"Everything I have read or been told confirms that it is perfectly fine to mix module sizes as long as the modules in each channel are the same size. I think it may be recommended by most to keep the sizes consistent but as I mentioned before we have configurations in our system for new servers to be quoted with mixed sizes so that makes me even more confident that if you went this route you would be fine and experience little to no drop at all in performance."

Post-sales support guy:

"As far as your new question about using 6 DIMMs of 4 GB mixed with 8 GB for a total of 72 GB, that is not considered an optimal configuration but I don’t believe you should really run into many issues with it. I’d mentioned before that if you mix speeds, it will downclock the faster DIMMs to the speed of the slower DIMMs, so definitely would advise against mixing speeds. As far as size, the one thing you might come across is as it’s booting, it might say it’s not an optimal configuration but it should detect all the memory and be able to make use of all of it. Don’t believe there would really be a performance hit unless you’re missing speeds. Also, remember to mirror your configuration all across all channels, so you’d want to put all 4 GB and 8 GB in same configuration such as putting all 8 GB in slots 1/2/3 and then putting the 4 GB DIMMs in 4/5/6 on both A and B slots."

So the real question is: if it's not an "optimal" configuration, what does that mean for the actual performance of my vSphere servers? Is anyone doing this, and have they had any problems?

Thanks,

Ben

11 Replies
golddiggie
Champion

How about using 8x8GB sticks per host? That would get you to 64GB each. I would see if Dell will allow you to trade in the memory you're not going to use towards the purchase of the new memory, or look for another memory vendor that will sell you the correct memory at a better rate.

Network Administrator

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

mikejroberts
Enthusiast

I am running the mixed 4GB and 8GB configuration (72GB total, same as you mentioned) and it runs at 1066MHz, optimized. I haven't had any issues. Here is the line item from my quote: 72GB Memory (6x4GB+6x8GB), 1066MHz Dual Ranked RDIMMs for 2 Processors, Optimized (317-9975)

LucasAlbers
Expert

I had this argument with my boss; I argued for less memory at 1333 versus more memory at a slower speed.

In practice, after comparing two systems with different memory speeds, they were within a few percentage points of each other in performance.

Dell has a memory whitepaper on how memory configuration affects performance:

http://i.dell.com/sites/content/business/solutions/whitepapers/en/Documents/11g-memory-hpc-wp.pdf

One quote:

"The memory latency and floating point rate benchmarks show a performance difference of only 1-2%. Therefore, although the theoretical memory bandwidth difference between the DIMM speeds is 25%, single server and clustered applications should show no more than 16% performance improvement with the faster DIMMs."
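For reference, that 25% figure is just the ratio of the DIMM clocks. A quick sanity check of my own (simple arithmetic, not taken from the whitepaper):

# Theoretical bandwidth scales with the memory clock, so the gap between speeds is:
for slow, fast in [(1066, 1333), (800, 1066), (800, 1333)]:
    print(f"{fast} vs {slow}: about {100 * (fast - slow) / slow:.0f}% more theoretical bandwidth")
# -> 1333 vs 1066: ~25%, 1066 vs 800: ~33%, 1333 vs 800: ~67%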

bbricker
Contributor

"I am running the mixed 4GB and 8GB configuration (72GB total, same as you mentioned) and it runs at 1066MHz, optimized. I haven't had any issues. Here is the line item from my quote: 72GB Memory (6x4GB+6x8GB), 1066MHz Dual Ranked RDIMMs for 2 Processors, Optimized (317-9975)"

Did you buy 1066MHz memory, or did you buy 1333MHz?

I am curious because, based on the PDF document that LucasAlbers provided, it looks like as long as you use 1333MHz modules and the right processor, there isn't a problem having 2 of the 3 banks in each channel filled and still achieving 1333MHz (as I was suspecting above). This is from the bottom of page 10 of that document:

"Finally, memory population rules dictate that 1 DIMM-Per-Channel (DPC) or 2 DPC can run at either 1066 or 1333 MHz, depending on server model and DIMM. Populating 3 DPC will force the operating memory speed to 800 MHz."

So my thought is that maybe your setup is running at 1066MHz because that's the memory speed you were sold, not because of any architectural limitation. Hopefully, if one were to buy 1333MHz modules in that same 6x4GB+6x8GB config, it would then run at 1333MHz.

mikejroberts
Enthusiast

I just looked up the part numbers and it is 1066MHz RAM. I am not sure they had a 1333MHz option at the time for that configuration, but I can't say for certain.

bbricker
Contributor

Thanks for checking that, I appreciate it.

bbricker
Contributor

I'm going to answer my own question here with a final update. It is possible to have 72GB running at the maximum 1333MHz speed with 12 RDIMMs, half 4GB and half 8GB, as long as they are all 1333MHz modules. No matter what speed modules you use, if you populate the third bank in each channel (as in an 18-bank fully populated setup), it drops all the way to 800MHz. It is also apparently possible to populate only 2 banks per channel, as in a 12-slot setup, and still only get 1066MHz, if some of the memory is 1066MHz or if you don't have a processor that allows the memory to run at 1333MHz. The link above outlines that.

As far as the Dell support tech saying there might be a warning message on boot about it not being an optimal configuration, I've seen no such thing, and it is clearly running at 1333MHz. We've been testing for several days now with production DB and terminal server VMs and have had no complaints like we did before with the 18x4GB = 72GB 800MHz configuration.
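In case it helps anyone else sizing an R710, here is the rule of thumb I ended up with, written out as a little Python sketch. It's my own summary of the behavior described above and in the Dell whitepaper, so treat it as illustrative rather than anything official:

def r710_memory_speed(dimms_per_channel, module_mhz, cpu_max_mhz=1333):
    # Rough rule of thumb: 1 or 2 DIMMs per channel run at the lower of the DIMM
    # rating and what the CPU supports; 3 DIMMs per channel forces 800MHz regardless.
    if dimms_per_channel >= 3:
        return 800
    return min(module_mhz, cpu_max_mhz)

print(r710_memory_speed(2, 1333))  # our working 6x4GB + 6x8GB config, all 1333MHz -> 1333
print(r710_memory_speed(3, 1333))  # the fully populated 18x4GB config we backed out of -> 800
print(r710_memory_speed(2, 1066))  # mikejroberts' 1066MHz modules, 2 per channel -> 1066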

RParker
Immortal

"I had this argument with my boss; I argued for less memory at 1333 versus more memory at a slower speed."

There is a situation everyone is overlooking. It may not be the memory at all but HOW it's inserted into the slots.

For the Dell R710 (actually any of the newer R series as well), memory MUST not only be installed in matched sets of three at the same speed, but having ALL 18 slots filled will actually result in a SLOWER memory speed anyway, whether you have 1333 memory or NOT. So lower-density memory fully populated will give you slower memory, but not necessarily slower performance, because VMs don't have access to the hardware ANYWAY; all their performance goes through the hypervisor, so you can't get 1333, 1066, or probably even 800MHz performance in a VM. Emulated hardware from the hypervisor will restrict memory speed. The ESX HOST may benefit, but VMs will not.

bbricker
Contributor

"For the Dell R710 (actually any of the newer R series as well), memory MUST not only be installed in matched sets of three at the same speed, but having ALL 18 slots filled will actually result in a SLOWER memory speed anyway, whether you have 1333 memory or NOT."

Yeah, that's what I just said...

"So lower-density memory fully populated will give you slower memory, but not necessarily slower performance, because VMs don't have access to the hardware ANYWAY; all their performance goes through the hypervisor, so you can't get 1333, 1066, or probably even 800MHz performance in a VM. Emulated hardware from the hypervisor will restrict memory speed. The ESX HOST may benefit, but VMs will not."

Did you read my first post? Yes, it does affect the VMs. They were running much slower when the memory config was at 800MHz; I had many angry users letting me know that, in fact. So I vMotioned the VMs back to the host running memory at 1333MHz and the performance returned.

bbricker
Contributor

By the way - I meant to give the correct answer to this thread to LucasAlbers since he linked to the documentation that helped me figure this out. Apparently this can't be undone, so I apologize for that.

julienkim
Contributor

Just a contribution from my own experience.

I have an R710 with two CPUs and 8 x 4GB (4GB RDIMM, 1066MHz, 2RX4X72).

I wanted to upgrade the memory to the maximum, and the pre-sales engineer told me to buy 6 x 8GB RDIMMs (1066MHz, 2RX4X72) and mix them like this: 6 x 8GB + 6 x 4GB, 72GB in total.

I created a bootable USB key with the 32-bit diagnostics application to upgrade the BIOS to version 6.1.0 (the OM 6.50 SUU is too old).

Then I changed the BIOS Memory Settings to "Optimizer Mode".

Finally, I placed the memory modules like this: the 8GB modules in banks 1, 4, and 2, and the 4GB modules in banks 5, 3, and 6.

I have seen no drop in performance so far.

Thanks to this topic and the Dell ProSupport technician.
