VMware Cloud Community
rustbutt
Contributor

more memory or more speed?

I was recently hired by an on-line retailer that is just now getting into virtualization.  They brought me in for my VMware and Linux background.  We currently have a 3-node ESXi DRS cluster of Dell R710 machines with 64 GB of memory each.  I have been asked for my recommendation regarding growth of the cluster.  We're nearing capacity now: if we take one machine out of service, the remaining two run flat out.

We also have two more Dell R710 machines as spares we could add to the cluster, as well as a Dell R810 with 128 GB memory that could be added.

I recommended not using the Dell R810 and instead bringing both of the R710 machines in and maxing out memory for all of the R710 machines and there lies my problem.

The Dell R710 can handle a maximum of 192 GB of memory.  I'm not going to go into the details, but if you take it to that amount of memory, the memory bus speed drops to 800 MHz.  The choices we have are:

mem config               bus speed
==================================
144 GB (or less)         1333 MHz
192 GB                    800 MHz

So the issue here is price vs. performance.  It's much cheaper to add capacity to the cluster by adding memory than by buying more machines.  But if we max out the memory at 192 GB, we lose roughly 40% of our memory bus speed.

What would you do?  How does memory bus speed impact VM performance?

Russ

JoJoGabor
Expert

It depends on the workloads you're running. I recently did a lot of benchmarking of hypervisors and servers with different memory configurations for Win7 VDI workloads. I didn't notice a discernible impact on session performance when using larger amounts of memory at the associated lower memory bus speed, but in order to increase density the extra memory capacity was absolutely necessary.

If you are getting memory-bound, my first recommendation would be to add more memory, but you need to look at your apps and determine what is required. A high-performance SQL Server that needs high memory throughput might suffer with the lower bus speed.

bulletprooffool
Champion

You say that 2 servers are running nearly full out. I assume you mean they are maxing out memory utilisation, not CPU?

I'd suggest that upgrading your memory to these larger amounts will be a nice move, as it will allow more VMs to run on the hosts, but you have to remember that as you drop more VMs on the hosts, you'll need more CPU too. Given that, as you are only running R710s, I'd imagine you'll probably start running out of CPU before you near the 144 GB RAM mark (though this depends on the config and allocation of your VMs).

So, if you have a high-memory, low-CPU environment, go with the 192 GB. If not, go with the faster memory (my $0.02), and when you max that out, get more servers.
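
To put rough numbers on where CPU runs out before RAM, a quick calculation like this helps (Python; the per-VM averages and the vCPU:core ratio below are just placeholder assumptions, plug in your own):

# Which resource caps VM density on a host? (per-VM figures are placeholders)
host_cores = 12            # 2 x 6-core CPUs in an R710, ignoring hyper-threading
host_ram_gb = 144

avg_vcpus_per_vm = 2
avg_vram_gb_per_vm = 6
vcpu_to_core_ratio = 3     # how far you're willing to oversubscribe CPU

max_by_cpu = host_cores * vcpu_to_core_ratio // avg_vcpus_per_vm
max_by_ram = host_ram_gb // avg_vram_gb_per_vm

print(f"CPU-limited density: {max_by_cpu} VMs")
print(f"RAM-limited density: {max_by_ram} VMs")
print("Limiting resource:", "CPU" if max_by_cpu < max_by_ram else "RAM")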

One day I will virtualise myself . . .

rustbutt
Contributor

The CPU load on these R710s is actually quite minimal, even with memory use up at 90%.  (See the attached resource_usage.jpg.)

This is typical.  Mind you, two of the VMs are configured with 16 GB of memory and chances are they mostly idle.  It's likely we could get away with some memory over-commitment here, but how much of that do I really want to do?  I've been telling people here that 25% was relatively safe for memory over-commitment, but I've never seen anyone give a rule of thumb for how much over-commitment is actually safe.
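
For what it's worth, here's how I've been expressing that 25% figure, just a rough sketch in Python with made-up VM sizes standing in for our real ones:

# Rough over-commitment check (the VM sizes are placeholders, not our real inventory)
host_physical_gb = 64
vm_configured_gb = [16, 16, 8, 8, 4, 4, 4, 4, 2, 2, 2]   # configured vRAM per VM

total_vram = sum(vm_configured_gb)
overcommit_pct = (total_vram - host_physical_gb) / host_physical_gb * 100

print(f"Configured vRAM: {total_vram} GB on {host_physical_gb} GB physical")
print(f"Over-commitment: {overcommit_pct:.0f}%")
# With the 25% rule of thumb, anything over 80 GB of configured vRAM
# on a 64 GB host would be pushing it.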

Russ

JoJoGabor
Expert

Again, it depends on what is running inside your VMs. If you are running at over 90% memory usage, chances are your VMs are ballooning, which has a negative effect on performance. Check the memory performance chart for your host to see how much memory is ballooning.

From my benchmarks for Win7 VDI on hosts with 144 GB RAM, I found I could save 3 or 4 GB through Transparent Page Sharing, which is OK, but as soon as memory started ballooning, performance suffered. I really wouldn't want more than 5-10% of memory ballooning. At 25% you are likely to suffer performance-wise, but if the VMs are just a few lightly utilised file servers, for example, it probably won't matter.
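
If you want a quick screen across hosts, something like this does it (plain Python over numbers you'd read off the host memory charts; the figures below are invented examples):

# Flag hosts where ballooned memory exceeds a chosen threshold.
hosts = {
    "esx01": {"physical_gb": 144, "ballooned_gb": 3.5},    # example values
    "esx02": {"physical_gb": 144, "ballooned_gb": 16.0},
}
THRESHOLD_PCT = 10   # I wouldn't want more than 5-10% ballooning

for name, mem in hosts.items():
    pct = mem["ballooned_gb"] / mem["physical_gb"] * 100
    status = "OK" if pct <= THRESHOLD_PCT else "investigate"
    print(f"{name}: {pct:.1f}% ballooned -> {status}")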

RTalaber2011101
Contributor

I would test the real memory requirement here.  The biggest memory consumers are database engines.  Here is the thing: databases typically consume all the memory you allow them to.  If you configure a database VM with 16 GB of memory, it will most likely consume it.  Does that mean it needs 16 GB of memory?  Absolutely not.  It may very well need it, but there is only one way to really determine it.

First, establish a baseline for database cache performance.  A database cache is where the DBMS stores frequently used pages of data, indexes, code, etc. in an effort to reduce I/O.  A cache that is performing very well will find what it needs in cache 95 out of 100 times, or a cache efficiency of 95%.  I have evaluated statistics on hundreds of thousands of database servers around the world and have found the average efficiency to be over 99%.

The question is whether reducing the amount of memory you allow a DBMS to consume will reduce the cache efficiency to the point that it makes a performance difference.  In most cases, the answer is no.  The only way to find out for sure is to reduce the amount of memory you allow that VM to consume and monitor the DBMS buffer cache efficiency.  If the efficiency stays over 95% and the VM's I/O rate does not change significantly, you are good to go.  I have found that the average DBMS can have its memory allocation cut almost 50% before any real change in I/O rate is noticed.  Of course, there are going to be those highly active, high-performance databases that actually do need all the memory, so you have to be careful.
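
The arithmetic behind that check is simple; here is a small Python sketch of it (the read counters are invented for illustration, pull the real hit/miss statistics from whatever monitoring view your DBMS exposes):

# Buffer cache efficiency as you step the VM's memory down.
def cache_efficiency(logical_reads, physical_reads):
    """Percentage of reads satisfied from cache rather than from disk."""
    return (1 - physical_reads / logical_reads) * 100

before = cache_efficiency(logical_reads=1_000_000, physical_reads=8_000)    # 99.2%
after = cache_efficiency(logical_reads=1_000_000, physical_reads=32_000)    # 96.8%

for label, eff in (("before memory cut", before), ("after memory cut", after)):
    verdict = "good to go" if eff >= 95 else "give the memory back"
    print(f"{label}: {eff:.1f}% cache efficiency -> {verdict}")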

NTShad0w
Enthusiast

rustbutt,

I have the same problem right now (for the last 2 weeks).  I am building my own servers for a small DC, and I have to choose between 96 GB of RAM at 1333 MHz and 144 GB of RAM at 800 MHz.

I have done a lot of tests and I would say that most applications don't really get a special speedup from running memory at 1333 MHz compared to 800 MHz.  Note that at the lower clock the CAS latency (in cycles) is lower too, so bandwidth drops but access latency stays roughly the same, meaning 1333 vs 800 is not a 40% loss overall; in my opinion and in my tests it is more like 10-20% lower performance overall.  So I decided to use 144 GB of RAM (my maximum) at 800 MHz rather than 96 GB at 1333 MHz, because the faster option is only a little faster and with most applications you will not see it.
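
A rough way to see why, sketched in Python below; the CAS latencies are assumptions typical for DDR3 modules, so check the DIMMs you actually buy:

# Rough DDR3 first-word latency comparison (CAS values are typical assumptions).
def first_word_latency_ns(transfer_rate_mt_s, cas_cycles):
    io_clock_mhz = transfer_rate_mt_s / 2     # DDR: two transfers per clock
    return cas_cycles / io_clock_mhz * 1000   # cycles at MHz -> nanoseconds

print(f"DDR3-1333 CL9: {first_word_latency_ns(1333, 9):.1f} ns")   # ~13.5 ns
print(f"DDR3-800  CL6: {first_word_latency_ns(800, 6):.1f} ns")    # ~15.0 ns
# Peak bandwidth drops by about 40%, but access latency barely moves,
# which is why most applications see nothing like a 40% slowdown.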

In your example we can see that you are running out of memory and have good 2 x 6-core CPUs in your servers.  In my knowledge and experience (I'm a VMware Infrastructure Architect) that is enough for up to 192 GB with moderately CPU-intensive VMs, and up to 384 GB with lightly CPU-intensive VMs.

So if you need speed in the VM world, add more RAM, as much as you can, for good flexible virtualization.

kind regards

Dawid Fusek

Virtual Infrastructure Architect

Dev09
Enthusiast

Do these servers have a NUMA topology? If yes, then adding more memory to each NUMA node will not be impacted as much by bus speed, and VM performance will be better if the VMs are configured properly. If you are using a UMA topology, then adding more memory and VMs will put more congestion on the bus, which will directly impact performance.
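
A quick check, with assumed figures for an R710-style host (2 sockets, memory split evenly between the nodes), is whether each VM's configured memory still fits inside a single NUMA node:

# Does each VM fit inside one NUMA node? (all figures are example assumptions)
host_ram_gb = 144
numa_nodes = 2
ram_per_node_gb = host_ram_gb / numa_nodes     # 72 GB per node

vms = {"oracle01": 16, "web01": 4, "bigdb": 96}   # configured vRAM per VM, in GB

for name, vram in vms.items():
    if vram <= ram_per_node_gb:
        print(f"{name}: {vram} GB fits in one node, memory stays local")
    else:
        print(f"{name}: {vram} GB spans nodes, expect remote memory access")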

PizzamanHoxie
Contributor

The Dell PowerEdge R710 can now go up to 288 GB of RAM (see the specs here: http://www.dell.com/us/business/p/poweredge-r710/pd).

It has six channels with three banks per channel, for a total of 18 banks.  The maximum RAM per bank is 16 GB.

Bank 1 = 1333 MHz
Bank 2 = 1066 MHz
Bank 3 = 800 MHz

All RAM runs at the speed of the slowest populated bank.

If you populate 1 module per channel (bank 1), get RAM capable of 1333 MHz.  If you populate 2 modules per channel (banks 1 and 2), 1066 MHz RAM will do.  It's best to avoid populating three modules per channel (banks 1, 2, 3), as that will drop the speed of all RAM to 800 MHz.

If you go with 6 x 16 GB (1333 MHz) modules, populating only bank 1 of each channel, that gives you 96 GB of RAM per host running at the maximum speed of 1333 MHz.  With the new VMware RAM licensing requirements, that is an optimal configuration.
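
If you want to play with configurations, the rules above are easy to script; here's a small Python sketch (assuming the 1333/1066/800 steps and the 6 channels described above):

# R710-style DIMM planning: 6 channels, up to 3 DIMMs per channel,
# speed steps down with DIMMs per channel (per the rules above).
SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1066, 3: 800}
CHANNELS = 6

def plan(dimms_per_channel, dimm_size_gb):
    capacity = CHANNELS * dimms_per_channel * dimm_size_gb
    speed = SPEED_BY_DIMMS_PER_CHANNEL[dimms_per_channel]
    return capacity, speed

for dpc, size in [(1, 16), (2, 16), (3, 16)]:
    cap, mhz = plan(dpc, size)
    print(f"{dpc} x {size} GB per channel -> {cap} GB total @ {mhz} MHz")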

JoJoGabor
Expert

I don't think that's quite correct about the memory speeds; it is the standard config for DDR3 DIMMs, i.e. if memory is balanced across all 3 banks equally, the speed will run at 1333, and if only 2 banks it drops down. I can't remember the exact details, but it's something like that.

russbuttonfromm
Contributor

I'm the guy who made the original post.

When I couldn't get a definitive response from VMware, or here, on this question, we took it upon ourselves to do our own benchmarking.

We configured one machine in our cluster so that it ran with a memory bus speed of 1333 MHz and another so that it ran at 800 MHz.  We ran some jobs on a couple of different VMs, one of which I remember was running Oracle.  Using the "time" command, we ran those jobs on each machine and compared the performance.  As you'd expect, the machine with the faster bus speed performed better, but not by much.  Even though 1333 MHz is about 70% faster than 800 MHz, the perceived performance improvement was less than 10%.

We decided to populate all of our Dell R710 machines to the full 288 GB configuration.

This is not a hard test to do yourself, and it's one I'd recommend.  If your machines are currently configured with only 64 or 96 GB at a memory bus speed of 1333 MHz, reconfigure one machine in your cluster so that it runs with a bus speed of 800 MHz.  Take a couple of VMs and run your benchmarks on each machine.  See for yourself how this does or does not impact your VM performance.  It will be a very interesting exercise and much more informative than simply talking about it here.
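
If it helps, the harness itself can be as simple as this (Python; the job script name is a placeholder, substitute whatever workload is representative of your VMs):

# Run this inside a test VM on each host (one at 1333 MHz, one at 800 MHz)
# and compare the printed times, much as we did with the "time" command.
import subprocess
import time

def best_of(cmd, repeats=3):
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        timings.append(time.perf_counter() - start)
    return min(timings)   # best of N to smooth out run-to-run noise

elapsed = best_of(["./run_batch_job.sh"])    # placeholder job
print(f"Best wall-clock time: {elapsed:.1f} s")
# In our case the difference between the two hosts came out under 10%.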

Russ

JoJoGabor
Expert

That's good to know for a database load. As I said above, I noticed minimal performance difference when running VDI loads at different memory speeds.

PizzamanHoxie
Contributor

Good to know the real-world results.  Thanks, Russ.

Tyomni
Enthusiast

Sorry, I didn't get it: why should the speed drop down with only 2 banks populated?

JoJoGabor
Expert

Here's a paper on it. It's hard reading, but worth it: ftp://ftp.hp.com/pub/c-products/servers/options/Memory-Config-Recommendations-for-Intel-Xeon-5500-Se...

So you should balance DIMMs across the 3 channels of the CPU. To get the best speed, each channel should have the same DIMMs configured, i.e. for a single-socket CPU you should populate in sets of three.
