Sandy Bridge Memory Scaling: Choosing the Best DDR3
by Jared Bell on July 25, 2011 1:55 AM EST

Test Configuration and Settings
For our testing, we used the following system:
Memory Benchmarking System Configuration
CPU | Intel Core i7-2600K (stock with Turbo Boost enabled: 3.5GHz - 3.8GHz)
Motherboard | ASUS P8P67 Pro (BIOS version 1502)
Memory | Patriot Viper Extreme Division 2 4GB (2x2GB) DDR3-2133 kit
Graphics | MSI GTX 580 Lightning, stock clocks (832MHz/1050MHz)
SSD | OCZ Agility 2 120GB
PSU | Corsair HX850
OS | Microsoft Windows 7 Professional 64-bit
You’ll notice that we list only one specific set of memory; I don't have specifically rated modules for each of the memory speeds tested. Instead, I used a pair of DDR3-2133 modules that worked flawlessly at all of the lower speeds. Thanks to Patriot for supplying the DDR3-2133 4GB kit used for today's testing. To ensure my results weren't skewed, I tested a pair of DDR3-1600 CL9 modules against the DDR3-2133 CL9 modules running at the lower DDR3-1600 CL9 speed, and the results were identical. There may be minor variations between memory brands, but our testing is sufficient as a baseline measurement of what to expect. We then used the following clock speeds and timings:
Tested Memory Speeds
DDR3-1333 | 7-7-7-18-2T, 8-8-8-18-2T, 9-9-9-18-2T
DDR3-1600 | 7-8-7-21-2T, 8-8-8-21-2T, 9-9-9-21-2T
DDR3-1866 | 8-9-8-24-2T, 9-9-9-24-2T
DDR3-2133 | 9-11-9-27-2T
Testing Procedures
Each of the tests was performed three times, with the average of those three runs used for the final results. There were a few exceptions. First, PCMark 7 was run only once because it loops three times internally before providing its score. Second, the x264 HD Benchmark was run only once because it loops four times in a single run. Third and finally, the LINPACK benchmark was looped twenty-five times because it was also used to test for stability. And with that out of the way, let’s get to the test results.
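The run-and-average procedure above can be sketched as follows; the `run_benchmark` callable is a hypothetical stand-in for launching the actual benchmark and parsing its score:

```python
import statistics

def average_of_runs(run_benchmark, runs=3):
    """Run a benchmark callable `runs` times and return the mean score."""
    scores = [run_benchmark() for _ in range(runs)]
    return statistics.mean(scores)

# Stand-in benchmark that returns a fixed score:
print(average_of_runs(lambda: 42.0))  # -> 42.0
```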
76 Comments
mga318 - Monday, July 25, 2011 - link
You mentioned Llano at the end, but in the Llano reviews & tests, memory bandwidth was tested primarily with little reference to latency. I'd be curious as to which is more important with a higher-performance IGP like Llano's. Would CAS 7 (or 6) be preferable over 1866 or 2133 speeds with CAS 8 or 9?

DarkUltra - Monday, July 25, 2011 - link
How about testing Valve's particle benchmark or a Source-based game at low resolution with a non-geometry-limited 3D card (Fermi) and an overclocked CPU? Valve did an incredible job with their game engine. They used a combination of fine-grained and coarse threading to max out all the CPU cores. Very few games can do that today, but many may in the future.

DarkUltra - Monday, July 25, 2011 - link
Why test with 4GB? RAM is cheap; most people who buy the premium 2600K should pair it with two 4GB modules. I imagine Windows will require 4GB of RAM in the future, and games the same. Just look at all the .net developers out there; .net usually results in incredibly memory-bloated programs.

dingetje - Monday, July 25, 2011 - link
hehe yeah, .net sucks
Atom1 - Monday, July 25, 2011 - link
Most algorithms on the CPU platform are optimized to have their data inside the CPU cache 99% of the time. If you look at SiSoft Sandra, where there is a chart of bandwidth as a function of block size copied, you can see that the CPU cache is 10-50x faster than global memory, depending on the level. Linpack here is no exception: the primary reason for Linpack's success is its ability to have data in the CPU cache nearly all of the time. Therefore, if you do find an algorithm which benefits considerably from global memory bandwidth, you can be sure it is a poor job on the programmer's side. I think it is a kind of a challenge to see which operations and applications take a hit when the main memory is 2x faster or 2x slower. I would be interested to see where the breaking point is, when even well-written software starts to take a hit.

DanNeely - Monday, July 25, 2011 - link
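The cache-versus-main-memory effect Atom1 describes can be illustrated with a crude sketch that times buffer copies at increasing sizes. Python's interpreter overhead blurs the effect compared with a tuned benchmark like Sandra, so treat any numbers as illustrative only:

```python
import time

def copy_bandwidth(size_bytes, iterations=10):
    """Time repeated copies of a buffer and return approximate GB/s."""
    src = bytearray(size_bytes)
    start = time.perf_counter()
    for _ in range(iterations):
        dst = bytes(src)  # full copy of the buffer
    elapsed = time.perf_counter() - start
    return size_bytes * iterations / elapsed / 1e9

# Small buffers stay resident in CPU cache; large ones spill to DRAM,
# so measured bandwidth should fall as the buffer size grows.
for size in (64 * 1024, 4 * 1024 * 1024, 128 * 1024 * 1024):
    print(f"{size // 1024:>8} KiB: {copy_bandwidth(size):.1f} GB/s")
```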
That's only true for benchmarks and highly computationally intensive apps (and even there, many problem classes can't be packed into the cache or written to stream data into it). In the real world, where 99% of software's performance is bound by network IO, HD IO, or user input, trying to tune data to maximize the CPU cache is wasted engineering effort. This is why most line-of-business software is written using Java or .net, not C++; the finer-grained memory control of the latter doesn't benefit anything, while the higher-level nature of the former allows for significantly faster development.

Rick83 - Monday, July 25, 2011 - link
I think image editing (simple computation on large datasets) and engineering software (numerical simulations) are two types of application that benefit more than average from memory bandwidth, and in the second case, latency. But yeah, with CPU caches reaching the tens of megabytes, memory bandwidth and latency are getting less important for many problems.
MrSpadge - Wednesday, July 27, 2011 - link
True... large matrix operations love bandwidth, and low latency never hurts. I've seen ~13% speedup on part of my Matlab code going from DDR3-1333 CL9 to DDR3-1600 CL9 on an i7 870!

MrS
Patrick Wolf - Monday, July 25, 2011 - link
You don't test CPU gaming benchmarks at normal settings because you may become GPU limited, so why do it here?

http://www.xbitlabs.com/articles/memory/display/sa...
dsheffie - Monday, July 25, 2011 - link
....uh... Linpack is just LU, which in turn is just DGEMM. DGEMM has incredible operand reuse (O(sqrt(cache size))).
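The operand reuse dsheffie mentions is what cache blocking (tiling) exploits; a minimal sketch of a blocked matrix multiply, with the block size standing in for cache capacity (pure-Python illustration, not a real DGEMM):

```python
def blocked_matmul(a, b, n, block=32):
    """Blocked (tiled) multiply of two n x n matrices given as lists of lists.

    Each block x block tile of A and B is reused across an entire tile of C,
    so main-memory traffic grows much more slowly than the O(n^3) flop
    count -- the operand-reuse argument in the comment above.
    """
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for kk in range(0, n, block):
                for i in range(ii, min(ii + block, n)):
                    for j in range(jj, min(jj + block, n)):
                        s = c[i][j]
                        for k in range(kk, min(kk + block, n)):
                            s += a[i][k] * b[k][j]
                        c[i][j] = s
    return c
```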