Benchmarking Setup

Testing with Spectre and Meltdown Patches Applied

For our testing of the new AMD Ryzen 2000-series processors and the Intel processors, we used the latest version of Microsoft Windows with the latest OS updates, along with BIOS microcode updates, to ensure that the Spectre and Meltdown vulnerabilities were patched as fully as possible. This means that some of the data used in this review is not comparable to previous reviews; in time, we expect our benchmark database to be brought up to date with these patches.

Test Bed

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency.

It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise), as they require interaction with the BIOS; most users fall back on JEDEC-supported speeds. This includes home users as well as industry customers who might want to shave a cent or two from the cost, or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.

Test Setup
Processors AM4: Ryzen 7 2700X, Ryzen 7 2700, Ryzen 5 2600X, Ryzen 5 2600, Ryzen 7 1800X, Ryzen 5 1600, A12-9800
FM2+: A10-7870K
LGA1151 (CFL): Core i7-8700K, Core i7-8700
LGA1151: Core i7-7700K, Core i7-6700K
LGA2066: Core i7-7820X, Core i9-7980XE
Motherboards AM4: ASUS Crosshair VII Hero (BIOS 0508)
FM2+: ASUS A88X Pro (BIOS 2502)
LGA1151 (CFL): ASRock Z370 Gaming i7 (BIOS P1.70)
LGA1151: GIGABYTE X170-Gaming ECC (BIOS F21e)
LGA2066: ASRock X299 OC Formula (BIOS P1.40)
Smeltdown Patches AM4: Yes
FM2+: OS-level only
LGA1151 (CFL) / LGA1151 / LGA2066: Yes
Cooling AM4: Wraith Prism RGB
FM2+: Arctic Freezer 13 CO
LGA1151: Silverstone AR10-115XS
LGA2066: Thermalright TRUE Copper
Power Supply Corsair AX760i PSU 
Memory G.Skill SniperX
Crucial Ballistix
G.Skill RipjawsV
Memory Settings Ryzen-2000: DDR4-2933 16-17-17
Ryzen-1000: DDR4-2666 16-17-17
Bristol Ridge: DDR4-2400 15-17-17
Kaveri: DDR3-2133 9-11-11
Coffee Lake: DDR4-2666 16-17-17
Kaby Lake: DDR4-2400 15-15-15
Skylake: DDR4-2133 15-15-15
Skylake-X: DDR4-2400 14-16-16
GPUs MSI GTX 1080 Gaming 8G
Hard Drive Crucial MX200 1TB
Optical Drive LG GH22NS50
Case Open Test Bed
OS Windows 10 Enterprise RS3 (1803) with OS Patches

Power Analysis

One of the key debates around power comes down to how TDP is interpreted, how it is measured, and what exactly it should mean. TDP, or Thermal Design Power, is typically a value associated with the required dissipation ability of the cooler being used, rather than the power consumption. There are some finer physics-related differences between the two, but for simplicity most users treat the TDP as the rated power consumption of the processor.

What the TDP actually indicates is somewhat more difficult to define. For any Intel processor, the rated TDP is the thermal dissipation requirement (or power consumption) when the processor is running at its base frequency. So for a chip like the Core i5-8400 that is rated at 65W, the 65W rating only applies at its 2.8 GHz base frequency. What makes this confusing is that the official all-core turbo rating for the Core i5-8400 is 3.8 GHz, well above the listed base frequency. The truth is that if the processor is limited in firmware to 65W, we will only see 3.2 GHz when all cores are loaded. This is important for thermally limited scenarios, but it also means that without that firmware limit, the power consumption is untied from the TDP: Intel gives no rating for power above the base frequency, despite the out-of-the-box turbo performance being much higher.

For AMD, TDP is calculated a little differently. It used to be defined as the peak power draw of the CPU, including turbo, under real all-core workloads (rather than a power virus). Now TDP is more a measure of cooling performance: AMD defines TDP as the difference between the processor lid temperature and the intake fan temperature, divided by the minimum thermal cooler performance required. Or, to put it another way, the minimum thermal cooler performance is defined as that temperature difference divided by the TDP. As a result, we end up with a sliding scale: if AMD pairs a processor with a cooler of stronger thermal performance (a lower ºC per watt rating), the rated TDP rises.

For Ryzen, AMD dictates that this temperature difference is 19.8ºC (61.8ºC on the processor lid when the inlet is 42ºC), which means that for a 105W TDP, the cooler needs to be able to sustain 0.189ºC per watt. With a cooler thermal performance of 0.4ºC/W the TDP would be rated at 50W, while a value of 0.1ºC/W would give 198W.

This ultimately makes AMD's TDP more of a measure of cooling performance than power consumption.
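As a sanity check, the sliding scale above can be computed directly; this is a minimal sketch using the numbers quoted in the text (the function names are ours, not AMD's):

```python
# AMD's definition, as described above:
#   minimum cooler performance (C/W) = (T_lid - T_inlet) / TDP
# rearranged: TDP = (T_lid - T_inlet) / cooler performance.

def required_cooler_rating(t_lid_c, t_inlet_c, tdp_w):
    """Minimum cooler thermal performance (C per watt) for a given TDP."""
    return (t_lid_c - t_inlet_c) / tdp_w

def implied_tdp(t_lid_c, t_inlet_c, cooler_c_per_w):
    """TDP implied by the temperature delta and a cooler's C/W rating."""
    return (t_lid_c - t_inlet_c) / cooler_c_per_w

# Ryzen's fixed 19.8C delta (61.8C lid, 42C inlet):
print(round(required_cooler_rating(61.8, 42.0, 105), 3))  # 0.189 C/W
print(round(implied_tdp(61.8, 42.0, 0.4), 1))  # 49.5 -> rated ~50 W
print(round(implied_tdp(61.8, 42.0, 0.1), 1))  # 198.0 W (stronger cooler)
```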

When testing, we are also at the whim of the motherboard manufacturer. Ultimately, for some processors, turbo modes are defined by a look-up table: if the system is using X cores, then the processor should run at Y frequency. Not only can motherboard manufacturers change that table with each firmware revision, but Intel has stopped making this data official. So we cannot tell whether a motherboard manufacturer is following Intel's specifications or not - in some reviews, we have had three different motherboard vendors use three different look-up tables, all while stating they were following Intel specifications. Nice and simple, then.
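To make the look-up-table idea concrete, here is a hypothetical sketch; the frequencies are invented for illustration and do not correspond to any real part or firmware:

```python
# Hypothetical per-core-count turbo table: "if X cores are active,
# run at Y MHz". Real tables vary by CPU model and can differ between
# motherboard firmware revisions, as noted above.
TURBO_TABLE_MHZ = {1: 4600, 2: 4500, 3: 4400, 4: 4400, 5: 4300, 6: 4300}

def turbo_frequency_mhz(active_cores):
    """Look up the allowed turbo frequency for a given core load."""
    # Clamp to the largest entry so a fully loaded chip uses the last row.
    return TURBO_TABLE_MHZ[min(active_cores, max(TURBO_TABLE_MHZ))]

print(turbo_frequency_mhz(1))  # 4600
print(turbo_frequency_mhz(6))  # 4300
```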

It should also be stated that we are at the whim of the silicon lottery. While two processors could be stamped with the same model number, how each responds to voltage and frequency can be very different. The stamp on the box is merely a minimum guarantee, and the actual performance or thermal characteristics of the processor can vary from that minimum guarantee to something really, really good. Both AMD and Intel go through a process called binning, whereby every processor off the manufacturing line is tested against certain standards: if it surpasses the best standards, it gets stamped as the best processor, and if it falls short, it might be labelled as something else. There is also the fact that if a manufacturer needs more mid-range components, it may take a percentage of parts that do meet the high standard and stamp them as if they only meet a medium standard. So a lottery it is.

Power: Total Package (Full Load)

Power: Cores Only (Full Load)

In our testing, we take power readings from the internal registers on the processor that are designed to estimate power consumption and apply the right turbo and fan profiles. Strictly speaking, this method is not the most accurate - for that we would be applying our multimeters - but it gives us more information than a multimeter would. Modern multi-core processors use different voltage planes for different parts of the processor, or even for each core, so the software readings give us a good breakdown of power for the different regions, where the processor makes them available. In most situations, we are able to get the two most important numbers: the estimated power consumption of the whole chip, and the estimated power consumption of just the cores (excluding the memory controller and interconnects).
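For readers curious about the mechanics: these register-based estimates are cumulative energy counters (Intel's RAPL interface is the best-known example), and software derives power by sampling a counter over an interval. A minimal sketch of that conversion, with illustrative numbers:

```python
def average_power_w(energy_start_uj, energy_end_uj, interval_s):
    """Convert a cumulative energy-counter delta (microjoules) over a
    sampling interval into an average power figure in watts."""
    return (energy_end_uj - energy_start_uj) / 1_000_000 / interval_s

# Example: the package-domain counter advances 95,000,000 uJ in one second,
# while the core-domain counter advances 70,000,000 uJ over the same second.
package_w = average_power_w(0, 95_000_000, 1.0)  # whole-chip estimate
cores_w = average_power_w(0, 70_000_000, 1.0)    # cores-only estimate
print(package_w - cores_w)  # 25.0 W left for uncore/interconnect/memory
```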

What is noticeable between the Intel and AMD chips is the difference between core-only power and full-chip power. AMD's interconnect, Infinity Fabric, combined with the other non-core components of the chip, draws a lot more power than the equivalent parts of the Intel chips do. This arguably leaves more power budget for Intel to push the frequencies. That being said, AMD keeps power consumption around the TDP values: our Ryzen 7 2700 is especially efficient, while we seem to have an average Ryzen 5 2600. By contrast, the Intel Core i7-8700K blasts past its TDP value very easily, whereas the older Kaby Lake processors are more in line with their TDP ratings.

Many Thanks To

Thank you to Sapphire for providing us with several of their AMD GPUs. We met with Sapphire back at Computex 2016 and discussed a platform for our future testing of AMD GPUs with their hardware for several upcoming projects. Sapphire passed on a pair of RX 460s to be used as our CPU testing cards. The amount of GPU power available can have a direct effect on CPU performance, especially if the CPU has to spend all its time feeding the GPU. The RX 460 is a nice card to have here, as it is capable yet low on power consumption and does not require any additional power connectors. The Sapphire Nitro RX 460 2GB follows on from the Nitro philosophy, and in this case is designed to provide performance at a low price point. Its 896 SPs run at 1090/1216 MHz base/boost frequencies, and it is paired with 2GB of GDDR5 at an effective 7000 MHz.

We must also say thank you to MSI for providing us with their GTX 1080 Gaming X 8GB GPUs. Despite the size of AnandTech, securing high-end graphics cards for CPU gaming tests is rather difficult. MSI stepped up to the plate in good fashion and high spirits with a pair of their high-end graphics cards. The MSI GTX 1080 Gaming X 8GB graphics card is their premium air-cooled product, sitting below the water-cooled Seahawk but above the Aero and Armor versions. The card is large with twin Torx fans, a custom PCB design, Zero-Frozr technology, enhanced PWM, and a big backplate to assist with cooling. The card uses a GP104-400 silicon die from a 16nm TSMC process, contains 2560 CUDA cores, and can run up to 1847 MHz in OC mode (or 1607-1733 MHz in Silent mode). The memory interface is 8GB of GDDR5X, running at 10010 MHz. For a good amount of time, the GTX 1080 was the king of the hill.

Further Reading: AnandTech’s NVIDIA GTX 1080 Founders Edition Review

Thank you to Crucial for providing us with MX200 SSDs. Crucial stepped up to the plate as our benchmark list grows larger with newer benchmarks and titles, and the 1TB MX200 units are strong performers. Based on Marvell's 88SS9189 controller and using Micron's 16nm 128Gbit MLC flash, these are 7mm high, 2.5-inch drives rated for 100K random read IOPS and 555/500 MB/s sequential read and write speeds. The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and have a 320TB rated endurance with a three-year warranty.

Further Reading: AnandTech's Crucial MX200 (250 GB, 500 GB & 1TB) Review

Thank you to Corsair for providing us with an AX1200i PSU. The AX1200i was the first power supply to offer digital control and management via Corsair's Link system, but under the hood it commands a 1200W rating at 50C with 80 PLUS Platinum certification. This allows for a minimum 89-92% efficiency at 115V and 90-94% at 230V. The AX1200i is completely modular, running the larger 200mm design, with a dual ball bearing 140mm fan to assist high-performance use. The AX1200i is designed to be a workhorse, with up to 8 PCIe connectors for suitable four-way GPU setups. The AX1200i also comes with a Zero RPM mode for the fan, which due to the design allows the fan to be switched off when the power supply is under 30% load.

Further Reading: AnandTech's Corsair AX1500i Power Supply Review

Thank you to G.Skill for providing us with memory. G.Skill has been a long-time supporter of AnandTech over the years, for testing beyond our CPU and motherboard memory reviews. We've reported on their high capacity and high-frequency kits, and every year at Computex G.Skill holds a world overclocking tournament with liquid nitrogen right on the show floor.

Further Reading: AnandTech's Memory Scaling on Haswell Review, with G.Skill DDR3-3000

Comments

  • jjj - Thursday, April 19, 2018 - link

    I was wondering about gaming, so there is no mistake there as Ryzen 2 seems to top Intel.
    As of right now, I don't seem to find memory specs in the review yet, safe to assume you did as always, highest non-OC so Ryzen is using faster DRAM?
    Also yet to spot memory letency, any chance you have some numbers at 3600MHz vs Intel? Thanks.
  • jjj - Thursday, April 19, 2018 - link

    And just between us, would be nice to have some Vega gaming results under DX12.
  • aliquis - Thursday, April 19, 2018 - link

    Would be nice if any reviewer actually benchmarked storage devices maybe even virtualization because then we'd see meltdown and spectre mitigation performance. Then again do AMD have any for spectre v2 yet? If not who knows what that will do.
  • HStewart - Thursday, April 19, 2018 - link

    I notice that that systems had higher memory, but for me I believe single threaded performance is more important that more cores. But it would be bias if one platform is OC more than another. Personally I don't over clock - except for what is provided with CPU like Turbo mode.

    One thing that I foresee in the future is Intel coming out with 8 core Coffee Lake

    But at least it appears one thing is over is this Meltdown/Spectre stuff
  • Lolimaster - Thursday, April 19, 2018 - link

    Intel 8 core CL won't stop the bleeding, lose more profits making them "cheap" vs a new Ryzen 7nm with at least 10% more clocks and 10% more IPC, RIP.
  • HStewart - Thursday, April 19, 2018 - link

    I just have to agree to disagree on that statement - especially on "cheap" statement
  • ACE76 - Thursday, April 19, 2018 - link

    CL can't scale to 8 cores...not without done serious changes to it's architecture...Intel is in some trouble with this Ryzen refresh...also worth noting is that 7nm Ryzen 2 will likely bring a considerable performance jump while Intel isn't sitting on anything worthwhile at the moment.
  • Alphasoldier - Friday, April 20, 2018 - link

    All Intel's 8cores in HEDT except SkylakeX are based on their year older architecture with a bigger cache and the quad channel.

    So if Intel have the need, they will simply make a CL 8core. 2700X is pretty hungry when OC'd, so Intel don't have to worry at all about its power consuption.
  • moozooh - Sunday, April 22, 2018 - link

    > 2700X is pretty hungry when OC'd
    And Intel chips aren't? If Zen+ is already on Intel's heels for both performance per watt and raw frequency, a 7nm chip with improved IPC and/or cache is very likely going to have them pull ahead by a significant margin. And even if it won't, it's still going to eat into Intel's profit as their next tech is 10nm vs. AMD's 7nm, meaning more optimal wafer estate utilization for the latter.

    AMD has really climbed back at the top of their game; I've been in the Intel camp for the last 12 years or so, but the recent developments throw me way back to K7 and A64 days. Almost makes me sad that I won't have any reason to move to a different mobo in the next 6–8 years or so.
  • mapesdhs - Friday, March 29, 2019 - link

    Amusing to look back given how things panned out. So yes, Intel released the 9900K, but it was 100% more expensive than the 2700X. :D A complete joke. And meanwhile tech reviewers raved about a peasly 5 to 5.2 oc, on a chip that already has a 4.7 max turbo (major yawn fest), focusing on specific 1080p gaming tests that gave silly high fps number favoured by a market segment that is a tiny minority. Then what happens, RTX comes out and pushes the PR focus right back down to 60Hz. :D

    I wish people to stop drinking the Intel/NVIDIA coolaid. AMD does it aswell sometimes, but it's bizarre how uncritical tech reviewers often are about these things. The 9900K dragged mainstream CPU pricing up to HEDT levels; epic fail. Some said oh but it's great for poorly optimised apps like Premiere, completely ignoring the "poorly optimised" part (ie. why the lack of pressure to make Adobe write better code? It's weird to justify an overpriced CPU on the back of a pro app that ought to run a lot better on far cheaper products).
