Translating to IPC: All This for 3%?

Contrary to popular belief, increasing IPC is difficult. Ensuring that each execution port is fed every cycle requires wide decoders, large out-of-order queues, fast caches, and the right execution port configuration. It might sound easy to pile all of this on, but both physics and economics get in the way: the chip still has to be thermally efficient, and it has to make money for the company. Every generational design update goes for what is called the ‘low-hanging fruit’: the identified changes that give the most gain for the least effort. Reducing cache latency, for example, is rarely the easiest task, and to non-semiconductor engineers (myself included) it sounds like a lot of work for a small gain.

For our IPC testing, we use the following rules. Each CPU is allocated four cores, without extra threading, and power modes are disabled so that the cores run at one specific frequency only. The DRAM is set to what the processor officially supports: for the new CPUs that is DDR4-2933, and for the previous generation DDR4-2666. I have recently seen threads disputing whether this is fair; this is an IPC test, not an instruction efficiency test. Official DRAM support is part of the hardware specification, just as much as the size of the caches or the number of execution ports. Running the two CPUs at the same DRAM frequency would give an unfair advantage to one of them (effectively a memory overclock or underclock) and deviate from the intended design.

So in our test, we take the new Ryzen 7 2700X, the first generation Ryzen 7 1800X, and the pre-Zen, Bristol Ridge-based A12-9800, which also sits on the AM4 platform and uses DDR4. We set each processor to four cores, no multi-threading, and 3.0 GHz, then ran through some of our tests.
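To make that concrete, below is a minimal sketch of the comparison logic. Since performance is approximately IPC multiplied by frequency, locking every chip to the same clock and core count means the ratio of benchmark scores approximates the ratio of IPC. The scores here are illustrative placeholders, not our measured results.

```python
# Minimal sketch, not our actual test harness: with the workload, core
# count, and clock speed held equal, score ratios approximate IPC ratios.

# Hypothetical scores (higher is better) at 4C/4T, 3.0 GHz fixed.
scores = {
    "Ryzen 7 1800X (Zen)": 100.0,  # baseline
    "Ryzen 7 2700X (Zen+)": 103.0,
    "A12-9800 (Bristol Ridge)": 72.0,
}

baseline = scores["Ryzen 7 1800X (Zen)"]
for cpu, score in scores.items():
    # performance ~ IPC * frequency; frequency is fixed here, so the
    # score ratio is (to a first approximation) the IPC ratio
    print(f"{cpu}: {score / baseline:.1%} relative IPC")
```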

For this graph we have rooted the first generation Ryzen 7 1800X as our 100% marker, with the blue columns representing the Ryzen 7 2700X. The problem with trying to identify a 3% IPC increase is that 3% can easily fall within the noise of a benchmark run: if the caches are not fully warmed before the run, for example, performance can differ. As shown above, a good number of tests fall within that +/- 2% range.

However, the compute-heavy tasks show 3-4% benefits: Corona, LuxMark, Cinebench, and GeekBench are the ones here. We haven’t included the GeekBench sub-test results in the graph above, but most of those fall into the 2-5% range for gains.

If we take out the Cinebench R15 nT result and the GeekBench memory tests, the average across all of the remaining tests comes out to a +3.1% gain for the new Ryzen 7 2700X. That is bang on the money for what AMD stated it would do.
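As a rough sketch of how that headline figure falls out, the process is: normalize each result against the 1800X, drop the two outliers, and average what remains. The per-test numbers below are illustrative placeholders chosen to land near +3.1%, not the measured data behind the graph.

```python
# Hypothetical per-test results for the 2700X relative to the 1800X
# (1.00 = parity); placeholder values, not our measured data.
relative_results = {
    "Corona": 1.035,
    "LuxMark": 1.04,
    "Cinebench R15 1T": 1.03,
    "GeekBench (overall)": 1.02,
    "Cinebench R15 nT": 1.22,   # SMT-related outlier, excluded below
    "GeekBench memory": 1.01,   # memory sub-test, excluded below
}

EXCLUDED = {"Cinebench R15 nT", "GeekBench memory"}
kept = [r for name, r in relative_results.items() if name not in EXCLUDED]

average = sum(kept) / len(kept)
print(f"Average gain: {average - 1:+.1%}")  # +3.1% with these placeholders
```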

Cycling back to that Cinebench R15 nT result, which showed a 22% gain: we also ran some IPC testing at 3.0 GHz with 8C/16T (which we could not compare to Bristol Ridge), and a few of those tests also showed 20%+ gains. This is probably a sign that AMD has also adjusted how it manages its simultaneous multi-threading. This requires further testing.

AMD’s Overall 10% Increase

Given the benefits of the 12LP manufacturing process, a few editors internally have questioned exactly why AMD did not redesign certain elements of the microarchitecture to take advantage of it. Ultimately it would appear that the ‘free’ frequency boost made re-using the same design worthwhile: as mentioned previously, 12LP is based on 14LPP with performance improvements, and in the past it might not even have been billed as a separate process. Pushing through the same design is an easy win, allowing the teams to focus on the next major core redesign.

That all being said, AMD has already stated its intentions for the Zen+ core design: rolling back to CES at the beginning of the year, AMD said that it wanted Zen+ and future products to go above and beyond the ‘industry standard’ of a 7-8% performance gain each year.

Clearly a 3% IPC gain is not enough on its own, so AMD is combining it with the +250 MHz increase, which is about another 6% in peak frequency, plus better turbo behavior through Precision Boost 2 / XFR 2. Combined, this is about 10%, on paper at least. Benchmarks to follow.
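The back-of-the-envelope math is easy to check. Treating the IPC and frequency gains as independent multipliers is our simplification, and the ~4.1 GHz previous-generation peak clock below is a stand-in value:

```python
ipc_gain = 0.03                    # ~3% IPC uplift from the tests above
prev_peak_ghz = 4.1                # stand-in previous-generation peak clock
freq_gain = 0.250 / prev_peak_ghz  # +250 MHz is ~6% at these clocks

combined = (1 + ipc_gain) * (1 + freq_gain) - 1
print(f"Combined uplift: {combined:.1%}")  # ~9.3%; better turbo behavior
                                           # rounds this up to 'about 10%'
```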

Comments (545)

  • bryanlarsen - Thursday, April 19, 2018 - link

    Just because transistors can be 15% smaller doesn't mean that they have to be. Every IC design includes transistors of many different sizes. GF is saying that the minimum transistor size is 15% smaller than the previous minimum transistor size. And it seems that AMD chose not to use them, opting instead for a larger, higher-performance transistor that happens to be the same size as the previous one.
  • bryanlarsen - Thursday, April 19, 2018 - link

    And you confirm that in the next paragraph. "AMD confirmed that they are using 9T transistor libraries, also the same as the previous generation, although GlobalFoundries offers a 7.5T design as well." So please delete your very misleading transistor diagram and accompanying text.
  • danjw - Friday, April 20, 2018 - link

    I think you are misreading that part of the article. AMD shrunk the processor blocks, giving them more "dark silicon" between the blocks. This allows better thermal isolation between blocks, and thus higher clocks.
  • The Hardcard - Thursday, April 19, 2018 - link

    “Cache Me Ousside, How Bow Dah?“

    Very low hanging fruit, yet still so delicious.
  • msroadkill612 - Thursday, April 19, 2018 - link

    "Intel is expected to have a frequency and IPC advantage
    AMD’s counter is to come close on frequency and offer more cores at the same price

    It is easy for AMD to wave the multi-threaded crown with its internal testing, however the single thread performance is still a little behind."

    If so, why is it given such emphasis? It's increasingly a corner-case benefit as game devs begin to use the new mainstream multi-core platforms. Until oh so recently, the norm was probably 2 cores, so that's what they coded for - THEN.

    This minor advantage, compared to Intel getting absolutely smashed on increasingly multi-threaded apps at any price point, is rarely mentioned in proximity, where it deserves to be in a balanced analysis.
  • Ratman6161 - Thursday, April 19, 2018 - link

    "its increasingly a corner xase benefit as game devs begin to use the new mainstream multi core platforms" As I often do, I'd like to remind people that not all readers of this article are gamers or give a darn about games. I am one of those i.e. game performance is meaningless to me.
  • 0ldman79 - Thursday, April 19, 2018 - link

    Agreed.

    I am a gamer, but the gaming benchmarks are nearly irrelevant at this point.

    Almost every CPU (ignoring Atom) can easily feed a modern video card and keep the framerate above 60fps. I'm running an FX 6300 and I still run everything at 1080p with a GTX 970 and hardly ever see a framerate drop.

    Gaming benches are somewhat less important than in days gone by. Everything on the market hits the minimum requirement and then some. It's primarily fuel for the fanboys: "OMG!!! AMD sucks!!! Intel is faster at gaming!!!"

    Well, considering Intel is running 200fps and AMD is hitting 175fps I'm *thinking* they're both playable.
  • Akkuma - Thursday, April 19, 2018 - link

    Gaming + streaming benchmarks, as done by GamersNexus, are exactly the kind of relevant and important benchmarks more sites need to be doing. Those numbers you don't care about are much more important when you start trying to do streaming.

    Your 60fps? That isn't even what most gamers care about, with high refresh rate monitors doing 144Hz+. Add in streaming, where you're taking a decent FPS hit, and that difference between 200 and 175 fps is suddenly the difference between maintaining 144Hz and not.
  • Vesperan - Thursday, April 19, 2018 - link

    Yea but... of all the people interested in gaming, those with high refresh rate monitors and/or streaming online are what - 10% of the market? Tops?

    Sure, the GamersNexus reviews have relevance... to that distinct minority of people out there. Condemning or praising CPU architectures for gaming in general due to these corner cases is nonsensical.

    Like Oldman79 said, damn near any of these CPUs is fine for gaming - unless you happen to be one of the corner cases.
  • Akkuma - Friday, April 20, 2018 - link

    You're pulling a number out of thin air and building an entire argument around it. 72% of Steam users have 1080p monitors. What percentage of those are high refresh rate is unknown, but 120Hz monitors have existed for at least 5 years now, maybe longer. At this stage, arguing around 60fps is like arguing about the sound quality of cassettes: we are long past it.
