CPU & Memory Subsystem

As we mentioned earlier, the Snapdragon 845 is the first SoC to employ ARM’s new cluster technology, DynamIQ. The CPU cores are no longer hosted in dedicated cluster subsystems but are instead integrated into a single larger DynamIQ cluster. This change means that the cluster cache, which to date was referred to as the L2 cache, becomes the L3 cache, while the individual CPU cores each receive a new private per-core L2. In the case of the Snapdragon 845, the Kryo 385 performance cores – derivatives of ARM’s new A75 – are configured with 256KB L2 caches. The Kryo 385 efficiency cores, which are derivatives of the A55, use 128KB L2 cache configurations.

The L3 cache of the DynamIQ Shared Unit (DSU) is configured at 2MB. At the launch of the Snapdragon 845, Qualcomm advertised three voltage and clock domains. Unfortunately we haven’t had time to look deeper into the QRD’s system to find out how this is partitioned; it remains my unconfirmed belief that the third clock/voltage domain is dedicated to the DSU rather than being part of the CPU cores. An important characteristic that is entirely new to the Android SoC ecosystem is the introduction of a system cache – this 3MB cache appears to sit at the memory controller / interconnect level above the CPU subsystem, something we’ll get back to in just a bit.

Qualcomm’s performance projections seemed relatively conservative, claiming a performance uplift of only 25-30% – lower than ARM’s own projections. We have to keep in mind that beyond the microarchitectural improvements expected from the transition from A73- to A75-based CPU cores, we also have a flat 13% frequency increase from 2.47GHz to 2.8GHz on the side of the performance cores. Unfortunately, in the limited testing time we had with the QRD we couldn’t make use of long-running CPU benchmarks such as our SPEC suite, so for the scope of this article we had to base our synthetic analysis on Geekbench 4 results.

Geekbench 4 - Integer Performance (Single Threaded)

| Test | Snapdragon 845 | Snapdragon 835 | % Increase |
|------|----------------|----------------|------------|
| AES | 1160 MB/s | 942.5 MB/s | 23.1% |
| LZMA | 4.15 MB/s | 2.98 MB/s | 39.3% |
| JPEG | 20.8 Mpixels/s | 16.6 Mpixels/s | 25.2% |
| Canny | 32.1 Mpixels/s | 24.9 Mpixels/s | 28.8% |
| Lua | 2.18 MB/s | 1.75 MB/s | 24.0% |
| Dijkstra | 1.90 MTE/s | 1.62 MTE/s | 16.9% |
| SQLite | 70.3 Krows/s | 53.4 Krows/s | 31.8% |
| HTML5 Parse | 12.9 MB/s | 8.97 MB/s | 44.1% |
| HTML5 DOM | 3.00 Melems/s | 2.27 Melems/s | 31.9% |
| Histogram Equalization | 67.3 Mpixels/s | 52.5 Mpixels/s | 28.2% |
| PDF Rendering | 66.4 Mpixels/s | 48.5 Mpixels/s | 37.0% |
| LLVM | 321.2 functions/s | 257.3 functions/s | 24.8% |
| Camera | 7.96 images/s | 5.64 images/s | 40.9% |

For the integer workload results we see a healthy performance increase across the various tests. Qualcomm’s 25-30% claim seems justified, as this is the most common uplift across the tests. Workloads such as LZMA, HTML5 parsing, PDF rendering, and the Camera subtests see larger increases into the 40% range. The overall improvement in absolute performance for the integer tests is 31%.

Geekbench 4 (Single Threaded) Integer Score/MHz

If we revisit performance per clock across recent microarchitectures, we see the Snapdragon 845’s A75-based cores improve by only a meagre 15%, which is below our expectations. We move on to the floating point benchmarks to see whether they tell a similar story.

Geekbench 4 - Floating Point Performance (Single Threaded)

| Test | Snapdragon 845 | Snapdragon 835 | % Increase |
|------|----------------|----------------|------------|
| SGEMM | 16.6 GFLOPS | 11.4 GFLOPS | 45.1% |
| SFFT | 4.23 GFLOPS | 2.86 GFLOPS | 47.9% |
| N-Body Physics | 1400 Kpairs/s | 872.2 Kpairs/s | 60.5% |
| Rigid Body Physics | 8524.2 FPS | 6130.5 FPS | 39.0% |
| Ray Tracing | 354.0 Kpixels/s | 232.7 Kpixels/s | 52.1% |
| HDR | 11.9 Mpixels/s | 8.31 Mpixels/s | 43.2% |
| Gaussian Blur | 34.5 Mpixels/s | 23.9 Mpixels/s | 44.3% |
| Speech Recognition | 17.9 Words/s | 13.6 Words/s | 31.6% |
| Face Detection | 752.4 Ksubs/s | 532.8 Ksubs/s | 41.2% |

The FP subtests of GB4 show a noticeably larger increase than the integer tests. Besides the switch from a 2-wide decode front-end to a 3-wide one, the largest changes in the A75 microarchitecture were found in the floating point execution pipelines, which are likely the cause of the larger FP performance improvement. The boost here comes in at an overall 45% in GB4.

Geekbench 4 (Single Threaded) Floating Point Score/MHz

In terms of performance per clock, the 45% overall boost translates into a 26% increase in IPC – much larger than on the integer side – which is nearer to what we had expected.

Revisiting the performance claims from ARM’s TechDay release of the A75, we notice that we had been promised larger improvements – up to a 34% increase in GB4 performance per clock, which, combined with the Snapdragon 845’s frequency increase, I interpreted as a 52% overall increase. This did not materialise. I reached out to ARM on the topic and got back several points of consideration. The projections ARM published were made on an A75 simulation with 512KB L2 caches and a 2MB L3. The L3 matches the configuration of the Snapdragon 845; however, Qualcomm’s choice of going with smaller L2 caches will come with a certain performance hit. ARM didn’t have a number at hand for GB4, but quotes a 2% performance degradation for SPEC2000 and claims the hit for GB4 should be lower. Another point of consideration is the SoC’s memory subsystem, which ARM can’t control but which heavily impacts CPU performance – so let’s have a look at that.

Running our internal memory benchmark on the QRD, we see several expected characteristics of the Snapdragon 845. Compared to the Snapdragon 835’s A73-based cores, we see the shift from shared cluster L2s to private ones, as well as the integration of what amounts to an L3 and an L4 cache. The new L2 caches are very visible in our benchmark, as memory latency up to the 256KB boundary (or rather the 320KB boundary, as the L1D and L2 are exclusive) sees a vast reduction compared to the L2 region of the A73 cores. The A75 promises 8-cycle L2 hits versus 19 cycles on the A73. In our test the difference is far larger, as we see a reduction from ~30ns down to ~4.5ns (not forgetting the clock frequency increase of the new cache). After the 256/320KB test size boundary we enter the DSU’s L3 cache. ARM describes the L3 as pseudo-exclusive, so the outer boundary should end around or shortly after the 2048KB mark; the transition here is much harder to make out in the limited data we had time to collect, so hopefully we’ll get to revisit it on a commercial device.

On the Snapdragon 835 the transition between L2 cache and DRAM is sharp and clearly visible in the graph. On the Snapdragon 845, however, we see a far more gradual latency transition stretching out to the 5MB test depth. This is confirmation that Qualcomm’s system cache is indeed applied to the CPU subsystem and acts as an exclusive L4 cache for the processors. I think this new system cache is a true SoC-wide cache lying high up at the interconnect or memory controller level.

One of the worries with such a configuration for the CPU subsystem was increased latency to DRAM, and it seems those fears were realised: the Snapdragon 845 shows a 30% increase in main memory latency from the CPU subsystem. Previously, the Snapdragon 835 had by far one of the best memory controller implementations, which directly resulted in higher performance in memory-latency-sensitive workloads. The latency increase in the 845 must thus be counteracting some of the microarchitectural improvements of the CPU cores. For GB4 in particular, I remarked that I didn’t notice any performance impact from the Kirin 970’s memory latency; however, we’re talking about different platforms and CPUs, so I can’t be certain.

We reserve final conclusions on synthetic benchmarks until we get more time with a Snapdragon 845 device and are able to investigate further and run SPEC. For now it looks like the Snapdragon 845 does not reach ARM’s projected performance levels, falling well short of the claims. Another of the performance claims concerned Octane. We retired Octane some years ago, and Google shortly followed up with its official retirement, but as an added data point the Snapdragon 845 reached a score of 15969 versus the Snapdragon 835’s 11879 – also well short of the ~20000 target that the projected 1.48x per-clock performance increase would have implied.


  • Lodix - Monday, February 12, 2018 - link

    That they make their own chip.
  • ZolaIII - Monday, February 12, 2018 - link

Actually the A75 is a bit of a letdown, as it's essentially a refined A73 with three instructions per clock vs two. I assume that with the larger cache, bigger predictor & everything else it's also close to being 50% larger, while only able to achieve a 20~25% performance advantage. Nevertheless, if compared to the A71, which is a similar 3-instructions-per-clock design, the advantage is a nice 30~35%. Neither is really server material & you know that pretty well (of all people around here). We will have to wait & see what Austin cooks up next.
  • ZolaIII - Monday, February 12, 2018 - link

One more thing: FP VFP & especially NEON got the most significant boost from A73 to A75; that's actually the only real architectural improvement this generation. FP performance is very important as it scales rather well with SMP while integer doesn't. Still, given the MP scaling factor & relative power efficiency/performance, the A55s are a much better target for such workloads, using 25% of the power & achieving 85% of the performance per MHz. ARM's NEON SIMD was marginally useful before this gen, as the previous VFP had 98% of NEON's performance while (VFP) being much faster to access, so in many real workloads it was actually faster. ARM boosted NEON performance, but in my opinion not nearly enough to move up a tier. I do agree with you that integer performance is actually rather good for a small, efficient little OoO core, but ARM must do much more on the FP/NEON SIMD side if it wants its cores to become more competitive in the HPC segment. Actually I see this (FP performance) as key. Hopefully they will produce the next key architectural element of a unified SIMD with added multiply and divide subunits, as I see that as the best possible scaling/performance improvement & also a way of avoiding dark silicon in the future. Actually, regarding large NEON SIMD blocks used for server scientific HPC workloads, Fujitsu started working on that a long time ago (two-plus years ago). I just wonder what happened with that.
  • iter - Monday, February 12, 2018 - link

    You are confusing integer and floating point with scalar and vector. SIMD units do vector processing, the vector components can be either integer or floating point. Both are equally useful in HPC, and both get a massive boost from SIMD processing. It is the ALU and the FPU units that do scalar processing, that is one number at a time, of integers and floating point numbers respectively. Those are not used for data crunching, but for managing the program flow, which is beneficial since the lower throughput also means lower latency.

    There is no such thing as a free lunch here. If you want to stay at a lower power target, you have to compromise on the SIMD throughput. There is no way to cheat around that. If ARM chips get SIMD units to match x86 counterparts they will also match their higher power usage.
  • ZolaIII - Monday, February 12, 2018 - link

Lol, both scalar and vector can be FP. I ain't confusing anything, you are... SIMDs are rather efficient, more efficient by an order of magnitude compared to the VFP; that's why SIMD arrays find their way into pretty much any special-purpose or general-purpose computing unit. What I described is a massive unified heterogeneous SIMD array... Now think about it.
  • iter - Tuesday, February 13, 2018 - link

    You are such a dummy. Scalar means "one number", vector means "two or more numbers". The number can be an integer or a floating point number. SIMD instruction sets feature dozens of instructions for processing integer numbers, which are essential to image, video and audio processing, which is all stored using integers.

    In fact, the first SIMD implementation to hit consumer products was intel's MMX, which provided ONLY INTEGER operations.

    As I said - scalar operations involve processing one number at a time and are executed by the ALU or FP unit for integers and real numbers respectively; vector operations involve processing multiple numbers at once and are handled by the SIMD units, regardless of whether it's integers or reals.
  • lmcd - Monday, February 12, 2018 - link

    Wouldn't get too excited, as the A75 was reported to feature a variant of the "Meltdown" bug also affecting Intel CPUs. The performance hit from a patch could be damaging.
  • SirPerro - Monday, February 12, 2018 - link

    I'm more interested in the mid-range processors to drive devices like the Moto G Plus series

    Right now, an SD845 is extraordinarily excessive for like... 95% of the Android use cases.

    It's like... "OK, that 1080 Ti GPU is really nice, but how good is the 1060 I will actually pay for?"
  • imaheadcase - Monday, February 12, 2018 - link

    The irony of all this is that software is going to make the difference more than this SoC. You can have the best SoC and put in a shit phone.
  • yeeeeman - Monday, February 12, 2018 - link

    First of all, nice review Andrei – coming from a Romanian guy like you.
    Related to the SD845: this chip is a nice bump over the 835, but I cannot help but wonder if this yearly cadence is really a necessity or just a yearly money grab.
    I want to replace my Z3 Compact (SD801) with something new, but I feel like the best has yet to come. In 2019 we will have 5G modems, 11ax WiFi chips, a new uArch from ARM, aaand 7nm. This chip is just an intermediate step to have something to sell this year, but in any case, nice work as usual from Qualcomm.
