Instruction Changes

Both of the processor cores inside Alder Lake are brand new, and they build on the previous generation Core and Atom designs in multiple ways. As always, Intel gave us a high-level overview of the microarchitecture changes, which we covered in our Architecture Day article:

At the highest level, the P-core supports a 6-wide decode (up from 4), and has split the execution ports to allow more operations to execute at once, enabling higher IPC and ILP from workloads that can take advantage of it. A wider decoder usually consumes a lot more power, but Intel says that its micro-op cache (now 4K entries) and front-end are improved enough that the decode engine spends 80% of its time power gated.

The E-core similarly has a 6-wide decode, although split as 2x3-wide. It has 17 execution ports, backed by double the load/store support of the previous generation Atom core. Beyond this, Gracemont is the first Atom core to support AVX2 instructions.

As part of our analysis into new microarchitectures, we also do an instruction sweep to see what other benefits have been added. What follows is a raw list of changes that we are still in the process of going through; please forgive the unpolished data. Big thanks to our industry friends who help with this analysis.

Any entry listed below as A|B means a latency of A clocks and a reciprocal throughput of B clocks per instruction.
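These two numbers are measured differently: latency comes from a chain of dependent instructions, while reciprocal throughput comes from many independent ones. Below is a minimal C sketch of the idea; it is our own illustration (not the harness behind the numbers in this article), it assumes x86-64 with GCC/Clang inline assembly, and it approximates core clocks with RDTSC, so pin the thread and disable turbo for cleaner results.

    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>                /* __rdtsc() */

    #define N 100000000ULL

    int main(void) {
        uint64_t x = 0, a = 0, b = 0, c = 0, d = 0, t0, t1, i;

        /* Latency: every ADD depends on the previous one, so the loop
           runs at one iteration per ADD latency. */
        t0 = __rdtsc();
        for (i = 0; i < N; i++)
            __asm__ volatile("add %1, %0" : "+r"(x) : "r"(i));
        t1 = __rdtsc();
        printf("dependent ADD:   ~%.2f clks/insn (latency bound)\n",
               (double)(t1 - t0) / (double)N);

        /* Reciprocal throughput: four independent chains let the core
           overlap the ADDs across its ALU ports. */
        t0 = __rdtsc();
        for (i = 0; i < N; i++)
            __asm__ volatile("add %4, %0\n\tadd %4, %1\n\t"
                             "add %4, %2\n\tadd %4, %3"
                             : "+r"(a), "+r"(b), "+r"(c), "+r"(d)
                             : "r"(i));
        t1 = __rdtsc();
        printf("independent ADD: ~%.2f clks/insn (throughput bound)\n",
               (double)(t1 - t0) / (double)(4 * N));

        return (int)(x + a + b + c + d);   /* keep results live */
    }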

 

P-core: Golden Cove vs Cypress Cove

Microarchitecture Changes:

  • 6-wide decoder with a 32B fetch window: code size becomes much less important, e.g. 3x MOV imm64 per clock (the last similar 50% jump was Pentium -> Pentium Pro in 1995; Conroe in 2006 was only a 3->4 jump)
  • Triple load: (almost) universal
    • every GPR, SSE, VEX, EVEX load gains (only MMX loads are unsupported)
    • BROADCAST*, GATHER*, PREFETCH* also gain
  • Decoupled double FADD units
    • every single and double SIMD VADD/VSUB (and AVX VADDSUB* and VHADD*/VHSUB*) has latency gains
    • when fed by another ADD/SUB, latency is 4->2 clks
    • when fed by a MUL, latency is 4->3 clks
    • AVX-512 support: 512b ADD/SUB reciprocal throughput 0.5, as in the server parts!
    • exception: half precision ADD/SUB handled by FMAs
    • exception: x87 FADD remained 3 clks
  • Some forms of GPR (general purpose register) immediate addition are treated as NOPs (removed at the "allocate/rename/move elimination/zeroing idioms" step)
    • LEA r64, [r64+imm8]
    • ADD r64, imm8
    • ADD r64, imm32
    • INC r64
    • Is this just for 64b addition GPRs?
  • eliminated instructions:
    • MOV r32/r64
    • (V)MOV(A/U)(PS/PD/DQ) xmm, ymm
    • 0-5 0x66 NOP
    • LNOP3-7
    • CLC/STC
  • zeroing idioms (see the sketch after this list):
    • (V)XORPS/PD, (V)PXOR xmm, ymm
    • (V)PSUB(U)B/W/D/Q xmm
    • (V)PCMPGTB/W/D/Q xmm
    • (V)PXOR xmm
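To make the zeroing idioms concrete, here is a hedged C sketch (ours, compilable with any x86-64 compiler): a self-XOR compiles to (V)PXOR xmm, xmm, which the renamer resolves without consuming an execution port or waiting on the old register value.

    #include <immintrin.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void)argv;
        __m128i a = _mm_set1_epi32(argc);      /* some runtime value */
        /* Compiles to "(v)pxor xmm, xmm": recognized at rename as a
           zeroing idiom, dependency-breaking, no execution port used. */
        __m128i z = _mm_xor_si128(a, a);
        int out[4];
        _mm_storeu_si128((__m128i *)out, z);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 0 0 0 0 */
        return 0;
    }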

Faster GPR instructions (vs Cypress Cove):

  • LOCK latency 20->18 clks
  • LEA with scale throughput 2->3/clk
  • (I)MUL r8 latency 4->3 clks
  • LAHF latency 3->1 clks
  • CMPS* latency 5->4 clks
  • REP CMPSB 1->3.7 bytes/clock
  • REP SCASB 0.5->1.85 bytes/clock
  • REP MOVS* 115->122 bytes/clock (see the copy sketch after this list)
  • CMPXCHG16B 20|20 -> 16|14
  • PREFETCH* throughput 1->3/clk
  • ANDN/BLSI/BLSMSK/BLSR throughput 2->3/clock
  • SHA1RNDS4 latency 6->4
  • SHA1MSG2 throughput 0.2->0.25/clock
  • SHA256MSG2 11|5->6|2
  • ADC/SBB (r/e)ax 2|2 -> 1|1
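The REP string numbers above are relevant beyond hand-written assembly: glibc's memcpy/memset can dispatch large copies to REP MOVSB/STOSB on CPUs that advertise ERMSB/FSRM. A hedged inline-assembly sketch of the primitive itself (x86-64, GCC/Clang constraint syntax; the function name is ours):

    #include <stdio.h>
    #include <stddef.h>

    /* dst, src, and n land in RDI, RSI, and RCX, which is exactly the
       register contract "rep movsb" expects. */
    static void copy_rep_movsb(void *dst, const void *src, size_t n) {
        __asm__ volatile("rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory");
    }

    int main(void) {
        char src[] = "fast string copy";
        char dst[sizeof src];
        copy_rep_movsb(dst, src, sizeof src);
        puts(dst);
        return 0;
    }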

Faster SIMD instructions (vs Cypress Cove):

  • *FADD xmm/ymm latency 4->3 clks (after MUL)
  • *FADD xmm/ymm latency 4->2 clks (after ADD)
  • * means (V)(ADD/SUB/ADDSUB/HADD/HSUB)(PS/PD) affected
  • V(ADD/SUB)(PS/PD) zmm 4|1->3.3|0.5
  • CLMUL xmm  6|1->3|1
  • CLMUL ymm, zmm 8|2->3|1
  • VPGATHERDQ xmm, [xm32], xmm 22|1.67->20|1.5 clks
  • VPGATHERDD ymm, [ym32], ymm throughput 0.2 -> 0.33/clock
  • VPGATHERQQ ymm, [ym64], ymm throughput 0.33 -> 0.50/clock (intrinsics sketch below)
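For reference, the gather forms measured above are what the AVX2 gather intrinsics compile to. A small sketch (array contents and names are ours) of VPGATHERDD ymm, [ym32], ymm via _mm256_i32gather_epi32, compiled with -mavx2:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        int table[64];
        for (int i = 0; i < 64; i++) table[i] = i * 10;

        /* Eight 32-bit indices, strided through the table. */
        __m256i idx = _mm256_setr_epi32(0, 8, 16, 24, 32, 40, 48, 56);
        /* One VPGATHERDD: eight 32-bit loads, scale = 4 bytes. */
        __m256i v = _mm256_i32gather_epi32(table, idx, 4);

        int out[8];
        _mm256_storeu_si256((__m256i *)out, v);
        for (int i = 0; i < 8; i++) printf("%d ", out[i]);
        printf("\n");                       /* 0 80 160 ... 560 */
        return 0;
    }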

Regressions, Slower instructions (vs Cypress Cove):

  • Store-to-Load-Forward 128b 5->7, 256b 6->7 clocks (pattern illustrated after this list)
  • PAUSE latency 140->160 clocks
  • LEA with scale latency 2->3 clocks
  • (I)DIV r8 latency 15->17 clocks
  • FXCH throughput 2->1/clock
  • LFENCE latency 6->12 clocks
  • VBLENDV(B/PS/PD) xmm, ymm 2->3 clocks
  • (V)AESKEYGEN latency 12->13 clocks
  • VCVTPS2PH/PH2PS latency 5->6 clocks
  • BZHI throughput 2->1/clock
  • VPGATHERDD ymm, [ym32], ymm latency 22->24 clocks
  • VPGATHERQQ ymm, [ym64], ymm latency 21->23 clocks
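The store-to-load-forward regression at the top of this list applies to a very common pattern: a load that reads bytes still sitting in the store buffer. A short C sketch (ours) of code whose critical path is exactly that forwarding latency:

    #include <immintrin.h>
    #include <stdio.h>

    /* The scalar loads read bytes just written by the 128b store, so
       they are serviced by store-to-load forwarding (5->7 clks on
       Golden Cove), not by the L1 cache. */
    static float sum_via_memory(__m128 v, float *spill) {
        _mm_storeu_ps(spill, v);                        /* 128b store */
        return spill[0] + spill[1] + spill[2] + spill[3];
    }

    int main(void) {
        float buf[4];
        __m128 v = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
        printf("%f\n", sum_via_memory(v, buf));         /* 10.000000 */
        return 0;
    }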

 

E-core: Gracemont vs Tremont

Microarchitecture Changes:

  • Dual 128b store ports (work with every GPR, PUSH, MMX, SSE, AVX, non-temporal m32, m64, m128)
  • Zen2-like memory renaming with GPRs
  • New zeroing idioms
    • SUB r32, r32
    • SUB r64, r64
    • CDQ, CQO
    • (V)PSUBB/W/D/Q/SB/SW/USB/USW
    • (V)PCMPGTB/W/D/Q
  • New ones idiom: (V)PCMPEQB/W/D/Q (see the sketch after this list)
  • MOV elimination: MOV; MOVZX; MOVSX r32, r64
  • NOP elimination: NOP, 1-4 0x66 NOP throughput 3->5/clock, LNOP 3, LNOP 4, LNOP 5
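The ones idiom is the all-ones counterpart of the XOR zeroing trick: comparing a register with itself via PCMPEQ always produces 0xFF...FF, and Gracemont now recognizes it at rename as dependency-breaking. A hedged sketch (ours):

    #include <immintrin.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void)argv;
        __m128i a = _mm_set1_epi32(argc);       /* any runtime value */
        /* Compiles to "pcmpeqd xmm, xmm": every lane becomes all-1s
           regardless of the old contents, so no real dependency. */
        __m128i ones = _mm_cmpeq_epi32(a, a);
        unsigned out[4];
        _mm_storeu_si128((__m128i *)out, ones);
        printf("%08x %08x %08x %08x\n", out[0], out[1], out[2], out[3]);
        return 0;                               /* ffffffff x4 */
    }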

Faster GPR instructions (vs Tremont):

  • PAUSE latency 158->62 clocks
  • MOVSX; SHL/R r, 1; SHL/R r, imm8 throughput 1->0.25
  • ADD; SUB; CMP; AND; OR; XOR; NEG; NOT; TEST; MOVZX; BSWAP; LEA [r+r]; LEA [r+disp8/32] throughput 3->4 per clock
  • CMOV* throughput 1->2 per clock
  • RCR r, 1 10|10 -> 2|2
  • RCR/RCL r, imm/cl 13|13->11|11
  • SHLD/SHRD r1_32, r1_32, imm8 2|2 -> 2|0.5
  • MOVBE throughput 1->0.5 clocks
  • (I)MUL r32 3|1 -> 3|0.5
  • (I)MUL r64 5|2 -> 5|0.5
  • REP STOSB/STOSW/STOSD/STOSQ 15/8/12/11 bytes/clock -> 15/15/15/15 bytes/clock

Faster SIMD instructions (vs Tremont):

  • A lot of xmm SIMD throughput is 4/clock instead of the theoretical maximum(?) of 3/clock; we are not sure how this is possible
  • MASKMOVQ throughput 1 per 104 clocks -> 1 per clock
  • PADDB/W/D; PSUBB/W/D; PAVGB/PAVGW 1|0.5 -> 1|0.33
  • PADDQ/PSUBQ/PCMPEQQ mm, xmm 2|1 -> 1|0.33
  • PShift (x)mm, (x)mm 2|1 -> 1|0.33
  • PMUL*, PSADBW mm, xmm 4|1 -> 3|1
  • ADD/SUB/CMP/MAX/MINPS/PD 3|1 -> 3|0.5
  • MULPS/PD 4|1 -> 4|0.5
  • CVT*, ROUND xmm, xmm 4|1 -> 3|1
  • BLENDV* xmm, xmm 3|2 -> 3|0.88
  • AES, GF2P8AFFINEQB, GF2P8AFFINEINVQB xmm 4|1 -> 3|1
  • SHA256RNDS2 5|2 -> 4|1
  • PHADD/PHSUB* 6|6 -> 5|5

Regressions, Slower (vs Tremont):

  • m8, m16 load latency 4->5 clocks
  • ADD/MOVBE load latency 4->5 clocks
  • LOCK ADD 16|16->18|18
  • XCHG mem 17|17->18|18
  • (I)DIV +1 clock
  • DPPS 10|1.5 -> 18|6
  • DPPD 6|1 -> 10|3.5 (a shuffle/add replacement is sketched after this list)
  • FSIN/FCOS +12% slower
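Given the DPPS/DPPD regression above, one common workaround (our sketch, not official Intel guidance) is to replace the dot-product instruction with a multiply plus a shuffle/add reduction, which only uses operations that improved on this core:

    #include <immintrin.h>
    #include <stdio.h>

    /* 4-element float dot product without DPPS: MULPS and ADDPS both
       gained throughput on Gracemont (see the lists above). */
    static float dot4(__m128 a, __m128 b) {
        __m128 m  = _mm_mul_ps(a, b);                /* lane products    */
        __m128 hi = _mm_movehl_ps(m, m);             /* lanes 2,3 -> 0,1 */
        __m128 s  = _mm_add_ps(m, hi);               /* 0+2, 1+3         */
        __m128 s1 = _mm_shuffle_ps(s, s, _MM_SHUFFLE(1, 1, 1, 1));
        return _mm_cvtss_f32(_mm_add_ss(s, s1));     /* (0+2)+(1+3)      */
    }

    int main(void) {
        __m128 a = _mm_setr_ps(1, 2, 3, 4);
        __m128 b = _mm_setr_ps(5, 6, 7, 8);
        printf("%f\n", dot4(a, b));                  /* 70.000000 */
        return 0;
    }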

 


474 Comments


  • michael2k - Thursday, November 4, 2021 - link

    One is a bellwether for the other.

    Mobile parts will have cores and clocks slashed to hit mobile power levels: 7W-45W with 2p2e to 6p8e configurations.

    However, given that a single P core in the desktop variant can burn 78W in POV Ray, and they want 6 of them in a mobile part under 45W, that means a lot of restrictions apply.

    Even 8 E cores, per this review, clock in at 48W!

    That suggests a 6p8e part can't be anywhere near the desktop part's 5.2GHz/3.9GHz Turbo clocks. If there is a linear power-clock relationship (no change in voltage), then 8 E cores at 3GHz will be the norm. If 6 P cores on POV-Ray burn 197W, then hitting 45W would mean throttling all 6 cores to 1.2GHz

    https://hothardware.com/news/intel-alder-lake-p-mo...
  • siuol11 - Thursday, November 4, 2021 - link

    Except that we know that the power-clock ratio is not linear and never has been. You can drop a few hundred MHz off of any Intel chip for the past 5 generations and get a much better performance per watt ratio. This is why mobile chips don't lose a lot of MHz compared to desktop chips.
  • michael2k - Thursday, November 4, 2021 - link

    We already know their existing Ice Lake 10nm 4C mobile parts are capped at 1.2GHz to hit 10W:
    https://www.anandtech.com/show/15657/intels-new-si...

    A 6p8e part might not clock that low, but I'm certain that they will have to for the theoretical 7W parts.

    Here's a better 10nm data point showing off their 15W-28W designs:
    https://www.anandtech.com/show/14664/testing-intel...

    4C 2.3GHz 28W TDP

    Suggests that a 4pNe part might be similar, while the 6p8e part would probably be a 2.3GHz part that could turbo a single core up to 4GHz, or all cores to 3.6GHz
  • TheinsanegamerN - Thursday, November 4, 2021 - link

    Yes, once it gets in the way of performance. Intel's horrible efficiency means you need high-end water cooling to keep it running, whereas AMD does not. Intel's inefficiency is going to be an issue for those who like air cooling, which is a lot of the market.
  • Wrs - Thursday, November 4, 2021 - link

    Trouble is I'm not seeing "horrible efficiency" in these benchmarks. The 12900k is merely pushed far up the curve in some of these benches - if the Zen3 parts could be pushed that far up, efficiency would likewise drop quite a bit faster than performance goes up. Some people already do that. PBO on the 5900x does up to about 220W (varies on the cooler).
  • jerrylzy - Friday, November 5, 2021 - link

    PBO is garbage. You can restrict EDC to 140A, loosen the other restrictions, and achieve better performance than setting EDC to 220A.
  • Spunjji - Friday, November 5, 2021 - link

    "if the Zen3 parts could be pushed that far up"
    But you wouldn't, because you'd get barely any more performance for increased power draw. This is a decision Intel made for the default shipping configuration and it needs to be acknowledged as such.
  • Wrs - Saturday, November 6, 2021 - link

    As a typical purchaser of K chips the default shipping configuration holds rather little weight. A single BIOS switch (PBO on AMD, MTP on Intel), or one slight change to Windows power settings, is pretty much all the efficiency difference between 5950x and 12900k. It pains me every time I see a reviewer or reader fail to realize that. The chips trade blows on the various benches because they're so similar in efficiency, yet each by their design has strong advantages in certain commonplace scenarios.
  • Spunjji - Friday, November 5, 2021 - link

    If the competition are able to offer similar performance and you don't have to shell out the cash and space for a 360mm AIO to get it, that's a relevant advantage. If those things don't bother you then it's fine, though - but we're in a situation where AMD's best is much more power efficient than Intel's at full load, albeit Intel appears to reverse that at lower loads.
  • geoxile - Thursday, November 4, 2021 - link

    Clock/power scales geometrically. The 5900HS retains ~85% of the 5800X's performance while using 35-40W sustained power vs 110-120W for the 5800X. That's almost 3x more efficient. Intel is clocking desktop ADL to the moon, but that doesn't mean ADL will scale down poorly. If anything, I expect it to scale down very well, since the E-cores are very performant while using a fraction of the power and, according to Intel, can operate at lower voltages than the P-cores, so they can scale down even lower than big cores like the ADL P-cores and Zen 3. ADL mobile should be way more interesting than ADL desktop.
