Instruction Changes

Both of the processor cores inside Alder Lake are brand new – they build on the previous generation Core and Atom designs in multiple ways. As always, Intel gives us a high-level overview of the microarchitecture changes, as we covered in our Architecture Day article:

At the highest level, the P-core supports a 6-wide decode (up from 4), and has split the execution ports to allow for more operations to execute at once, enabling higher IPC and ILP from workloads that can take advantage. Usually a wider decode consumes a lot more power, but Intel says that its micro-op cache (now 4K) and front-end are improved enough that the decode engine spends 80% of its time power gated.

The E-core similarly has a 6-wide decode, although split into two 3-wide clusters. It has 17 execution ports, buffered by double the load/store support of the previous generation Atom core. Beyond this, Gracemont is the first Atom core to support AVX2 instructions.

As part of our analysis into new microarchitectures, we also do an instruction sweep to see what other benefits have been added. The following is literally a raw list of changes, which we are still in the process of going through. Please forgive the raw data. Big thanks to our industry friends who help with this analysis.

Anything listed below as A|B means A in latency (clocks) and B in reciprocal throughput (clocks per instruction).

 

P-core: Golden Cove vs Cypress Cove

Microarchitecture Changes:

  • 6-wide decoder with a 32-byte window: code size becomes much less important, e.g. 3x MOV imm64 per clock (the last similar 50% jump was Pentium -> Pentium Pro in 1995; Conroe in 2006 was only a 3->4 jump)
  • Triple load: (almost) universal
    • every GPR, SSE, VEX, and EVEX load benefits (only MMX loads are unsupported)
    • BROADCAST*, GATHER*, PREFETCH* also benefit
  • Decoupled double FADD units
    • every single and double SIMD VADD/VSUB (and AVX VADDSUB* and VHADD*/VHSUB*) has latency gains
    • chained with another ADD/SUB: 4->2 clks
    • chained with a MUL: 4->3 clks
    • AVX512 support: 512b ADD/SUB rec. throughput 0.5, as in server!
    • exception: half precision ADD/SUB handled by FMAs
    • exception: x87 FADD remained 3 clks
  • Some forms of GPR (general purpose register) immediate addition are treated as NOPs (removed at the "allocate/rename/move elimination/zeroing idioms" stage)
    • LEA r64, [r64+imm8]
    • ADD r64, imm8
    • ADD r64, imm32
    • INC r64
    • Is this just for 64-bit GPR additions?
  • eliminated instructions:
    • MOV r32/r64
    • (V)MOV(A/U)(PS/PD/DQ) xmm, ymm
    • 0-5 0x66 NOP
    • LNOP3-7
    • CLC/STC
  • zeroing idioms (see the sketch after this list):
    • (V)XORPS/PD, (V)PXOR xmm, ymm
    • (V)PSUB(U)B/W/D/Q xmm
    • (V)PCMPGTB/W/D/Q xmm
    • (V)PXOR xmm
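
To make the elimination and zeroing-idiom entries concrete, here is a minimal C sketch of how compilers hit these idioms (our own illustration, not Intel's material; function names are ours):

```c
#include <immintrin.h>

// Compilers materialize zero with a self-XOR: "xor eax, eax" for the GPR
// case, "vxorps ymm0, ymm0, ymm0" for the vector case. Golden Cove
// recognizes both at the allocate/rename stage, so they consume no
// execution port and break any dependency on the register's old value.
int zero_int(void)    { return 0; }
__m256 zero_vec(void) { return _mm256_setzero_ps(); }
```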

Faster GPR instructions (vs Cypress Cove):

  • LOCK latency 20->18 clks
  • LEA with scale throughput 2->3/clk
  • (I)MUL r8 latency 4->3 clks
  • LAHF latency 3->1 clks
  • CMPS* latency 5->4 clks
  • REP CMPSB 1->3.7 Bytes/clock
  • REP SCASB 0.5->1.85 Bytes/clock
  • REP MOVS* 115->122 Bytes/clock (see the sketch after this list)
  • CMPXCHG16B 20|20 -> 16|14
  • PREFETCH* throughput 1->3/clk
  • ANDN/BLSI/BLSMSK/BLSR throughput 2->3/clock
  • SHA1RNDS4 latency 6->4
  • SHA1MSG2 throughput 0.2->0.25/clock
  • SHA256MSG2 11|5->6|2
  • ADC/SBB (r/e)ax 2|2 -> 1|1
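
The REP string figures above are measured on plain fast-string loops; a minimal copy sketch using GCC/Clang inline assembly (our own illustration):

```c
#include <stddef.h>

// Fast-string copy: with ERMSB/FSRM the microcode turns a bare REP MOVSB
// into wide cache-line moves, which is where figures like the
// ~115 -> ~122 bytes/clock above come from.
static void repmovsb_copy(void *dst, const void *src, size_t n) {
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}
```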

Faster SIMD instructions (vs Cypress Cove):

  • *FADD xmm/ymm latency 4->3 clks (after MUL)
  • *FADD xmm/ymm latency 4->2 clks (after ADD)
  • * means (V)(ADD/SUB/ADDSUB/HADD/HSUB)(PS/PD) affected
  • VADD/VSUB PS/PD zmm 4|1 -> 3.3|0.5
  • CLMUL xmm 6|1 -> 3|1
  • CLMUL ymm, zmm 8|2 -> 3|1
  • VPGATHERDQ xmm, [xm32], xmm 22|1.67->20|1.5 clks
  • VPGATHERDD ymm, [ym32], ymm throughput 0.2 -> 0.33/clock (see the sketch after this list)
  • VPGATHERQQ ymm, [ym64], ymm throughput 0.33 -> 0.50/clock
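
For reference, the gather forms above map to intrinsics like the one below; a minimal sketch assuming AVX2 support (function name is ours):

```c
#include <immintrin.h>

// VPGATHERDD ymm: gathers eight 32-bit elements via 32-bit indices; this
// is the form whose throughput improves from 0.2 to 0.33 gathers/clock.
__m256i gather8(const int *base, __m256i idx) {
    return _mm256_i32gather_epi32(base, idx, 4);  // scale = 4 bytes/element
}
```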

Regressions, Slower instructions (vs Cypress Cove):

  • Store-to-Load-Forward 128b 5->7 clocks, 256b 6->7 clocks (see the sketch after this list)
  • PAUSE latency 140->160 clocks
  • LEA with scale latency 2->3 clocks
  • (I)DIV r8 latency 15->17 clocks
  • FXCH throughput 2->1/clock
  • LFENCE latency 6->12 clocks
  • VBLENDV(B/PS/PD) xmm, ymm 2->3 clocks
  • (V)AESKEYGEN latency 12->13 clocks
  • VCVTPS2PH/PH2PS latency 5->6 clocks
  • BZHI throughput 2->1/clock
  • VPGATHERDD ymm, [ym32], ymm latency 22->24 clocks
  • VPGATHERQQ ymm, [ym64], ymm latency 21->23 clocks
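
The store-to-load-forward regression is the one most likely to show up in ordinary code; a minimal sketch of the measured pattern (our own illustration; a real measurement chains this in a loop and stops the compiler from forwarding in registers, e.g. with volatile):

```c
#include <immintrin.h>

// A 128-bit store immediately re-read by a dependent 128-bit load: this
// round trip now costs 7 clocks on Golden Cove, up from 5 on Cypress Cove.
__m128i stlf_roundtrip(__m128i v) {
    __m128i buf;
    _mm_storeu_si128(&buf, v);     // 128b store
    return _mm_loadu_si128(&buf);  // dependent 128b load, same address
}
```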

 

E-core: Gracemont vs Tremont

Microarchitecture Changes:

  • Dual 128b store ports (works with every GPR store, PUSH, MMX, SSE, AVX, and non-temporal m32/m64/m128 stores)
  • Zen2-like memory renaming with GPRs
  • New zeroing idioms
    • SUB r32, r32
    • SUB r64, r64
    • CDQ, CQO
    • (V)PSUBB/W/D/Q/SB/SW/USB/USW
    • (V)PCMPGTB/W/D/Q
  • New ones idiom: (V)PCMPEQB/W/D/Q (see the sketch after this list)
  • MOV elimination: MOV; MOVZX; MOVSX r32, r64
  • NOP elimination: NOP, 1-4 0x66 NOP throughput 3->5/clock, LNOP 3, LNOP 4, LNOP 5
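
The ones idiom is the counterpart of the zeroing idioms: comparing a register with itself for equality sets every bit, regardless of the register's contents. A minimal sketch (function name is ours):

```c
#include <immintrin.h>

// PCMPEQD of a register against itself always yields all-1s, so Gracemont
// can treat it as a constant generator with no input dependency.
__m128i all_ones(void) {
    __m128i x = _mm_undefined_si128();  // contents are irrelevant
    return _mm_cmpeq_epi32(x, x);       // -> 0xFFFFFFFF in every lane
}
```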

Faster GPR instructions (vs Tremont):

  • PAUSE latency 158->62 clocks
  • MOVSX; SHL/SHR r, 1; SHL/SHR r, imm8 reciprocal throughput 1->0.25
  • ADD; SUB; CMP; AND; OR; XOR; NEG; NOT; TEST; MOVZX; BSWAP; LEA [r+r]; LEA [r+disp8/32] throughput 3->4 per clock
  • CMOV* throughput 1->2 per clock
  • RCR r, 1 10|10 -> 2|2
  • RCR/RCL r, imm/cl 13|13->11|11
  • SHLD/SHRD r1_32, r1_32, imm8 2|2 -> 2|0.5
  • MOVBE reciprocal throughput 1->0.5 clocks
  • (I)MUL r32 3|1 -> 3|0.5
  • (I)MUL r64 5|2 -> 5|0.5
  • REP STOSB/STOSW/STOSD/STOSQ 15/8/12/11 bytes/clock -> 15/15/15/15 bytes/clock (see the sketch below)
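
That REP STOS change makes byte-granular fills as fast as the wider forms; a minimal fast-string fill sketch using GCC/Clang inline assembly (our own illustration):

```c
#include <stddef.h>

// Fast-string fill: on Gracemont all four REP STOS widths now sustain
// ~15 bytes/clock, so the byte form no longer lags the wider ones.
static void repstosb_fill(void *dst, unsigned char val, size_t n) {
    __asm__ volatile("rep stosb"
                     : "+D"(dst), "+c"(n)
                     : "a"(val)
                     : "memory");
}
```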

Faster SIMD instructions (vs Tremont):

  • A lot of xmm SIMD throughput is 4/clock instead of the theoretical maximum(?) of 3/clock; we are not sure how this is possible
  • MASKMOVQ throughput 1 per 104 clocks -> 1 per clock
  • PADDB/W/D; PSUBB/W/D; PAVGB/PAVGW 1|0.5 -> 1|0.33
  • PADDQ/PSUBQ/PCMPEQQ mm, xmm 2|1 -> 1|0.33
  • PShift (x)mm, (x)mm 2|1 -> 1|0.33
  • PMUL*, PSADBW mm, xmm 4|1 -> 3|1
  • ADD/SUB/CMP/MAX/MINPS/PD 3|1 -> 3|0.5
  • MULPS/PD 4|1 -> 4|0.5
  • CVT*, ROUND xmm, xmm 4|1 -> 3|1
  • BLENDV* xmm, xmm 3|2 -> 3|0.88 (see the sketch after this list)
  • AES, GF2P8AFFINEQB, GF2P8AFFINEINVQB xmm 4|1 -> 3|1
  • SHA256RNDS2 5|2 -> 4|1
  • PHADD/PHSUB* 6|6 -> 5|5
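
BLENDV is the usual building block for branchless selects, which makes its throughput gain broadly useful; a minimal sketch (function name is ours):

```c
#include <immintrin.h>

// Branchless per-lane select with BLENDVPS (3|2 -> 3|0.88 on Gracemont):
// picks b where the sign bit of mask is set, a otherwise.
__m128 select_ps(__m128 a, __m128 b, __m128 mask) {
    return _mm_blendv_ps(a, b, mask);
}
```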

Regressions, Slower (vs Tremont):

  • m8, m16 load latency 4->5 clocks
  • ADD/MOVBE load latency 4->5 clocks
  • LOCK ADD 16|16->18|18
  • XCHG mem 17|17->18|18
  • (I)DIV +1 clock
  • DPPS 10|1.5 -> 18|6
  • DPPD 6|1 -> 10|3.5 (see the sketch after this list)
  • FSIN/FCOS +12% slower
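
Given the DPPS/DPPD regression, composing a dot product from MULPS plus horizontal adds may now be the faster route on Gracemont; a hedged sketch (function name is ours; worth measuring both variants before committing):

```c
#include <immintrin.h>

// 4-wide dot product without DPPS: MULPS plus two HADDPS avoids the
// 18-clock DPPS path listed above.
float dot4(__m128 a, __m128 b) {
    __m128 m = _mm_mul_ps(a, b);  // a0*b0, a1*b1, a2*b2, a3*b3
    m = _mm_hadd_ps(m, m);        // pairwise sums
    m = _mm_hadd_ps(m, m);        // full sum broadcast to every lane
    return _mm_cvtss_f32(m);
}
```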

 

Comments

  • mode_13h - Friday, November 5, 2021

    It basically comes down to a context switch. And those take a couple of microseconds (i.e. many thousands of CPU cycles), last I checked. And that assumes there's a P-core available to run the thread. If not, you're potentially going to have to wait a few timeslices (often 1-10 ms).

    Now, consider the case of some software that assumes all cores are AVX-512 capable. This would be basically all AVX-512 software written to date, because we've never had a hybrid one, or even the suggestion from Intel that we might need to worry about such a thing. So, the software spawns 1 thread per hyperthread (i.e. 24 threads on the i9-12900K) but can only run 16 of them at any time. That's going to result in a performance slowdown, especially when you account for all the fault-handling and context-switching that happens whenever any of these threads tries to run on an E-core. You'd basically end up thrashing the E-cores, burning a lot of power and getting no real work done on them.
  • mode_13h - Friday, November 5, 2021

    Forgot to address the case where the OS blocks the thread from running on the E-core, again.

    So, if we think about how worker threads are used to split up bigger tasks, you really want to have no more worker threads than actual CPU resources that can execute them. You don't want a bunch of worker threads all fighting to run on a smaller number of cores.

    So, even the solution of having the OS block those threads from running on the E-cores would yield lower performance than if the app knew how many AVX-512 capable cores there were and spawned only that many worker threads. However, you have to keep in mind that whether some function uses AVX-512 is not apparent to a software developer. It might even do this dynamically, based on whether AVX-512 is detected, but this detection often happens at startup and then the hardware support is presumed to be invariant. So, it's problematic to dump the problem in the application developer's lap.
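    A minimal sketch of that one-shot startup detection, assuming GCC/Clang on x86 (helper name is ours):

```c
#include <cpuid.h>
#include <stdbool.h>

// Probe CPUID leaf 7 once and cache the answer; the invariance assumption
// this encodes is exactly what a hybrid CPU breaks. A production check
// must also verify OS XSAVE support (XGETBV) before using AVX-512.
static bool has_avx512f(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;
    return (ebx >> 16) & 1;  // CPUID.(EAX=7,ECX=0):EBX bit 16 = AVX512F
}
```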
  • eastcoast_pete - Thursday, November 4, 2021

    Plus, enabling AVX-512 on the big cores would have meant having it on the E (Gracemont) cores also; otherwise switching workloads from P to E cores on the fly won't "fly". And having AVX-512 in Gracemont would have interfered with the whole idea of Gracemont being low-power and small-footprint on the die. I actually find what Ian and Andrei did here quite interesting: if AVX-512 can really speed up whatever you want to do, disable the Gracemonts and run Alder Lake on the Coves only. If that could be a supported option with a quick restart, it might be worthwhile under the right circumstances.
  • AntonErtl - Friday, November 5, 2021

    There is no relevant AVX-512 state before the first AVX-512 instruction is executed. So trapping and switching to a P-core is entirely doable. Switching back would probably be a bigger problem, but one probably does not want to do that anyway.
  • Spunjji - Friday, November 5, 2021

    Possible problem: how would you account for a scenario where the gain from AVX-512 is smaller than the gain from running additional threads on E cores? Especially when some processors have a greater proportion of E cores to P cores than others. That could get quite complicated.
  • TeXWiller - Friday, November 5, 2021

    If you look at Intel's prerelease presentation about Thread Director carefully, you see they are indeed talking about moving the integer (likely control) sections of AVX threads to E-cores and back as needed.
  • kobblestown - Friday, November 5, 2021

    I'll reply to my comment because it seems the original one was not understood.

    When you have an AVX512-using thread on a P core, it might happen that it needs to be suspended, say, because the CPU is overloaded. Then the whole CPU state is saved to memory so the execution can later be resumed as if nothing had happened. In particular, it may be rescheduled on another core when it's time for it to run again. If that new core is a P core, then we're safe. But if it's an E core, it might happen that we hit an AVX512 instruction. Obviously, the core cannot execute it, so it traps into the OS. The OS can check what the offending instruction was and determine that the problem is not the instruction but the core. So it moves the thread back to a P core, stores a flag that this thread should not be rescheduled on an E-core, and keeps chugging.

    Now, someone suggested that there might be a problem with the CPU state. And, indeed, you cannot restore the AVX512 part of the state on an E core. But it cannot get changed by an E core either, because the first attempt to do so will trap. So the AVX512 part of the state that was saved on a P core is still correct.

    Since this isn't being done, there might be (but not "must be" - Intel, like AMD, will only do what is good for them, not what is good for us) some problem. One being that an AVX512 thread would never be rescheduled on an E core even if it executes just a single AVX512 instruction. But it's still better than the current situation, which postpones the wider adoption of AVX512 yet again. I mean, the transistors are already there!
  • factual - Thursday, November 4, 2021

    Great win for consumers! AMD will need to cut prices dramatically to be competitive; otherwise Intel will dominate until Zen4 comes out!
  • kobblestown - Friday, November 5, 2021

    Let's first see Zen3D early next year. It will let me keep my investment into the AM4 platform yet offer top notch performance.
  • Spunjji - Friday, November 5, 2021

    "AMD will need to cut prices dramatically"
    Not until Intel's platform costs drop. Nobody's buying an ADL CPU by itself.
