Power: P-Core vs E-Core, Win10 vs Win11

For Alder Lake, Intel brings two new things into the mix when we start talking about power.

First is what we’ve already talked about, the new P-core and E-core, each with different levels of performance per watt and targeted at different sorts of workloads. While the P-cores are expected to mimic previous generations of Intel processors, the E-cores should offer an interesting look into how low power operation might work on these systems and in future mobile systems.

The second element is how Intel is describing power. Rather than simply quote a ‘TDP’, or Thermal Design Power, Intel has decided (with much rejoicing) to start putting two numbers next to each processor: one for base processor power and one for maximum turbo processor power, which we’ll call Base and Turbo. The idea is that the Base power mimics the TDP value we had before – it’s the power level at which the all-core base frequency is guaranteed. The Turbo power indicates the highest power level that should be observed in a normal power virus situation (usually defined as something causing 90-95% of the CPU to continually switch). There is usually a weighted time factor that limits how long a processor can remain at its Turbo power before slowly reeling back to Base, but for the K processors Intel has made that time factor effectively infinite – with the right cooling, these processors should be able to use their Turbo power all day, all week, and all year.
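To make the Base/Turbo relationship more concrete, here is a minimal sketch of how such a power budget behaves, assuming a simple exponentially weighted running average with a time constant tau. This is an illustrative model, not Intel's published firmware algorithm; the 125 W / 241 W values are the Core i9-12900K's Base and Turbo ratings.

```python
# Illustrative model of a Base/Turbo power budget with a time constant tau.
# Assumption-level sketch, not Intel's actual firmware algorithm; the
# 125 W / 241 W values are the i9-12900K's Base and Turbo ratings.

def allowed_power(avg_power_w, base_w=125.0, turbo_w=241.0):
    """Permit Turbo power while the running average stays under Base."""
    return turbo_w if avg_power_w < base_w else base_w

def update_average(avg_power_w, instant_power_w, dt_s, tau_s):
    """Exponentially weighted moving average of package power over tau."""
    alpha = dt_s / (dt_s + tau_s)
    return avg_power_w + alpha * (instant_power_w - avg_power_w)

# With tau effectively infinite, as on the K-series parts, the running
# average never catches up to Base, so the cap stays at Turbo indefinitely
# (given adequate cooling).
avg = 0.0
for second in range(5):
    cap = allowed_power(avg)
    avg = update_average(avg, instant_power_w=cap, dt_s=1.0, tau_s=1e12)
    print(f"t={second}s  cap={cap:.0f} W  running average={avg:.6f} W")
```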

So with that in mind, let’s start by simply looking at the individual P-cores and E-cores.

Listed in red, in this test with all 8P+8E cores fully loaded (on DDR5), we get a CPU package power of 259 W. The progression from idle to load is steady, although there is a big jump from idle to a single loaded core: we go from 7 W to 78 W, a 71 W increase. Because this is package power (the output for core power had some issues), it also includes firing up the ring, the L3 cache, and the DRAM controller, but even if that accounts for 20% of the difference, we’re still looking at ~55-60 W for a single active P-core. By comparison, in our single-threaded SPEC power testing on Linux we see a more modest 25-30 W per core, which we put down to POV-Ray’s instruction density.
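As a quick sanity check on that estimate, the back-of-envelope arithmetic looks like this (the 20% uncore share is the assumption stated above, not a measured figure):

```python
# Back-of-envelope check of the single P-core estimate quoted above.
idle_package_w = 7.0
one_pcore_package_w = 78.0
uncore_share = 0.20  # assumed share for the ring, L3, and DRAM controller

delta_w = one_pcore_package_w - idle_package_w   # 71 W jump
core_only_w = delta_w * (1.0 - uncore_share)     # ~57 W for the core itself
print(f"Package delta: {delta_w:.0f} W, estimated core-only: ~{core_only_w:.0f} W")
```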

By contrast, in green, the E-cores only jump from 5 W to 15 W when a single core is active, and that is the same number as we see in our SPEC power testing. Using all the E-cores, at 3.9 GHz, brings the package power up to 48 W total.
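Applying the same back-of-envelope approach to the E-core numbers (these are still package deltas, so shared uncore power is bundled in) gives a little over 5 W per E-core at 3.9 GHz:

```python
# Rough per-E-core estimate from the package numbers quoted above.
# Package deltas, so ring/L3/DRAM-controller power is still included.
idle_package_w = 5.0
one_ecore_package_w = 15.0
all_ecores_package_w = 48.0
ecore_count = 8

single_ecore_delta_w = one_ecore_package_w - idle_package_w              # 10 W
avg_per_ecore_w = (all_ecores_package_w - idle_package_w) / ecore_count  # ~5.4 W
print(f"Single E-core delta: {single_ecore_delta_w:.0f} W, "
      f"average per fully loaded E-core: ~{avg_per_ecore_w:.1f} W")
```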

It is worth noting that there are differences between the blue bars (P-cores only) and the red bars (all cores, with the E-cores loaded the whole time), and that sometimes the blue bar consumes more power than the red bar. Our blue-bar tests were done with the E-cores disabled in the BIOS, which means there might be more leeway in balancing a workload across a smaller number of cores, allowing for higher power. However, as everything ramps up, the advantage seems to swing the other way. It’s a bit odd to see this behavior.

Moving on to individual testing, here’s a look at a power trace of POV-Ray in Windows 11:

Here we’re seeing a higher spike in power, up to 272 W now, with the system at 4.9 GHz all-core. Interestingly enough, power then decreases down through the 241 W Turbo Power limit and settles around 225 W, with the reported frequency dropping to 4.7-4.8 GHz instead. Technically this all-core figure is meant to take some of the E-cores into account, so this might be a case of the workload distributing itself and finding the best performance/power point given its instruction mix, cache mix, and IO requirements. However, it takes a good 3-5 minutes to get there, if that’s the case.
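For anyone wanting to capture this kind of package power trace themselves, here is a minimal sketch that samples the RAPL energy counter through Linux’s powercap interface. This is offered as one possible approach rather than a description of our method; the traces in this review come from our own tooling under Windows, and reading the counter may require root privileges.

```python
# Minimal package power logger using the Linux intel-rapl powercap driver.
# Sketch only: assumes package 0 is exposed at the path below, and ignores
# counter wraparound (see max_energy_range_uj) for brevity.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(ENERGY_FILE) as f:
        return int(f.read())

def log_power(interval_s=1.0, samples=60):
    prev = read_energy_uj()
    for _ in range(samples):
        time.sleep(interval_s)
        cur = read_energy_uj()
        power_w = (cur - prev) / 1e6 / interval_s  # microjoules -> watts
        prev = cur
        print(f"{power_w:.1f} W")

if __name__ == "__main__":
    log_power()
```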

Intrigued by this, I looked at how some of our other tests did between different operating systems. Enter Agisoft:

Between Windows 10 and Windows 11, the traces look near identical. The actual run time was 5 seconds faster on Windows 11 out of a 20-minute run, or 0.4% faster, which we would consider run-to-run variation. The peaks and spikes look barely any higher in Windows 11, and the frequency trace in Windows 11 looks a little more consistent, but overall they’re practically the same.

For our usual power graphs, we get something like this, and we’ll also add in the AVX-512 numbers from that page:

[Chart: (0-0) Peak Power]

Compared to Intel’s previous 11th Generation processor, the Alder Lake Core i9 uses more power during AVX2, but is actually lower in AVX-512. The difficulty of presenting this graph in the future comes down to those E-cores; they’re more efficient, as you’ll see in the results later. Even on AVX-512, Alder Lake pulls out a performance lead while using around 50 W less than 11th Gen.

When we compare it to AMD however, with that 142 W PPT limit that AMD has, Intel is often trailing at a 20-70 W deficit when we’re looking at full-load efficiency. That being said, Intel is likely going to argue that in mixed workloads, such as two programs running simultaneously with one of them on the E-cores, it aims to be the more efficient design.

Comments

  • Silver5urfer - Friday, November 5, 2021 - link

    I forgot the most important thing after going through everything: Intel sabotaged LGA1200 on purpose. Looking at the Cinebench R20 scores of the 12900K vs the 10900K, it's clear cut that 2.5K of the roughly 10K total is coming from the SKL-class E-cores you showed, and the P-cores are at 8K, up from 6K on the 10th gen 10900K. They sandbagged the socket with 11th gen on 14nm+++ instead of 10nm and gave us a hot, power-hungry processor with a poor IMC and poor SMT. Now they show this as a massive boost because it works wonders in charts, looking at the big bars of 12th gen over 10th gen.

    Intel is real damn scum. Now they will milk DDR5, OEM deals and all that PCIe 5.0 etc., plus BS BGA trash; since ST performance is so high, they will easily get all those laptop and use-and-throw machine sales. And nobody knows how long LGA1700 will even last; maybe up to that successor of Raptor Lake. But going DDR4 is going to bite people in the nuts once Raptor Lake launches; I bet they will launch Z790 with only DDR5 and more DMI or something.

    I hope AMD Zen 4 wages a storm on these pesky BS small cores and gives us a fully powerful big-core beast with longevity on the AM5 LGA socket. And of course they will fix the WHEA and USB things, because they know now from experience.
  • GeoffreyA - Friday, November 5, 2021 - link

    Certainly, judging from AM4, their next socket will have a long life.
  • idimitro - Thursday, November 4, 2021 - link

    The best coverage of the launch. Really in depth and with details nobody else mentioned. Great work guys!!!
  • dotes12 - Thursday, November 4, 2021 - link

    I'm not too optimistic that W11 was actually designed to handle optimized/efficiency cores, because W11 was designed before Intel released that beast. Unfortunately that probably means that Windows will continue to be an "every other release is good" OS. W11.1 or W12 (whatever they call it) will be the best continuance of XP -> 7 -> 10 -> 11.1/12.
  • mode_13h - Friday, November 5, 2021 - link

    > W11 was designed before Intel released that beast.

    The companies do collaborate, extensively. During their Architecture Day, there was discussion of this. And Windows has a much longer history of hybrid CPU scheduling, since they also support ARM CPUs like Qualcomm's 8cx and even a previous hybrid CPU from Intel (Lakefield).

    Also, Windows is not a static target. MS is continually releasing updates, some of which are sure to fine-tune scheduling even more.
  • dwade123 - Thursday, November 4, 2021 - link

    Intel inside. AMD outside.
  • m53 - Thursday, November 4, 2021 - link

    [Intel 12th gen consumes less power in gaming across the board vs Ryzen 5000 series](https://www.reddit.com/r/intel/comments/qmw9fl/why... [Even multi-threaded perf per watt is better for the 12900K compared to the 5900X](https://twitter.com/capframex/status/1456244849477... It is only in specific cases, where the 12900K needs to beat the 5950X in multi-threaded loads, that it needs to crank up more power. But for typical users Intel is both the perf/watt and perf/dollar champion.
  • zodiacfml - Friday, November 5, 2021 - link

    Though the review did not exceed my expectations, due to the lack of in-depth power consumption testing, the charts show me what is really going on. Intel has roughly matched AMD in CPU performance and then gained some more with DDR5. Some CPU and gaming benchmarks show they are limited by memory performance. Now AMD's V-Cache makes even more sense, and a 15% average uplift in games seems more plausible.
    AMD still has the edge (due to cost, cooling requirements, and power consumption), while Intel gains more value if one is going to utilize the iGPU.
  • Alistair - Friday, November 5, 2021 - link

    Personally I don't find all this impressive. Intel went from -5 percent in gaming to +5 percent in gaming. Then went from -10 percent in productivity to beating the 5900x but still losing to the 5950x. Power consumption is terrible. Honestly all AMD has to do is drop their 5600X from $400 to $300 CAD, their 5800X from $500 to $400, their 5900X from $650 CAD to $500, and the 5950x from $900 CAD to $700 and I wouldn't even consider Intel.
  • roknonce - Friday, November 5, 2021 - link

    Power consumption is a curve, bro. Unlock PBO and the 5950X can also eat 300+ W. OMG, "power consumption is terrible." With the P-cores set to 4.4 GHz, power consumption is under 120 W and the Cinebench scores are 1730 single / 25000 multi, way better than the 5900X (1500/21000). I could give it 1000 W; can the 5900X hit 2000/27000?
    If you set a voltage offset manually, 6 P-cores at 3.2 GHz plus E-cores at 2.4 GHz comes to 35 W. At that point the Cinebench scores are 1300 single / 14000 multi, which is comparable to a 30 W M1 Max (1550/12000). Not to mention that TSMC N5P is 30% better than Intel 10nm ESF.
