Alongside this morning’s launch of their new laptop SKUs, NVIDIA is also rolling out a couple of new technologies aimed at high-end laptops. Placed under their Max-Q banner, the company is unveiling new features to better manage laptop TDP allocations, and for the first time, the ability to have G-Sync in an Optimus-enabled laptop. These new technologies are separate from the new hardware SKUs being launched today – they can technically be built into any future GeForce laptop – so I wanted to touch upon them separately from the hardware itself.

NVIDIA Dynamic Boost: Dynamic TDP Between GPU & CPU, For Intel & AMD

First off, we have what NVIDIA is calling their Dynamic Boost technology. This is essentially NVIDIA’s counterpart to AMD’s SmartShift technology, which was introduced in the recently-launched Ryzen Mobile 4000 APUs.

Like SmartShift, NVIDIA’s Dynamic Boost is designed to take advantage of the fact that in many laptop designs, the GPU and the CPU share a common thermal budget, typically because they are both cooled via the same set of heatpipes. In practice, this is usually done in order to allow OEMs to build relatively thin and light systems, where the cooling capacity of the system is more than the TDP of either the CPU or the GPU alone, but less than the total TDP of those two processors together. This allows OEMs to design around different average, peak, and sustained workloads, offering plenty of headroom for peak performance while sacrificing some sustained performance in the name of lighter laptops.

In fact, a lot of these designs are potentially leaving some performance on the table, as far as peak performance is concerned. Because the thermal budget of a laptop is usually greater than what any single processor can consume, normal per-processor TDPs have each chip holding itself back more than it strictly needs to. So, as the thinking goes, if the two processors are sharing a common cooling system, why not raise their power limits and then have them split up the thermal budget of the system in an intelligent manner?

And this is exactly what Dynamic Boost and similar technologies set out to do. By dynamically allocating power between the CPU and the GPU, a system should be able to eke out a bit more performance by making better-informed choices about where to allocate power. This could include, for example, allowing a CPU to go to a full 135W for a short period of time because the GPU is known to be idle, or borrowing some of the thermal budget from a lightly-loaded CPU and instead spending it on the GPU. Essentially it’s the next step in min-maxing the performance of laptops with discrete CPUs and GPUs by offering ever finer-grained control over how power and thermal budgets are allocated between the two processors.
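
To make the idea a bit more concrete, below is a minimal, purely illustrative sketch of how a shared thermal budget might be split between the two processors based on their load. NVIDIA has not disclosed its actual algorithm, so the budget figures, per-chip floors, and the proportional-split policy here are all assumptions for illustration.

```python
# Hypothetical sketch of a shared-budget rebalancer; this is not NVIDIA's
# actual algorithm. All names and wattage figures are illustrative assumptions.

TOTAL_BUDGET_W = 80.0   # assumed shared thermal budget of the chassis
CPU_FLOOR_W = 15.0      # assumed minimum power the CPU is always granted
GPU_FLOOR_W = 35.0      # assumed minimum power the GPU is always granted

def rebalance(cpu_load: float, gpu_load: float) -> tuple[float, float]:
    """Split the spare budget in proportion to each processor's load,
    while guaranteeing both processors their minimum allocation."""
    headroom = TOTAL_BUDGET_W - CPU_FLOOR_W - GPU_FLOOR_W
    total_load = cpu_load + gpu_load
    if total_load == 0:
        # Nothing is busy: park both processors at their floors.
        return CPU_FLOOR_W, GPU_FLOOR_W
    cpu_extra = headroom * cpu_load / total_load
    gpu_extra = headroom * gpu_load / total_load
    return CPU_FLOOR_W + cpu_extra, GPU_FLOOR_W + gpu_extra

# A GPU-bound frame: most of the spare budget flows to the GPU.
print(rebalance(cpu_load=0.3, gpu_load=0.95))
# A CPU-heavy loading screen: the CPU borrows the headroom instead.
print(rebalance(cpu_load=0.9, gpu_load=0.1))
```

In a real laptop the same decision would also have to be gated by temperature and VRM telemetry, which is exactly why the sensor and power delivery requirements discussed below exist.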

Overall this isn’t a new concept, but until recently it’s only been used within a single CPU/APU to balance power between those two blocks. Extending it over multiple chips is a bit more work, and while beneficial, it’s really only become worth the effort as Moore’s Law has been slowing down.

NVIDIA’s solution, meanwhile, has the notable distinction of being the first generic solution that works across multiple platforms. Whereas AMD’s SmartShift is designed to work with the combination of AMD APUs and GPUs – fully leveraging the AMD ecosystem and their platform control fabric – NVIDIA as a common GPU supplier for both platforms needed to develop a solution that works with both. So Dynamic Boost can be used with both Intel Core processors and AMD Ryzen processors in a relatively generic manner, allowing OEMs to apply the technology regardless of whose CPU they use.

As for the performance benefits, while NVIDIA isn’t promising anything major, Dynamic Boost will none the less let them wring a bit more performance out of their GPUs. As with SmartShift, the numbers being thrown around are generally less than 10%, reflecting the fact that most games already tax both the CPU and the GPU significantly, but that’s four to eight percent more performance that would otherwise have been left on the table. Ultimately the big win here comes from taking advantage of the relative difference in voltage-frequency curves between the processors, as the highest speed bins are always the most expensive from a power perspective.
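
As a rough, back-of-the-envelope illustration of that last point (the voltage and frequency figures below are invented, not NVIDIA's): dynamic power scales roughly with the square of voltage times frequency, so the last few speed bins, which also demand the highest voltages, cost a disproportionate amount of any shared budget.

```python
# Illustrative only: why the top speed bins are the most expensive bins.
# Dynamic power is roughly proportional to C * V^2 * f; the capacitance term
# is a constant factor here, and all operating points are invented examples.

def relative_power(freq_ghz: float, volts: float) -> float:
    return volts ** 2 * freq_ghz

base  = relative_power(1.6, 0.80)   # assumed mid-curve operating point
boost = relative_power(1.8, 0.95)   # assumed top bin: higher clocks need more voltage

print(f"Clock gain: {1.8 / 1.6 - 1:.1%}")     # ~12.5% more frequency...
print(f"Power cost: {boost / base - 1:.1%}")  # ...for roughly 59% more power
```

Handing a few of those expensive watts to whichever processor is currently the bottleneck, rather than letting the other one burn them on its top bins, is where Dynamic Boost's single-digit gains would come from.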

Past that, the hardware implementation details for the technology are pretty straightforward, but they do require vendor involvement – so this can’t be added to existing (in the field) laptops. Besides the immediate requirement of having a shared thermal system, laptops need to have the sensors/telemetry in place to keep a close eye on the state of the two processors and the thermal system itself. As well, OEMs need to build their laptops with greater-than-normal capacity VRM setups so that the processors can pull the extra power needed to clock higher (particularly the GPU). And even then, there is some system-level tuning required to account for the specific performance and cooling details of a given laptop design.

The upshot, at least, is that all of this OEM firmware tuning is a one-time affair, and Dynamic Boost is all but transparent at a software level. So no per-game profiling is necessary, and NVIDIA’s drivers can operate on a generic basis, adjusting power/thermal limits based on what it’s reading from the workload at-hand. According to NVIDIA, they are making their adjustments on a per-frame basis, so Dynamic Boost will only need a short period of time to adjust to any rapid shifts in the workload, depending on just how aggressive NVIDIA’s algorithms are.

Finally, it should be noted that Dynamic Boost is an optional feature, so it’s ultimately up to OEMs to decide whether they want to implement it or not. For marketing reasons, NVIDIA is bundling it under their Max-Q family of features, but just because a laptop is Max-Q doesn’t mean it can do Dynamic Boost. The flip side to that, however, is that if an OEM is going for Max-Q certification anyhow and their design already uses a shared thermal solution, then there’s everything to gain from baking in support for the technology in future laptop designs.

Advanced Optimus: G-Sync and Optimus Together At Last

Along with Dynamic Boost, NVIDIA’s other new laptop technology for today is what the company is calling Advanced Optimus. Exactly what it says on the tin, Advanced Optimus is a further enhancement of NVIDIA’s Optimus technology, finally allowing for G-Sync to be used with the power-saving technology.

As a quick refresher, Optimus is NVIDIA’s graphics switching technology. Originally introduced a decade ago, Optimus allows for both an iGPU and a dGPU to be used together in a single system and in a very efficient manner. This is accomplished by primarily relying on the iGPU, and then firing up the dGPU only when needed for intense workloads. The process as a whole is more involved than relying exclusively on either GPU – frames need to get passed around from the dGPU to the iGPU when it’s being used – but the power savings relative to an always-on dGPU are significant. As a result, almost every GeForce-equipped laptop shipped today uses Optimus, with the exception of a handful of high-end machines that are aiming to be portable desktops.

And while Optimus is all well and good, the fact that it essentially slaves the dGPU to the iGPU has always come with a rather specific drawback: it means the display controller used is always the iGPU’s display controller, and thus the laptop can only do whatever the iGPU’s display controller is capable of. Back in 2010 this wasn’t a meaningful problem, but the introduction of variable refresh rate technology changed this. Ignoring for the moment the proprietary nature of some of NVIDIA’s G-Sync implementations, even today Intel only supports variable refresh on its Gen11 GPUs, which are only found on its ultra-low voltage Ice Lake processors. As a result, it’s not possible to both use the iGPU of an H-class system and get variable refresh out of it at the same time.

The net impact has been that laptops with variable refresh have been few and far between. While laptop-class variable refresh displays are available, systems implementing them have had to go old school, either using a pure dGPU setup and shutting off the iGPU entirely, or using a multiplexer (mux) to switch the display output between the iGPU and dGPU. The mux solution certainly works, but in practice it has required a (high-friction) reboot when switching modes. Or at least, it did until now.

For Advanced Optimus, NVIDIA has finally figured out how to have their cake and eat it too. In short, the company has brought back the idea of a dynamic mux, allowing laptops to switch between the iGPU and the dGPU on the fly. As a result, the iGPU is no longer a limiting factor when it comes to variable refresh; if a system wants to switch to a variable refresh mode, it can just fire up the dGPU and then switch over to using that. All without a reboot.

By and large, Advanced Optimus does away with the most novel aspects of Optimus – passing frames from the dGPU to the iGPU – and instead becomes a much smarter muxing implementation. Which means that along with making variable refresh technology more accessible, it also gets rid of the Optimus latency penalty. Since buffers no longer need to be passed to the iGPU, that roughly 1 frame delay is eliminated.

Unfortunately, as far as technical details go, NVIDIA is holding this one quite close to their chest for the time being. NVIDIA has tried this idea once before in 2008 with their "gen2" switchable graphics, but in practice it wasn't a frictionless experience like it needed to be. Once Optimus came along, even dynamic muxing was quickly cast aside in favor of Optimus for most laptops, and manual muxing for the rest.

So at this point it’s unclear just how NVIDIA has solved the pitfalls of previous dynamic mux solutions – passing the entire Windows desktop to another GPU in real time is (still) no easy task – but the company is adamant that it is a truly seamless experience. Truthfully, I haven’t fully ruled out NVIDIA doing something incredibly crazy like baking a display controller into their mux – in essence having an external controller composite inputs from the two GPUs – but this is admittedly unlikely. More likely is that the company has borrowed some tips and tricks from their eGFX technology, which has to solve a similar problem with the added wrinkle of a dGPU that can be removed at any time. None the less, it will be interesting to crack open an Advanced Optimus laptop and see what makes it tick.

As for the software side of matters, Advanced Optimus behaves more or less like regular Optimus. That means checking applications against a list, and then switching accordingly, as the sketch below illustrates. Crucially, windowed or full screen mode doesn’t matter; Advanced Optimus works in both. So while this presumably comes with the same tradeoffs between windowed and exclusive fullscreen mode that we see in regular G-Sync operation, it none the less means that every option is on the table, just like a regular (non-Optimus) G-Sync setup.
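
For illustration only, here is a trivial sketch of that sort of whitelist-driven routing decision. The application names, the policy, and the return strings are all hypothetical; this reflects the behavior as described, not NVIDIA's driver code.

```python
# Hypothetical illustration of profile-based GPU routing in the spirit of
# Optimus; the app list and policy below are assumptions, not NVIDIA's code.

DGPU_APP_LIST = {"game.exe", "render_tool.exe"}  # assumed per-app profile list

def select_gpu(process_name: str, windowed: bool) -> str:
    # With Advanced Optimus the windowed/fullscreen distinction no longer
    # changes the routing decision; only the application profile does.
    if process_name.lower() in DGPU_APP_LIST:
        return "dGPU drives the panel through the mux (G-Sync available)"
    return "iGPU drives the panel (dGPU powered down for battery life)"

print(select_gpu("game.exe", windowed=True))
print(select_gpu("browser.exe", windowed=False))
```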

Ultimately, NVIDIA’s goal is to get variable refresh/G-Sync support into a lot more laptops than is the case today, as Advanced Optimus removes both the friction and the battery life penalties that have been encountered to date. To be sure, this is still an optional feature for OEMs, as it requires them to take the care to integrate both a variable refresh display as well as the necessary mux. But none the less it opens the door to putting G-Sync on thin & light laptops and other notebooks where a vendor would never be willing to accept a traditional mux solution. I do wonder if perhaps this is going to be a short-term solution – what happens when Intel launches the variable refresh-capable Tiger Lake-H in 2021? – but for now, NVIDIA is the only game in town for doing variable refresh on a laptop without making other compromises.

Finally, the first vendor out of the gate will be Lenovo, who is NVIDIA’s launch partner for the technology. The two companies have worked together to add the technology to Lenovo’s new Legion 5i and 7i laptops, which were also announced today and start with an RTX 2060 for $999. Unfortunately, Lenovo has not announced a release date for those laptops, so it sounds like we’ll be waiting just a bit longer before the first Advanced Optimus laptop hits the market.

Comments

  • Chaitanya - Thursday, April 2, 2020 - link

    Eagerly waiting to see how Optimus and G-Sync play hand in hand together.
  • nathanddrews - Thursday, April 2, 2020 - link

    Yeah, but that Ryzen 4000-series is a BEAST!
  • blanarahul - Thursday, April 2, 2020 - link

    While it's true that Intel supports VRR only on its Gen11 IGP, AMD has much wider support in their APU lineup, don't they? AMD + NVIDIA should have both Optimus and VRR without needing a separate chip, shouldn't it?
  • MamiyaOtaru - Friday, April 3, 2020 - link

    Yeah, the ROG Zephyrus G14 has an RTX 2060, an AMD APU, Optimus, and FreeSync support for its 120 Hz display. https://www.notebookcheck.net/Asus-Zephyrus-G14-Ry...
  • eastcoast_pete - Thursday, April 2, 2020 - link

    Thanks Ryan! Any information on whether the all new and improved Optimus also works with AMD Ryzen mobile CPUs, especially the 4000 series? Until AMD gets its mobile dGPU act together, a Ryzen 4000/NVIDIA combo is probably the best option.
  • Ryan Smith - Thursday, April 2, 2020 - link

    There was no specific discussion of using it with AMD APUs. But then there was no specific mention of Intel, either. With NVIDIA preparing Dynamic Boost to work with processors from both AMD and Intel, it certainly makes me presume that they're going to want to support Optimus in this fashion as well. The mux aspect means that AO should be relatively abstracted from the hardware.
  • Aikouka - Thursday, April 2, 2020 - link

    I'm rather intrigued by the Advanced Optimus feature. I've always been more of a fan of G-Sync in laptops due to the inability to upgrade graphics, which always made the idea of syncing to falling framerates (as the laptop ages) seem like a worthwhile idea. However, G-Sync was rarely found in a laptop under $1500, and it was hard to justify something when I already have a gaming desktop. So, in turn, the cheaper Lenovo laptops also sound quite interesting.

    Although, like some other commenters are mentioning, I'd be curious to see if we get any AMD-equipped laptops with this technology.
  • eek2121 - Thursday, April 2, 2020 - link

    I wonder how efficient Optimus would be if GPUs had better power management. If the GPU could be throttled down and entire blocks of cores could be shut off completely along with multiple banks of RAM, I imagine Optimus would not even be needed.
  • londedoganet - Thursday, April 2, 2020 - link

    It’s spelled “eke out”. Sorry, it’s just one of my pet peeves, along with spelling “discrete” as “discreet” (which this article did not do).
  • p3ngwin1 - Friday, April 3, 2020 - link

    It will be interesting to see Sony's implementation of this idea with their PS5, having variable clock rates for both CPU+GPU, compared to Xbox Series-X's "fixed clocks".
