Navigating Power: Intel’s Dynamic Tuning Technology

In the past few years, Intel has introduced a number of energy-saving features, including advanced speed states, Speed Shift to hand frequency control to the hardware and cut the time spent at inefficient operating points, and thermal balancing that allows OEMs like Dell and HP to configure total power draw as a function of CPU power requests, skin temperature, the orientation of the device, and the current capability of the power delivery system. With today's announcement, Intel has plugged a gap in that power knowledge for when a discrete-class graphics processor is in play.

The way Intel explains it, OEMs pairing separate CPUs and GPUs in a mobile device would design around a System Design Point (SDP) rather than a combined Thermal Design Power (TDP). The OEM then had to manage how that power was distributed: deciding, for example, how the CPU and GPU should react if the GPU was running at 100%, the SDP had already been reached, and the CPU requested more performance.
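As a concrete illustration of the problem, consider a fixed, design-time arbitration rule of the kind an OEM would have had to pick. This is our own minimal sketch, not any vendor's actual firmware logic, and the 62.5W SDP and per-component wattages in it are illustrative only:

```python
# Hypothetical static power arbitration, the kind an OEM would have to
# hard-code at design time. All wattages are illustrative, not from Intel.

SDP_W = 62.5          # System Design Point for the shared cooling solution
GPU_PRIORITY = True   # OEM design decision: favor the GPU when both are maxed

def static_split(cpu_request_w: float, gpu_request_w: float) -> tuple[float, float]:
    """Grant power to CPU and GPU under a fixed rule when requests exceed SDP."""
    total = cpu_request_w + gpu_request_w
    if total <= SDP_W:
        return cpu_request_w, gpu_request_w
    # Over budget: the statically preferred component gets its full request,
    # and the other is throttled to whatever is left over.
    if GPU_PRIORITY:
        gpu_grant = min(gpu_request_w, SDP_W)
        return SDP_W - gpu_grant, gpu_grant
    cpu_grant = min(cpu_request_w, SDP_W)
    return cpu_grant, SDP_W - cpu_grant

# GPU at 100% (55W) while the CPU asks for more: the CPU is squeezed to 7.5W.
print(static_split(cpu_request_w=35.0, gpu_request_w=55.0))  # (7.5, 55.0)
```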

Intel’s ‘new’ feature, Intel Dynamic Tuning, leverages the fact that Intel now controls the power delivery of the whole combined package, and can distribute power between the CPU and pGPU as required. This extends how Intel already manages the CPU in response to outside factors: by using system information, the power budget can be shared between the two components to maintain minimum performance levels and ultimately save power.
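Intel has not published the algorithm, so the following is only a guess at the general shape of such a control loop: one shared package budget, re-split between CPU and pGPU every tick based on platform telemetry. Every input and number here is an assumption on our part:

```python
# Speculative sketch of a Dynamic Tuning-style shared budget. Intel has not
# disclosed the real algorithm; the telemetry inputs and numbers are invented.

from dataclasses import dataclass

@dataclass
class Telemetry:
    cpu_request_w: float   # what the CPU wants this tick
    gpu_request_w: float   # what the GPU wants this tick
    skin_temp_c: float     # chassis skin temperature

def dynamic_budget_tick(t: Telemetry, package_limit_w: float = 62.5) -> tuple[float, float]:
    """Reallocate one shared budget between CPU and pGPU every control tick."""
    # Back off the whole package as the chassis approaches its skin-temp limit.
    if t.skin_temp_c > 45.0:
        package_limit_w *= 0.85
    total = t.cpu_request_w + t.gpu_request_w
    if total <= package_limit_w:
        return t.cpu_request_w, t.gpu_request_w
    # Over budget: scale both grants proportionally to their requests, so a
    # GPU-bound game automatically inherits headroom an idle CPU isn't using.
    scale = package_limit_w / total
    return t.cpu_request_w * scale, t.gpu_request_w * scale

# GPU-bound frame: a near-idle CPU leaves almost the whole budget to the GPU.
print(dynamic_budget_tick(Telemetry(cpu_request_w=8.0, gpu_request_w=60.0, skin_temp_c=40.0)))
```

The contrast with the static sketch above is that the split is recomputed continuously rather than fixed at design time.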

If that sounds a bit wishy-washy, it is because it is. Intel’s spokespersons during our briefing heralded this as a great way to design a notebook, but failed to go into any detail as to how the mechanism works, leaving it as a black box for consumers. They quoted that a design aimed at a 62.5W SDP could, with Intel Dynamic Tuning enabled, be considered a 45W device, and that by managing the power they could also increase gaming efficiency, delivering up to 18% more frames per watt.
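To put the 18% claim in concrete terms, here is the arithmetic with placeholder numbers (the 45W and 60 fps baselines are ours, not Intel's): at fixed power, 18% more frames per watt means 18% more frames, or the same frame rate at roughly 15% less power.

```python
# Worked example of the "up to 18% more frames per watt" claim.
# The baseline figures are illustrative placeholders, not Intel data.

power_w = 45.0
baseline_fps = 60.0

baseline_efficiency = baseline_fps / power_w          # ~1.33 fps/W
tuned_efficiency = baseline_efficiency * 1.18         # ~1.57 fps/W

fps_at_same_power = tuned_efficiency * power_w        # 70.8 fps
power_for_same_fps = baseline_fps / tuned_efficiency  # ~38.1 W

print(f"{fps_at_same_power:.1f} fps at {power_w} W, "
      f"or {baseline_fps} fps at {power_for_same_fps:.1f} W")
```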

One of the big questions we had when Intel first started discussing these new parts was how the system deals with power requests. At the time, AMD had just explained in substantial detail its methodology for Ryzen Mobile, with the CPU and GPU on the same piece of silicon, so the topic was fresh in our minds. When questioned, Intel wanted to wait until the official launch to discuss the power in more detail, but unfortunately all we ended up with was a high-level overview and a non-answer to a misunderstood question in the press-briefing Q&A.

We’re hoping that Intel does a workshop on the underlying technology and algorithms here, as it would help shine a light on how future Intel with Radeon designs are implementing their power budgets for a given cooling strategy.

Comments

  • mczak - Monday, January 8, 2018

    FWIW, Apple has shipped plenty of MBPs where the charger isn't quite sufficient. These will drain the battery a little when running at full tilt, even if plugged in (and at least some of them also have the habit of running really, really slowly if the battery isn't just old but completely dead, because they will be forced into low power states).
    Albeit I agree that for an 89W charger, a 100W CPU+GPU is probably too much, since together with the rest of the system that might amount to a sustained power draw of over 110W, which would drain the battery too fast. But if Apple wants an 80W version of it, I'm pretty sure Intel would just deliver that; those limits can be easily changed.
  • Kevin G - Sunday, January 7, 2018

    And MS SQL Server is available for Linux. I think hell has frozen over.
  • tracker1 - Monday, January 8, 2018

    For what it's worth, MS SQL Server on Linux/Docker is fairly limited, and the management software is still Windows-based, though you can do anything you need via SQL statements... it's not the friendliest. I usually treat my database as mostly dumb anyway.
  • Zingam - Sunday, January 7, 2018

    But does it melt down?
  • haukionkannel - Monday, January 8, 2018

    Yes it does.
  • B166ER - Sunday, January 7, 2018

    I just don't get this marriage. It seems the graphics power is juuust a wee bit over Intel's cores, so what graphics need would this push? "Oh look, I get 5 more fps in Minecraft!"??
  • schizoide - Sunday, January 7, 2018

    I thought it would be faster than that, so I did some back-of-the-napkin math (run through in the sketch after the comments). 24 CUs = 43% of a Vega 56, but 19% slower on the GPU clock and 50% slower on the HBM. Seems reasonable to guess it will offer about 25% of the performance of a Vega 56.

    Vega 56 gets around 20K in 3DMark, and 25% of that is 5K. The fastest iGPU I could find on Futuremark's site is the 6700HQ, which scored 7910. So... it's slower than the fastest Intel GPU from two years ago. Is that right?
  • schizoide - Sunday, January 7, 2018

    Yeah, that wasn't right; Futuremark switched from GPU to CPU scores when I searched for Intel. The fastest GPU score I could find for an Intel iGPU was the Iris Pro 6200 from the Broadwell generation, which got 1630. Skylake improved the iGPU quite a lot, but I can't find the benchmarks offhand.
  • JohnPec - Monday, January 8, 2018

    Linus said it will be as good as a 1060 Max-Q or better.
  • tipoo - Sunday, January 7, 2018

    Hm? It's definitely a fair shot over the Iris Plus 650, and the Pro line seems dead after the Pro 580. This will absolutely be 3-4x over an Iris Plus 650, let alone the eDRAM-less HD 630 thrown in there.

    What did you mean by barely above the Intel part? I see nothing close.
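For the curious, schizoide's back-of-the-napkin estimate above works out as follows. This is purely a reproduction of the figures quoted in the comments; none of these numbers are verified benchmark results:

```python
# Reproducing the back-of-the-napkin estimate from the comments above.
# All inputs are the commenters' figures, not verified benchmark data.

vega56_cus = 56
vega_m_cus = 24
clock_penalty = 0.19                                 # "19% slower on the GPU"

cu_ratio = vega_m_cus / vega56_cus                   # ~0.43, the "43%" figure
throughput_ratio = cu_ratio * (1 - clock_penalty)    # ~0.35 before memory limits

# Knocking the estimate down further for the halved HBM bandwidth gives the
# comment's ~25%; that step is a judgment call, not a formula.
estimated_fraction = 0.25

vega56_3dmark = 20_000
estimate = vega56_3dmark * estimated_fraction        # ~5,000

iris_pro_6200 = 1_630                                # Broadwell Iris Pro 6200
print(f"raw throughput ratio: {throughput_ratio:.2f}")
print(f"3DMark estimate: {estimate:.0f} (~{estimate / iris_pro_6200:.1f}x the Iris Pro 6200)")
```

The resulting ~3x over the Iris Pro 6200, as it happens, is roughly in line with tipoo's 3-4x figure.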
