Navigating Power: Intel’s Dynamic Tuning Technology

In the past few years, Intel has introduced a number of energy-saving features, including advanced speed states, SpeedShift to reduce high-frequency power drain, and thermal balancing acts that allow OEMs like Dell and HP to configure total power draw as a function of CPU power requests, skin temperature, the orientation of the device, and the current capability of the power delivery system. With today's announcement, Intel has plugged a gap in that power knowledge for designs where a discrete-class graphics processor is in play.

The way Intel explains it, OEMs that used separate CPUs and GPUs in a mobile device would design around a System Design Point (SDP) rather than a combined Thermal Design Power (TDP). OEMs would then have to manage how that power was distributed – they would have to decide how the CPU and GPU should react if, with the GPU running at 100% and the SDP already reached, the CPU requested more performance.

Intel’s ‘new’ feature, Intel Dynamic Tuning, leverages the fact that Intel now controls the power delivery of the combined package, and can distribute power between the CPU and pGPU as required. This extends Intel’s existing approach of adjusting the CPU in response to outside factors – by using system information, the power budget can be shared between the two to maintain minimum performance levels and ultimately save power.
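To make the idea concrete, here is a minimal sketch of the kind of arbitration such a controller might perform. To be clear, the function name, the numbers, and the GPU-weighted allocation policy are all assumptions for illustration; Intel has not published the actual algorithm.

```python
# Hypothetical power-budget arbitration, illustrating the concept only.
# Intel's real Dynamic Tuning algorithm is not public.

def share_budget(sdp_watts, cpu_request, gpu_request, gpu_priority=0.6):
    """Split a package power budget (in watts) between CPU and pGPU.

    If both requests fit under the budget, grant them in full; otherwise
    give the GPU a weighted share first (a plausible choice for gaming
    workloads, where the GPU usually bounds frame rate) and let the CPU
    take what remains.
    """
    if cpu_request + gpu_request <= sdp_watts:
        return cpu_request, gpu_request
    # Over budget: GPU gets up to its weighted share, CPU gets the rest.
    gpu_grant = min(gpu_request, sdp_watts * gpu_priority)
    cpu_grant = min(cpu_request, sdp_watts - gpu_grant)
    return cpu_grant, gpu_grant
```

With a 62.5W budget and both sides asking for full power (say 45W CPU, 65W GPU), this sketch would grant 25W and 37.5W respectively; the real controller presumably also folds in skin temperature and the other platform inputs described above.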

If that sounds a bit wishy-washy, it is because it is. Intel’s spokespersons during our briefing heralded this as a great way to design a notebook, but failed to go into any detail as to how the mechanism works, leaving it as a black box for consumers. They quoted that a design aiming at a 62.5W SDP could have Intel Dynamic Tuning enabled and be considered a 45W device, and that by managing the power they could also increase gaming efficiency by up to 18% more frames per watt.
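Frames per watt is a simple ratio of average frame rate to average power, so the 18% claim can at least be sanity-checked with back-of-the-envelope arithmetic. The frame rates and wattages below are invented for illustration, not Intel's measurements:

```python
# Illustrative arithmetic only: these FPS and wattage figures are made up.
# Frames per watt = average FPS / average package power.
baseline_fps, baseline_watts = 60.0, 62.5  # hypothetical fixed-split design
tuned_fps, tuned_watts = 62.0, 54.7        # hypothetical Dynamic Tuning run

baseline_eff = baseline_fps / baseline_watts  # 0.96 frames/W
tuned_eff = tuned_fps / tuned_watts           # ~1.13 frames/W
gain = tuned_eff / baseline_eff - 1           # ~0.18, i.e. ~18% more frames/W
```

The point of the exercise: an 18% frames-per-watt gain does not require a large frame-rate increase if the controller meaningfully trims average power draw.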

One of the big questions we had when Intel first started discussing these new parts is how the system deals with power requests. At the time, AMD had just explained in substantial detail its methodology for Ryzen Mobile, with the CPU and GPU on the same piece of silicon, so the topic was fresh in mind. When questioned, Intel wanted to wait until the official launch to discuss the power in more detail, but unfortunately all we ended up with was a high-level overview and a non-answer to a misunderstood question in the press-briefing Q&A.

We’re hoping that Intel does a workshop on the underlying technology and algorithms here, as it would help shine a light on how future Intel with Radeon designs are implementing their power budgets for a given cooling strategy.

So Why Two Sets of Graphics? Intel’s Performance Numbers

  • B166ER - Sunday, January 7, 2018 - link

    Exaggeration to emphasize a point..
  • tipoo - Monday, January 8, 2018 - link

    What's the point if it doesn't remotely make sense? That's not an exaggeration, it's just not the truth, the dGPU is significantly better than any Intel iGPU.
  • Cooe - Monday, January 8, 2018 - link

    Umm. Lol can you read? Perhaps you need your eyes checked? Because I really can't fathom how you ended up at THAT conclusion. This is a near GTX 1060 level part we're talking about here (and well beyond the 1050Ti). As others have said, it'll fall right around the MaxQ version of the 1060. That's damn impressive considering the size & power envelope.
  • OEMG - Sunday, January 7, 2018 - link

    In page 4 ("Intel’s Performance Numbers") the last table's headers are wrong (should be GH vs 1060).

    I wonder how FreeSync would work if they're powering down the pGPU. One guess is some sort of mode switching to the iGPU's display engine as for light workloads the iGPU is more than capable to maintain max display frame rate. But then, there's some protocol stuff being done in FreeSync/G-Sync so it could also be that the pGPU's display engine would always be on with the iGPU feeding into it.
  • neblogai - Sunday, January 7, 2018 - link

    On 4th page, last two charts (and also text between them) should have GH series part, instead of i7-8509G Vega M GL.
  • StevoLincolnite - Sunday, January 7, 2018 - link

    "Coffee Lake processors, using Intel’s latest 14++ process and running up to 8 cores."

    I could have sworn they only topped out at 6-cores.
  • nerd1 - Sunday, January 7, 2018 - link

    I don't get it, AMD GPUs have never been good for efficiency, but now they claim to beat the Nvidia 1050/1050 Ti in terms of efficiency.... sounds too good to be true.
  • tipoo - Sunday, January 7, 2018 - link

    EMIB + HBM2 + a multichip module with Intel is what it takes apparently. With all those edges, I'm sure it can manage to edge out 1050 perf/watt like they say, it's not all in the architecture.
  • tipoo - Sunday, January 7, 2018 - link

    Also states power sharing saves 18 watts, so that's a lot of handicaps for the more efficient Pascal to catch up with.
  • Cooe - Monday, January 8, 2018 - link

    EMIB adds nothing performance or efficiency-wise over a regular interposer like AMD uses. It's simply thinner/cheaper. Also, people drastically undervalue how power efficient Vega is when clocked in its efficiency "sweet spot". The reason it looks so bad in Vega 56/64 (the latter especially) is because the clocks have been pushed right up to the process' limits, and well, well beyond said "sweet spot". The clocks being used here, otoh, fall right within it (for obvious reasons). I think people are going to be very surprised by both the efficiency here, and in the Vega Mobile dGPU AMD announced today (which could very well be based off this graphics part, but we'll know soon enough. I'd bet my lunch it ends up in discrete desktop cards as well at some point to replace at least the 560 & 570, and possibly 580).
