Navigating Power: Intel’s Dynamic Tuning Technology

In the past few years, Intel has introduced a number of energy-saving features, including advanced speed states, SpeedShift to cut the power wasted at high frequencies, and thermal balancing schemes that let OEMs like Dell and HP configure total power draw as a function of CPU power requests, skin temperature, device orientation, and the current capability of the power delivery system. With today's announcement, Intel is plugging a gap in that power management for when a discrete-class graphics processor is in play.

The way Intel explains it, OEMs that used separate CPUs and GPUs in a mobile device would design around a System Design Point (SDP) rather than a combined Thermal Design Power (TDP). OEMs would then have to manage how that power was distributed: for example, if the GPU was running at 100% and the SDP was already reached, they had to decide how the CPU and GPU would respond when the CPU requested more performance.

Intel’s ‘new’ feature, Intel Dynamic Tuning, relies on the fact that Intel now controls power delivery for the combined package as a whole, and can distribute power to the CPU and pGPU as required. It extends the approach Intel already takes with the CPU in response to outside factors: by using system information, the power budget can be shared between the two processors to maintain minimum performance levels and ultimately save power.
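Intel has not published the algorithm, so purely as a rough mental model, here is a minimal sketch of a shared-budget controller. Everything in it (the wattages, the proportional-sharing policy, and the split_budget name) is an illustrative assumption, not Intel's implementation:

    # Toy illustration of shared-budget power balancing between a CPU and a
    # packaged GPU (pGPU). All values and the policy itself are assumptions
    # made for illustration; Intel has not disclosed its actual algorithm.

    SDP_WATTS = 62.5   # total package budget the OEM designed for
    CPU_FLOOR = 10.0   # minimum power reserved so the CPU never stalls
    GPU_FLOOR = 10.0   # minimum power reserved for the pGPU

    def split_budget(cpu_request: float, gpu_request: float) -> tuple[float, float]:
        """Return (cpu_watts, gpu_watts) that fit inside SDP_WATTS.

        If both requests fit, grant them. Otherwise, grant each side its
        floor and share the remaining headroom in proportion to demand.
        """
        if cpu_request + gpu_request <= SDP_WATTS:
            return cpu_request, gpu_request

        headroom = SDP_WATTS - CPU_FLOOR - GPU_FLOOR
        cpu_extra = max(cpu_request - CPU_FLOOR, 0.0)
        gpu_extra = max(gpu_request - GPU_FLOOR, 0.0)
        total_extra = cpu_extra + gpu_extra or 1.0   # avoid divide-by-zero
        cpu = CPU_FLOOR + headroom * cpu_extra / total_extra
        gpu = GPU_FLOOR + headroom * gpu_extra / total_extra
        return cpu, gpu

    # Example: the GPU is pinned in a game while the CPU asks for a burst.
    print(split_budget(cpu_request=35.0, gpu_request=50.0))  # -> (~26.3, ~36.2)

The real controller presumably also folds in skin temperature, device orientation, and the state of the power delivery system, as Intel's existing Dynamic Platform and Thermal Framework inputs do, but none of that detail has been shared.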

If that sounds a bit wishy-washy, it is because it is. Intel’s spokespersons during our briefing heralded this as a great way to design a notebook, but failed to go into any detail about how the mechanism works, leaving it as a black box for consumers. They quoted that a design aiming at a 62.5W SDP could, with Intel Dynamic Tuning enabled, be considered a 45W device, and that by managing the power they could also increase gaming efficiency by up to 18% more frames per watt.
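Taken at face value, those two figures combine into simple arithmetic. The snippet below works through it; the 60 fps baseline is an invented number purely to make the percentages concrete, and reading the 45W and 18% claims together this way is our interpretation, not Intel's:

    # Back-of-the-envelope reading of Intel's quoted figures. The 60 fps
    # baseline is invented for illustration; Intel quoted only the wattages
    # and the "up to 18% more frames per watt" figure.
    baseline_watts = 62.5   # SDP a fixed-budget design would target
    tuned_watts = 45.0      # what Intel says the same design can be rated at
    baseline_fps = 60.0     # hypothetical frame rate of the 62.5W design

    baseline_fpw = baseline_fps / baseline_watts   # 0.96 frames per watt
    tuned_fpw = baseline_fpw * 1.18                # "up to 18% more frames per watt"
    print(tuned_fpw * tuned_watts)                 # ~51 fps within the 45W envelope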

One of the big questions we had when Intel first started discussing these new parts was how the system deals with power requests. At the time, AMD had just explained in substantial detail its methodology for Ryzen Mobile, which has the CPU and GPU in the same piece of silicon, so the topic was fresh in our minds. When questioned, Intel wanted to wait until the official launch to discuss the power in more detail, but unfortunately all we ended up with was a high-level overview and a non-answer to a misunderstood question in the press-briefing Q&A.

We’re hoping that Intel does a workshop on the underlying technology and algorithms here, as it would help shine a light on how future Intel with Radeon designs are implementing their power budgets for a given cooling strategy.

66 Comments

  • haukionkannel - Monday, January 8, 2018 - link

    This. Vega is very efficient at low clock rates!
  • Yojimbo - Sunday, January 7, 2018 - link

    Where do they claim that it will beat the GTX 1050 in terms of power efficiency? They show some select benchmarks that imply a certain efficiency in those specific cases, but I didn't see that they mentioned general power efficiency or price at all.

    This package from Intel does have HBM, which is more power efficient than GDDR5. That will help. But overall, my expectation is that Intel's new chip will be less efficient in graphics intensive tasks than a system with a latest generation discrete NVIDIA GPU. The dynamic tuning should help in cases where both CPU and GPU need to draw significant power, though.

    We probably know how Vega performs. Assuming that the chips aren't TDP constrained, the more powerful of the two variants should probably perform somewhere between a 560 and 570 in games. The lesser variant should perform around a 560, less or more depending on how memory bandwidth plays into things. We'll have to see how power constraints factor into things, though.

    Another thing to keep in mind is that for most of its lifetime, this chip will probably be going up against NVIDIA's next generation of GPUs and not their current generation. Intel did benchmark it against a 950M, but I wouldn't put it past them to ignore price differences in a comparison they release. The new chips will probably be expensive enough that they will have to go up against the latest generation of their competitor's chips.
  • Kevin G - Monday, January 8, 2018 - link

    This does leave room for Intel to produce a slimmer GT1, or even to omit the integrated GPU entirely for mobile, when they know it will be paired with a Radeon Vega on package. That'd permit Intel to decrease costs on their end, though it would be up to Intel to pass those savings on to OEMs.
  • nico_mach - Monday, January 8, 2018 - link

    AMD wasn't good at efficiency mostly due to fabbing. That's easily fixable with a deep-pocketed and suddenly desperate partner like Intel.
  • artk2219 - Wednesday, January 10, 2018 - link

    Vega is actually pretty efficient, just not when they try to chase high performance; then the power requirements jump steeply in response to the higher clocks and voltage. Also, AMD has held the efficiency crown multiple times, just not recently. The Radeon 9700 Pro, 9800 Pro, 4850, 4870, 5850, 5870, 7790, 7950, and 7970 all say hello when compared to their Nvidia counterparts of the time.
  • jjj - Sunday, January 7, 2018 - link

    Ask AMD for a die shot so we can count CUs lol
  • shabby - Sunday, January 7, 2018 - link

    8th generation... Kaby Lake? Have I been sleeping under a rock?
  • evilpaul666 - Sunday, January 7, 2018 - link

    Is there a difference between Skylake, Kaby Lake and Coffee Lake that I'm unaware of?
  • shabby - Sunday, January 7, 2018 - link

    In mobile the only difference was the core count; it doubled when Coffee Lake was released, but this Kaby Lake has similar core counts for some reason.
  • extide - Sunday, January 7, 2018 - link

    Yeah, for the U/Y (and now G) series, 8th gen is 'Kaby Lake Refresh', not Coffee Lake.
