For much of the last month we have been discussing bits and pieces of AMD’s GPU plans for 2016. As part of the Radeon Technologies Group’s formation last year, the group’s leader and chief architect, Raja Koduri, has set about making his mark on AMD’s graphics technology. Along with consolidating all graphics matters under the RTG, Raja and the rest of the group have also set out to change how they interact with the public, with developers, and with their customers.

One of those changes – and the impetus for these recent articles – is that the RTG wants to be more forthcoming about future product developments. Traditionally AMD has held its cards close to its chest on architectures, keeping them secret until the first products based on a new architecture launch (and even then sometimes not discussing matters in detail). With the RTG this is changing, and similar to competitors Intel and NVIDIA, the RTG wants to prepare developers and partners for new architectures sooner. As a result the RTG has been giving us a limited, high-level overview of their GPU plans for 2016.

Back in December we started things off by talking about RTG’s plans for display technologies – DisplayPort, HDMI, FreeSync, and HDR – and how the company would be laying the necessary groundwork in future architectures to support its goals of higher resolution displays, more ubiquitous FreeSync-over-HDMI, and the wider color spaces and higher contrast of HDR. The second of RTG’s presentations that we covered focused on their software development plans, including Linux driver improvements and the consolidation of all of RTG’s various GPU libraries and SDKs under the GPUOpen banner, which will see these resources released on GitHub as open source projects.

Last but not least among RTG’s presentations is without a doubt the most eagerly anticipated subject: the hardware. As RTG (and AMD before them) has commented in the past couple of years, a new architecture is being developed for future RTG GPUs. Dubbed Polaris (the North Star), RTG’s new architecture will be at the heart of their 2016 GPUs, and is designed for what can now be called the current generation of FinFET processes. Polaris incorporates a number of new technologies, including a 4th generation Graphics Core Next design at the heart of the GPU, and of course the new display technologies that RTG revealed last month. Finally, the first Polaris GPUs should be available in mid-2016, or roughly 6 months from now.

First Polaris GPU Is Up and Running

But before we dive into Polaris and RTG’s goals for the new architecture, let’s talk about the first Polaris GPUs. With the first products expected to launch in the middle of this year, it should come as no surprise that RTG has their first GPUs back from the fab and up & running. To that end – and as I’m sure many of you are eager to hear about – as part of their presentation RTG showed off the first Polaris GPU in action, however briefly.

As a quick preface here, while RTG demonstrated a Polaris-based card in action, we the press were not allowed to see the physical card or take pictures of the demonstration. Similarly, while Raja Koduri held up an unsoldered version of the GPU used in the demonstration, again we were not allowed to take any pictures. So while we can talk about what we saw, at this time it’s all we can do. I don’t think it’s unfair to say that RTG has had issues with leaks in the past, and while they wanted to confirm to the press that the GPU was real and the demonstration was real, they don’t want the public (or the competition) seeing the GPU before they’re ready to show it off. That said, I do know that RTG is planning to recap Polaris at CES 2016 as part of AMD’s overall presence, so we may yet see the GPU at CES after the embargo on this information has expired.

In any case, the GPU RTG showed off was a small one. And while Raja’s hand is hardly a scientifically accurate basis for size comparisons, if I had to guess I would wager it’s a bit smaller than RTG’s 28nm Cape Verde GPU or NVIDIA’s GK107 GPU, which is to say that it’s likely smaller than 120mm². This is clearly meant to be RTG’s low-end GPU, and given the evolving state of FinFET yields, I wouldn’t be surprised if this was the very first GPU design they got back from GlobalFoundries, as its size makes it comparable to current high-end FinFET-based SoCs. In that case, it could very well also be the first GPU we see in mid-2016, though that’s just supposition on my part.

For their brief demonstration, RTG set up a pair of otherwise identical Core i7 systems running Star Wars Battlefront. The first system contained an early engineering sample Polaris card, while the other system had a GeForce GTX 950 installed (specific model unknown). Both systems were running at 1080p Medium settings – about right for a GTX 950 on the X-Wing map RTG used – and generally hitting the 60fps V-sync limit.

For RTG the purpose of this demonstration was threefold: to showcase that a Polaris GPU was up and running, to show that the small Polaris GPU in question could offer performance comparable to the GTX 950, and to show off the energy efficiency advantage of the small Polaris GPU over current 28nm GPUs. To that end RTG also plugged each system into a power meter to measure total system power at the wall. In the live press demonstration we saw the Polaris system average 88.1W while the GTX 950 system averaged 150W; meanwhile in RTG’s own official lab tests (the figures used in their slide) they measured 86W and 140W respectively.

Keeping in mind that this is wall power – PSU efficiency and the power consumption of other components are in play as well – the message RTG is trying to send is clear: Polaris should be a very power efficient GPU family thanks to the combination of architecture and FinFET manufacturing. That RTG is measuring a 54W difference at the wall is definitely a bit surprising, as the GTX 950 averages under 100W to begin with, so even after accounting for PSU efficiency this implies that the power consumption of the Polaris video card is roughly half that of the GTX 950. But as this is clearly a carefully arranged demo with a framerate cap and a chip still in early development, I wouldn’t read too much into it at this time.
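For a rough sense of how that math works out, here is a back-of-the-envelope sketch. The PSU efficiency and the typical GTX 950 board power used in it are assumptions for illustration; neither figure comes from RTG.

```python
# Back-of-the-envelope estimate of card-level power from RTG's wall measurements.
# The PSU efficiency and GTX 950 board power below are illustrative assumptions,
# not figures provided by RTG or NVIDIA.

polaris_wall_w = 86.0    # RTG lab measurement: total Polaris system power at the wall
gtx950_wall_w  = 140.0   # RTG lab measurement: total GTX 950 system power at the wall
psu_efficiency = 0.85    # assumed ~85% efficient PSU at these loads
gtx950_card_w  = 90.0    # assumed typical GTX 950 board power while gaming

# The two systems are otherwise identical, so the wall-power delta scaled by
# PSU efficiency approximates the difference in video card power.
card_delta_w   = (gtx950_wall_w - polaris_wall_w) * psu_efficiency  # ~46W
polaris_card_w = gtx950_card_w - card_delta_w                       # ~44W

print(f"Estimated Polaris card power: ~{polaris_card_w:.0f}W "
      f"vs ~{gtx950_card_w:.0f}W for the GTX 950")
```

Under those assumptions the Polaris card comes out to roughly 44W against roughly 90W for the GTX 950, which is where the "about half" impression comes from. With a capped framerate and early silicon, though, these numbers are only directionally useful.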

Comments

  • Friendly0Fire - Monday, January 4, 2016 - link

    Because all things considered you can't really make a card drain much more than 250W or so. If you get twice the performance per watt, then you've just doubled the maximum computing power a single GPU can have.
  • Samus - Monday, January 4, 2016 - link

    That's why I gave up SLI. Having two 200+ watt GPU's and a CPU pumping basically make your PC a space heater. Since my house doesn't have zones for AC this making summer time gaming when it's 80 degrees outside unacceptable. Ideally a PC shouldn't produce more than 200 watts while gaming, and laptops have trouble doing half that without some creative thermal design.

    Mainstream GPU's need to take a note from the GTX 750Ti, a card that could run off the PCIe bus without any additional power, while still playing just about any game at acceptable peformance and detail.
  • coldpower27 - Monday, January 4, 2016 - link

    Totally agree. You can get a portable AC to dump the heat back outside again, but that means a whole lot of energy used for gaming. So much better to have power efficient performance.

    Hoping the GTX 970 successor doubles performance within the same thermal envelope. Pretty much would mean GTX 980 Ti performance at GTX 970 power and pricing. Would love that! 14/16nm could definitely make that possible.
  • smilingcrow - Monday, January 4, 2016 - link

    It's a primary metric for mobile GPUs and a big deal for fans of quiet GPUs also.
  • Arnulf - Monday, January 4, 2016 - link

    Because not all of us are 16 any more. I want performance that is good enough for what I do, I want the hardware to be as quiet as possible and I can afford that.
  • Mondozai - Monday, January 4, 2016 - link

    250W GPUs are not noticeably louder than 165W ones. Take a look at the 380X reviews for one, or the 980 Ti ones.
  • Arnulf - Monday, January 4, 2016 - link

    ... and this is relevant ... how exactly ?
  • dsumanik - Monday, January 4, 2016 - link

    because you just said:

    "I want the hardware to be as quiet as possible and I can afford that"

    He's pointing out that the difference in power envelopes (performance/watt) doesn't have the impact on quiet computing that you are suggesting it does.

    Seems pretty relevant to me, smart guy
  • RafaelHerschel - Monday, January 4, 2016 - link

    A small and silent system in the living room.

    Faster cards at the maximum practical power requirement.

    Shorter cards (less cooling) offer more flexibility when it comes to choosing a case.

    A general dislike for inefficiency.
  • DominionSeraph - Monday, January 4, 2016 - link

    My space heater is 1000W. It can bring my room to 80F when it's below freezing outside. I don't use it in the summer.
    I use my computer in the summer. There's a reason I don't want it to take 1000W when it's already 90F in here.

    My 89W Athlon X2 5200+ and 8800GTS used to noticeably warm the room. I feel sorry for anyone who bought an FX-9590 and 390X.
