Gaming Performance, Power, Temperature, & Noise

So with the basics of the architecture and core configuration behind us, let’s dive into some numbers.

[4K benchmark charts: Rise of the Tomb Raider (Very High, DX11), Dirt Rally (Ultra), Ashes of the Singularity (Extreme), Battlefield 4 (Ultra Quality, 0x MSAA), Crysis 3 (Very High Quality + FXAA), The Witcher 3 (Ultra Quality, No Hairworks), The Division (Ultra Quality), Grand Theft Auto V (Very High Quality), and Hitman (Ultra Quality), all at 3840x2160]

With the first high-end card of this generation to launch, NVIDIA gets to set the pace for the market. At the risk of being redundant, the GTX 1080 is now the fastest single-GPU card on the market, and even at 4K it wins every single gaming benchmark, typically by a good margin. In practice we’re looking at a 31% performance lead over the GTX 980 Ti – the card the GTX 1080 essentially replaces – and a similar 32% lead over AMD’s Radeon R9 Fury X. Meanwhile against the slightly older GTX 980, the gap is 70%.
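
For anyone who wants to sanity-check figures like these against the charts above, here is a quick sketch of the arithmetic involved; the FPS values are illustrative placeholders rather than our measured results:

```python
# Percentage lead of one card over another, from average frame rates.
# The numbers below are illustrative placeholders, not measured data.
fps_gtx_1080 = 78.6   # placeholder average FPS
fps_gtx_980ti = 60.0  # placeholder average FPS

lead_pct = (fps_gtx_1080 / fps_gtx_980ti - 1) * 100
print(f"GTX 1080 lead over GTX 980 Ti: {lead_pct:.0f}%")  # ~31% with these placeholders
```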

On a generational basis this ends up being very close to the 74% jump in 4K performance going from the GTX 680 to the GTX 980. And although the pricing comparison is not especially flattering for NVIDIA here, it should be evident that NVIDIA isn’t just looking to sell the GTX 1080 as an upgrade for high-end Kepler cards, but as an upgrade for the GTX 980 as well, just 20 months after that card launched.

The Witcher 3 - 1920x1080 - Ultra Quality (No Hairworks)

I also wanted to quickly throw in a 1080p chart, both for the sake of comparing the GTX 1080 to the first-generation 28nm cards, and for gamers who are playing on high refresh rate 1080p monitors. Though this will of course vary from game to game, roughly speaking the GTX 1080 should be 3x faster than the GTX 680 or Radeon HD 7970. This is a good reminder of how much greater a role architectural efficiency has played in recent years, as this is a much larger gain than we saw jumping from 55nm to 40nm or from 40nm to 28nm, both of which were much closer to the historical norm of 2x.
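
The ~3x figure lines up with compounding the two generational gains cited earlier. A quick sketch, treating the 4K-derived 74% and 70% figures as representative (the exact ratio will vary with game and resolution):

```python
# Compounding the two generational gains mentioned above.
gain_680_to_980 = 1.74    # GTX 680 -> GTX 980, ~74% at 4K
gain_980_to_1080 = 1.70   # GTX 980 -> GTX 1080, ~70%

cumulative = gain_680_to_980 * gain_980_to_1080
print(f"GTX 680 -> GTX 1080: ~{cumulative:.1f}x")  # ~3.0x
```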

Load Power Consumption - Crysis 3

Meanwhile, when it comes to power, temperature, and noise, NVIDIA continues to execute very well. Power consumption under Crysis 3 is some 20W higher than the GTX 980 and 52W lower than the GTX 980 Ti, generally in line with NVIDIA’s own TDP ratings once we account for the slightly higher CPU power consumption incurred by the card’s higher performance. The end result is that the GTX 1080 is a bit more power hungry than the GTX 980, but still in the sweet spot NVIDIA has carved out in the gaming market. Broadly speaking, this amounts to a 54% increase in energy efficiency in the case of Crysis 3.
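
As for how an energy-efficiency figure like that 54% is derived, here is a minimal sketch, assuming the comparison point is the GTX 980; the frame rates and wall-power readings are illustrative placeholders chosen to roughly match the relationships described above, not our measured values:

```python
# Energy efficiency treated as performance per watt (whole-system power at the wall).
# All figures below are illustrative placeholders.
fps_1080, watts_1080 = 49.2, 314.0  # placeholder Crysis 3 FPS and system power
fps_980,  watts_980  = 30.0, 294.0  # placeholder Crysis 3 FPS and system power (20W lower)

efficiency_gain = (fps_1080 / watts_1080) / (fps_980 / watts_980) - 1
print(f"Perf-per-watt improvement: {efficiency_gain * 100:.0f}%")  # ~54% with these placeholders
```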

Load GPU Temperature - Crysis 3

Load Noise Levels - Crysis 3

Otherwise, from a design perspective the GTX 1080 Founders Edition carries on from NVIDIA’s high-end GTX 700/900 reference design, allowing NVIDIA to once again offer a superior blower-based solution. NVIDIA’s temperature management technology has not changed relative to Maxwell, so like their other cards, the GTX 1080 tops out in the low 80s (Celsius) for load temperature. More significantly, at 47.5 dB(A) load noise, the card is on par with the GTX 780 and half a dB off of the GTX 980.

Ultimately NVIDIA has designed the GTX 1080 to be a drop-in replacement for the GTX 980, and this data confirms just that, indicating that GTX 1080’s much higher performance comes with only a slight increase in power consumption and no meaningful change in temperatures or acoustics.

Comments

  • bill44 - Wednesday, May 18, 2016 - link

    That's a bummer.
    Currently, I have 3x screens connected: 2x desktop monitors and 1x for the HTPC through the amp.
    If I wanted full hardware HEVC 10-bit decoding and DP 1.3/1.4 for 2x 4K or 5K HDR monitors over 1x cable, I'd need to give up 10bpc support for windowed apps. Or go with something like the Quadro M2000, with none of the latest goodies (DP 1.2, HDMI 2.0b, full HW HEVC 10-bit decode, HDR, etc.).
    It will be quite a while before any new Quadro supports them, regardless of price.
  • Ryan Smith - Wednesday, May 18, 2016 - link

    To be clear, you get 10bpc support for windowed D3D applications, so your HTPC idea will work.

    The distinction is for professional applications such as Photoshop. NVIDIA has artificially restricted 10bpc color support for those applications in order to make it a Quadro feature, and that doesn't change for GTX 1080.
  • sagman12 - Tuesday, May 17, 2016 - link

    "Gamers however won’t be able to get their hands on the card until the 27th – next Friday – with pre-order sales starting this Friday." I hope this is true. I don't want to have to stay up all day hitting F5 until i secure my 1080FE
  • R3MF - Tuesday, May 17, 2016 - link

    Looking at an unopened Gigabyte R9 390X G1 that I picked up for £250 (the standard price for an R9 390X is £330-£360 in UK money).

    This is getting 50-66 percent of the framerate of the GTX 1080, but for slightly less than half the price ($599 for the non-Founders Edition translates to roughly £500 inc. VAT).

    Knowing what we know now about the likely performance of upcoming 14/16nm products, should I be sending it back?
  • cheshirster - Tuesday, May 17, 2016 - link

    Perf/$ would not change drastically.
    But perf/watt will skyrocket. You will probably be able to get the same perf at half the power.
  • R3MF - Tuesday, May 17, 2016 - link

    cheers
  • Marucins - Tuesday, May 17, 2016 - link

    Where are the COMPUTE tests?
  • 3ogdy - Tuesday, May 17, 2016 - link

    Ryan, please consider integrating this into your upcoming review of the 1080. It would be extremely useful:
    Clock the 1080 just like the 980 and then compare their performance. I would like to see how much of that 15-FPS-on-avg increase vs the 980 Ti comes from the clock speed increase and how much of an impact Pascal itself actually has. As it looks right now, the 1080 is a disappointment - I was expecting something truly stellar from nVidia after touting this and that and making serious all-around changes, taking advantage of a process node half as big as the previous one... so far the 1080 is shaping up to be just an incremental upgrade, if not even a sidegrade once clock speed differences are negated. I hope I'm as wrong as one could be, though. Good preview so far!
  • tarqsharq - Tuesday, May 17, 2016 - link

    Yes, this would be very interesting!
  • genekellyjr - Tuesday, May 17, 2016 - link

    Doing some quick calcs w/ BF4 FPS numbers gives 1080 - 111 FPS/MHz/core, 980Ti - 150 FPS/MHz/core, 980 - 75 FPS/MHz/core for 4K. The 1440p and 1080p numbers also follow suit (150/206/102 for 1440p, 231/319/159 for 1080p).

    Essentially, this meaningless number crunching does show that, normalized this way, the 980 Ti is better per MHz per core, at least for BF4. I used the boost clock numbers for the MHz. I hope this gets investigated, because it seems like Nvidia spent a significant chunk of extra transistor budget on other aspects (maybe FP16 compute?) to the detriment of its FP32 gaming chops.
