GM200 - All Graphics, Hold The Double Precision

Before diving into our look at the GTX Titan X itself, I want to spend a bit of time talking about the GM200 GPU. GM200 is a very interesting GPU, and not for the usual reasons. In fact you could say that GM200 is remarkable for just how unremarkable it is.

From a semiconductor manufacturing standpoint we’re still at a standstill on 28nm for at least a little bit longer, pushing 28nm into its 4th year and creating all sorts of knock-on effects. We’ve droned on about this for some time now, so we won’t repeat ourselves, but ultimately what it means for consumers is that AMD and NVIDIA have needed to make do with the tools they have. In lieu of generational jumps in manufacturing, they have focused on architectural efficiency and on wringing everything they can out of 28nm.

For NVIDIA those improvements came in the form of the company’s Maxwell architecture, which makes a concentrated push for energy and architectural efficiency to get the most out of the process. In assembling GM204 NVIDIA built the true successor to GK104, putting together a pure graphics chip. From a design standpoint NVIDIA spent their energy efficiency gains on growing GM204’s die size without increasing power, going from 294mm2 and 3.5B transistors to 398mm2 and 5.2B transistors. With a larger die and larger transistor budget, NVIDIA was able to greatly increase performance by laying down a larger number of high performance (and relatively large) Maxwell SMMs.

For GM206 and the GTX 960, on the other hand, NVIDIA banked the bulk of their energy savings, building what’s best described as half of a GM204. The result was a GPU that didn’t offer as big a jump in performance over its predecessor (GK106), but that brought power usage down and kept costs in check.

Not Pictured: The 96 FP64 ALUs

But for Big Maxwell, neither option was open to NVIDIA. At 551mm2 GK110 was already a big GPU, so a large (33%) increase in die size like with GM204 was not practical. Neither was keeping the die at roughly the same area and building a straight Maxwell version of GK110, gaining only limited performance in the process. Instead NVIDIA has taken a third option, and this is what makes GM200 so interesting.

For GM200 NVIDIA’s path of choice has been to divorce graphics from high performance FP64 compute. Big Kepler was a graphics powerhouse in its own right, but it also spent quite a bit of die area on FP64 CUDA cores and other compute-centric functionality. This allowed NVIDIA to use a single GPU across the entire spectrum – GeForce, Quadro, and Tesla – but it also meant that GK110 was a bit of a jack-of-all-trades. Consequently, when faced with another round of 28nm chips and intent on spending their Maxwell power savings on more graphics resources (à la GM204), NVIDIA built a big graphics GPU. Big Maxwell is not the successor to Big Kepler; rather it’s a really (really) big version of GM204.

GM200 is 601mm2 of graphics, and this is what makes it remarkable. There are no special compute features here that only Tesla and Quadro users will tap into (save perhaps ECC); rather it really is GM204 with 50% more GPU. This means we’re looking at the same SMMs as on GM204, featuring 128 FP32 CUDA cores per SMM, a 256KB (4 x 64KB) register file, and just 4 FP64 ALUs per SMM, leading to a puny native FP64 rate of just 1/32. As a result, all of that space in GK110 occupied by FP64 ALUs and other compute hardware – and NVIDIA won’t reveal quite how much space that was – has been reinvested in FP32 ALUs and other graphics-centric hardware.

GPU                    Die Size    Native FP64 Rate
GM200 (Big Maxwell)    601mm2      1/32
GK110 (Big Kepler)     551mm2      1/3
GF110 (Big Fermi)      520mm2      1/2
GT200 (Big Tesla)      576mm2      1/8
G80                    484mm2      N/A
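
To put the ALU math in concrete terms, here’s a minimal back-of-the-envelope sketch in Python – purely illustrative, with the per-SMM figures taken from the text above, the SMM count derived from GM200’s 3072 stream processors, and GK110’s widely published 192 FP32 + 64 FP64 per-SMX breakdown assumed for the comparison:

```python
# Quick sanity check on the FP64 rates in the table above.
# GM200: 128 FP32 CUDA cores + 4 FP64 ALUs per SMM (from the text);
# 3072 FP32 cores across the chip implies 24 SMMs and 96 FP64 ALUs total.

def native_fp64_rate(fp32_per_sm: int, fp64_per_sm: int) -> str:
    """Express the FP64:FP32 throughput ratio as a 1/N fraction."""
    return f"1/{fp32_per_sm // fp64_per_sm}"

gm200_smms = 3072 // 128            # 24 SMMs
gm200_fp64_alus = gm200_smms * 4    # the 96 FP64 ALUs "not pictured" above

print(f"GM200: {gm200_smms} SMMs, {gm200_fp64_alus} FP64 ALUs, "
      f"native FP64 rate {native_fp64_rate(128, 4)}")          # -> 1/32

# GK110's SMX pairs 192 FP32 cores with 64 FP64 units, hence its 1/3 rate.
print(f"GK110: native FP64 rate {native_fp64_rate(192, 64)}")  # -> 1/3
```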

It’s this graphics “purification” that has enabled NVIDIA to improve their performance over GK110 by 50% without increasing power consumption, and with only a moderate 50mm2 (9%) increase in die size. In fact, in putting together GM200, NVIDIA has done something they haven’t done in years. The last flagship GPU from the company to dedicate this little space to FP64 was G80 – heart of the GeForce 8800 GTX – which in fact didn’t have any FP64 hardware at all. In other words this is the “purest” flagship graphics GPU in 9 years.
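
The same sort of quick arithmetic, using the die and transistor figures quoted earlier in this article, shows just how literally GM200 is “GM204 plus 50%” (again, just an illustrative sketch):

```python
# Die area and transistor scaling, using the figures quoted in this article.
gm204 = {"area_mm2": 398, "transistors_b": 5.2}
gk110 = {"area_mm2": 551}
gm200 = {"area_mm2": 601, "transistors_b": 8.0}

# GM200 vs GK110: the "moderate" die size increase mentioned above.
delta = gm200["area_mm2"] - gk110["area_mm2"]
print(f"GM200 vs GK110: +{delta}mm2 ({delta / gk110['area_mm2']:.0%} larger)")       # +50mm2 (9%)

# GM200 vs GM204: roughly "GM204 with 50% more GPU".
print(f"Area:        {gm200['area_mm2'] / gm204['area_mm2']:.2f}x GM204")            # ~1.51x
print(f"Transistors: {gm200['transistors_b'] / gm204['transistors_b']:.2f}x GM204")  # ~1.54x
```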

Now to be clear, when we say GM200 favors graphics we don’t mean exclusively, but rather that it favors graphics and its associated FP32 math over FP64 math. GM200 is still an FP32 compute powerhouse, unlike anything else in NVIDIA’s lineup, and we don’t expect it will be matched by anything else from NVIDIA for quite some time. For that reason I wouldn’t be too surprised if we see a Tesla card using it aimed at FP32 users such as the oil & gas industry – something NVIDIA has done once before with the Tesla K10 – but you won’t be seeing GM200 in the successor to Tesla K40.

This is also why the GTX Titan X is arguably not a prosumer-level card like the original GTX Titan. NVIDIA shipped the original GTX Titan with its full 1/3-rate FP64 enabled, having it pull double duty as the company’s consumer graphics flagship while also serving as their entry-level FP64 card. For GTX Titan X this is not an option, since GM200 is not a high performance FP64 GPU, and as a result the card is riding only on its graphics and FP32 compute capabilities. That doesn’t mean NVIDIA won’t also try to pitch it as a high-performance FP32 card for users who don’t need Tesla, but it won’t be the same kind of entry-level compute card that the original GTX Titan was. In other words, GTX Titan X is much more consumer focused than the original GTX Titan.

Tesla K80: The Only GK210 Card

Looking at the broader picture, I’m left to wonder if this is the start of a permanent divorce between graphics/FP32 compute and FP64 compute in the NVIDIA ecosystem. Until recently NVIDIA had always piggybacked compute on their flagship GPUs as a means of bootstrapping the launch of the Tesla division. By putting compute in their flagship GPU, even if NVIDIA couldn’t sell those GPUs to compute customers they could sell them to GeForce/Quadro graphics customers. This limited the amount of total risk the company faced, as they’d never end up with a bunch of compute GPUs they couldn’t sell.

However in the last 6 months we’ve seen a shift from NVIDIA at both ends of the spectrum. In November we saw the launch of the Tesla K80, a dual-GPU card featuring the GK210 GPU, a reworked version of GK110 that doubled the register file and shared memory sizes for better performance. GK210 would not come to GeForce or Quadro (though in theory it could have), making it NVIDIA’s first compute-centric GPU. And now with the launch of GM200 we have distinct graphics and compute GPUs from NVIDIA.

NVIDIA GPUs By Compute
                                     GM200        GK210       GK110B
Stream Processors                    3072         2880        2880
Memory Bus Width                     384-bit      384-bit     384-bit
Register File Size (Per SM)          4 x 64KB     512KB       256KB
Shared Memory / L1 Cache (Per SM)    96KB + 24KB  128KB       64KB
Transistor Count                     8B           7.1B(?)     7.1B
Manufacturing Process                TSMC 28nm    TSMC 28nm   TSMC 28nm
Architecture                         Maxwell      Kepler      Kepler
Tesla Products                       None         K80         K40

The remaining question at this point is what happens from here. Was this divorce of compute and graphics a temporary action, the result of being stuck on the 28nm process for another generation? Or was it the first generation in a permanent divorce between graphics and compute, and consequently a divorce between GeForce/Quadro and Tesla? Is NVIDIA finally ready to let Tesla stand on its own?

With Pascal NVIDIA could very well build a jack-of-all-trades style GPU once more. However having already divorced graphics and compute for a generation, merging them again would eat up some of the power and die space benefits from going to 16nm FinFET, power and space that NVIDIA would likely want to invest in greater separate improvements in graphics and compute performance. We’ll see what Pascal brings, but I suspect GM200 is the shape of things to come for GeForce and the GTX Titan lineup.

Comments

  • looncraz - Tuesday, March 17, 2015 - link

    If the most recent slides (allegedly leaked from AMD) hold true, the 390x will be at least as fast as the Titan X, though with only 8GB of RAM (but HBM!).

    A straight 4096SP GCN 1.2/3 GPU would be a close match-up already, but any other improvements made along the way will potentially give the 390X a fairly healthy launch-day lead.

    I think nVidia wanted to keep AMD in the dark as much as possible so that they could not position themselves to take more advantage of this, but AMD decided to hold out, apparently, until May/June (even though they apparently already have some inventory on hand) rather than give nVidia a chance to revise the Titan X before launch.

    nVidia blinked, it seems, after it became apparent AMD was just going to wait out the clock with their current inventory.
  • zepi - Wednesday, March 18, 2015 - link

    Unless AMD has achieved a considerable increase in perf/W, they are going to have a really hard time tuning those 4k shaders to a reasonable frequency without it being a 450W card.

    Not that being a 500W card is necessarily a deal breaker for everyone, but in practice cooling a 450W card without causing ear-shattering levels of noise is very difficult compared to cooling a 250W card.

    Let us wait and hope, since AMD really would need to get a break and make some money on this one...
  • looncraz - Wednesday, March 18, 2015 - link

    Very true. We know that with HBM there should already be a fairly beefy power savings (~20-30W vs 290X it seems).

    That doesn't buy them room for 1,280 more SPs, of course, but it should get them a healthy 256 of them. Then, GCN 1.3 vs 1.1 should have power advantages as well. GCN 1.2 vs 1.0 (R9 285 vs R9 280) with 1792 SPs showed a 60W improvement; if we assume GCN 1.1 to GCN 1.3 shows a similar trend, the 390X should be pulling only about 15W more than the 290X with the rumored specs, without any other improvements.

    Of course, the same math says the 290X should be drawing 350W, but that's because it assumes all the power is in the SPs... But I do think it reveals that AMD could possibly do it without drawing much, if any, more power without making any unprecedented improvements.
  • Braincruser - Wednesday, March 18, 2015 - link

    Yeah, but the question is: how well will the memory survive on top of a 300W GPU?
    Because the first part in a graphics card to die from high temperatures is the VRAM.
  • looncraz - Thursday, March 19, 2015 - link

    It will be to the side, on a 2.5d interposer, I believe.

    GPU thermal energy will move through the path of least resistance (technically, to the area with the greatest deltaT, but regulated by the material thermal conductivity coefficient), which should be into the heatsink or water block. I'm not sure, but I'd think the chips could operate in the same temperature range as the GPU, but maybe not. It may be necessary to keep them thermally isolated. Which shouldn't be too difficult, maybe as simple as not using thermal pads at all for the memory and allowing them to passively dissipate heat (or through interposer mounted heatsinks).

    It will be interesting to see what they have done to solve the potential issues, that's for sure.
  • Xenonite - Thursday, March 19, 2015 - link

    Yes, I agree that AMD would be able to absolutely destroy NVIDIA on the performance front if they designed a 500W GPU and left the PCB and waterblock design to their AIB partners.

    I would also absolutely love to see what kind of performance a 500W or even a 1kW graphics card would be able to muster; however, since a relatively constant 60fps presented with less than about 100ms of total system latency has been deemed sufficient for a "smooth and responsive" gaming experience, I simply can't imagine such a card ever seeing the light of day.
    And while I can understand everyone likes to pretend that they are saving the planet with their <150W GPUs, the argument that such a TDP would be very difficult to cool does not really hold much water IMHO.

    If, for instance, the card was designed from the ground up to dissipate its heat load over multiple 200W~300W GPUs, connected via a very-high-speed, N-directional data interconnect bus, the card could easily and (most importantly) quietly be cooled with chilled-watercooling dissipating into a few "quad-fan" radiators. Practically, 4 GM200-size GPUs could be placed back-to-back on the PCB, with each one rendering a quarter of the current frame via shared, high-speed frame buffers (thereby eliminating SLI-induced microstutter and "frame-pacing" lag). Cooling would then be as simple as installing 4 standard gpu-watercooling loops with each loop's radiator only having to dissipate the TDP of a single GPU module.
  • naxeem - Tuesday, March 24, 2015 - link

    They did solve that problem with a water-cooling solution. 390X WCE is probably what we'll get.
  • ShieTar - Wednesday, March 18, 2015 - link

    Who says they don't allow it? EVGA has already announced two special models, a superclocked one and one with a water-cooling block:
  • Wreckage - Tuesday, March 17, 2015 - link

    If by fast you mean June or July. I'm more interested in a 980ti so I don't need a new power supply.
  • ArmedandDangerous - Saturday, March 21, 2015 - link

    There won't ever be a 980 Ti if you understand Nvidia's naming schemes. Ti's are for unlocked parts; there's nothing to further unlock on the 980's GM204.
