Never one to shy away from high-end video cards, in 2013 NVIDIA took the next step towards establishing a definitive brand for high-end cards with the launch of the GeForce GTX Titan. Proudly named after NVIDIA’s first massive supercomputer win – the Oak Ridge National Laboratory Titan – it set a new bar in performance. It also set a new bar in build quality for a single-GPU card, and at $999 it set a new bar in price as well. The first true “luxury” video card, the GTX Titan was a card NVIDIA would gladly sell you, provided your pockets were deep enough for it.

Since 2013 the Titan name has stuck around for additional products, though it has never had quite the same impact as the original. The GTX Titan Black was a minor refresh of the GTX Titan, moving to a fully enabled GK110B GPU, and from a consumer/gamer standpoint it was somewhat redundant due to the existence of the nearly identical GTX 780 Ti. Meanwhile the dual-GPU GTX Titan Z was largely ignored, sidelined by its unprecedented $3000 price tag and by AMD’s very impressive Radeon R9 295X2 at half the price.

Now in 2015 NVIDIA is back with another Titan, and this time they are looking to recapture a lot of the magic of the original Titan. First teased back at GDC 2015 in an Epic Unreal Engine session, and used to drive more than a couple of demos at the show, the GTX Titan X gives NVIDIA’s flagship video card line the Maxwell treatment, bringing with it all of the new features and sizable performance gains that we saw from Maxwell last year with the GTX 980. To be sure, this isn’t a reprise of the original Titan – there are some important differences that make the new Titan not the same kind of prosumer card the original was – but from a performance standpoint NVIDIA is looking to make the GTX Titan X as memorable as the original. Which is to say that it’s by far the fastest single-GPU card on the market once again.

NVIDIA GPU Specification Comparison
                        GTX Titan X      GTX 980          GTX Titan Black  GTX Titan
CUDA Cores              3072             2048             2880             2688
Texture Units           192              128              240              224
ROPs                    96               64               48               48
Core Clock              1000MHz          1126MHz          889MHz           837MHz
Boost Clock             1075MHz          1216MHz          980MHz           876MHz
Memory Clock            7GHz GDDR5       7GHz GDDR5       7GHz GDDR5       6GHz GDDR5
Memory Bus Width        384-bit          256-bit          384-bit          384-bit
VRAM                    12GB             4GB              6GB              6GB
FP64 Rate               1/32 FP32        1/32 FP32        1/3 FP32         1/3 FP32
TDP                     250W             165W             250W             250W
GPU                     GM200            GM204            GK110B           GK110
Architecture            Maxwell 2        Maxwell 2        Kepler           Kepler
Transistor Count        8B               5.2B             7.1B             7.1B
Manufacturing Process   TSMC 28nm        TSMC 28nm        TSMC 28nm        TSMC 28nm
Launch Date             03/17/2015       09/18/2014       02/18/2014       02/21/2013
Launch Price            $999             $549             $999             $999

To do this NVIDIA has assembled a new Maxwell GPU, GM200 (aka Big Maxwell). We’ll dive into GM200 in detail a bit later, but from a high-level standpoint GM200 is the GM204 taken to its logical extreme. It’s bigger, faster, and yes, more power hungry than GM204 before it. In fact at 8 billion transistors occupying 601mm², it’s NVIDIA’s largest GPU ever. And for the first time in quite some time, virtually every last square millimeter is dedicated to graphics performance, which coupled with Maxwell’s performance efficiency makes it a formidable foe.

Diving into the specs, GM200 can for most intents and purposes be considered GM204 + 50%. It has 50% more CUDA cores, 50% more memory bandwidth, 50% more ROPs, and roughly 50% more die area. With GTX Titan X packing a fully enabled version of GM200, this gives the card 3072 CUDA cores and 192 texture units (spread over 24 SMMs), paired with 96 ROPs. Meanwhile, considering that even the GM204-backed GTX 980 could outperform the GK110-backed GTX Titans and GTX 780 Ti thanks to Maxwell’s architectural improvements – a Maxwell CUDA core is quite a bit more capable than a Kepler CUDA core in practice, as we’ve seen – GTX Titan X is well positioned to shoot past the previous Titans and the GTX 980 alike.
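
To put that scaling in concrete terms, here is a minimal back-of-the-envelope sketch (Python, for illustration only) comparing the headline GM200 and GM204 unit counts from the specification table above; the ~398mm² GM204 die size used here is the commonly cited figure rather than a number from this article.

```python
# Rough GM200 (GTX Titan X) vs. GM204 (GTX 980) scaling, using the spec table above.
# The ~398mm^2 GM204 die size is the commonly cited figure, not a number from this article.
gm204 = {"cuda_cores": 2048, "rops": 64, "bus_width_bits": 256, "die_mm2": 398}
gm200 = {"cuda_cores": 3072, "rops": 96, "bus_width_bits": 384, "die_mm2": 601}

for key in gm204:
    print(f"{key}: {gm200[key] / gm204[key]:.2f}x")
# cuda_cores: 1.50x, rops: 1.50x, bus_width_bits: 1.50x, die_mm2: 1.51x
```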

Feeding GM200 is a 384-bit memory bus driving 12GB of GDDR5 clocked at 7GHz. Compared to the GTX Titan Black this is one of the few areas where GTX Titan X doesn’t have an advantage in raw specifications – there’s really nowhere to go until HBM is ready – however in this case numbers can be deceptive as NVIDIA has heavily invested in memory compression for Maxwell to get more out of the 336GB/sec of memory bandwidth they have available. The 12GB of VRAM on the other hand continues NVIDIA’s trend of equipping Titan cards with as much VRAM as they can handle, and should ensure that the GTX Titan X has VRAM to spare for years to come. Meanwhile sitting between the GPU’s functional units and the memory bus is a relatively massive 3MB of L2 cache, retaining the same 32K:1 cache:ROP ratio of Maxwell 2 and giving the GPU more cache than ever before to try to keep memory operations off of the memory bus.
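
Both the 336GB/sec figure and the 3MB of L2 fall straight out of those specifications; here’s a quick sketch of the arithmetic (Python, using the numbers from the table above):

```python
# Peak memory bandwidth: bus width (bits) / 8 bits-per-byte * effective data rate (GT/s)
bus_width_bits = 384
data_rate_gtps = 7                           # 7GHz effective GDDR5
print(bus_width_bits / 8 * data_rate_gtps)   # 336.0 GB/sec

# L2 cache: 32KB of cache per ROP (the 32K:1 cache:ROP ratio mentioned above)
rops = 96
print(rops * 32 / 1024)                      # 3.0 MB
```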

As for clockspeeds, as with the rest of the Maxwell lineup GTX Titan X is getting a solid clockspeed bump over its Kepler predecessor. The base clockspeed is up to 1GHz (reported as 1002MHz by NVIDIA’s tools) while the boost clock is 1075MHz. This is roughly 100MHz (~10%) ahead of the GTX Titan Black and will further push the GTX Titan X ahead. However, as is common with larger GPUs, NVIDIA has backed off on clockspeeds a bit compared to the smaller GM204; GTX Titan X won’t clock quite as high as GTX 980, so the overall performance difference on paper is closer to 33% when comparing boost clocks.
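
That ~33% on-paper figure is simply shader count multiplied by boost clock; a minimal sketch of that math (Python, assuming the usual 2 FLOPs per CUDA core per clock for an FMA):

```python
# Paper FP32 throughput at boost clock, assuming 2 FLOPs (one FMA) per CUDA core per clock.
def boost_tflops(cuda_cores, boost_mhz):
    return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

titan_x = boost_tflops(3072, 1075)   # ~6.6 TFLOPS
gtx_980 = boost_tflops(2048, 1216)   # ~5.0 TFLOPS
print(f"{titan_x / gtx_980:.2f}x")   # ~1.33x, i.e. ~33% ahead on paper
```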

Power consumption on the other hand is right where we’d expect it to be for a Titan class card. NVIDIA’s official TDP for GTX Titan X is 250W, the same as the previous single-GPU Titan cards (and other consumer GK110 cards). Like the original GTX Titan, expect GTX Titan X to spend a fair bit of its time TDP-bound; 250W is generous – a 51% increase over GTX 980 – but then again so is the number of transistors that need to be driven. Overall this puts GTX Titan X on the high side of the power consumption curve (just like GTX Titan before it), but it’s the price for that level of performance. Practically speaking 250W is something of a sweet spot for NVIDIA, as they know how to efficiently dissipate that much heat and it ensures GTX Titan X is a drop-in replacement for GTX Titan/780 in any existing system designs.

Moving on, the competitive landscape right now greatly favors NVIDIA. With AMD’s high-end lineup having last been refreshed in 2013 and the GM204-based GTX 980 already ahead of the Radeon R9 290X, GTX Titan X further builds on NVIDIA’s lead. No other single-GPU card is able to touch it, and even the GTX 980 is left in the dust. This leaves NVIDIA as the uncontested custodian of the single-GPU performance crown.

The only things that can really threaten the GTX Titan X at this time are multi-GPU configurations such as GTX 980 SLI and the Radeon R9 295X2, the latter of which is down to ~$699 these days and is certainly a potential spoiler for GTX Titan X. To be sure, when multi-GPU works right either of these configurations can shoot past a single GTX Titan X; however when multi-GPU scaling falls apart we have the usual problem of such setups falling well behind a single powerful GPU. Such setups are always a risk in that regard, and consequently as a single-GPU card GTX Titan X offers the best bet for consistent performance.

NVIDIA of course is well aware of this, and with GTX 980 already fending off the R9 290X, the company is free to price GTX Titan X as they please. GTX Titan X is being positioned as a luxury video card (like the original GTX Titan) and NVIDIA is none too ashamed to price it accordingly. Complicating matters slightly however is the fact that unlike the Kepler Titan cards, the GTX Titan X is not a prosumer-level compute monster. As we’ll see it lacks its predecessor’s awesome double precision performance, so NVIDIA does need to treat this latest Titan as a consumer gaming card rather than a gaming + entry-level compute card as was the case with the original GTX Titan.

In any case, with the previous GTX Titan and GTX Titan Black launching at $999, it should come as no surprise that this is where GTX Titan X is launching as well. NVIDIA saw quite a bit of success with the original GTX Titan at this price, and with GTX Titan X they are shooting for the same luxury market once again. Consequently GTX Titan X will be the fastest single-GPU card you can buy, but it will once again cost quite a bit to get. For our part we'd like to see GTX Titan X priced lower - say closer to the $700 price tag of GTX 780 Ti - but it's hard to argue with NVIDIA's success on the original GTX Titan.

Finally, for launch availability this will be a hard launch with a slight twist. Rather than starting with retail and etail partners such as Newegg, NVIDIA is going to kick things off by selling cards directly, while partners will start to sell cards in a few weeks. For a card like GTX Titan X, NVIDIA selling cards directly is not a huge stretch; with all cards being identical reference cards, partners largely serve as distributors and technical support for buyers.

Meanwhile selling GTX Titan X directly also allowed NVIDIA to keep the card under wraps for longer while still offering a hard launch, as it left fewer avenues for leaks through partners. On the other hand I'm curious how partners will respond to being cut out of the loop like this, even if it is just temporary.

Spring 2015 GPU Pricing Comparison
AMD                  Price    NVIDIA
                     $999     GeForce GTX Titan X
Radeon R9 295X2      $699
                     $550     GeForce GTX 980
Radeon R9 290X       $350
                     $330     GeForce GTX 970
Radeon R9 290        $270
Comments

  • Kevin G - Wednesday, March 18, 2015 - link

    There was indeed a bigger chip due closer to the GK104/GTX 680's launch: the GK100. However, it was cancelled due to bugs in the design. A fixed revision eventually became the GK110, which was ultimately released as the Titan/GTX 780.

    After that there have been two more revisions. The GK110B is a quick respin from which all of the fully enabled dies stem (Titan Black/GTX 780 Ti). Then late last year nVidia surprised everyone with the GK210, which has a handful of minor architectural improvements (larger register files, etc.).

    The moral of the story is that building large dies is hard and takes lots of time to get right.
  • chizow - Monday, March 23, 2015 - link

    We don't know what happened to GK100. It is certainly possible, as I've guessed aloud numerous times, that AMD's 7970 and its overall lackluster pricing/performance afforded Nvidia the opportunity to scrap GK100 and respin it into GK110 while trotting GK104 out as its flagship, because GK104 was close enough to AMD's best and GK100 may have had problems as you described. All of that led to considerable doubt whether or not we would see a big Kepler, a sentiment that was even dishonestly echoed by some Nvidia employees I got into it with on their forums.

    Only in October 2012 did we see signs of Big Kepler in the Titan supercomputer with the K20X, but still no sign of a GeForce card. There's no doubt that a big die takes time, but Nvidia had always led with their big chip first, ever since G80, and this was the first time they deviated from that strategy while parading what was clearly their 2nd best, mid-range performance ASIC as the flagship.

    Titan X sheds all that nonsense and goes back to their gaming roots. It is their best effort, up front, no BS. 8Bn transistors Inspired by Gamers and Made by Nvidia. So as someone who buys GeForce for gaming first and foremost, I'm going to reward them for those efforts so they keep rewarding me with future cards of this kind. :)
  • Railgun - Wednesday, March 18, 2015 - link

    With regards to the price, 12GB of RAM isn't justification enough for it. Memory isn't THAT expensive in the grand scheme of things. What the Titan was originally isn't what the Titan X is now; they can't be seen as the same lineage. If you want to say memory is the key, the original Titan with its 6GB could be seen as still more than relevant today. Crysis is 45% faster in 4K with the X than with the original. Is that the chip itself or the memory helping? I vote the former, given the 690 is 30% faster in 4K in the same game than the original Titan, with only 4GB of total memory. VRAM isn't going to really be relevant for a bit, other than for those that are running stupidly large spans. It's a shame, as Ryan touches on VRAM usage in Middle Earth but doesn't actually indicate what's being used. There too, the 780Ti beats the original Titan sans huge VRAM reserves. Granted, barely, but the point is that VRAM isn't the reason. This won't be relevant for a bit I think.

    You can't compare an aftermarket price to how an OEM prices their products. The top tier card other than the TiX is the 980, and it has been mentioned ad nauseam that the TiX is NOT worth 80% more given its performance. If EVGA wants to OC a card out of their shop and charge 45% more than a stock clocked card, then buyer beware if it's not a 45% gain in performance. I for one don't see the benefit of a card like that. The convenience isn't there given the tools and community support for OCing something one's self.

    I too game on 25x14 and there've been zero issues regarding VRAM, or the lack thereof.
  • chizow - Monday, March 23, 2015 - link

    I didn't say VRAM was the only reason, I said it was one of the reasons. The bigger reason for me is that it is the FULL BOAT GM200 front and center. No waiting. No cut cores. No cut SMs for compute. No cut down part because of TDP. It's 100% of it up front, 100% of it for gaming. I'm sold and onboard until Pascal. That really is the key factor: who wants to wait for unknown commodities and timelines if you know this will set you within +/-10% of the next fastest part's performance, and you can guarantee you get it today for maybe a 25-30% premium? I guess it really depends on how much you value your current and near-future gaming experience. I knew from the day I got my ROG Swift (with 2x670 SLI) that I would need more to drive it. The 980 was a bit of a sidegrade in absolute performance and I still knew I needed more perf, and now I have it with Titan X.

    As for VRAM, 12GB is certainly overkill today, but I'd say 6GB isn't going to be enough soon enough. Games are already pushing 4GB (SoM, FC4, AC:U) and that's still with last-gen type textures. Once you start getting console ports with PC texture packs I could see 6 and 8GB being pushed quite easily, as that is the target framebuffer for consoles (2+6). So yes, while 12GB may be too much, 6GB probably isn't enough, especially once you start looking at 4K and Surround.

    Again, if you don't think the price is worth it over a 980 that's fine and fair, but the reality of it is, if you want better single-GPU performance there is no alternative. A 2nd 980 for SLI is certainly an option, but for my purposes and my resolution, I would prefer to stick to a single-card solution if possible, which is why I went with a Titan X and will be selling my 980 instead of picking up a 2nd one as I originally intended.

    Best part about Titan X is it gives another choice and a target level of performance for everyone else!
  • Frenetic Pony - Tuesday, March 17, 2015 - link

    They could've halved the RAM, dropped the price by $200, and done a lot better without much if any performance hit.
  • Denithor - Wednesday, March 18, 2015 - link

    LOL.

    You just described the GTX 980 Ti, which will likely launch within a few months to answer the 390X.
  • chizow - Wednesday, March 18, 2015 - link

    @Frenetic Pony, maybe now, but what about once DX12 drops and games are pushing over 6GB? We already see games saturating 4GB, and we still haven't seen next-gen engine games like UE4. Why compromise for a few hundred less? You haven't seen all the complaints from 780Ti users about how 3GB isn't enough anymore? Shouldn't be a problem for this card, which is just 1 less thing to worry about.
  • LukaP - Thursday, March 19, 2015 - link

    Games don't push 4GB... Check the LTT ultrawide video, where he barely got Shadow of Mordor on ultra to go past 4GB on 3 ultrawide 1440p screens.

    And as a game dev I can tell you, with proper optimisations, more than 4GB on a GPU is insane, unless you just load stuff in with a predictive algorithm to avoid PCIe bottlenecks.

    And please do show me where a 780Ti user isn't happy with his card's performance at 1080-1600p. Because the card does, and will continue to, perform great at those resolutions, since games won't really advance, due to consoles limiting things again.
  • LukaP - Thursday, March 19, 2015 - link

    Also, DX12 won't make games magically use more VRAM. All it really does is make the CPU and GPU communicate better. It won't magically make games run or look better; both of those are up to the devs, and the "look better" part is certainly not the textures or polycounts. It's merely the amount of draw calls per frame going up, meaning more UNIQUE objects (as opposed to simply more objects, which can be achieved through instancing easily in any modern engine, but Ubisoft haven't learned that yet).
  • chizow - Monday, March 23, 2015 - link

    DX12 raises the bar for all games by enabling better visuals; you're going to get better top-end visuals across the board. Certainly you don't think UE4, when it debuts, will have the same reqs as DX11-based games on UE3?

    Even if you have the same size textures as before (2K or 4K assets, as is common now), the fact that you are drawing more polygons, enabled by DX12's lower overhead and higher draw call/poly capabilities, means those polygons need to be textured, meaning a higher VRAM requirement unless you are using the same textures over and over again.

    Also, since you are a game dev, you would know devs are going more and more towards bindless textures or megatextures that specifically make great use of textures staying resident in local VRAM for faster accesses, rather than having to optimize and cache/load/discard them.
