It’s been quite a while since we’ve looked at triple-GPU CrossFire and SLI performance – or for that matter, at GPU scaling in depth. While NVIDIA in particular likes to promote multi-GPU configurations as a practical upgrade path, such configurations are still almost always the domain of the high-end gamer. At $700 we have the recently launched GeForce GTX 590 and Radeon HD 6990, dual-GPU cards whose existence hinges on how well games will scale across multiple GPUs. Beyond that we move into the truly exotic: triple-GPU configurations using three single-GPU cards, and quad-GPU configurations using a pair of the aforementioned dual-GPU cards. If you have the money, NVIDIA and AMD will gladly sell you upwards of $1500 in video cards to maximize your gaming performance.

These days multi-GPU scaling is a given – at least to some extent. Below the price of a single high-end card our recommendation is always going to be to get a bigger card before you get more cards: multi-GPU scaling is rarely perfect, and with cutting-edge games there’s often a lag between a game’s release and the release of a driver profile that enables multi-GPU scaling. Once we’re looking at the Radeon HD 6900 series or the GF110-based GeForce GTX 500 series though, going faster is no longer an option, and thus we have to look at going wider.

Today we’re going to be looking at the state of GPU scaling for dual-GPU and triple-GPU configurations. While we accept that multi-GPU scaling will rarely (if ever) hit 100%, just how much performance are you getting out of that 2nd or 3rd GPU versus how much money you’ve put into it? That’s the question we’re going to try to answer today.
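As a back-of-the-envelope illustration of how that question gets framed, scaling efficiency and cost-per-frame fall out of a little arithmetic. The card price and FPS figures in this sketch are hypothetical placeholders, not results from our testing:

```python
# Illustrative sketch only: the prices and FPS figures below are
# hypothetical, not benchmark results from this article.
def scaling_report(single_fps, multi_fps, gpu_count, price_per_card):
    """Compare multi-GPU scaling against what the extra cards cost."""
    ideal_fps = single_fps * gpu_count          # perfect 100% scaling
    efficiency = multi_fps / ideal_fps          # 1.0 = perfect scaling
    speedup = multi_fps / single_fps
    dollars_per_fps = (price_per_card * gpu_count) / multi_fps
    print(f"{gpu_count} GPU(s): {multi_fps:5.1f} fps, {speedup:.2f}x speedup, "
          f"{efficiency:.0%} efficiency, ${dollars_per_fps:.2f}/fps")

scaling_report(60.0,  60.0, 1, 500)  # single-card baseline
scaling_report(60.0, 110.0, 2, 500)  # ~92% scaling
scaling_report(60.0, 150.0, 3, 500)  # ~83% scaling
```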

From the perspective of a GPU review, we find ourselves in an interesting situation in the high-end market right now. AMD and NVIDIA just finished their major pushes for this high-end generation, but the CPU market is not in sync. In January Intel launched their next-generation Sandy Bridge architecture, but unlike the past launches of Nehalem and Conroe, the high-end market was passed over at first. For $330 we can get a Core i7 2600K and crank it up to 4GHz or more, but what we get to pair it with is lacking.

Sandy Bridge supports only a single PCIe x16 link coming from the CPU – an awesome CPU held back by a limited amount of off-chip connectivity: DMI and that one x16 link. For two GPUs we can split that into x8 and x8, which shouldn’t be too bad. But what about three GPUs? PCIe bridges can mitigate the issue somewhat by allowing the GPUs to talk to each other at x16 speeds and by dynamically allocating CPU-to-GPU bandwidth based on need, but at the end of the day we’re splitting a single x16 link across three GPUs.

The alternative is to take a step back and work with Nehalem and the X58 chipset. Here we have 32 PCIe lanes to work with, doubling the amount of CPU-to-GPU bandwidth, but the tradeoff is the CPU. Gulftown and Nehalem are capable chips in their own right, but per-clock the Nehalem architecture is normally slower than Sandy Bridge, and neither chip can clock quite as high on average. Gulftown does offer more cores – 6 versus 4 – but very few games are held back by the number of cores. Instead the ideal configuration is one that maximizes the performance of a few cores.

Later this year Sandy Bridge-E will correct this by offering a Sandy Bridge platform with more memory channels, more PCIe lanes, and more cores; the best of both worlds. Until then it comes down to choosing between one of two platforms: a faster CPU or more PCIe bandwidth. For dual-GPU configurations this should be an easy choice, but for triple-GPU configurations it’s not quite as clear cut. For now we’re going to be looking at the latter by testing on our trusty Nehalem + X58 testbed, which largely eliminates any PCIe bandwidth bottleneck in exchange for a potential CPU bottleneck.
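To put rough numbers on the bandwidth side of that tradeoff, here’s a quick sketch using the nominal PCIe 2.0 figure of 500 MB/s per lane per direction; real-world throughput is lower, and the lane splits shown are typical examples, not an exhaustive list:

```python
PCIE2_MBPS_PER_LANE = 500  # nominal PCIe 2.0: 500 MB/s per lane, per direction

def show_split(platform, lane_split):
    """Print each GPU's link width and one-way bandwidth for a lane split."""
    for i, lanes in enumerate(lane_split, start=1):
        print(f"{platform}: GPU {i} on x{lanes} = "
              f"{lanes * PCIE2_MBPS_PER_LANE / 1000:.1f} GB/s each way")

show_split("SNB 2-way", [8, 8])       # 16 CPU lanes split in half
show_split("SNB 3-way", [8, 4, 4])    # without a bridge chip
show_split("X58 2-way", [16, 16])     # 32 lanes: two full x16 links
show_split("X58 3-way", [16, 8, 8])
```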

Moving on, today we’ll be looking at multi-GPU performance under dual-GPU and triple-GPU configurations; quad-GPU will have to wait. Normally we only have two reference-style cards of any product on hand, so we’d like to thank Zotac and PowerColor for providing a reference-style GTX 580 and Radeon HD 6970 respectively.

Comments

  • A5 - Sunday, April 3, 2011 - link

    I'm guessing Ryan doesn't want to spend a month redoing all of their benchmarks over all the recent cards. Also the only one of your more recent games that would be at all relevant is Shogun 2 - SC2 runs well on everything, no one plays Arma 2, and the rest are console ports...
  • slickr - Sunday, April 3, 2011 - link

    Apart from Shift, none of those games is a console port.

    Mafia 2, SC2, Arma 2, and Shogun 2 are PC-only; Dead Space is also not a console port, it's a PC game ported to consoles.
  • Ryan Smith - Sunday, April 3, 2011 - link

    We'll be updating the benchmark suite in the next couple of months, in keeping with our twice-a-year schedule. Don't expect us to drop Civ V or Crysis, however.
  • Dustin Sklavos - Monday, April 4, 2011 - link

    Jarred and I have gone back and forth on this stuff to get our own suite where it needs to be, and the games Ryan's running have sound logic behind them. For what it's worth...

    Aliens vs. Predator isn't worth including because it doesn't really leverage that much of DX11 and nobody plays it because it's a terrible game. Crysis Warhead STILL stresses modern gaming systems. As long as it does that it'll be useful, and at least provides a watermark for the underwhelming Crysis 2.

    Battleforge and Shogun 2 I'm admittedly not sure about, same with HAWX and Shift 2.

    Civ 5 should stay, but StarCraft II should definitely be added. There's a major problem with SC2, though: it's horribly, HORRIBLY CPU bound. SC2 is criminally badly coded given how long it's been in the oven and doesn't scale AT ALL with more than two cores. I've found situations even with Sandy Bridge hardware where SC2 is more liable to demonstrate how much the graphics drivers and subsystem hit the CPU rather than how the graphics hardware itself performs. Honestly my only justification for including it in our notebook/desktop suites is because it's so popular.

    Going from Mass Effect 2 to Dead Space 2 doesn't make any sense; Dead Space 2 is a godawful console port while Mass Effect 2 is currently one of the best, if not THE best, optimized Unreal Engine 3 games on the PC. ME2 should get to stay almost entirely by virtue of being an Unreal Engine 3 representative, ignoring its immense popularity.

    Wolfenstein is currently the most demanding OpenGL game on the market. It may seem an oddball choice, but it really serves the purpose of demonstrating OpenGL performance. Arma 2 doesn't fill this niche.

    Mafia II's easy enough to test that it couldn't hurt to add it.
  • JarredWalton - Monday, April 4, 2011 - link

    Just to add my two cents....

    AvP is a lousy game, regardless of benchmarks. I also toss HAWX and HAWX 2 into this category, but Ryan has found a use for HAWX in that it puts a nice, heavy load on the GPUs.

    Metro 2033 and Mafia II aren't all that great either, TBH, and so far Crysis 2 is less demanding *and* less fun than either of the two prequels. (Note: I finished both Metro and Mafia, and I'd say both rate something around 70%. Crysis 2 is looking about 65% right now, but maybe it'll pick up as the game progresses.)
  • c_horner - Sunday, April 3, 2011 - link

    I'm waiting for the day when someone actually reports on the perceived usability of multi-GPU setups in comparison to a single high-end GPU.

    What I mean is this: oftentimes, even though you might be getting an arbitrarily higher frame count, the lag and uneven pacing mean the game isn't anywhere near as playable and enjoyable as one that runs properly on a single GPU.

    Having tried SLI in the past, I was left with a rather large distaste for plopping down the cost of another high-end card. Not all games worked properly, not all games scaled well, some games would scale well in the areas they could render easily but minimum frame rates sucked, etc., and the list goes on.

    When are some of these review sites going to post subjective and real world usage information instead of a bunch of FPS comparisons?

    There's more to the story here.
  • semo - Sunday, April 3, 2011 - link

    I think this review covers some of your concerns. It seems that AMD, with their latest drivers, achieves better minimum FPS scores than NVIDIA.

    I've never used SLI myself, but I would think that you wouldn't be able to notice the latency due to more than one GPU in game. Wouldn't such latencies be in the microseconds?
  • SlyNine - Monday, April 4, 2011 - link

    And yet, those microseconds seemed like macroseconds. Microstutter was one of the most annoying things ever! I hated my 8800GT SLI experience.

    Haven't been back to multi videocard setups since.
  • DanNeely - Monday, April 4, 2011 - link

    Look at HardOCP.com's reviews. Instead of FPS numbers from canned benchmarks, they play the games and list the highest settings that were acceptable. Minimum FPS levels and, for SLI/CrossFire, microstuttering problems can push their recommendations down, because even when the average numbers look great the game might not actually be playable.
  • robertsu - Sunday, April 3, 2011 - link

    How is microstuttering with 3 GPUs? Is there any in these new versions?
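
As an aside on the microstutter questions raised above: microstutter is a frame-pacing problem, so it shows up in frame times rather than in average FPS. A minimal sketch with synthetic frame times (illustrative numbers only, not measurements) shows how two setups with identical average FPS can feel very different:

```python
import statistics

def describe(label, frame_times_ms):
    """Average FPS hides uneven frame pacing; frame-to-frame deltas expose it."""
    avg_fps = 1000 / statistics.mean(frame_times_ms)
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    print(f"{label}: {avg_fps:.0f} fps average, "
          f"mean frame-to-frame swing {statistics.mean(deltas):.1f} ms")

# Both synthetic sequences average ~60 fps (16.7 ms per frame)...
describe("single GPU", [16.7] * 10)         # evenly paced: swing ~0 ms
describe("2-way AFR",  [8.0, 25.4] * 5)     # alternating fast/slow frames
```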
