It can play Crysis maxed out. If this thing can get 38FPS (playable) on "gamer quality" at 2560 res with 4xAA, then it can certainly run Crysis maxed out at the much lower 1080p res.
The 480 that I owned could do this, and one of my 5870s can also do it... but with no AA.
What's real and what's not real for you? The 6850 and 6870 are essentially more shaders, ROPs and higher clocks with a few architectural improvements. It will be the same for the 6900 series.
The GTX 580 is the same story: a few architectural improvements, more shaders, more texture units and higher clocks.
Credit must be given where it's due, and even though this was supposed to be the 480, it's still a great product in its own right: it's faster, cooler, quieter and less power hungry.
Umm, the 6850 and 6870 are DECREASES in everything but ROPs and clocks. The architecture is only similar, and very much evolutionary.
The numbers are unified shaders : texture mapping units : render output units.
5850: 1440:72:32 vs. 6850: 960:48:32
5870: 1600:80:32 vs. 6870: 1120:56:32
So, 5-10% slower with roughly a third fewer units, which works out to about a 1.4x performance increase per shader. A full-blown NI (6970) at an assumed 1600 shaders, like the previous gen, would then be the rough equivalent of a ~2400-shader Evergreen in raw performance, so 1.4 to 1.5 times faster. It can be safely assumed that the shader performance will be higher than two 5830s in CFX, and two 5830s are about 30% faster than a GTX 580.
That is theoretical performance based on a lot of assumptions and ideal drivers. My point is that NI is not a ground-up redesign, but it is a vastly superior and vastly different architecture from Evergreen.
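For anyone who wants to sanity-check that estimate, here is a minimal sketch of the same back-of-envelope arithmetic, using the shader counts quoted above and an assumed ~7% performance deficit for the 6870 (all inputs are assumptions from the comment, not measured data):

```python
# Back-of-envelope version of the estimate above; inputs are assumptions.
shaders_5870, shaders_6870 = 1600, 1120
relative_perf_6870 = 0.93  # assume the 6870 is ~7% slower than a 5870

per_shader_gain = relative_perf_6870 / (shaders_6870 / shaders_5870)
print(f"per-shader gain: ~{per_shader_gain:.2f}x")  # ~1.33x with these inputs

# A hypothetical 1600-shader Cayman with the same per-shader gain would
# behave roughly like this many Evergreen shaders:
evergreen_equivalent = 1600 * per_shader_gain
print(f"Evergreen-equivalent shaders: ~{evergreen_equivalent:.0f}")  # ~2100+
```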
I can't wait to see a comparison with the 6970. I don't know which will win, but it will help me decide which to get - I'm still using a 4870, so I'm ready for something new!
The other thing that will help me decide is card options. The 4870 had some very nice "pre-silenced" modified cards right after release, and the 6870 already has some "pre-silenced" cards on the market. The factory-standard 580 is pretty loud, although it is far quieter than the 480. I'm hoping there will be some modified 580s on the market with much quieter but at least as effective cooling solutions.
Let's hope so. This way both fanboi camps will win when it comes to how much lighter their wallets will be after their purchase (in some cases their parents' wallets - only children act like fanbois).
"Both the Radeon HD 5970 and Radeon HD 6870 CF are worthy competitors to the GTX 580 – they’re faster and in the case of the 6870 CF largely comparable in terms of power/temperature/noise."
it should say:
"Both the Radeon HD 5970 and Radeon HD 6870 CF come out on top of the GTX 580 – they’re faster in nearly all benchmarks, and in both cases largely comparable in terms of price, power and temperature; 6870CF is also comparable in terms of noise, while 5970 comes out significantly louder."
No, it should say, "Both 6850 CF and GTX 460 SLI blow everything else out of the water given that they're practically giving away the highly overclockable 6850s for ~$185, 1GB GTX 460s for ~$180 AR, and 768MB GTX 460s for $145 AR."
Looking at the test results, I guess 6870 CF is a better choice than a single 580 in terms of performance per watt, price (about the same), temperature, noise, etc.
I noticed that as well. It's a little surprising; I did expect a bit more considering all the hype, but then again it's built on Fermi. I'm not sure what's up with the pricing, either. Do they think there will still be a market for the GTX 480 once Cayman arrives and with GF110 out?
Amen. I'm a 5850 CF user (and 4870X2 before that), and I can tell you I'd much rather have a single GPU and forget about profiles and other issues. But then, 30" LCDs need more than a single GPU in most games.
30" monitors dont need anything more than a high-end single GPU for most games. 99.9% of PC games are playable maxed out plus AA at 2560 res with just one of my 5870's in use, of with my previous 480. Theres only a handful of games that need dual GPU's to be playable at this res. Mainly because most games are console ports these days.
And the OP is wrong, 6870 CF is not any better than the 580 with temperature or noise. 580 being better under load with noise, better with temps at idle, but only very slightly hotter under load.
Anyone know if aftermarket cooling for the GTX 480 will work for the GTX 580? It would be great to be able to reuse a waterblock from a GTX 480 for the new 580s. Looking at the picture the layout looks similar.
I moved all my gaming to the living room, onto a big-screen TV and HTPC (a next-next-gen console, in a sense). But Optimus would be the only way to use this card in an HTPC.
I hold you in the most absolute respect. Actually, in my first post a while ago I praised your work, and I think you're quite didactic and fun to read. On that note, thanks for the review.
However, I need to ask you: W.T.F. is wrong with you? Aren't you pissed off by the fact that the GTX 480 was a half-baked chip (I wouldn't say the same about the GTX 460), and now that we get the real version they decided to call it the 580? Why isn't there a single complaint about that in the article?
If, as I understand, you think that the new power/temperature/noise/performance balance has improved dramatically from the 480, then you are smart enough to see that it is because the 480 was a very, very unpolished chip. This renaming takes us for fools; it is even worse than what AMD did.
/rant
AT & staff, I think you have a duty to tell off lousy tactics such as the Barts being renamed 68x0, or the 8800 becoming 9800 then GTS250 as you always did. You have failed so badly to do that here that you look really biased. For me, a loyal argentinian reader since 2001, that is absolutely imposible, but with the GXT460 and this you are acomplishing that.
+1 for this card deserving an indifferent thumbs up, as Ryan graciously said, not for the card itself (wich is great) but for the nVidia tactics and the half baked 480 they gave us. Remember the FX5800 (as bad or worse than the 480) becoming the 5900... gosh, I think those days are over. Maybe that´s why I stick with my 7300 GT, haha.
I respectfully disent with your opinion, but thanks for the great review.
I'd have to agree he probably didn't read the article thoroughly. Besides explicitly calling this the second-worst excuse for a new naming denomination, Ryan takes jabs at the 480 throughout, repeatedly hinting the 580 is what Fermi should've been to begin with.
Sounds like just another short-sighted rant about renaming that conveniently forgets all the renaming ATI has done in the past. See how many times ATI renamed their R200 and R300 designs; even R600 and RV670 fall into the exact same vein as the G92 renaming he bemoans...
Nvidia has done nothing different from ATI as far as naming their new cards. They simply jumped on the naming bandwagon for marketing and competitive purposes since ATI had already done so... at least the 580 is actually faster than the 480. ATI releasing a 6870 that is far inferior to a 5870 is worse in my mind.
It should indeed have been a 485, but since ATI calls their new card a 6870 when it really should be a 5860 or something, it only seems fair.
Actually the new ATI naming makes a bit more sense.
It's not a new die shrink, but the 6xxx cards all share some features not found at all in the 5xxx series, such as DisplayPort 1.2 (which could become very important if 120 and 240Hz monitors ever catch on).
Also, the Cayman 69xx parts are in fact a significantly original design relative to the 58xx parts.
Nvidia to me is the worst offender... because the 580 is just a fully enabled 480 with the noise and power problems fixed.
If you think that stepping up the spec on the output ports warrants skipping a generation when naming your product, consider the mini-HDMI port on the 580: it's HDMI 1.4 compliant, so the requirements for 120Hz displays are met.
The GF110 is not a GF100 with all the shaders enabled. It only looks that way to the uninitiated; GF110 has much more in common with GF104.
GF110 has three types of transistors, graded by leakage, while GF100 has just two. This gives you the ability to clock the core higher while having a lower TDP. It is smaller than GF100 while staying on the 40nm fab node. The GTX 580 has a power-draw limitation system on the board; the GTX 480 does not...
What else... support for full-speed FP16 texture filtering, which enhances performance in texture-heavy applications. New tile formats that improve Z-cull efficiency...
So how does DisplayPort 1.2 warrant the 68x0 name for AMD, but the few changes above do not warrant the 5x0 name for nVidia?
The 580 comes with the same old video engine as the GF100 - if it were so close to GF104, it would have GF104's video engine and all the goodies and improvements it brings over the one in the 480 (and 580).
No, the GTX 580 is a fixed GF100, and most of what you listed there supports that, because it fixes what was broken with the 480. That's all.
I'm not sure what you mean... maybe you're right... but I'm not sure... If you're referring to bitstreaming support, just wait for a driver update, the hardware supports it.
"What is also good to mention is that HDMI audio has finally been solved. The stupid S/PDIF cable to connect a card to an audio codec, to retrieve sound over HDMI is gone. That also entails that NVIDIA is not bound to two channel LPCM or 5.1 channel DD/DTS for audio.
Passing on audio over the PCIe bus brings along enhanced support for multiple formats. So VP4 can now support 8 channel LPCM, lossless format DD+ and 6 channel AAC. Dolby TrueHD and DTS Master Audio bit streaming are not yet supported in software, yet in hardware they are (needs a driver update)."
NEVER rely just on one source of information.
Fine, if a card more powerful than the GTX 480 can't be named the GTX 580, then why is it OK for a card that performs lower than the HD 5870 to be named the HD 6870... screw technology, screw refinements, talk numbers...
To set the record straight, the hardware does not support full audio bitstreaming. I had NV themselves confirm this. It's only HDMI 1.4a video + the same audio formats that GTX 480 supported.
You can all argue all you want, but at the end of the day, for marketing reasons alone, NV really didn't have much of a choice but to name this card the 580 instead of the 485 after ATI gave their cards the 6xxx series names - which don't deserve a new series name either.
No, ATI's new naming convention makes no sense at all. Their x870 designation has always been reserved for their single-GPU flagship part ever since the HD 3870, and this naming convention held true through both the HD 4xxx and HD 5xxx series. But the 6870 clearly isn't the flagship of this generation; in fact, it's slower than the 5870, while the 580 is clearly faster than the 480 in every aspect.
To further complicate matters, ATI also launched the 5970 as a dual-GPU part, so a single-GPU Cayman being a 6970 will be even more confusing, and it will also undoubtedly be slower than the 5970 in all titles that have working CF profiles.
If anything, Cayman should be the 5890 and Barts should be the 5860, but as we've seen from both camps, marketing names are often inconvenient and short-sighted when they are originally designated...
We're getting into philosophy there. Know what a sophism is? An argument that seems strong but isn't, because there's a flaw in it. The new 2011 Honda isn't necessarily better than the 2010 just because it's newer.
They name it differently because it's changed and they want to make you believe it's better, but history has proven that's not always the case. So the argument that a newer generation means better is a false argument. Not everything new has to be better in every way to live up to its name.
It seems worse, but that rebranding is all OK in my mind since the 6870 comes in at a cheaper price than the 5870, so everyone can be happy about it. Nvidia did worse, rebranding some of the 8xxx series into 9xxx chips at a higher price with almost no change and no more performance. The 9600 GT comes to mind...
What is the 9xxx series? A remake of a "better" 8xxx series. What is the GTS 3xx series? A remake of the GTX 2xx. What is the GTX 5xx... and so on. Who cares? If it's priced well, it's all OK. When I see someone going to Staples to get a 9600 GT at $80 when I know I can get a 4850 for almost the same price, I say WTF!!!
The GTX 580 deserves whatever name they want to give it; making sense of all that naming is up to the buyer. But whoever pays, say, $100 for a card should get performance to match, and that seems more important than everything else to me!
OK, EVERYONE in this thread is on CRACK... what other option did AMD have for naming the 68xx? If they had named them 67xx, the differences between them and the 57xx would be too great. They use nearly as little power as the 57xx, yet the performance is 1.5x or higher!!!
I'm a sucker for EFFICIENCY... show me significant gains in efficiency and I'll bite, and this is what the 68xx handily brings over the 58xx.
The same argument goes for the 480 vs. 580... AT, show us power/performance ratios between generations on each side, and then everyone may begin to understand the naming (a rough sketch of the ratio follows below).
I'm sorry to break it to everyone, but this is where the GPU race is now - in efficiency, where it's been for CPUs for years.
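The ratio being asked for is straightforward to compute once you have an average framerate and a load-power figure for each card; a minimal sketch, with made-up placeholder numbers rather than AT's measurements:

```python
# Hypothetical illustration of the perf/W comparison being requested.
# The fps and watt figures below are placeholders, not measured data.
cards = {
    "GTX 480": {"avg_fps": 100, "load_watts": 320},
    "GTX 580": {"avg_fps": 115, "load_watts": 295},
}
for name, d in cards.items():
    # Higher fps per watt means the newer card is the more efficient one.
    print(f"{name}: {d['avg_fps'] / d['load_watts']:.3f} fps per watt")
```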
Just started reading the article and I noticed a couple of typos on p1.
"But before we get to deep in to GF110" --> "but before we get TOO deep..."
Also, the quote at the top of the page was placed inside of a paragraph which was confusing. I read: "Furthermore GTX 480 and GF100 were clearly not the" and I thought: "the what?". So I continued and read the quote, then realized that the paragraph continued below.
I noticed the remark on bitstreaming, and it seems like a logical choice *not* to include it with the 580. The biggest factor is that I don't think the large majority of people actually need or want it. While the 580 is certainly quieter than the 480, it's still relatively loud, and extraneous noise is not something you want in an HTPC. It's also overkill for an HTPC, which would relegate the feature to people wanting to watch high-definition content on their PC through a receiver, which probably doesn't happen much.
I'd assume the feature could've been "on the board" to add, but would've probably been at the bottom of the list and easily one of the first features to drop to either meet die size (and subsequently, TDP/Heat) targets or simply to hit their deadline. I certainly don't work for nVidia so it's really just pure speculation.
I see your points as valid, but let me counterpoint with 3-D. I think NVIDIA dropped the ball here in the sense that there are two big reasons to have a computer connected to your home theater: games and Blu-ray. I know a few people that have 3-D HDTVs in their homes, but I don't know anyone with a 3-D HDTV and a 3-D monitor.
I realize how niche this might be, but if the 580 supported bitstreaming, then it would be the perfect card for anyone who wants to do it ALL: Blu-ray, 3-D Blu-ray, any game at 1080p with all the eye candy, any 3-D game at 1080p with all the eye candy. But without bitstreaming, Blu-ray is moot (and mute, IMO).
For a $500+ card, it's just a shame, that's all. All of AMD's high-end cards can do it.
Well said. There are quite a few fixes that make the 580 what I wanted in March, but the lack of bitstream is still a hard hit for what I want my PC to do.
Actually, this is killing me. I waited for the 480 in March before pulling the trigger on a 5870 because I wanted HDMI to a Denon 3808, and the 480 totally dropped the ball on the sound aspect (S/PDIF connector, limited channels and all). I figured no big deal, it is a gamer card after all, so 5870 HDMI I went.
The thing is, my PC is all-in-one (HTPC, gaming and typical use). The noise and temps are not a factor as I watercool. When I read that HDMI audio went internal on the 580, I thought: finally. Then I read Guru's article, saw that bitstreaming was hardware-supported and just a driver update away, and figured I was back with the green team for the first time since the 8800 GT.
Now Ryan (thanks for the truth, I guess :) counters Guru's bitstreaming comment and backs it up with direct communication with NV. This blows; I had a lofty multi-monitor config in mind, and no bitstream support is a huge hit. I'm not even sure if I should spend the time to find out whether I can arrange the monitor setup I was thinking of.
Now I might just do an HTPC rig and a game rig, or see what the 6970 has coming. Eyefinity has an advantage for multiple monitors, but the DisplayPort requirement puts a kink in my designs as well.
So where do they go from here? Disable one SM again and call it a GTX 570? GF104 is too new to replace, so I suppose they'll enable the last SM on it for a GTX 560.
There are two ways they can go with the GTX 570: either more than one disabled SM (2-3) with a clockspeed similar to the GTX 580's, or fewer disabled SMs but a much lower clockspeed. Both would help reduce TDP, but disabling more SMs would also help Nvidia unload the rest of their chip yield. I have a feeling they'll disable 2-3 SMs with clocks similar to the 580's, so that the 470 is still slightly slower than the 480.
I'm thinking along the same lines as you for the GTX 560, though: it'll most likely be the full-fledged GF104 we've been waiting for, with all 384 SPs enabled, probably slightly higher clockspeeds and not much more than that, but definitely a faster card than the original GF104.
The performance advantage of a single GPU vs CF or SLI is steadily diminishing and approaching a point of near irrelevancy.
6870 CF beats out the 580 in nearly every parameter, often substantially in performance benchmarks, and per current Newegg prices comes in $80 cheaper.
But I think the real sweet spot would be a 6850 CF setup with AMD Overdrive-applied 850MHz clocks, which any 6850 can achieve at stock voltages with minimal thermal/power/noise costs (and minimal 'tinkering'), and which, from the few 6850 CF benchmarks that have shown up, would match or even beat the GTX 580 in most game benchmarks and come in $200 CHEAPER.
Agreed. CF scales very badly when it comes to minimum framerates; they can even fall below the minimum framerates of one of the cards in the CF setup. It is very annoying when you're doing 120FPS in a game and from time to time your framerate drops to an unplayable and very noticeable 20FPS.
Would've liked to have seen some expanded results however, but somewhat understandable given your limited access to hardware atm. It sounds like you plan on having some SLI results soon.
I would've really liked to have seen clock-for-clock comparisons to the original GTX 480, though, to isolate the impact of the refinements between GF100 and GF110. To be honest, taking away the ~10% difference in clockspeeds, what we're left with seems to be ~6-10% from that missing 6% of functional units (one SM: 32 SPs and 4 TMUs).
I would've also liked to have seen some preliminary overclocking results with GF110, to see how much the chip revision and cooling refinements increased clockspeed headroom, if at all. Contrary to somewhat popular belief, the GTX 480 did overclock quite well, and while that also increased heat and noise, it'll be hard for someone with an overclocked 480 to trade it in for a 580 if the 580 doesn't clock much better.
I know you typically have follow-up articles once the board partners send you more samples, so hopefully you consider these aspects in your next review, thanks!
PS: On page 4, I believe the card mentioned in this excerpt should be a supposed GTX 570, not a GTX 470: "At 244W TDP the card draws too much for 6+6, but you can count on an eventual GTX 470 to fill that niche."
"I would've also liked to have seen some preliminary overclocking results ..."
Though obviously not a true OCing revelation, I note with interest that there's already a factory-overclocked 580 listed on seller sites (the Palit Sonic), with an 835MHz core and a 1670MHz shader clock. The pricing is peculiar though, with one site pricing it the same as most reference cards and another pricing it 30 UKP higher. Of course, none of them show it as being in stock yet. :D
Anyway, thanks for the writeup! At least the competition for the consumer finally looks to be entering a more sensible phase, though it's a shame the naming schemes are probably going to fool some buyers.
Well technically, this is not a 512-SP card at 772 MHz. This is because if you ever find a way to put all 512 processors at 100% load, the throttling mechanism will kick in.
That's like saying you managed to overclock your CPU to 4.7 GHz.. sure, it might POST, but as soon as you try to *do* anything, it instantly crashes.
Based on the performance of a number of games and compute applications, I am confident that power throttling is not kicking in for anything besides FurMark and OCCT.
This card is not enough. It is much worse than 2x 6870s in CF, while needing slightly more power and producing more heat and noise. For such levels of performance, minimum framerates are a non-issue, and this won't change in the foreseeable future since all games are console ports...
It seems AMD is on its way to fully destroy NVIDIA. This will be both good and bad for consumers:
1) Bad because we need competition
2) Good because NVIDIA has a sick culture, and some of its tactics are disgusting, for those who know...
I believe on-die GPUs are more interesting anyway. By the time new consoles arrive, on-die GPU performance will be almost equal to next-gen console performance. All we will need by then is faster RAM, and we are set. I look forward to creating a silent and ecological PC for gaming... I am tired of these vacuum cleaners that also serve as GPUs...
Nobody seems to be taking into account the fact that the 580 is a PREMIUM-level card. It is not meant to be compared to a 6870. Sure, 2x 6870s can do more, but this card is not geared for that category of buyer.
It is geared for the enthusiast who intends to buy two or three 580s, completely dominate benchmarks, and get 100+ fps in every situation. Your typical gamer will not likely buy a 580, but your insane gamer will likely buy two or three to drive their 2560x1600 monitor at 60fps all the time.
I fail to see how AMD is destroying anything here. On cost per unit of performance AMD wins, but on absolute speed Nvidia clearly wins for the time being. If anyone can come up with something faster than 3x 580s in the AMD camp, feel free to post it in response here.
Do you own NVIDIA stock, or are you a fanboy? Because really, only one of the two could not see how AMD destroys NVIDIA. AMD's architecture is much more efficient.
How many "insane gamers" exist, that would pay 1200 or 1800 dollars just for gpus, and adding to that an insanely expensive PSU, tower and mainboard needed to support such a thing? And play what? Console ports? On what screens? Maximum resolution is still 2560x1600 and even a single 6870 could do fine in most games in it...
And just because there may be about 100 rich kids in the whole world with no lives who could create such a machine, does it make 580 a success?
Do YOU intent to create such a beast? Or would you buy a mainstream NVIDIA card, just because the posibility of 3x 580s exists?Come on...
Oh and yes I do intend to buy a couple of them in a few months. One at first and add another later. I also love when fanboys call other fanboys, "fanboys." It doesn't get anyone anywhere.
PC games are not simply console ports; the fact that you need a top-of-the-line PC to get even close to 60 FPS in most cases, at not even maximum graphics settings, is proof of this.
PC "ports" of console games have been tweaked and souped up to have much better graphics, and can take advantage of current gen hardware, instead of the ancient hardware in consoles.
The "next gen" consoles will, of course, be worse than PCs of the time.
And game companies will continue to alter their games so that they look better on PCs.
'How many "insane gamers" exist, that would pay 1200 or 1800 dollars just for gpus, ...'
Actually the market for this is surprisingly strong in some areas, especially CA I was told. I suspect it's a bit like other components such as top-spec hard drives and high-end CPUs: the volumes are smaller but the margins are significantly higher for the seller.
Some sellers even take a loss on low-end items just to retain the custom, making their money on more expensive models.
Just to clarify your incorrect (or misleading) statement: two 6870s in CF use significantly more power than a single 580, but they also perform significantly better in most games (minimum frame rate issue noted, however).
I don't know the stance on posting links to other reviews since I'm a new poster, so I won't. I would like to note that in another review they claim to have found a workaround for the power throttling that allowed them to use FurMark to get accurate temperature and power readings. That review has the 580 at 28W above the 480 at max load. I don't mean to step on anyone's toes, but I have seen so many different numbers because of this garbage Nvidia has pulled, and the only people who claim to have FurMark working get higher numbers. I would really like to see something definitive.
Here's my conundrum: what is the point of something like FurMark, which has no purpose except to overstress a product? In this case the 580 (with a modified version of the program) doesn't explode and remains within a set thermal envelope that is safe for the card. I like using Crysis, as it's a real-world application that stresses the GPU heavily.
Until we have another game or program in routine use that surpasses the heat generation and power draw of Crysis, I just don't see the need to try to max out the cards with a synthetic benchmark. OC your card to the ends of the earth and run something real - that is understandable. But using a program with no real use just to artificially create a power draw doesn't have any benefit, IMO.
I beg to differ. (be careful, high doses of flaming.)
Let me put it like this. The Abrams M1 tank is tested on a 60º ramp (yes, that is sixty degrees), where it must park. Just park there, hold the brakes, and then let go. It proves the brakes on a 120-ton, 1200hp vehicle will work. It is also tested on emergency braking, where this sucker can pull a full stop from 50mph in 3 rubber-burning meters. (The treads have rubber pads, for the ill-informed.) Will a tank ever need to hold on a 60º ramp? Probably not. Would it ever need to come to a screeching halt in 3 meters? In Iraq, they probably did, in order to avoid IEDs. But you know, if there were no prior testing, nobody would know.
I think there should be programs specifically designed to stress the GPU in unintended ways, and it must protect itself from destruction regardless of what code is being thrown at it. NVIDIA should be grateful somebody pointed that out to them. AMD was thankful when they found out the 5800 series GPUs (and others, but these were worse) had lousy performance on 2D acceleration, or none at all, and rushed to fix their drivers. Instead, NVIDIA tries to cheat FurMark by recognizing its code and throttling. Pathetic.
Perhaps someday a scientific application may come up with repeatable math operations that behave exactly like FurMark. So, out of the blue, you've got $500 worth of equipment that gets burned out, and nobody can tell why??? Would you like that happening to you? Wouldn't you like to be informed, at least, that this or that code could destroy your equipment?
What if Furmark wasn't designed to stress GPUs, but it was an actual game, (with furry creatures, lol)?
Ever heard of Final Fantasy XIII killing off PS3s for good, due to overload, thermal runaway, followed by meltdown? The rumors are out there; whether you believe them is entirely up to you.
Ever heard of the Nissan GTR (Skyline) being released with a GPS-linked top-speed limiter that unlocks itself when the car enters the premises of Nissan-approved racetracks? Inherent safety, or meddling? Can't you drive on an Autobahn at 300km/h?
Remember back in the day of early benchmark tools (3DMark 2001, if I am not mistaken), when the GeForce drivers detected the 3DMark executable and cheated the hell out of the results, and some reviewers caught NVIDIA red-handed when they renamed the benchmark and changed its checksum??? Rumors, rumors...
The point is, if there is a flaw, a risk of an unintended instruction killing the hardware, the buyer should rightfully be informed of such conditions, especially if the company has no intention at all of fixing it. Since Anand warned us, they will probably release the GTX 585 with full hardware thermal safeties. Or new drivers. Or not.
Just like when the #PROCHOT mechanism was inserted in the Pentium (which version?) and some reviewers tested it against an AMD chip. I never forgot that AMD processor billowing blue smoke the moment the heatsink was torn off. Good PR, bad PR. The video didn't look fake to me back then, just unfair.
In the end, it becomes matter of PR. If suddenly all the people that played Crysis on this card caused it to be torched, we would have something really interesting.
AMD has had a similar system in place since the HD 4xx0 generation. Remember when FurMark used to blow up 48x0 cards? Of course not. But look it up...
What nVidia did here is what AMD has in all their mid/high end cards since HD4xx0. At least nVidia will only throttle when it detects Furmark/OCCT. AMD cards will throttle in any situation if the power limiter requires it.
It's a very unfortunate situation that both companies are to blame for. That's what happens when you push the limits of power consumption and heat output too far while at the same time trying to keep manufacturing costs down.
The point of a stress test is to push the system to the very limit (but *not* beyond it, as AMD and Nvidia would have you believe). You can then be 100% assured that it will run all current and future games and HPC applications, no matter what unusual workloads they dump on your GPU or CPU, without crashes or reduced performance.
A good article, and a good conclusion overall. Much better than the fiasco that was the 6800 article.
I do lament the benchmarking method AT uses, though. Benchmarks like the Crysis Warhead one are not really representative of real-world performance; they tend to be a bit too "optimized", and they even skew the results between cards.
And somehow you still buy into the argument that mid-range offerings at half the price have more features than the top-of-the-line card? nVidia has been doing this since the 6800 era...
I'm impressed that my local hardware dealer here in the UK has no fewer than 5 GTX 580s in stock today. That includes, yes in stock, the first overclocked 580, the Palit Sonic, which has an 835MHz core up from 772MHz, 4200MHz memory up from 4008MHz, and 1670MHz shaders up from 1544MHz. All this for about 5% more than the price of the standard Palit GTX 580.
At this point it makes no sense to get worked up about the 580. We must patiently wait for the 69x0 cards and see what they bring to the table. I heard rumours of AMD delaying their cards to the end of the year in order to do some "tweaks"...
The delay is a good sign, because it indicates that Cayman can be very big, very fast and... very power-hungry, making it hard to build. What AMD needs is a card that can defeat the GTX 580, no matter how hot or power-hungry it is.
I guess once the GTX 470 goes EOL. If the GTX 460 had all its shaders enabled, then the overclocked versions would have cannibalized GTX 470 sales. Even so, it will happen on occasion.
My guess is there will be GTX 580 derivatives with fewer cores enabled as usual, probably a GTX 570 or something, and then an improved GTX 460 with all cores enabled as the GTX 560.
Good to see Nvidia made a noticeable improvement over the overly hot and power-hungry GTX 480. Unfortunately it's way above my power and silence needs, but competition is a good thing. Now I'm highly curious how close the Radeon 69xx will come in performance, or whether it can actually beat the GTX 580 in some cases. Of course the GTX 480 is completely obsolete now: more power, less speed, more noise, and ugly to look at.
What we got here today is a higher-clocked, better-cooled GTX 480 with slightly better power consumption. All of that for only $80 MORE! Any early non-reference GTX 480 is equipped with a much better cooling solution that gives higher OC possibilities and could kick the GTX 580's ass. If we compare a GTX 480 to a GTX 580 clock for clock, we get about a 3% difference in performance, all thanks to the extra 32 CUDA cores and a few more TMUs. How come the reviewers are NOW able to find pros in something they used to criticise 7 months ago? Don't forget that AMD's about to break their sweet-spot strategy just to cut you hypocrites' tongues. I bet the 6990 is going to be twice as fast as what we got here today. If we really got anything, because I can't really tell the difference.
32W and 15%, you say? No, it isn't a big deal since the release of AMD's Barts GPUs. Keep in mind that the GTX 580 still consumes more energy than the faster (in most cases) and one-year-older multi-GPU HD 5970. In that case even 60 would sound ridiculously funny. It's not more than a few percent improvement over the GTX 480. If you don't believe it, calculate how long you would have to play on your GTX 580 just to get back the ~$40 you'd save on power consumption compared to a GTX 480. Not to mention (again) that a non-reference GTX 480 provides much better cooling solutions and OC possibilities. Nvidia's digging their own grave, just like they did by releasing the GTX 460. The only thing that's left for them right now is to trick the reviewers. But who cares; the GTX 580 isn't going to make them sell more mainstream GPUs. It isn't Nvidia who's cutting HD 5970 prices right now; it was AMD, by releasing the HD 6870/6850 and announcing the 6970. That should have been mentioned by all of you reviewers who treat this case seriously. Nvidia's a treacherous snake, and the reviewers' job is not to let such things happen.
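As a rough illustration of that payback calculation, here is a sketch with placeholder assumptions (the watt delta, the electricity price and the ~$40 figure are all assumed, not measured):

```python
# Rough illustration of the payback calculation suggested above;
# every number here is a placeholder assumption.
watts_saved = 30           # assumed GTX 480 -> GTX 580 load-power difference
price_per_kwh = 0.12       # assumed electricity price in $/kWh
dollars_to_recover = 40.0  # the ~$40 figure mentioned in the comment

hours = dollars_to_recover / (watts_saved / 1000 * price_per_kwh)
print(f"~{hours:,.0f} hours of gaming to recover ${dollars_to_recover:.0f}")
# -> roughly 11,000 hours with these assumptions
```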
Have you heard about the ASUS GTX 580 Voltage Tweak edition that can be clocked up to 1100MHz (more than a 40% OC)? Have you seen the EVGA GTX 580 FTW yet?
The fact that a single-GPU card is in some cases faster than a dual-GPU card built with two of the fastest competing GPUs says a lot of good things about that single-GPU card.
This "nVidia is the Antichrist" speech is getting old. Repeating it all over the interwebs doesn't make it true.
I'm with you that AMD still has a superior performance-per-watt design. But with the 580, Nvidia took Fermi from being outrageous to competitive in that category, and it even wins by a wide margin on idle power. Looking at the charts, the 580 also has a vastly superior cooling system to the 5970's. Mad props to Nvidia for turning things around.
Honestly, I ran out of time. I need to do a massive round of SC2 benchmarking this week, at which time it will be in all regular reviews and will be in Bench.
There is always some debate as to the value of single-GPU solutions vs. multi-GPU. I've noticed that the avg/max framerate in multi-GPU setups is in fact quite good in some cases, but the minimum fps paints a different picture, with nearly all setups and various games being plagued by micro-stutter. Has anybody else come across this as a reason to go with a more expensive single card?
If you don't recall from our 5970 review, we disqualified our 5970 when running at 5870 clocks. The VRMs on the 5970 cannot keep up with the power draw in some real-world applications, so it does not pass muster at those speeds by even the loosest interpretation.
I was very interested to look at the review today to see how the new GTX 580 and other DX11 card options compare to my GTX 285 SLI setup. But unfortunately, for the games I am playing (BFBC2, Stalker, etc.) and would base my decision on, I still don't know, as my card is not represented. I know why: because they are DX11 games and my card is DX10. But my card still runs them, and I would want to know how they compare even if one is running DX10 and the other DX11. Even AnandTech's chart system gives no measure for my cards in these games. Please sort this out. Just because a card does not run the latest version of DirectX does not mean it should be forgotten, especially since the people most likely to be looking at upgrading are those with this generation of card rather than people with DX11 hardware...
Don't worry, I'll have some useful info for you soon! 8800 GT vs. 4890 vs. 460, in all three cases testing one and two cards. You should be able to easily extrapolate from the results to your GTX 285 vs. 580 scenario. Send me an email (mapesdhs@yahoo.com) and I'll drop you a line when the results are up. Data for the 8800 GT vs. 4890 is already up:
NVIDIA outdoes AMD with this one... just as Barts should have been the 6770, this slight Fermi improvement can only be called a 5xx series in this universe. It is just the GF100 done right, and should have been named properly, as a GTX 490.
Really, who has to have this card when, for less money, you can get better results with a pair of 460s in SLI or a pair of 6850s in CF (even better than the 460s in almost all cases)?
Take the extra $100+ and bump your CPU up a speed bin or two.
I will give Nvidia some props: they have a very fast card. But in terms of performance per dollar, AMD beats them out. Go to Newegg and click on the 5970: it's $499. Go and check the 580: a cool $560. Now return to the AnandTech review and check out the benchmarks; they speak for themselves. And finally, the load and idle consumption of a two-GPU card are lower than a single GPU's?!?! OUTRAGEOUS!
Why would HardOCP's numbers be any more accurate than Anand's? One is comparing SLI and one a single card. We have no clue about the differences in system configuration (the individual variances of parts, not just the part names), not to mention the inherent variability of the cards themselves.
I've said it in the 6850 article and before, but power consumption numbers (and thus noise/heat) have to be taken with a grain of salt because there is no independent, unbiased source providing these cards for review. A really good or poor cherry-picked sample can completely change the impression of a card's efficiency. And since we know NVIDIA ships cards with varying load voltages, it is very easy to imagine a situation where one card would be similar in power draw to the 480 while another would show lower thermals.
Anand/Ryan I've said it before and I'll say it again, you have the chance to be a pioneer in the review field by getting independent sources of cards slightly after release. Yes your original review would have to exclude this (or have a caveat that this is what the manufacturer supplied you with).
Decide on a good number (3-5 shouldn't be unreasonable), and purchase these from etailers/locally anonymously. Test them for power/noise/temp and then either sell them on Ebay at slightly reduced cost or even increased price? (Anandtech approved!...touched by Ryan himself!....whatever....you've got a marketing department) :), and/or take one/all and give them away to your loyal readers like the other contests.
Then have a followup article with the comparisons between the "real" cards and the cherry-picked ones from NVIDIA/AMD. Just think how if the manufacturers know that you might call them out on a cherry-picked sample in a follow-up review, they might just try to avoid the bad press and ship one more closely in line with the product line.
I'm confused here. In your review of ATI's new 6870, you were comparing against factory overclocked NVIDIA cards but here you are using reference ATI cards.
Lol, give it a rest, man... and while you're at it, tell us about some similarly priced factory-overclocked cards that AMD had on store shelves and that could have been used at the time the review was conducted. Relevant models only, please, that have the same performance as the GTX 580.
I browsed through the 10 pages of comments and I don't think I saw anyone comment on the fact that the primary reason Nvidia corrected their heat problem was by blatantly copying ATi/Sapphire...not only did they plagiarize the goodies under the hood, but they look identical to AMD cards now! Our wonderful reviewer made the point, but no one else seemed to play on it.
I say weak sauce for Nvidia, considering the cheapest 580 on Newegg is $557.86 shipped; the price exceeds what the 480 launched at, and the modded/OC'd editions aren't even out yet. It can't support more than 2 monitors by itself and is lacking in the audio department. Yes, it's faster than its predecessor. Yes, they fixed the power/heat/noise issues. But when you can get similar, if not better, performance for $200 less from AMD with a 6850 CF setup... it seems like a no-brainer.
Sure, ATi re-branded the new cards as the HD 6000 series, but at least they aren't charging top dollar for them. Yes, they are slower than the HD 5000 series, but you can buy two 6850s for less than the price of the 480, 580, or 5970 (even the 5870 from some manufacturers) and see similar or better performance AND end up with the extra goodies the new cards support.
I am looking forward to the release of the 69XX cards to see how well they will hold up against the 580. Are they going to be a worthy successor to the 5900, or will they continue the trend of being a significant threat in CrossFire at a reasonable price? Only time will tell...
The real question is, what will happen when the 28nm HD7000 cards hit the market?
Actually the newegg prices are because they have a 10% coupon right now. I bet they'll go back to closer to normal after the coupon expires...assuming there's any stock.
Vapour chamber cooling technology was NOT invented by ATI/Sapphire. They are NOT the first to use it. Convergence Tech, the owner of the patent, even sued ATI/Sapphire/HP because of the infringement (basically means stolen technology).
Where within my post did I say it was invented by ATi/Sapphire...nowhere. The point that I was trying to make was that Nvidia copied the design that ATi/Sapphire had been using to trounce the Nvidia cards. The only reason they corrected their problems was by making their cards nearly identical to AMD/ATi...
And to tomoyo, when I made that post there was no 10% coupon on newegg. They obviously added it because everyone else was selling them cheaper.
This is still a "400" series part as it's really technically more advanced than the 480.
Does it have additional features? No. Is it faster, yes.
But check out the advancement feature list.
The 6800s, badly named and should have been 6700s, are slightly slower than the 5800s, but costs a lot less and actually does some things differently from the 5000 series. And sooner or later, there will be a whole family of 6000s.
But here we are, about 6months later and theres a whole new "product line"?
Is there any problem with MediaEspresso? My 5770 is faster with MediaShow than with MediaEspresso. Can you check with MediaShow to see if your findings are right?
To put it more clearly... AnandTech only posted minimum frame rates for one test: Crysis.
In those, we see the 480 SLI beating the 580 SLI at 1920x1200. Why is that?
It seems to fit with the pattern of the 480 being stronger in minimum frame rates in some situations -- especially Unigine -- provided that the resolution is below 2K.
It's really disturbing how the throttling happens without any real indication. I was really excited reading about all the improvements nVidia made to the GTX 580, and then I read about this annoying "feature".
When any piece of hardware in my PC throttles, I want to know about it. Otherwise it just adds another variable when troubleshooting performance problems.
Is it a valid test to rename, say, crysis.exe to furmark.exe and see if throttling kicks in mid-game?
Copy and paste of the message: "NVIDIA has implemented a new power monitoring feature on GeForce GTX 580 graphics cards. Similar to our thermal protection mechanisms that protect the GPU and system from overheating, the new power monitoring feature helps protect the graphics card and system from issues caused by excessive power draw.
The feature works as follows:
• Dedicated hardware circuitry on the GTX 580 graphics card performs real-time monitoring of current and voltage on each 12V rail (6-pin, 8-pin, and PCI-Express).
• The graphics driver monitors the power levels and will dynamically adjust performance in certain stress applications such as Furmark 1.8 and OCCT if power levels exceed the card’s spec.
• Power monitoring adjusts performance only if power specs are exceeded AND if the application is one of the stress apps we have defined in our driver to monitor such as Furmark 1.8 and OCCT.
- Real world games will not throttle due to power monitoring.
- When power monitoring adjusts performance, clocks inside the chip are reduced by 50%.
Note that future drivers may update the power monitoring implementation, including the list of applications affected."
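Reading NVIDIA's description, the decision rule amounts to something like the following sketch; the watch list and the power numbers are placeholders for illustration, not NVIDIA's actual driver values:

```python
# Hypothetical illustration of the throttling rule quoted above.
# The thresholds and watch list are made-up placeholders.
WATCHED_APPS = {"furmark.exe", "occt.exe"}

def target_clock(app_name: str, measured_watts: float,
                 board_power_spec: float, base_clock_mhz: int) -> int:
    """Throttle only if the app is on the watch list AND power exceeds spec."""
    if app_name.lower() in WATCHED_APPS and measured_watts > board_power_spec:
        return base_clock_mhz // 2   # "clocks inside the chip are reduced by 50%"
    return base_clock_mhz

print(target_clock("FurMark.exe", 320.0, 244.0, 772))  # throttled -> 386
print(target_clock("crysis.exe", 320.0, 244.0, 772))   # untouched -> 772
```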
I never heard anyone from the AMD camp complaining about that "feature" on their cards, and all current AMD cards have it. And what would be the purpose of renaming your Crysis executable? Do you have a problem with the "Crysis" name? Do you think the game should be called "Furmark"?
The point of renaming is that nVidia uses name tags to decide whether it should throttle or not... Suppose person X creates a program and you use an older driver that does not include its name tag; you could break things...
Big fat YES. Please do rename the executable from crysis.exe to furmark.exe, and tell us.
Get FurMark and go the other way around: rename it to Crysis.exe, but be sure to have a fire extinguisher on the premises. Caveat emptor.
Perhaps just renaming is not enough; some checksumming may be involved. It is pretty easy to change a checksum without altering the running code, though. When compiling source code you can insert comments in the code; the comments are not dropped but get compiled along with the running code, so change the comment, change the checksum. But FurMark alone can do that.
Or open FurMark in a hex editor and change some bytes, but try to do that in a long sequence of zeros at the end of the file. Compilers usually pad executables out to round kilobytes, filling with zeros. It shouldn't harm the running code, but it changes the checksum without changing the byte size.
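As a hypothetical illustration of that padding trick (not a recommendation, and the file names are made up), here is a sketch of flipping one byte inside the trailing zero padding so the checksum changes while the file size does not. Editing a real executable can of course break signatures or the program itself:

```python
# Sketch only: flip one byte in a file's trailing zero padding so its
# checksum changes while its size stays the same. File names are hypothetical.
import hashlib

def flip_byte_in_zero_padding(path_in: str, path_out: str) -> str:
    """Flip one byte inside the trailing zero run and return the new MD5."""
    data = bytearray(open(path_in, "rb").read())
    run_end = len(data)
    while run_end > 0 and data[run_end - 1] == 0:
        run_end -= 1                       # walk back over trailing zeros
    if len(data) - run_end < 16:
        raise ValueError("no obvious zero padding at the end of the file")
    data[-8] = 0x01                        # size unchanged, checksum changes
    open(path_out, "wb").write(bytes(data))
    return hashlib.md5(bytes(data)).hexdigest()

# Usage (hypothetical):
# print(flip_byte_in_zero_padding("FurMark.exe", "FurMark_patched.exe"))
```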
The good thing about GPUs is that they scale VERY well (if not linearly) with transistors: one node die shrink, double the transistor count, double the performance.
Combined with the fact that there is no memory bottleneck, since GDDR5 still has lots of headroom, we are limited by the process and not by the design.
I didn't read through ALL the comments, so maybe this was already suggested. But can't the idle sound level be reduced simply by lowering the fan speed and compromising idle temperatures a bit? I bet you could sink below 40dB if you are willing to put up with an acceptable 45C temp instead of 37C. 45C is still an acceptable idle temp.
Very good point, techcurious. Which is why the comment in the review about the GTX 580 not being a quiet card at load is somewhat misleading. I have lowered my GTX 470 from its 40% idle fan speed to 32%, and my idle temperature only went up from 38C to 41C. At 32% fan speed I cannot hear the card at all over my other case fans and Scythe S-Flex F CPU fan. You could do the same with almost any video card.
Also, as far as FurMark goes, the test does push all GPUs beyond their TDPs. TDP is typically not the most power the chip could ever draw, such as under a power virus like FurMark, but rather the maximum power it would draw when running real applications. Since the HD 58xx/68xx series already have software and hardware PowerPlay, which throttles the cards under power viruses like FurMark, it was already meaningless to use FurMark for "maximum" power consumption figures. Beside the point, FurMark is just a synthetic application. AMD and NV implement throttling to prevent VRM/MOSFET failures; this protects their customers.
While FurMark can be great for stability/overclock testing, the power consumption numbers from it are meaningless since they are not something you can achieve in any videogame (can a videogame utilize all GPU resources to 100%? Of course not, since there are always bottlenecks in GPU architectures).
How cool would it be if nVidia added to its control panel a tab for dynamic fan speed control based on three user-selectable settings (see the sketch below):
1) Quiet - spins the fan at the lowest speed while staying just below the GPU temperature threshold at load, and somewhere in the low 50s C at idle.
2) Balanced - a balance between moderate fan speed (and noise levels), slightly lower load temperatures, and perhaps 45C at idle.
3) Cool - spins the fan the fastest and is the loudest setting, but also the coolest, keeping load temperatures well below the maximum threshold and idle temps below 40C. This setting would please those who want to extend the life of their graphics card as much as possible and don't care about noise levels, since they probably have other fans in their PC that are louder anyway!
Maybe Ryan or someone else from AnandTech (who would obviously have much more pull and credibility than me) could suggest such a feature to nVidia, and to AMD too :o)
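A rough sketch of what such profiles might look like; the temperature targets and fan-speed bounds here are invented purely for illustration:

```python
# Hypothetical three-profile fan control, as proposed above.
# All numbers are made up for illustration only.
PROFILES = {
    "Quiet":    {"idle_target_c": 52, "load_ceiling_c": 92, "max_fan_pct": 55},
    "Balanced": {"idle_target_c": 45, "load_ceiling_c": 85, "max_fan_pct": 75},
    "Cool":     {"idle_target_c": 40, "load_ceiling_c": 78, "max_fan_pct": 100},
}

def fan_speed_pct(profile: str, gpu_temp_c: float) -> float:
    """Scale fan speed linearly between the idle target and the load ceiling."""
    p = PROFILES[profile]
    frac = (gpu_temp_c - p["idle_target_c"]) / (p["load_ceiling_c"] - p["idle_target_c"])
    speed = 20.0 + frac * (p["max_fan_pct"] - 20.0)
    return max(20.0, min(p["max_fan_pct"], speed))

print(fan_speed_pct("Balanced", 70))  # ~54% fan at 70C with these made-up numbers
```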
Here's what I dig about you guys at AnandTech: not only are your reviews very nicely presented, but you keep them relevant for us GTX 285 owners and other more legacy-bound interested parties - most other sites fail to provide this level of complete comparison. Much appreciated. Your charts are fantastic, your analysis and commentary is nicely balanced, and your attention to detail is most excellent - all of which makes for a simpler evaluation by the potential end user of this card.
Keep up the great work...don't know what we'd do without you...
In the end, we (the gamers) who purchase these cards NEED to be supporting BOTH sides so that AMD and Nvidia can both manage to stay profitable. It's not a question of who pwns whom, but more importantly that we have CHOICE!! Maybe some of the people here (or MOST) are not old enough to remember the days when mighty INTEL ruled the landscape. I can tell you for a 100% fact that CPUs were expensive and there was no choice in the matter. We can agree to disagree, but in the END we need AMD and we need NVIDIA to keep pushing the limits and offering buyers a CHOICE.
God help us if we ever lose one or the other; then we won't be here reading reviews and jousting back and forth over who has the biggest stick. We will all be crying and complaining about how expensive it will be to buy a decent video card.
Here's to both companies.............. Long live NVIDIA & AMD!
Finally, at the high end, Nvidia delivers a much cooler and quieter one-GPU card that is much more like the GTX 460, and less like the 480, in terms of its performance/heat balance.
I'm one of those who needs PhysX in my games, and until now I had to go with an SLI 460 setup for one PC and, for a lower rig, a 2GB GTX 460 (for maxing GTA IV out).
Also, I just prefer the crisp Nvidia desktop quality, and its drivers are more stable (and ATI's CCC is a nightmare).
For those who want everything, and who use PhysX, the 580 and its upcoming 570/560 siblings will be the only way to go.
For those who live by framerate alone, you may want to see what the next ATI lineup delivers for its single-GPU setup.
But whatever you choose, this is a GREAT thing for the industry... and the gamer, as Nvidia delivered this time with not just performance but also lower temps and noise levels.
This is what the 480 should have been, but thankfully they fixed it.
TonyB - Tuesday, November 9, 2010 - link
But can it play Crysis?
Etern205 - Tuesday, November 9, 2010 - link
It already has a Crysis benchmark... and it's no surprise that it's the fastest DirectX 11 card in the single-GPU category.
mfenn - Tuesday, November 9, 2010 - link
whooooosh
deputc26 - Tuesday, November 9, 2010 - link
"Red 580 Load" instead of "Ref 580 Load" at the top of the power temp & noise page.DigitalFreak - Tuesday, November 9, 2010 - link
Let it go.
taltamir - Wednesday, November 10, 2010 - link
Only in reduced quality; they specifically said that it, and any other card on the market, can't play a maxed-out Crysis.
limonovich - Tuesday, November 9, 2010 - link
more like gtx 485
Sihastru - Tuesday, November 9, 2010 - link
And what would you call the 6870/6850?
Goty - Tuesday, November 9, 2010 - link
The 6870 and 6850, since there were real architectural changes and there are still faster cards to come.
StevoLincolnite - Tuesday, November 9, 2010 - link
I would have called them the 6750/6770 personally... Or at the very least the 6830/6850. There were also some architectural changes in the GTX 580 as well.
Will Robinson - Tuesday, November 9, 2010 - link
This is a much better effort than the GTX 480. Cooler, quieter and faster.
AMD will have a tough fight on their hands with Cayman XT versus this new part.
StevoLincolnite - Tuesday, November 9, 2010 - link
Can't wait for a price war, something that seemed to be missing with Fermi and the 5xxx series.
B3an - Wednesday, November 10, 2010 - link
I -wish- only children acted like fanboys. The fact is that a lot of these guys are well into their 20s and 30s. They really are pathetic people.
mino - Tuesday, November 9, 2010 - link
Well, not that it was hard to beat the 480... that part was not ready for prime time in the first place.
dragonsqrrl - Wednesday, March 18, 2015 - link
...lol, facepalm.
donjuancarlos - Tuesday, November 9, 2010 - link
Along that line, it's shoo-in, not shoe-in. :)
Ryan Smith - Tuesday, November 9, 2010 - link
In this house we obey the laws of thermodynamics! Thanks for the heads up, Don.
Troll Trolling - Tuesday, November 9, 2010 - link
You guys made my day.
more important, where it says:"Both the Radeon HD 5970 and Radeon HD 6870 CF are worthy competitors to the GTX 580 – they’re faster and in the case of the 6870 CF largely comparable in terms of power/temperature/noise."
it should say:
"Both the Radeon HD 5970 and Radeon HD 6870 CF come out on top of the GTX 580 – they’re faster in nearly all benchmarks, and in both cases largely comparable in terms of price, power and temperature; 6870CF is also comparable in terms of noise, while 5970 comes out significantly louder."
DominionSeraph - Tuesday, November 9, 2010 - link
No, it should say, "Both 6850 CF and GTX460 SLI blow everything else out of the water given that they're practically giving away the highly overclockable 6850's for ~$185, 1GB GTX 460's for ~$180 AR, and 768MB GTX 460's for $145 AR."
DreamerX5521 - Tuesday, November 9, 2010 - link
By looking at the test results, I guess 6870CF is a better choice than a single 580 in terms of performance/watt, price (about the same), temperature, noise, etc.
Kim Leo - Tuesday, November 9, 2010 - link
I noticed that as well, a little surprising; I did expect a bit more considering all the hype, but then again it's built on Fermi. I'm not sure what's up with the pricing? Do they think that there will be a market for GTX480 when Cayman comes and with the GF110 out?
Sihastru - Tuesday, November 9, 2010 - link
If you can live with the sub-par minimum framerates that plague so many games with CF setups.
JarredWalton - Tuesday, November 9, 2010 - link
Amen. I'm a 5850 CF user (and 4870X2 before that), and I can tell you I'd much rather have a single GPU and forget about profiles and other issues. But then, 30" LCDs need more than a single GPU in most games.
B3an - Wednesday, November 10, 2010 - link
30" monitors dont need anything more than a high-end single GPU for most games. 99.9% of PC games are playable maxed out plus AA at 2560 res with just one of my 5870's in use, of with my previous 480. Theres only a handful of games that need dual GPU's to be playable at this res. Mainly because most games are console ports these days.And the OP is wrong, 6870 CF is not any better than the 580 with temperature or noise. 580 being better under load with noise, better with temps at idle, but only very slightly hotter under load.
knutjb - Tuesday, November 9, 2010 - link
I agree guys it should be a 485 not a 580. The 6870 is a sore spot on an otherwise solid refinement. Curious to see its SLI performance. $559 on Newegg this am.
dtham - Tuesday, November 9, 2010 - link
Anyone know if aftermarket cooling for the GTX 480 will work for the GTX 580? It would be great to be able to reuse a waterblock from a GTX 480 for the new 580s. Looking at the picture the layout looks similar.
mac2j - Tuesday, November 9, 2010 - link
In Europe the GTX 580 was launched at 399 Euros and in response ATI has lowered the 5970 to 389 Euros (if you believe the rumors). This can only bode well for holiday prices of the 6970 vs 580.
samspqr - Tuesday, November 9, 2010 - link
it's already listed and in stock at alternate.de, but the cheapest one is 480eur; the only 5970 still in stock there is 540eur
yzkbug - Tuesday, November 9, 2010 - link
I moved all my gaming to the living room on a big screen TV and HTPC (a next next gen console in a sense). But, Optimus would be the only way to use this card on HTPC.
slatr - Tuesday, November 9, 2010 - link
Ryan, would you be able to test with Octane Renderer?
I am interested to see if Octane gets throttled.
Thanks
Andyburgos - Tuesday, November 9, 2010 - link
Ryan: I hold you in the most absolute respect. Actually, in my first post a while ago I praised your work, and I think you're quite didactic and fun to read. On that, thanks for the review.
However, I need to ask you: W.T.F. is wrong with you? Aren't you pissed off by the fact that GTX480 was a half-baked chip (wouldn't say the same about GTX460) and now that we get the real version they decided to call it 580? Why isn't there a single complaint about that in the article?
If, as I understand, you think that the new power / temperature / noise / performance balance has improved dramatically from the 480, I think you are smart enough to see that it was because the 480 was a very, very unpolished chip. This renaming takes us for stupid; it's even worse than what AMD did.
/rant
AT & staff, I think you have a duty to tell off lousy tactics such as the Barts being renamed 68x0, or the 8800 becoming 9800 then GTS250 as you always did. You have failed so badly to do that here that you look really biased. For me, a loyal argentinian reader since 2001, that is absolutely imposible, but with the GXT460 and this you are acomplishing that.
+1 for this card deserving an indifferent thumbs up, as Ryan graciously said, not for the card itself (wich is great) but for the nVidia tactics and the half baked 480 they gave us. Remember the FX5800 (as bad or worse than the 480) becoming the 5900... gosh, I think those days are over. Maybe that´s why I stick with my 7300 GT, haha.
I respectfully disent with your opinion, but thanks for the great review.
Best regards,
Andy
ViRGE - Tuesday, November 9, 2010 - link
Huh, are we reading the same article? See page 4.
chizow - Tuesday, November 9, 2010 - link
I'd have to agree he probably didn't read the article thoroughly; besides explicitly saying this is the 2nd worst excuse for a new naming denomination, Ryan takes jabs at the 480 throughout by repeatedly hinting the 580 is what Fermi should've been to begin with.
Sounds like just another short-sighted rant about renaming that conveniently forgets all the renaming ATI has done in the past. See how many times ATI renamed their R200 and R300 designs; even R600 and RV670 fall into the same exact vein as the G92 renaming he bemoans......
Haydyn323 - Tuesday, November 9, 2010 - link
Nvidia has done no different than ATI has as far as naming in their new cards. They simply jumped on the naming bandwagon for marketing and competitive purposes since ATI had already done so.... at least the 580 is actually faster than the 480. ATI releasing a 6870 that is far inferior to a 5870 is worse in my mind.
It should indeed have been a 485, but since ATI calls their new card a 6870 when it really should be a 5860 or something, it only seems fair.
spigzone - Tuesday, November 9, 2010 - link
Any 'bandwagon' here belongs to Nvidia.
mac2j - Tuesday, November 9, 2010 - link
Actually the new ATI naming makes a bit more sense. It's not a new die shrink, but the 6xxx cards all do share some features not found at all in the 5xxx series, such as DisplayPort 1.2 (which could become very important if 120 and 240Hz monitors ever catch on).
Also the Cayman 69xx parts are in fact a significantly original design relative to the 58xx parts.
Nvidia to me is the worst offender ... 'cause a 580 is just a fully-enabled 480 with the noise and power problems fixed.
Sihastru - Tuesday, November 9, 2010 - link
If you think that stepping up the spec on the output ports warrants skipping a generation when naming your product, see that mini-HDMI port on the 580: that's HDMI 1.4 compliant... the requirements for 120Hz displays are met.
The GF110 is not a GF100 with all the shaders enabled. It looks that way to the uninitiated. GF110 has much more in common with GF104.
GF110 has three types of transistors, graded by leakage, while the GF100 has just two. This gives you the ability to clock the core higher, while having a lower TDP. It is smaller in size than GF100 is, while maintaining the 40nm fab node. GTX580 has a power draw limitation system on the board, the GTX480 does not...
What else... support for full speed FP16 texture filtering which enhances performance in texture heavy applications. New tile formats which improve Z-cull efficiency...
So how does displayport 1.2 warrant the 68x0 name for AMD but the few changes above do not warrant the 5x0 name for nVidia?
I call BS.
Griswold - Wednesday, November 10, 2010 - link
I call your post bullshit. The 580 comes with the same old video engine as the GF100 - if it were so close to GF104, it would have that video engine and all the goodies and improvements it brings over the one in the 480 (and 580).
No, GTX580 is a fixed GF100, and most of what you listed there supports that, because it fixes what was broken with the 480. That's all.
Sihastru - Wednesday, November 10, 2010 - link
I'm not sure what you mean... maybe you're right... but I'm not sure... If you're referring to bitstreaming support, just wait for a driver update, the hardware supports it.
See: http://www.guru3d.com/article/geforce-gtx-580-revi...
"What is also good to mention is that HDMI audio has finally been solved. The stupid S/PDIF cable to connect a card to an audio codec, to retrieve sound over HDMI is gone. That also entails that NVIDIA is not bound to two channel LPCM or 5.1 channel DD/DTS for audio.
Passing on audio over the PCIe bus brings along enhanced support for multiple formats. So VP4 can now support 8 channel LPCM, lossless format DD+ and 6 channel AAC. Dolby TrueHD and DTS Master Audio bit streaming are not yet supported in software, yet in hardware they are (needs a driver update)."
NEVER rely just on one source of information.
Fine, if a more powerful card than the GTX480 can't be named the GTX580, then why is it OK for a card performing lower than the HD5870 to be named HD6870... screw technology, screw refinements, talk numbers...
Whatever...
Ryan Smith - Wednesday, November 10, 2010 - link
To set the record straight, the hardware does not support full audio bitstreaming. I had NV themselves confirm this. It's only HDMI 1.4a video + the same audio formats that GTX 480 supported.
B3an - Wednesday, November 10, 2010 - link
You can all argue all you want, but at the end of the day, for marketing reasons alone, NV really didn't have much of a choice but to name this card the 580 instead of 485 after ATI gave their cards the 6xxx series names. Which don't deserve a new series name either.
chizow - Tuesday, November 9, 2010 - link
No, ATI's new naming convention makes no sense at all. Their x870 designation has always been reserved for their single-GPU flagship part ever since the HD3870, and this naming convention has held true through both the HD4xxx and HD5xxx series. But the 6870 clearly isn't the flagship of this generation; in fact, it's slower than the 5870, while the 580 is clearly faster than the 480 in every aspect.
To further complicate matters, ATI also launched the 5970 as a dual-GPU part, so single-GPU Cayman being a 6970 will be even more confusing and will also be undoubtedly slower than the 5970 in all titles that have working CF profiles.
If anything, Cayman should be 5890 and Barts should be 5860, but as we've seen from both camps, marketing names are often inconvenient and short-sighted when they are originally designated......
Galid - Tuesday, November 9, 2010 - link
We're getting into philosophy there. Know what's a sophism? An argument that seems strong but isn't because there's a flaw in it. The new 2011 Honda ain't necessarily better than the 2010 because it's newer.
They name it differently because it's changed and wanna make you believe it's better, but history proved it's not always the case. So the argument that a newer generation means better is a false argument. Not everything new ''gotta'' be better in every way to live up to its name.
But it's my opinion.
Galid - Tuesday, November 9, 2010 - link
It seems worse, but that rebranding is all ok in my mind since the 6870 comes in at a cheaper price than the 5870. So everyone can be happy about it. Nvidia did worse rebranding some of the 8xxx series into 9xxx chips for a higher price but almost no change and no more performance. The 9600gt comes to my mind...
What is the 9xxx series? A remake of a ''better'' 8xxx series. What is the GTS3xx series? A remake of GTX2xx. What is GTX5xx, .... and so on. Who cares? If it's priced well it's all ok. When I see someone going to Staples to get a 9600gt at 80$ and I know I can get a 4850 for almost the same price, I say WTF!!!
GTX580 deserves whatever name they want to give it. Whether anyone tries to understand all that naming is up to them. But whoever wants to pay, for example, 100$ for a card should get performance according to that, and that seems more important than everything else to me!
Taft12 - Tuesday, November 9, 2010 - link
In this article, Ryan does exactly what you are accusing him of not doing! It is you who needs to be asked WTF is wrong.
Iketh - Thursday, November 11, 2010 - link
ok EVERYONE belonging to this thread is on CRACK... what other option did AMD have to name the 68xx? If they named them 67xx, the differences between them and 57xx are too great. They use nearly as little power as 57xx yet the performance is 1.5x or higher!!!
I'm a sucker for EFFICIENCY... show me significant gains in efficiency and I'll bite, and this is what 68xx handily brings over 58xx
the same argument goes for 480-580... AT, show us power/performance ratios between generations on each side, then everyone may begin to understand the naming
i'm sorry to break it to everyone, but this is where the GPU race is now, in efficiency, where it's been for cpus for years
MrCommunistGen - Tuesday, November 9, 2010 - link
Just started reading the article and I noticed a couple of typos on p1. "But before we get to deep in to GF110" --> "but before we get TOO deep..."
Also, the quote at the top of the page was placed inside of a paragraph which was confusing.
I read: "Furthermore GTX 480 and GF100 were clearly not the" and I thought: "the what?". So I continued and read the quote, then realized that the paragraph continued below.
MrCommunistGen - Tuesday, November 9, 2010 - link
well I see that the paragraph break has already been fixed...
ahar - Tuesday, November 9, 2010 - link
Also, on page 2 if Ryan is talking about the lifecycle of one process then "...the processes’ lifecycle." is wrong.
Aikouka - Tuesday, November 9, 2010 - link
I noticed the remark on bitstreaming and it seems like a logical choice *not* to include it with the 580. The biggest factor is that I don't think the large majority of people actually need/want it. While the 580 is certainly quieter than the 480, it's still relatively loud, and extraneous noise is not something you want in a HTPC. It's also overkill for a HTPC, which would relegate the feature to people wanting to watch high-definition content on their PC through a receiver, which probably doesn't happen much.
I'd assume the feature could've been "on the board" to add, but would've probably been at the bottom of the list and easily one of the first features to drop to either meet die size (and subsequently, TDP/heat) targets or simply to hit their deadline. I certainly don't work for nVidia so it's really just pure speculation.
therealnickdanger - Tuesday, November 9, 2010 - link
I see your points as valid, but let me counterpoint with 3-D. I think NVIDIA dropped the ball here in the sense that there are two big reasons to have a computer connected to your home theater: games and Blu-ray. I know a few people that have 3-D HDTVs in their homes, but I don't know anyone with a 3-D HDTV and a 3-D monitor.
I realize how niche this might be, but if the 580 supported bitstreaming, then it would be the perfect card for anyone that wants to do it ALL. Blu-ray, 3-D Blu-Ray, any game at 1080p with all eye-candy, any 3-D game at 1080p with all eye-candy. But without bitstreaming, Blu-ray is moot (and mute, IMO).
For a $500+ card, it's just a shame, that's all. All of AMD's high-end cards can do it.
QuagmireLXIX - Sunday, November 14, 2010 - link
Well said. There are quite a few fixes that make the 580 what I wanted in March, but the lack of bitstream is still a hard hit for what I want my PC to do. Call me niche.
QuagmireLXIX - Sunday, November 14, 2010 - link
Actually, this is killing me. I waited for the 480 in March b4 pulling the trigger on a 5870 because I wanted HDMI to a Denon 3808, and the 480 totally dropped the ball on the sound aspect (S/PDIF connector and limited channels and all). I figured no big deal, it is a gamer card after all, so 5870 HDMI I went.
The thing is, my PC is all-in-one (HTPC, game & typical use). The noise and temps are not a factor as I watercool. When I read that HDMI audio got internal on the 580, I thought, finally. Then I read Guru's article and saw bitstream was hardware supported and just a driver update away, so I figured I was now back with the green team since the 8800GT.
Now Ryan (thanks for the truth, I guess :) counters Guru's bitstream comment and backs it up with direct communication with NV. This blows; I had a lofty multimonitor config in mind and no bitstream support is a huge hit. I'm not even sure if I should spend the time to find out if I can arrange the monitor setup I was thinking of.
Now I might just do a HTPC rig and Game rig or see what 6970 has coming. Eyefinity has an advantage for multiple monitors, but the display-port puts a kink in my designs also.
Mr Perfect - Tuesday, November 9, 2010 - link
So where do they go from here? Disable one SM again and call it a GTX570? GF104 is too new to replace, so I suppose they'll enable the last SM on it for a GTX560.
chizow - Tuesday, November 9, 2010 - link
There's 2 ways they can go with the GTX 570: either more disabled SMs than just 1 (2-3) and similar clockspeed to the GTX 580, or fewer disabled SMs but much lower clockspeed. Both would help to reduce TDP, but more disabled SMs would also help Nvidia unload the rest of their chip yield. I have a feeling they'll disable 2-3 SMs with higher clocks similar to the 580 so that the 570 is still slightly slower than the 580.
I'm thinking along the same lines as you though for the GTX 560; it'll most likely be the full-fledged GF104 we've been waiting for with all 384SP enabled, probably slightly higher clockspeeds and not much more than that, but definitely a faster card than the original GF104.
vectorm12 - Tuesday, November 9, 2010 - link
I'd really love to see the raw crunching power of the 480/580 vs. 5870/6870.
I've found ighashgpu to be a great tool to determine that and it can be found at http://www.golubev.com/
Please consider it for future tests as it's very well optimized for both CUDA and Stream
spigzone - Tuesday, November 9, 2010 - link
The performance advantage of a single GPU vs CF or SLI is steadily diminishing and approaching a point of near irrelevancy.
6870 CF beats out the 580 in nearly every parameter, often substantially on performance benchmarks, and per current newegg prices, comes in at $80 cheaper.
But I think the real sweet spot would be a 6850 CF setup with AMD Overdrive applied 850MHz clocks, which any 6850 can achieve at stock voltages with minimal thermal/power/noise costs (and minimal 'tinkering'), and from the few 6850 CF benchmarks that showed up it would match or even beat the GTX580 on most game benchmarks and come in at $200 CHEAPER.
That's an elbow from the sky in my book.
smookyolo - Tuesday, November 9, 2010 - link
You seem to be forgetting the minimum framerates... those are so much more important than average/maximum.
Sihastru - Tuesday, November 9, 2010 - link
Agreed, CF scales very badly when it comes to minimum framerates. It is even below the minimum framerates of one of the cards in the CF setup. It is very annoying when you're doing 120FPS in a game and from time to time your framerates drop to an unplayable and very noticeable 20FPS.
chizow - Tuesday, November 9, 2010 - link
Nice job on the review as usual Ryan.
Would've liked to have seen some expanded results however, but somewhat understandable given your limited access to hardware atm. It sounds like you plan on having some SLI results soon.
I would've really liked to have seen clock-for-clock comparisons to the original GTX 480 though, to isolate the impact of the refinements between GF100 and GF110. To be honest, taking away the ~10% difference in clockspeeds, what we're left with seems to be ~6-10% from those missing 6% functional units (32 SPs and 4 TMUs).
I would've also liked to have seen some preliminary overclocking results with the GF110 to see how much the chip revision and cooling refinements increased clockspeed overhead, if at all. Contrary to somewhat popular belief, the GTX 480 did overclock quite well, and while that also increased heat and noise it'll be hard for someone with an overclocked 480 to trade it in for a 580 if it doesn't clock much better than the 480.
I know you typically have follow-up articles once the board partners send you more samples, so hopefully you consider these aspects in your next review, thanks!
PS: On page 4, I believe this should be a supposed GTX 570 mentioned in this excerpt and not GTX 470: "At 244W TDP the card draws too much for 6+6, but you can count on an eventual GTX 470 to fill that niche."
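For reference, a rough version of that clock-for-clock arithmetic (using the published reference specs - 480 CUDA cores at 700MHz for the GTX 480, 512 at 772MHz for the GTX 580 - and assuming ideal linear scaling, which real games never quite reach):

# Unit and clock deltas between GTX 480 and GTX 580 reference specs.
cores_480, cores_580 = 480, 512
clock_480, clock_580 = 700.0, 772.0
unit_gain = cores_580 / cores_480 - 1      # ~6.7% more functional units
clock_gain = clock_580 / clock_480 - 1     # ~10.3% higher core clock
combined = (1 + unit_gain) * (1 + clock_gain) - 1
print(f"units: +{unit_gain:.1%}, clock: +{clock_gain:.1%}, ideal combined: +{combined:.1%}")
# ideal combined scaling is ~17.6%, which brackets the ~15% average gain seen in reviews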
mapesdhs - Tuesday, November 9, 2010 - link
"I would've also liked to have seen some preliminary overclocking results ..."
Though obviously not a true oc'ing revelation, I note with interest there's already a factory oc'd 580 listed on seller sites (Palit Sonic), with an 835 core and 1670 shader. The pricing is peculiar though, with one site pricing it the same as most reference cards, another site pricing it 30 UKP higher. Of course though, none of them show it as being in stock yet. :D
Anyway, thanks for the writeup! At least the competition for the consumer finally looks to be entering a more sensible phase, though it's a shame the naming schemes are probably going to fool some buyers.
Ian.
Ryan Smith - Wednesday, November 10, 2010 - link
You're going to have to wait until I have some more cards for some meaningful overclocking results. However, clock-for-clock comparisons I can do.
http://www.anandtech.com/show/4012/nvidias-geforce...
JimmiG - Tuesday, November 9, 2010 - link
Well technically, this is not a 512-SP card at 772 MHz. This is because if you ever find a way to put all 512 processors at 100% load, the throttling mechanism will kick in.
That's like saying you managed to overclock your CPU to 4.7 GHz... sure, it might POST, but as soon as you try to *do* anything, it instantly crashes.
Ryan Smith - Tuesday, November 9, 2010 - link
Based on the performance of a number of games and compute applications, I am confident that power throttling is not kicking in for anything besides FurMark and OCCT.
TemplarGR - Tuesday, November 9, 2010 - link
This card is not enough. It is much worse than 2x 6870s in CF, while needing slightly more power and producing more heat and noise. For such levels of performance, minimum framerates are a non-issue, and this won't change in the foreseeable future since all games are console ports...
It seems AMD is on its way to fully destroy NVIDIA. This will be both good and bad for consumers:
1) Bad because we need competition
2) Good because NVIDIA has a sick culture, and some of its tactics are disgusting, for those who know...
I believe on die gpus are more interesting anyway. By the time new consoles arrive, on die gpu performance will be almost equal to next-gen console performance. All we will need by then is faster ram, and we are set. I look forward to create a silent and ecological pc for gaming... I am tired of these vacuum cleaners that also serve as gpus...
Haydyn323 - Tuesday, November 9, 2010 - link
Nobody seems to be taking into account the fact that the 580 is a PREMIUM level card. It is not meant to be compared to a 6870. Sure 2x 6870s can do more. This card is not, however, geared for that category of buyer.
It is geared for the enthusiast who intends to buy 2 or 3 580s and completely dominate benchmarks and get 100+ fps in every situation. Your typical gamer will not likely buy a 580, but your insane gamer will likely buy 2 or 3 to play their 2560x1600 monitor at 60fps all the time.
I fail to see how AMD is destroying anything here. Cost per speed AMD wins, but speed possible, Nvidia clearly wins for the time being. If anyone can come up with something faster than 3x 580s in the AMD camp feel free to post it in response here.
TemplarGR - Tuesday, November 9, 2010 - link
Do you own NVIDIA stock, or are you a fanboy? Because really, only one of the two could not see how AMD destroys NVIDIA. AMD's architecture is much more efficient.
How many "insane gamers" exist, that would pay 1200 or 1800 dollars just for gpus, and adding to that an insanely expensive PSU, tower and mainboard needed to support such a thing? And play what? Console ports? On what screens? Maximum resolution is still 2560x1600 and even a single 6870 could do fine in most games in it...
And just because there may be about 100 rich kids in the whole world with no lives who could create such a machine, does it make 580 a success?
Do YOU intend to create such a beast? Or would you buy a mainstream NVIDIA card, just because the possibility of 3x 580s exists? Come on...
Haydyn323 - Tuesday, November 9, 2010 - link
So, the answer is no; you cannot come up with something faster. Also, as shown right here on Anandtech:
http://www.anandtech.com/show/3987/amds-radeon-687...
A single 6870 cannot play most modern games at anywhere near 60fps at 2560x1600. Even the 580 needs to be SLI'd to guarantee it.
That is all.
Haydyn323 - Tuesday, November 9, 2010 - link
Oh and yes I do intend to buy a couple of them in a few months. One at first and add another later. I also love when fanboys call other fanboys, "fanboys." It doesn't get anyone anywhere.
smookyolo - Tuesday, November 9, 2010 - link
PC games are not simply console ports, the fact that you need a top of the line PC to get even close to 60 FPS in most cases at not even maximum graphics settings is proof of this.
PC "ports" of console games have been tweaked and souped up to have much better graphics, and can take advantage of current gen hardware, instead of the ancient hardware in consoles.
The "next gen" consoles will, of course, be worse than PCs of the time.
And game companies will continue to alter their games so that they look better on PCs.
It's a fact, live with it.
mapesdhs - Tuesday, November 9, 2010 - link
'How many "insane gamers" exist, that would pay 1200 or 1800 dollars just for gpus, ...'
Actually the market for this is surprisingly strong in some areas, especially CA I was told. I suspect it's a bit like other components such as top-spec hard drives and high-end CPUs: the volumes are smaller but the margins are significantly higher for the seller.
Some sellers even take a loss on low-end items just to retain the custom, making their money on more expensive models.
Ian.
QuagmireLXIX - Sunday, November 14, 2010 - link
"Maximum resolution is still 2560x1600 and even a single 6870 could do fine in most games in it..."Multiple monitors (surround, eyefinity) resolutions get much larger.
7Enigma - Tuesday, November 9, 2010 - link
Just to clarify your incorrect (or misleading) statement: 2 6870's in CF use significantly more power than a single 580, but also perform significantly better in most games (minimum frame rate issue noted however).
TemplarGR - Tuesday, November 9, 2010 - link
True. I made a mistake on this one. Only in idle power does it consume slightly less. My bad.
"The thermal pads connecting the memory to the shroud have once again wiped out the chip markets", wow powerful adhesive that! Bet Intel's pissed.cjb110 - Tuesday, November 9, 2010 - link
"While the difference is’ earthshattering, it’s big enough..." nt got dropped, though not yet at my workplace:)Invader Mig - Tuesday, November 9, 2010 - link
I don't know the stance on posting links to other reviews since I'm a new poster, so I won't. I would like to make note that in another review they claim to have found a workaround for the power throttling that allowed them to use furmark to get accurate temps and power readings. This review has the 580 at 28W above the 480 at max load. I don't mean to step on anyone's toes, but I have seen so many different numbers because of this garbage nvidia has pulled, and the only person who claims to have furmark working gets higher numbers. I would really like to see something definitive.
7Enigma - Tuesday, November 9, 2010 - link
Here's my conundrum. What is the point of something like Furmark that has no purpose except to overstress a product? In this case the 580 (with modified X program) doesn't explode and remains within some set thermal envelope that is safe to the card. I like using Crysis as it's a real-world application that stresses the GPU heavily.
Until we have another game/program that is used routinely (be it game or coding) that surpasses the heat generation and power draw of Crysis I just don't see the need to try to max out the cards with a benchmark. OC your card to the ends of the earth and run something real, that is understandable. But just using a program that has no real use to artificially create a power draw just doesn't have any benefit IMO.
Gonemad - Tuesday, November 9, 2010 - link
I beg to differ. (Be careful, high doses of flaming.)
Let me put it like this. The Abrams M1 Tank is tested on a 60º ramp (yes, that is sixty degrees), where it must park. Just park there, hold the brakes, and then let go. It proves the brakes on a 120-ton 1200hp vehicle will work. It is also tested on emergency brakes, where this sucker can pull a full stop from 50mph in 3 rubber-burning meters. (The treads have rubber pads, for the ill informed).
Will a tank ever need to hold on a 60º ramp? Probably not. Would it ever need to come to a screeching halt in 3 meters? In Iraq, they probably did, in order to avoid IEDs. But you know, if there were no prior testing, nobody would know.
I think there should be programs specifically designed to stress the GPU in unintended ways, and it must protect itself from destruction, regardless of what code is being thrown at it. NVIDIA should be grateful somebody pointed that out to them. AMD was thankful when they found out the 5800 series GPUs (and others, but this was worse) had lousy performance on 2D acceleration, or none at all, and rushed to fix its drivers. Instead, NVIDIA tries to cheat Furmark by recognizing its code and throttling. Pathetic.
Perhaps someday, a scientific application may come up with repeatable math operations that just behave exactly like Furmark. So, out of the blue, you've got $500 worth of equipment that gets burned out, and nobody can tell why??? Would you like that happening to you? Wouldn't you like to be informed that this or that code, at least, could destroy your equipment?
What if Furmark wasn't designed to stress GPUs, but it was an actual game, (with furry creatures, lol)?
Ever heard of Final Fantasy XIII killing off PS3s for good, due to overload, thermal runaway, followed by meltdown? Rumors are there; whether you believe them is entirely up to you.
Ever heard of the Nissan GTR (Skyline) being released with a top-speed limiter with GPS that unlocks itself when the car enters the premises of Nissan-approved racetracks? Inherent safety, or meddling? Can't you drive on an Autobahn at 300km/h?
Remember back in the day of early benchmark tools (3DMark 2001 if I am not mistaken), where the Geforce drivers detected the 3DMark executable and cheated the hell out of the results, and some reviewers caught NVIDIA red-handed when they renamed and changed the checksum of the benchmark??? Rumors, rumors...
The point is, if there is a flaw, a risk of an unintended instruction killing the hardware, the buyer should be rightfully informed of such conditions, especially if the company has no intention at all to fix it. Since Anand warned us, they will probably release the GTX 585 with full hardware thermal safeties. Or new drivers. Or not.
Just like the instruction #PROCHOT was inserted in the Pentium (which version?) and some reviewers tested it against an AMD chip. I never forgot that AMD processor billowing blue smoke the moment the heatsink was torn off. Good PR, bad PR. The video didn't look fake to me back then, just unfair.
In the end, it becomes matter of PR. If suddenly all the people that played Crysis on this card caused it to be torched, we would have something really interesting.
Sihastru - Tuesday, November 9, 2010 - link
AMD has had a similar system in place since the HD4xx0 generation. Remember when Furmark used to blow up 48x0 cards? Of course not. But look it up...
What nVidia did here is what AMD has had in all their mid/high end cards since HD4xx0. At least nVidia will only throttle when it detects Furmark/OCCT. AMD cards will throttle in any situation if the power limiter requires it.
JimmiG - Tuesday, November 9, 2010 - link
It's a very unfortunate situation that both companies are to blame for. That's what happens when you push the limits of power consumption and heat output too far while at the same time trying to keep manufacturing costs down.
The point of a stress test is to push the system to the very limit (but *not* beyond it, like AMD and Nvidia would have you believe). You can then be 100% assured that it will run all current and future games and HPC applications, no matter what unusual workloads they dump on your GPU or CPU, without crashes or reduced performance.
cactusdog - Tuesday, November 9, 2010 - link
So if you want to use multiple monitors do you still need 2 cards to run it or have they enabled a third monitor on the 580?
Sihastru - Tuesday, November 9, 2010 - link
Yes.
Haydyn323 - Tuesday, November 9, 2010 - link
The 580 as with the previous generation still only supports 2 monitors max per card.
Pantsu - Tuesday, November 9, 2010 - link
A good article, and a good conclusion overall. Much better than the fiasco that was the 6800 article.
I do lament the benchmarking method AT uses though. Benchmarks like the Crysis Warhead one are not really representative of real world performance, but tend to be a bit too "optimized". They do not reflect real world performance very well, and even skew the results between cards.
carage - Tuesday, November 9, 2010 - link
No DOLBY/DTS HD bitstream = epic fail as far as HTPC usage is concerned.
Thank you nVidia for failing again this round.
Sihastru - Tuesday, November 9, 2010 - link
Yes, you must be one of the only two persons in the world that was considering the most powerful GPU on the planet for a HTPC setup.
carage - Tuesday, November 9, 2010 - link
And somehow you still buy into the argument that mid-end offerings at half the price have more features than the top of the line card?
nVidia has been doing this since the 6800 era...
QuagmireLXIX - Sunday, November 14, 2010 - link
And I am the other :) What some people don't see is that someone may only want 1 desktop to do everything well.
buildingblock - Tuesday, November 9, 2010 - link
I'm impressed that my local hardware dealer here in the UK has no less than 5 GTX 580s in stock today. It also includes, yes in stock, the first overclocked 580, the Palit Sonic, which has an 835 MHz core up from 772, 4200 memory up from 4008, and 1670 shaders up from 1544. All this for about 5% more than the price of the standard Palit GTX 580.
buildingblock - Tuesday, November 9, 2010 - link
I meant 5 different makes of GTX 580 of course.
mapesdhs - Tuesday, November 9, 2010 - link
Are you near Bolton by any chance? ;D
If not, which company?
Btw, shop around, the Sonic is £30 cheaper elsewhere.
Ian.
buildingblock - Tuesday, November 9, 2010 - link
I was looking at a standard Palit GTX 580 for £380, and the Sonic version for £398. These were about the best prices I could find today.
nitrousoxide - Tuesday, November 9, 2010 - link
But how far it can go depends on its counterpart, the HD6970.
Sihastru - Tuesday, November 9, 2010 - link
At this point it makes no sense to get rattled up about the 580. We must patiently wait for the 69x0 cards and see what they can bring to the table. I heard rumours of AMD delaying their cards to the end of the year in order to do some "tweaks"...
nitrousoxide - Tuesday, November 9, 2010 - link
Delaying is something good because it indicates that Cayman can be very big, very fast and... very hungry, making it hard to build. What AMD needs is a card that can defeat GTX580, no matter how hot or power-hungry it is.
GeorgeH - Tuesday, November 9, 2010 - link
Is there any word on a fully functional GF104?
Nvidia could call it the 560, with 5="Not Gimped".
Sihastru - Tuesday, November 9, 2010 - link
I guess once GTX470 goes EOL. If GTX460 had all its shaders enabled then the overclocked versions would have cannibalized GTX470 sales. Even so, it will happen on occasion.
tomoyo - Tuesday, November 9, 2010 - link
My guess is there will be GTX 580 derivatives with fewer cores enabled as usual, probably a GTX 570 or something. And then an improved GTX 460 with all cores enabled as the GTX 560.
tomoyo - Tuesday, November 9, 2010 - link
Good to see nvidia made a noticeable improvement over the overly hot and power hungry GTX 480. Unfortunately way above my power and silence needs, but competition is a good thing. Now I'm highly curious how close the Radeon 69xx will come in performance or if it can actually beat the GTX 580 in some cases.
Of course the GTX 480 is completely obsolete now: more power, less speed, more noise, ugly to look at.
7eki - Tuesday, November 9, 2010 - link
What we got here today is a higher clocked, better cooled GTX 480 with slightly better power consumption. All of that for only 80$ MORE! Any first-served version of a non-reference GTX 480 is equipped with a much better cooling solution that gives higher OC possibilities and could kick the GTX 580's ass. If we compare a GTX 480 to a GTX580 clock-for-clock we will get about a 3% difference in performance, all thanks to 32 CUDA processors and a few more TMUs. How come the reviewers are NOW able to find pros in something that they used to criticise 7 months ago? Don't forget that AMD's about to break their Sweet Spot strategy just to cut you hypocrites' tongues. I bet the 6990's going to be twice as fast as what we got here today. If we really got anything, cause I can't really tell the difference.
AnnonymousCoward - Tuesday, November 9, 2010 - link
32W less for 15% more performance, still on 40nm, is a big deal.7eki - Wednesday, November 10, 2010 - link
32W and 15% you say? No, it isn't a big deal since AMD's Barts GPU release. Bear in mind that the GTX580 still consumes more energy than a faster (in most cases) and one year older multi-GPU HD5970. In that case even 60 would sound ridiculously funny. It's not more than a few percent improvement over the GTX480. If you don't believe it, calculate how much longer you will have to play on your GTX580 just to get back the ~$40 spent on power consumption compared to a GTX480. Not to mention (again) that a non-reference GTX480 provides much better cooling solutions and OC possibilities. Nvidia's digging their own grave, just like they did by releasing the GTX460. The only thing that's left for them right now is to trick the reviewers. But who cares. The GTX 580 isn't going to make them sell more mainstream GPUs. It isn't nvidia who's cutting HD5970 prices right now. It was AMD, by releasing the HD6870/50 and announcing the 6970. It should have been mentioned by all of you reviewers who treat the case seriously. Nvidia's a treacherous snake and the reviewers' job is not to let such things happen.
Sihastru - Wednesday, November 10, 2010 - link
Have you heard about the ASUS GTX580 Voltage Tweak edition that can be clocked up to 1100 MHz? That's more than a 40% OC. Have you seen the EVGA GTX580 FTW yet?
The fact that a single GPU card is in some cases faster than a dual GPU card built with two of the fastest competing GPUs tells a lot of good things about that single GPU card.
This "nVidia is the Antichrist" speech is getting old. Repeating it all over the interwebs doesn't make it true.
AnnonymousCoward - Wednesday, November 10, 2010 - link
I'm with you, that AMD still has a superior performance per power design. But with the 580, nvidia took Fermi from being outrageous to competitive in that category, and even wins by a wide margin with idle power. Looking at the charts, the 580 also has a vastly superior cooling system to the 5970. Mad props to nvidia for turning things around.FragKrag - Tuesday, November 9, 2010 - link
Still no SC2? :(
Ryan Smith - Tuesday, November 9, 2010 - link
Honestly, I ran out of time. I need to do a massive round of SC2 benchmarking this week, at which time it will be in all regular reviews and will be in Bench.
ph3412b07 - Tuesday, November 9, 2010 - link
There is always some debate as to the value of single GPU solutions vs multi GPU. I've noticed that the avg/max framerate in multi GPU setups is in fact quite good in some cases, but the min fps paints a different picture, with nearly all setups and various games being plagued by micro-stutter. Has anybody else come across this as a reason to go with a more expensive single card?
eXces - Tuesday, November 9, 2010 - link
Why did u not include some overclocked 5970? Like u did with GTX 460 when u reviewed the 6800 series?
Ryan Smith - Wednesday, November 10, 2010 - link
In case you don't recall from our 5970 review, we disqualified our 5970 when running at 5870 clocks. The VRMs on the 5970 cannot keep up with the power draw in some real world applications, so it does not pass our muster at those speeds by even the loosest interpretation.
529th - Tuesday, November 9, 2010 - link
I knew OCCT was a culprit of causing problems.
Ph0b0s - Tuesday, November 9, 2010 - link
Was very interested to look at the review today to see how the new GTX580 and other DX11 card options compare to my GTX 285 SLI setup. But unfortunately for the games I am playing (BFBC2, Stalker etc.) and would base my decision on, I still don't know, as my card is not represented. I know why, because they are DX11 games and my card is DX10, but my card still runs them and I would want to know how they compare even if one is running DX10 and the other running DX11. Even Anandtech's chart system gives no measure for my cards in these games. Please sort this out. Just because a card does not run the latest version of DirectX does not mean it should be forgotten. Especially since the people most likely to be looking at upgrading are those with this generation of card rather than people with DX11 hardware...
mapesdhs - Wednesday, November 10, 2010 - link
Don't worry, I'll have some useful info for you soon! 8800GT vs. 4890 vs. 460, in all three cases testing 1 & 2 cards. You should be able to easily extrapolate from the results to your GTX285 vs. 580 scenario. Send me an email (mapesdhs@yahoo.com) and I'll drop you a line when the results are up. Data for 8800 GT vs. 4890 is already up:
http://www.sgidepot.co.uk/misc/pctests.html
http://www.sgidepot.co.uk/misc/stalkercopbench.txt
but I'm adding two more tests (Unigine and X3TC).
Ian.
juampavalverde - Tuesday, November 9, 2010 - link
NVIDIA exceeded AMD with this... just as Barts should have been the 6770, this slight Fermi improvement can only in this universe be called a 5xx series. It is just the GF100 done right, and should have been named properly, as GTX 490.
deeps6x - Tuesday, November 9, 2010 - link
Really, who has to have this card when for less money, you can get better results with a pair of 460s in SLI or a pair of 6850s in CF (even better than 460s in almost all cases) to give you better numbers than this card.
Take the extra $100+ and bump your CPU up a speed bin or two.
Alberto8793 - Tuesday, November 9, 2010 - link
I will give Nvidia some props, they have a very fast card, but in terms of performance per dollar, AMD beats them out. Go to Newegg and click on the 5970, it's 499$; go and check the 580, a cool 560$. Now return to the anandtech review and check out the benchmarks, they speak for themselves. And finally, load and idle consumption of a two GPU card are lower than a single GPU?!?!?!?!? OUTRAGEOUS!
slick121 - Wednesday, November 10, 2010 - link
I mean this is the "enough said!" post. So true, totally agree!
ClagMaster - Tuesday, November 9, 2010 - link
Looks like Nvidia has done their homework to optimize the GF100 after business pressures to release the GTX-480.
Nvidia did a competent makeover of the GF100 and GTX-480 to reduce power, heat and noise.
The GF-110/GTX-580 offers about 10% more performance for the Watts, which are still too high for my liking.
AnnonymousCoward - Wednesday, November 10, 2010 - link
I think it's more like 30% more performance per watt. Crysis power is 389W vs 421W and averages 15% more fps.
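A quick sanity check of that figure with the numbers quoted (these are full-system draws, so the card-only gain works out higher; the 150W "rest of system" value below is purely an assumption for illustration):

# Perf-per-watt comparison using the Crysis numbers quoted above (system power).
fps_gain = 1.15                        # GTX 580 ~15% faster on average
power_580, power_480 = 389.0, 421.0    # full-system draw in watts
wall = fps_gain * power_480 / power_580 - 1
print(f"perf/W gain at the wall: ~{wall:.0%}")            # ~24%
rest_of_system = 150.0                 # assumed non-GPU draw under load
card = fps_gain * (power_480 - rest_of_system) / (power_580 - rest_of_system) - 1
print(f"estimated card-only perf/W gain: ~{card:.0%}")    # ~30%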
the_elvino - Wednesday, November 10, 2010 - link
I would take Anand's power consumption measurements of 421W for the GTX 480 and 389W for the GTX 580 with a few pinches of salt.
HardOCP compared the power draw in SLI and recorded the following numbers for the entire system under full load:
GTX 480 SLI: 715W
GTX 580 SLI: 700W
Difference per card between 480 and 580: 7.5W!
The 580 is noticeably cooler though.
http://hardocp.com/article/2010/11/09/nvidia_gefor...
7Enigma - Wednesday, November 10, 2010 - link
Why would HardOCP's numbers be any more accurate than Anand's? One is comparing SLI and one is a single card. We have no clue about the differences in system configuration (individual variances of parts, not the part name), not to mention the inherent variability of the cards themselves.
I've said it in the 6850 article and before, but power consumption numbers (and thus noise/heat) have to be taken with a grain of salt because there is no independent unbiased source giving these cards for review. A really good or poor cherry-picked sample can completely change the impression of a card's efficiency. And since we know NVIDIA ships cards with varying load voltages it is very easy to imagine a situation where one card would be similar in power draw to the 480 while another would show lower thermals.
Anand/Ryan I've said it before and I'll say it again, you have the chance to be a pioneer in the review field by getting independent sources of cards slightly after release. Yes your original review would have to exclude this (or have a caveat that this is what the manufacturer supplied you with).
Decide on a good number (3-5 shouldn't be unreasonable), and purchase these from etailers/locally anonymously. Test them for power/noise/temp and then either sell them on Ebay at slightly reduced cost or even increased price? (Anandtech approved!...touched by Ryan himself!....whatever....you've got a marketing department) :), and/or take one/all and give them away to your loyal readers like the other contests.
Then have a followup article with the comparisons between the "real" cards and the cherry-picked ones from NVIDIA/AMD. Just think how if the manufacturers know that you might call them out on a cherry-picked sample in a follow-up review, they might just try to avoid the bad press and ship one more closely in line with the product line.
Goblerone - Tuesday, November 9, 2010 - link
I'm confused here. In your review of ATI's new 6870, you were comparing against factory overclocked NVIDIA cards but here you are using reference ATI cards.
What gives?
AnandThenMan - Wednesday, November 10, 2010 - link
Why even bother asking. It's clear that Anandtech is not about fairness in reviews anymore. Nvidia got special treatment, plain and simple.
Just wait until we see Anandtech preview AMD's Brazos with some benches, you will see even more bizarre and head scratching benchmarks. Wait and see.
Sihastru - Wednesday, November 10, 2010 - link
Lol, give it a rest man... and while you're at it tell us about some similarly priced factory overclocked cards that AMD has on store shelves that could be used at the time the review was conducted. Relevant models only please, that have the same performance as the GTX580.
AnandThenMan - Wednesday, November 10, 2010 - link
"Relevant models only please, that have the same performance as the GTX580."
So we can only compare cards that have the same performance. Exciting graphs that will make.
RobMel85 - Tuesday, November 9, 2010 - link
I browsed through the 10 pages of comments and I don't think I saw anyone comment on the fact that the primary way Nvidia corrected their heat problem was by blatantly copying ATi/Sapphire... not only did they plagiarize the goodies under the hood, but they look identical to AMD cards now! Our wonderful reviewer made the point, but no one else seemed to play on it.
I say weak-sauce for Nvidia, considering the cheapest 580 on NewEgg is $557.86 shipped; the price exceeds what the 480 was initially and the modded/OC'd editions aren't even out yet. It can't support more than 2 monitors by itself and is lacking in the audio department. Yes, it's faster than its predecessor. Yes, they fixed the power/heat/noise issues, but when you can get similar, if not better, performance for $200 less from AMD with a 6850 CF setup... it seems like a no brainer.
Sure ATi re-branded the new cards as the HD6000 series, but at least they aren't charging top $ for them. Yes, they are slower than the HD5000 series, but you can buy 2 6850s for less than the price of the 480, 580, 5970(even 5870 for some manufacturers) and see similar or better performance AND end up with the extra goodies the new cards support.
I am looking forward to the release of the 69XX cards to see how well they will hold up against the 580. Are they going to be a worthy successor to the 5900, or will they continue the trend of being a significant threat in CrossFire at a reasonable price? Only time will tell...
The real question is, what will happen when the 28nm HD7000 cards hit the market?
tomoyo - Tuesday, November 9, 2010 - link
Actually the newegg prices are because they have a 10% coupon right now. I bet they'll go back to closer to normal after the coupon expires... assuming there's any stock.
Sihastru - Wednesday, November 10, 2010 - link
Vapour chamber cooling technology was NOT invented by ATI/Sapphire. They are NOT the first to use it. Convergence Tech, the owner of the patent, even sued ATI/Sapphire/HP because of the infringement (basically means stolen technology).
LOL.
RobMel85 - Sunday, November 14, 2010 - link
Where within my post did I say it was invented by ATi/Sapphire... nowhere. The point that I was trying to make was that Nvidia copied the design that ATi/Sapphire had been using to trounce the Nvidia cards. The only reason they corrected their problems was by making their cards nearly identical to AMD/ATi...
And to tomoyo, when I made that post there was no 10% coupon on newegg. They obviously added it because everyone else was selling them cheaper.
Belard - Wednesday, November 10, 2010 - link
This is still a "400" series part as it's really technically more advanced than the 480.Does it have additional features? No.
Is it faster, yes.
But check out the advancement feature list.
The 6800s, badly named and should have been 6700s, are slightly slower than the 5800s, but cost a lot less and actually do some things differently from the 5000 series. And sooner or later, there will be a whole family of 6000s.
But here we are, about 6 months later and there's a whole new "product line"?
dvijaydev46 - Wednesday, November 10, 2010 - link
Is there any problem with MediaEspresso? My 5770 is faster with MediaShow than MediaEspresso. Can you check with MediaShow to see if your findings are right?
Oxford Guy - Wednesday, November 10, 2010 - link
The 480 beats the 580, except at 2560x1600. The difference is most dramatic at 1680x1050.http://techgage.com/reviews/nvidia/geforce_gtx_580...
http://techgage.com/reviews/nvidia/geforce_gtx_580...
http://techgage.com/reviews/nvidia/geforce_gtx_580...
Why is that?
Sihastru - Wednesday, November 10, 2010 - link
Proof that GF110 is not just a GF100 with all the shaders enabled.
Oxford Guy - Wednesday, November 10, 2010 - link
This seems to me to be related to the slight shrinkage of the die. What was cut out? Is it responsible for the lower minimum frame rates in Unigine?
wtfbbqlol - Thursday, November 11, 2010 - link
Most likely an anomaly. Just compare the GTX480 to the GTX470 minimum framerate. There's no way the GTX480 is twice as fast as the GTX470.
Oxford Guy - Friday, November 12, 2010 - link
It does not look like an anomaly since at least one of the few minimum frame rate tests posted by Anandtech also showed the 480 beating the 580.
We need to see Unigine Heaven minimum frame rates, at the bare minimum, from Anandtech, too.
Oxford Guy - Saturday, November 13, 2010 - link
To put it more clearly... Anandtech only posted minimum frame rates for one test: Crysis.
In those, we see the 480 SLI beating the 580 SLI at 1920x1200. Why is that?
It seems to fit with the pattern of the 480 being stronger in minimum frame rates in some situations -- especially Unigine -- provided that the resolution is below 2K.
I do hope someone will clear up this issue.
wtfbbqlol - Wednesday, November 10, 2010 - link
It's really disturbing how the throttling happens without any real indication. I was really excited reading about all the improvements nVidia made to the GTX580, then I read about this annoying "feature".
When any piece of hardware in my PC throttles, I want to know about it. Otherwise it just adds another variable when troubleshooting a performance problem.
Is it a valid test to rename, say, crysis.exe to furmark.exe and see if throttling kicks in mid-game?
wtfbbqlol - Wednesday, November 10, 2010 - link
Well it looks like there is *some* official information about the current implementation of the throttling.
http://nvidia.custhelp.com/cgi-bin/nvidia.cfg/php/...
Copy and paste of the message:
"NVIDIA has implemented a new power monitoring feature on GeForce GTX 580 graphics cards. Similar to our thermal protection mechanisms that protect the GPU and system from overheating, the new power monitoring feature helps protect the graphics card and system from issues caused by excessive power draw.
The feature works as follows:
• Dedicated hardware circuitry on the GTX 580 graphics card performs real-time monitoring of current and voltage on each 12V rail (6-pin, 8-pin, and PCI-Express).
• The graphics driver monitors the power levels and will dynamically adjust performance in certain stress applications such as Furmark 1.8 and OCCT if power levels exceed the card’s spec.
• Power monitoring adjusts performance only if power specs are exceeded AND if the application is one of the stress apps we have defined in our driver to monitor such as Furmark 1.8 and OCCT.
- Real world games will not throttle due to power monitoring.
- When power monitoring adjusts performance, clocks inside the chip are reduced by 50%.
Note that future drivers may update the power monitoring implementation, including the list of applications affected."
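Boiled down, the gate described in that note is something like the following (a sketch only; the names are made up and the real logic lives inside the driver):

# Throttle decision as described in the quoted note: power over spec AND the
# application is on the driver's monitored list; clocks then drop by 50%.
MONITORED_APPS = {"furmark 1.8", "occt"}

def should_throttle(app_name: str, measured_power_w: float, power_spec_w: float) -> bool:
    return measured_power_w > power_spec_w and app_name.lower() in MONITORED_APPS

def effective_clock(base_clock_mhz: int, throttled: bool) -> int:
    return base_clock_mhz // 2 if throttled else base_clock_mhz

# FurMark pulling 320W against a 244W spec gets cut to half clocks; a game
# pulling the same power does not, per the note above.
print(effective_clock(772, should_throttle("FurMark 1.8", 320, 244)))  # 386
print(effective_clock(772, should_throttle("Crysis", 320, 244)))       # 772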
Sihastru - Wednesday, November 10, 2010 - link
I never heard anyone from the AMD camp complaining about that "feature" with their cards, and all current AMD cards have it. And what would be the purpose of renaming your Crysis exe? Do you have problems with the "Crysis" name? You think the game should be called "Furmark"?
So this is a non-issue.
flyck - Wednesday, November 10, 2010 - link
The use of renaming is that nvidia uses name tags to identify whether it should throttle or not.... suppose person x creates a program and you use an older driver that does not include this name tag, you can break things.....
Gonemad - Wednesday, November 10, 2010 - link
Big fat YES. Please do rename the executable from crysis.exe to furmark.exe, and tell us.
Get furmark and go all the way around, rename it to Crysis.exe, but be sure to have a fire extinguisher on the premises. Caveat emptor.
Perhaps just renaming is not enough, some checksumming is involved. It is pretty easy to change the checksum without altering the running code, though. When compiling source code, you can insert comments in the code. When compiling, the comments are not dropped, they are compiled together with the running code. Change the comment, change the checksum. But furmark alone can do that.
Open furmark in a hex editor and change some bytes, but try to do that in a long sequence of zeros at the end of the file. Usually compilers finish executables in round kilobytes, filling with zeros. It shouldn't harm the running code, but it changes the checksum, without changing byte size.
If it works, rename it Program X.
Ooops.
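For anyone who wants to try the padding trick, a rough sketch of it (filenames are placeholders; whether the driver keys off a hash, the file name, or something else entirely is exactly what the experiment would show):

# Copy a binary, flip one byte inside its trailing zero padding, and show that
# the size stays the same while the checksum changes. Paths are placeholders.
import hashlib, os, shutil

SRC = "FurMark.exe"            # placeholder path
DST = "FurMark_patched.exe"    # patched copy

shutil.copyfile(SRC, DST)
with open(DST, "r+b") as f:
    data = f.read()
    if not data.endswith(b"\x00"):
        raise SystemExit("no trailing zero padding found; pick another spot")
    f.seek(len(data) - 1)      # last byte of the padding run
    f.write(b"\x01")

for path in (SRC, DST):
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    print(f"{path}: {os.path.getsize(path)} bytes, md5 {digest}")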
iwodo - Wednesday, November 10, 2010 - link
The good thing about GPUs is that they scale VERY well (if not linearly) with transistors. One node die shrink, double the transistor count, double the performance.
Combined with that, there is no bottleneck with memory, where GDDR5 still has lots of headroom; we are very limited by the process and not the design.
techcurious - Wednesday, November 10, 2010 - link
I didn't read through ALL the comments, so maybe this was already suggested. But, can't the idle sound level be reduced simply by lowering the fan speed and compromising idle temperatures a bit? I bet you could sink below 40dB if you are willing to put up with an acceptable 45 C temp instead of 37 C temp. 45 C is still an acceptable idle temp.
RussianSensation - Wednesday, November 10, 2010 - link
Very good point techcurious. Which is why the comment in the review about the GTX580 not being a quiet card at load is somewhat misleading. I have lowered my GTX470 from 40% idle fan speed to 32% fan speed and my idle temperatures only went up from 38*C to 41*C. At 32% fan speed I can not hear the card at all over the other case fans and Scythe S-Flex F cpu fan. You could do the same with almost any videocard.
Also, as far as FurMark goes, the test does test all GPUs beyond their TDPs. TDP is typically not the most power the chip could ever draw, such as by a power virus like FurMark, but rather the maximum power that it would draw when running real applications. Since the HD58/68xx series already have software and hardware PowerPlay enabled, which throttles their cards under power viruses like FurMark, it was already meaningless to use FurMark for "maximum" power consumption figures. Besides the point, FurMark is just a theoretical application. AMD and NV implement throttling to prevent VRM/MOSFET failures. This protects their customers.
While FurMark can be great for stability/overclock testing, the power consumption numbers from it are largely meaningless, since they are not something you can reach in any video game (can a game utilize all GPU resources to 100%? Of course not, since there are always bottlenecks somewhere in the GPU architecture).
techcurious - Wednesday, November 10, 2010 - link
How cool would it be if NVIDIA added to its control panel a tab for dynamic fan speed control based on three user-selectable settings:
1) Quiet... which would spin the fan at the lowest speed while staying just below the GPU temperature threshold at load, and somewhere in the low 50s C at idle.
2) Balanced... a compromise of moderate fan speed (and noise), giving slightly lower load temperatures and perhaps 45 C at idle.
3) Cool... which would spin the fan the fastest and be the loudest setting, but also the coolest, keeping load temperatures well below the maximum threshold and idle temps below 40 C. This setting would please those who want to extend the life of their graphics card as much as possible and don't care about noise levels, since they probably have other fans in their PC that are louder anyway!
Maybe Ryan or someone else from Anandtech (who would obviously have much more pull and credibility than me) could suggest such a feature to nVidia and AMD too :o)
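For what it's worth, the profile idea sketches out naturally as a simple temperature-to-duty-cycle curve. The numbers below are invented for illustration, and real fan control would have to go through NVIDIA's or AMD's driver interfaces, which this sketch deliberately leaves abstract.

```python
# Minimal sketch of the three-profile idea above. Temperature targets and
# duty-cycle percentages are made up for illustration only; an actual
# implementation would set fan speed through the vendor's driver API.
PROFILES = {
    # profile: (idle_duty_%, load_duty_%, target_idle_temp_C)
    "quiet":    (25, 55, 52),
    "balanced": (32, 70, 45),
    "cool":     (45, 90, 38),
}

def fan_duty(profile: str, gpu_temp_c: float, load_threshold_c: float = 80.0) -> int:
    """Pick a fan duty cycle for the current temperature under the chosen profile."""
    idle_duty, load_duty, target_idle = PROFILES[profile]
    if gpu_temp_c <= target_idle:
        return idle_duty
    if gpu_temp_c >= load_threshold_c:
        return load_duty
    # Ramp linearly between the idle point and the load point.
    span = (gpu_temp_c - target_idle) / (load_threshold_c - target_idle)
    return round(idle_duty + span * (load_duty - idle_duty))

for temp in (40, 55, 70, 85):
    print(temp, {p: fan_duty(p, temp) for p in PROFILES})
```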
BlazeEVGA - Wednesday, November 10, 2010 - link
Here's what I dig about you guys at AnandTech: not only are your reviews very nicely presented, but you keep them relevant for us GTX 285 owners and other more legacy-bound interested parties - most other sites fail to provide this level of complete comparison. Much appreciated. Your charts are fantastic, your analysis and commentary are nicely balanced, and the attention to detail is most excellent - it all makes for a simpler evaluation by the potential end user of this card. Keep up the great work... don't know what we'd do without you...
Robaczek - Thursday, November 11, 2010 - link
I really liked the article but would like to see some comparison with the nVidia GTX 295.
massey - Wednesday, November 24, 2010 - link
Do what I did. Look up their article on the 295, and compare the benchmarks there to the ones here. Here's the link:
http://www.anandtech.com/show/2708
Seems like Crysis runs 20% faster at max res and AA. Is a 20% speed up worth $500? Maybe. Depends on how anal you are about performance.
lakedude - Friday, November 12, 2010 - link
Someone needs to edit this review! The acronym "AMD" is used in several places where it is clear "ATI" was intended. For example:
"At the same time, at least the GTX 580 is faster than the GTX 480 versus AMD’s 6800/5800 series"
lakedude - Friday, November 12, 2010 - link
Never mind, looks like I'm behind the times...
Nate007 - Saturday, November 13, 2010 - link
In the end, we (the gamers) who purchase these cards NEED to be supporting BOTH sides so that AMD and Nvidia can both manage to stay profitable. It's not a question of who pwns whom but, more importantly, that we have CHOICE!!
Maybe some of the people here (or MOST) are not old enough to remember the days when mighty "INTEL" ruled the landscape. I can tell you for a 100% fact that CPUs were expensive and there was no choice in the matter.
We can agree to disagree but in the END, we need AMD and we need NVIDIA to keep pushing the limits and offering buyers a CHOICE.
God help us if we ever lose one or the other; then we won't be here reading reviews and/or jousting back and forth about who has the biggest stick. We will all be crying and complaining about how expensive it will be to buy a decent video card.
Here's to both companies... Long live NVIDIA & AMD!
Philip46 - Wednesday, November 17, 2010 - link
Finally, at the high end Nvidia delivers a much cooler and quieter single-GPU card, one that is much more like the GTX 460 and less like the 480 in terms of its performance/heat balance. I'm one of those who needs PhysX in my games, and until now I had to go with an SLI 460 setup for one PC and, for a lower rig, a 2GB GTX 460 (for maxing out GTA IV).
Also, I just prefer the crisp Nvidia desktop quality, and its drivers are more stable (and ATI's CCC is a nightmare).
For those who want everything, and who use PhysX, the 580 and its upcoming 570/560 siblings will be the only way to go.
For those who live by framerate alone, you may want to see what the next ATI lineup will deliver in a single-GPU setup.
But whatever you choose, this is a GREAT thing for the industry... and the gamer, as Nvidia delivered this time not just on performance but also on lower temps and noise levels.
This is what the 480 should have been, but thankfully they fixed it.
swing848 - Wednesday, November 24, 2010 - link
Again, Anand is all over the place with different video cards, making judgements difficult. He even threw in a GTS 450 and an HD 4870 here and there. Sometimes he would include the HD 5970 and often not.
Come on Anand, be consistent with the charts.