135 Comments

  • Ryan Smith - Sunday, July 7, 2019 - link

    Hey all,

    As you've probably noticed, this review is a bit bare-bones on information outside of benchmarks. With 2 video card launches in the span of a week - not to mention AMD's very important CPU launch - I've been stretched a bit thin trying to get everything to fall into place.

    So with the basics done and all of the benchmarks processed, I'm posting this now to let you all see how the RX 5700 series did on our testbench, and what my impressions are of the card. After getting some much-needed sleep, I'll be working on fleshing out the article some tonight, and then later in the week after I return from a further trip. I have 15 pages of notes on everything from threads to video decoding, and I want to get to it all.
  • Icehawk - Sunday, July 7, 2019 - link

    I know there is Bench and only so much space but considering who the target market is (you even specify in the conclusion) it would have been nice to see a 970 and a 580 in graphs. I’m still thinking it’s wait another generation though as my 970 handles 1440p on most games and I want the full jump to 4k from my next card - especially if it’s going to run north of $350.
  • 0ldman79 - Sunday, July 7, 2019 - link

    Agreed.

    I've got a 960M, a GTX 970 in one machine and 970 SLI in another.

    I'm just now starting to look hard at the newer cards. Direct comparisons would be appreciated, however, direct comparisons are already found on the Bench.
  • Surfacround - Monday, July 15, 2019 - link

    sorry, but dump the GTX 970 SLI and get a 2060 Super or a GTX 1660 Ti... or an AMD card. Nvidia needs to have a ZERO graphics card sales period for a year, so they realize that they have to be in the GPU-making business, not the sell-overpriced-GPUs business...

    Nvidia needs to sell its RTX 2070/2080 cards at half price... because the new consoles are going to be as fast or faster than a GTX 1660 Ti (same speed as a 5700 is my guess, or effectively that speed). For AMD it is moot: whether you buy the new console in two years or their graphics card, Nvidia still loses.
  • Ryan Smith - Sunday, July 7, 2019 - link

    Just so it's noted, a 580 is in the graphs (it's in blue). As for a 970, with half of AT's video cards on the other side of the country right now, I couldn't swing one. But I do have a 980 in there.
  • just4U - Sunday, July 7, 2019 - link

    I think I would have preferred to see the last generation king included (1080ti) but other than that there's some great reference points overall to compare to.
  • yankeeDDL - Monday, July 8, 2019 - link

    If I may: in general, it's great to see the new cards stacked against their direct competitors, however, for people looking to upgrade, it's almost always more interesting to be able to look back at 2, 3, even 4 generations.
    The benchmarks of the RX400 and RX300, with latest drivers and modern titles, don't exist, so it's difficult to compare them against the new breeds.
    Obviously, everyone expects the new ones to be considerably faster, however, the question is "is it worth it"?
    Example: I have a Ryzen 5 1600 and I wonder if I should upgrade to the 3600x. More cores, higher clock, same power: for sure it is faster, but does it really make a difference?
  • jabber - Monday, July 8, 2019 - link

    They always forget to include the previous 2-3 generations which are in fact the cards 85% of us 'normal' folks looking for a midrange upgrade have. We only get to see how they compare with cards from 6 months previous.

    Just a note...some of us sweat our hardware for 2-4 years...cos we have to.
  • BikeDude - Monday, July 8, 2019 - link

    "some of us sweat our hardware for 2-4 years...cos we have to."

    No, we are not cheap, but "environmentally conscious". ;)
  • jabber - Monday, July 8, 2019 - link

    That and some of us just don't waste our lives running benchmarks all day.

    "Mmmm man that $650 got me an extra 3FPS!"
  • LoneWolf15 - Thursday, July 11, 2019 - link

    Or we do it because technology really stagnated up until this year.

    Until Coffee Lake-R and Ryzen 3xxx (and the significant DDR4 price drop) I couldn't justify replacing a 4790K with 32GB of DDR3 and a good board.
  • just4U - Monday, July 8, 2019 - link

    Oh well hell.. here let me help you then.. the RX 480 was only slightly slower than the RX 580.. although the 380 was substantially slower than the 480 and a bit of a power sucker... If you're sitting on 300-series cards (or shoot, 700-series Nvidia) then anything today is a substantial upgrade across the board... this should be fairly easy to see.
  • yankeeDDL - Tuesday, July 9, 2019 - link

    It was a general statement (at least, mine was). There's so much that has changed since the time the 300 and 400 series were tested (driver changes) that there's simply no correct information today. Even more so on the CPU side, with all the mitigations that came into play and with the new scheduler of Windows 1903. My personal opinion is that a "quick" review when the embargo drops, showing the changes over the previous gen, is great. But 1 or 2 weeks later, the data could be extended to cover previous generations and give a very clear, up-to-date picture. I think it would be very helpful.
  • just4U - Wednesday, July 10, 2019 - link

    @YankeeDDL,
    I try to look at the bench when gauging overall performance of older cards, find a reference with something newer and then calculate that based on a current review. That doesn't always tell the whole tale though. The 390X is listed in this review yet I struggled with a 390 at the tail end of 2017 on new games. Video lag was getting the better of me on a fast system. So it's a mixed bag, all you can do is gauge how it is in comparison..
  • artk2219 - Tuesday, July 9, 2019 - link

    Also I'd argue that if you're sitting on a 390x you compare very well against an RX580, like within 5%. So the minimum you should be looking at for an upgrade would be a 1070 ti\2060\Vega 56\RX 5700. But if you only game at 1080p, that card still has some time left in it.
  • Ryan Smith - Monday, July 8, 2019 - link

    You are absolutely not forgotten. It's why a 390X was included. That was a $400 card on launch (and got cheaper pretty quickly), making it a good comparison point to the $400 5700XT.
  • Meteor2 - Monday, July 8, 2019 - link

    Thank you Ryan
  • IUU - Thursday, August 1, 2019 - link

    I feel you. Just an "irrelevant note": the upgrade to a midrange GPU which delivers between 5 and 8 teraflops single precision costs 350 if cheap, 500 if expensive. To get flagship performance in cellphones you need to shell out about 350 if cheap and 1000+ if expensive, for the "overwhelming" GPU performance of between half and 1 teraflop. Just to set things straight. Hardcore computer users who are price conscious, and maybe environmentally conscious, are stuck on desktop and think twice, and only after full investigation, about what to spend.
    Holiday-goers, who haven't ever heard of a GPU or gigaflops, happily shell out for computationally inferior products without any deep thought, thus strengthening the manufacture of products that do not contribute substantially to the increase of home users' computing capacity. Food for thought.
  • 0ldman79 - Sunday, July 7, 2019 - link

    Awesome.

    Any info you can give us on hardware encoding/decoding is appreciated.

    Half of the use of my Nvidia cards is NVENC. I recently tried my wife's laptop (A6-6320 I believe) and was pleasantly surprised with its encoding capabilities. I might take a look at AMD again, but I need info before a purchase. Hardware encoding capabilities are mostly overlooked outside of streaming games online.
  • Betonmischer - Thursday, July 11, 2019 - link

    I've got some benchmarks for you.

    https://3dnews.ru/assets/external/illustrations/20...
    https://3dnews.ru/assets/external/illustrations/20...
    https://3dnews.ru/assets/external/illustrations/20...
    https://3dnews.ru/assets/external/illustrations/20...
    https://3dnews.ru/assets/external/illustrations/20...

    The legend is in Russian, but it's going to be easy to figure out everything else. The first three are for decode, the fourth and fifth for encode (ffmpeg fast/speed preset).
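    For anyone wanting to reproduce encode runs like these, they are typically driven through ffmpeg's hardware encoders. A rough sketch, not the exact commands 3DNews used; encoder availability depends on your ffmpeg build and GPU, and input.mp4 is a placeholder:

```shell
# Hypothetical H.264 hardware-encode comparison (fast/speed presets, as in the charts).
# Requires an ffmpeg build with NVENC and/or AMF support and a matching GPU.

# NVIDIA NVENC, fast preset
ffmpeg -y -i input.mp4 -c:v h264_nvenc -preset fast out_nvenc.mp4

# AMD AMF (Windows), speed preset
ffmpeg -y -i input.mp4 -c:v h264_amf -quality speed out_amf.mp4
```

    Timing each command (and dividing encoded frames by elapsed seconds) gives the FPS figures these charts report.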
  • ZolaIII - Sunday, July 7, 2019 - link

    Just to let you know that AMD's Windows proprietary drivers actually aren't in that good shape. Based on Michael's Linux tests (OGL only for now), I noticed that with the open-source RadeonSI driver the new AMD GPUs actually achieve better results than Nvidia cards with their mature proprietary drivers.
    https://www.phoronix.com/scan.php?page=article&...
    Naturally RadeonSI is also far from optimized for Navi. Point is, there's more to expect from driver optimisations in the future. Best regards.
  • Zingam - Sunday, July 7, 2019 - link

    Well, AMD is failing - they have no competitive mobile parts, CPU & GPU. How much is the market share of mobile vs desktop these days? In the company I work for, every new hire gets an Intel-based Dell laptop, and the towers are disappearing all the time.
  • WaltC - Sunday, July 7, 2019 - link

    If I had a nickel for every time I've heard "AMD is doomed" since 1999 I'd be worth considerably more than I am...;)
  • just4U - Sunday, July 7, 2019 - link

    If your count is on par with mine.. likely you'd be able to pay all your house bills for a couple of months!
  • Zingam - Monday, July 8, 2019 - link

    I didn't mean to say that AMD is doomed. I meant that AMD has failed to deliver anything compelling for me - good, competitive, high-performance mobile products.
  • Korguz - Monday, July 8, 2019 - link

    " failed to deliver anything compelling for me - good, competitive, high performance mobile products. "
    not yet you mean :-)
  • just4U - Sunday, July 7, 2019 - link

    It also needs noting here that OEMs are getting ready to release some pretty impressive AMD mobile setups.. It was mentioned during trade shows over the past month. They're basically going to be hitting practically every sector pretty hard over the coming months... The Ryzen 2700U and 3750H look fairly solid to me..
  • Korguz - Sunday, July 7, 2019 - link

    Zingam.. " the have no competitive mobile parts - CPU & GPU " um, is AMD even launching mobile parts today?? nope.. maybe later in the year.. but not yet...
  • evernessince - Monday, July 8, 2019 - link

    TF does this have to do with this review?
  • Gastec - Tuesday, July 9, 2019 - link

    Absolutely nothing at all. It's just ranting for the sake of ranting. A.k.a. trolling.
  • psiboy - Monday, July 8, 2019 - link

    @Zingam Mate, they've had good mid-range mobile CPUs in laptops for a while now, with a whole new generation coming.. How is it AMD's fault your purchasing people don't do their homework? You have one example to base your entire argument on. If a blind man touches an elephant's trunk, I fear he thinks it is like a snake.. hardly the truth.
  • Da W - Monday, July 8, 2019 - link

    i believe they have something, they are called PS 5 and XBOX 4 !
  • Gastec - Tuesday, July 9, 2019 - link

    We don't give a *censored by Google and the SJW woke armies* about your "mobile".
  • WaltC - Sunday, July 7, 2019 - link

    Ryan--looks good--I'll be watching for the flesh-out later....;) I have a polite request--would you be so kind as to put the XT and 5700 together with a Ryzen 3k-series CPU on a x570 mobo and post some D3d12 multi-GPU benchmarks in SotTR? Reason I ask is because I currently have an RX-590/480 8GB X-fire system set up, and running the SotTR in-game bench in d3d12 multi-GPU mode I get ~62 fps @ 3840x2160, ~32fps running only the 590 (@ 1.6GHz), and I was curious as to how two 5700's in tandem might run this game. Also, I use lots of eye-candy--but not all--but it's better than medium settings, imo. I'd actually like to see more X-fire/d3d12 multiGPU results. I bought the RX-480 a couple of years ago, then added the 590 this past X-mas, on a whim, and have been surprised and pleased by how well it runs. Don't know why most sites won't cover multi-GPU anymore--AMD has made it almost idiot-proof these days--no more having to match operating frequencies of the cards, no more having to reboot to turn on/off X-fire in a game profile, etc. Thanks, if you can squeeze it in sometime...;)
  • ballsystemlord - Sunday, July 7, 2019 - link

    Not that I intend to do multi-GPU, but I always wondered why they stopped benchmarking it.
  • Ryan Smith - Sunday, July 7, 2019 - link

    Unfortunately our GPUs and CPUs are on opposite continents. So I don't have access to Ryzen 3000 at the moment, and Gavin doesn't have a Radeon 5700.
  • WaltC - Sunday, July 7, 2019 - link

    Well, when you guys get some time..;) OK, I just finished six hours of driver hell with the 19.7.1's--I haven't seen this kind of behavior out of a GPU driver in > 15 years! To make a long story short, the 19.7.1's I installed today threw a black screen at the first initialization point during Adrenalin's routine install--locked up the display at that point with a constant signal to the monitor that kept the screen black. I got out of it manually, but it was pretty severe--so when I got the desktop back I uninstalled the AMD GPU drivers in toto and thought I'd try again from scratch, this time after making a restore point (Win10x64 18362.1000+)--which I didn't make the first time--woulda', shoulda', coulda', you know how it goes...;) I tried again, thinking that maybe the problem was interference from some older driver files. Nah--it locked at the first D3D initialization again, same place, same black screen, same constant-signal hard lock (the monitor did not lose sync or go to standby for hours unless I powered it off).

    It was horrible--it somehow hosed my boot BCD arrangement good--I couldn't even get to safe mode! Anyway, it was like getting stuck in flypaper--the more I struggled, the worse the mess got, until tonight I was despairing of ever getting back in--nothing worked! I could boot to my Rufus USB flash-drive Win10 installations without a problem--but that was of little use, since every time I tried to change my C:\ drive (L:\ when booting from the Rufus USB) the system rejected my attempts--even calling up System Restore manually from the X:\ admin prompt did nothing, because of course it was booting from the wrong drive...;) Arrrgh! Finally I popped in my Macrium Reflect rescue DVD and thought to go ahead and simply restore my backup--and lo and behold, I noticed a little blurb in the MR UI that said "Want your boot-drive problems fixed?" Yeah!--I selected that, and MR fixed them at least well enough that I could get into safe mode again and then apply my System Restore file.

    So that was my marathon in hell on the day that I ordered my Ryzen 3K-series CPU and my Aorus Master... I thought you'd like to know about this because you said you'd had a rough time with the drivers on the 5700's--I can second that loudly for my hardware! It's been many moons since I've seen a GPU driver this bad out of AMD--in fact, I can think of only one other similar incident since 2002, so this is definitely unusual for AMD. You know, my experience was so bad and weird that I could believe it if I heard that some disgruntled (or bribed!) employee at AMD decided to spread a little havoc and rain on AMD's parade today... OK, 'nuff said--I'm sticking with the 19.6.3's until I hear differently!
  • psiboy - Monday, July 8, 2019 - link

    WaltC, I don't install the "optional" drivers such as the 19.7.1; I only install the "recommended" drivers, as they have usually passed Microsoft's WHQL testing. Thought that might help you not run into those sorts of issues with "beta drivers" too quickly again, mate. Pays to do your homework!
  • WaltC - Monday, July 8, 2019 - link

    I don't disagree, and thanks for the comment--but you know, I always update when a new driver is released--and this is the first time in I don't know how many years that I've had this kind of problem! That's why I wrote it up--coupled with the problems Ryan and others have reported here even on the 5700/XT--although they didn't have the problems I did. I'm positive I won't be the only one seeing the problems here. I usually have lots of praise for AMD--but I can't do that when I see something like this. Looks like they might've pushed the card out a couple of months early, as I couldn't find them on Amazon US--or Newegg--which had so few in stock they didn't even have a category for the 5700s. Imagine--I was going to pick up an Anniversary card yesterday anyway but as of this morning can't find one at these outlets!
  • WaltC - Monday, July 8, 2019 - link

    All FIXED--had to come back and correct the record. I used the "clean" install method from the 19.6.3's--and all is well so far with the 19.7.1's, I am delighted to say...;) Couldn't leave it hanging like that. I should have done that the first time, because it is a new driver set of greater size and feature support (for the 5700's)--yes, mea culpa! How many times is it the last thing I try that *works*? (utterly rhetorical)...Arrghghghg! Only thing out of the ordinary I saw were some funky desktop colors that vanished with the initial reboot after installation--running fine now!
  • WaltC - Sunday, July 7, 2019 - link

    Drivers need work, it would seem.
  • ksec - Sunday, July 7, 2019 - link

    Thanks. It is good enough for me as it is. Technical details could be another article. But the performance numbers and price percentages do the job.
  • IGTrading - Sunday, July 7, 2019 - link

    Thank you for the good work, Ryan, and thank you for not awarding some ridiculous silver "award" to Radeon as your colleagues did with Ryzen 3000 :)
  • Ryan Smith - Sunday, July 7, 2019 - link

    So it's noted, I'm the one that suggested the Silver award. I'm equally culpable for that as for the lack of an award here.
  • jospoortvliet - Monday, July 8, 2019 - link

    Fwiw, both decisions were prudent if you ask me. Ryzen 3000 changed the entire direction CPU performance was taking - for over half a decade we had incremental improvements at best. Now the price/performance/power triplet is completely upended by the new generation of Zen. While the new Radeons are interesting, they are nowhere near as market-moving...
  • jakky567 - Sunday, July 7, 2019 - link

    Does it have VP9 10 bit? I know Raven Ridge does. It's nice for YouTube hdr.
  • Ryan Smith - Monday, July 8, 2019 - link

    Yes.
  • crashtech - Wednesday, July 10, 2019 - link

    Well, the Compute portion is still bare-bones. Do these cards just not run on the omitted applications?
  • alfatekpt - Wednesday, October 23, 2019 - link

    Why no 1660 TI included in this benchmark?
  • GeoffreyA - Sunday, July 7, 2019 - link

    Many thanks, Ryan, to you and the team for all the hard work. We do appreciate it.
  • catavalon21 - Sunday, July 7, 2019 - link

    I had hoped for really competitive results in the mid-range for compute; that AMD doesn't have drivers that support the new architecture is absurd. To not even run some older computer work means this was clearly not ready for prime time. Shame on you, Lisa.

    I write this very disappointed that the choice of a mid-range GPU right now isn't much more difficult.
  • catavalon21 - Sunday, July 7, 2019 - link

    ...older COMPUTE work...<sigh>
  • just4U - Sunday, July 7, 2019 - link

    Holy crap.. I wasn't actually expecting AMD to come close to Nvidia with these. (Regardless of the hype by AMD.) The 5700 XT is just a smidge slower than the 2070S.. and it's quite an impressive jump over the RX 580/590s they replace.
  • catavalon21 - Sunday, July 7, 2019 - link

    My whining about compute aside, you're right. The 5700XT competes very well against the 2070S - better than I hoped for.
  • DanNeely - Sunday, July 7, 2019 - link

    Yeah. AMD's showing is strong enough that I'm wondering if we'll see further Nvidia price cuts in the near future.
  • Kevin G - Sunday, July 7, 2019 - link

    They are indeed impressive against Nvidia's Super cards, but by pricing they're more of a Vega 56/64 replacement.
  • just4U - Sunday, July 7, 2019 - link

    I was considering it against the new norm in video card pricing; to me they're upper mid-range, and they don't appear to compete with the multipurpose Vega cards in order to replace them.
  • tipoo - Sunday, July 7, 2019 - link

    Looks like that completely outsized Particle Physics subscore was real, from multiple results coming in. Interesting. Given AMD seems to be going for a hybrid RT approach for RDNA 2.0 in 2020, I wonder if this was a half step towards building out this portion of the chip for it.

    https://browser.geekbench.com/v4/compute/4259036

    Under OpenCL, it beats a 2080TI under CUDA, in that one subtest.
  • mildewman - Sunday, July 7, 2019 - link

    Can someone explain to me why Navi requires twice the number of transistors (10.3B) compared to Polaris (5.7B) for the same number of CU's ?
  • rUmX - Sunday, July 7, 2019 - link

    It's no longer the same architecture. RDNA vs GCN. The fact that a 36 CU (5700) consistently beats Vega 56 (56 CU) shows the design changes. Sure part of it is clock speeds and having 64 ROPs but still Navi is much more efficient than GCN, and it's doing it with much less shaders. Imagine a bigger Navi can match it exceed the 2080 TI.
  • Kevin G - Sunday, July 7, 2019 - link

    Not all those transistors are for the improved CUs either. There is a new memory controller to support GDDR6, new video codec engine and some spent on the new display controller to support DSC for 4K120 on DP 1.4.
  • peevee - Thursday, July 11, 2019 - link

    Why would GDDR6 need substantially more transistors than GDDR5? Video codec seems more or less the same also.
    Looks like there are some hidden features not enabled yet, hard to explain that increase in transistors per stream processor (not CU) otherwise (CUs are just twice as wide).
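    As a back-of-envelope check on the question being discussed, here is the transistors-per-CU arithmetic; the CU counts (40 for Navi 10, 36 for Polaris 10) are assumed from commonly quoted specs, not taken from this review:

```python
# Transistors per CU, using the transistor counts quoted in the thread.
# CU counts are assumptions: Navi 10 = 40 CUs, Polaris 10 = 36 CUs.
navi_transistors, navi_cus = 10.3e9, 40
polaris_transistors, polaris_cus = 5.7e9, 36

navi_per_cu = navi_transistors / navi_cus           # ~257.5M per CU
polaris_per_cu = polaris_transistors / polaris_cus  # ~158.3M per CU
print(f"Navi 10: {navi_per_cu / 1e6:.1f}M per CU, "
      f"Polaris 10: {polaris_per_cu / 1e6:.1f}M per CU")
```

    That's roughly a 1.6x increase per CU even before accounting for the uncore (memory controller, display, media blocks), which is what makes the doubling look suspicious.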
  • Meteor2 - Monday, July 8, 2019 - link

    I was wondering the same thing.
  • JasonMZW20 - Tuesday, July 16, 2019 - link

    Because it's now a VLIW2 architecture via RDNA. Each CU is actually a dual-CU set (2x32 SPs, 64 SPs total) and is paired with another dual-CU to form a workgroup processor (4x32), or 128 SPs. Tons of cache has been added and rearranged. This requires space and extra logic.

    Geometry engines (via Primitive Units) are fully programmable and are no longer fixed function. This also requires extra logic. Rasterizer, ROPs, and Primitive Unit with 128KB L1 cache are closely tied together.

    Navi definitely replaces both Polaris and Vega 10/20 for gaming, so average out Polaris 30 (5.7B) and Vega 10 (12.5B) transistor amounts and you'll be somewhere near Navi 10. Vega 20 is still great at compute tasks, so I don't see it being phased out in professional markets soon.
  • Cooe - Tuesday, March 23, 2021 - link

    ... RDNA is NOT VLIW (like Terascale) ANYTHING. It's still exclusively a scalar SIMD architecture like GCN.
  • tipoo - Sunday, July 7, 2019 - link

    Do those last (at least two) Beyond3D tests look a little suspect to anyone? Multiple AMD generations all clustering around 1.0, almost looks like a driver cap.
  • rUmX - Sunday, July 7, 2019 - link

    Ugh no edit... I meant "Big Navi can match or exceed 2080 TI".
  • Kevin G - Sunday, July 7, 2019 - link

    Looking at the generations it doesn't surprise me about the RX 580 but it is odd to see the RX 5700 there, especially when Vega is higher. An extra 20% of bandwidth for the RX 5700 via compression would go a long way at 4K resolutions.
  • Ryan Smith - Sunday, July 7, 2019 - link

    It's a weird situation. The default I use for the test is to try to saturate the cards with several textures; however AMD's cards do better with just 1-2 textures. I'll have a longer explanation once I get caught up on writing.

    From my notes: (in GB/sec, random/black)

    UINT8 1 Tex: 333/472
    FP32 1 Tex: 445/469
    UINT8 6 Tex: 389/406
    FP32 6 Tex: 406/406
  • bananaforscale - Sunday, July 7, 2019 - link

    I expected to be buying a 2070 Super next, but now I'm absolutely waiting for third party cards on both sides. Didn't expect 5700XT to have a lower power draw than 2070S under full load either.
  • Skiddywinks - Sunday, July 7, 2019 - link

    It's the slightly slower than a 2070S but 20% better perf/cost that's got me.

    Will have to see how those numbers stack up with the third party cards. But the XT is looking better than I expected.
  • Kevin G - Sunday, July 7, 2019 - link

    Given the pricing structure, AMD was initially targeting RTX 2070 performance for the RX 5700, and the results point to a victory in that comparison. The gotcha is that Nvidia went Super and AMD pre-emptively dropped prices. The result is that the performance that cost $599 nine months ago from Nvidia can now be had for $399 from AMD. Street pricing over time should be more interesting, as AMD has more room to move downward while boosting performance over time with driver updates. Due to the restructuring of Nvidia's lineup, the RX 5700 XT isn't a clear win, but it's certainly not a wrong choice to make.

    Huh, I wonder what HDMI feature set AMD has for it not to work at boot on your workbench. Makes me wonder what would happen with this card in a 2010 Mac Pro, which exhibits similar display oddities with 3rd-party GPUs at boot.

    Drivers do need some polish, looking at the benchmark data. Strange Brigade's 99th percentile numbers are very similar at 1440p and 1080p where the averages are more divergent; chances are there is a hiccup there that could be flattened out. Similarly, the synthetic numbers are troublesome and point to a driver issue: the buffer compression figures indicate, well, that there is no buffer compression going on. Vega on the other hand has a small amount going on, which helps. I'd be really curious whether compression data from real games can be extracted to fully isolate it to drivers. Also, how well do Vulkan games run? I know it isn't part of the normal test bench, but is there any cursory data on this API's support with the launch drivers?
  • imaheadcase - Sunday, July 7, 2019 - link

    AMD has never been known for good drivers, and they're slow to fix stuff. I doubt they have more room to move downward; they pretty much knew the pricing of the cards at release and just made it seem like a price drop.

    For a brand-new "cheap" PC build these would be OK cards, but if you've already got a decent Nvidia GPU there's no reason at all to upgrade to one; considering the time it takes AMD to catch up in hardware, they always fall behind for years.
  • just4U - Sunday, July 7, 2019 - link

    Wait... what? I am sitting on last gen 1080s(sli) and Vega's(cf) and while I don't see any card out there I want besides the Vega VII currently.. to suggest these would only be good in a "cheap" build is nonsense.
  • mapesdhs - Sunday, July 7, 2019 - link

    imaheadcase, you're out of date re drivers, AMD has more reliable drivers these days, and (it seems) generally better image quality, hence why Google chose AMD over NVIDIA for Stadia. There's always an exception of course, Radeon VII's launch drivers were awful, that card was launched two weeks too early.
  • haukionkannel - Monday, July 8, 2019 - link

    True... AMD drivers have been better than Nvidia drivers for some time, and that is a small miracle considering how big a programming team Nvidia has... wonder what they have been doing lately? Optimizing RT performance?
  • tamalero - Monday, July 8, 2019 - link

    The usual, gaining exclusives by optimizing the games FOR their hardware first.
  • zodiacfml - Sunday, July 7, 2019 - link

    I guess the choice of games matter.
  • Korguz - Sunday, July 7, 2019 - link

    yep.. and i don't play any of the games tested.. so.. attempting to choose a new vid card will be interesting....
  • mapesdhs - Sunday, July 7, 2019 - link

    Check out Hardware Unboxed and (when it's up) Gamers Nexus for Navi reviews, they're likely to have a different selection of games.
  • fizzypop1 - Sunday, July 7, 2019 - link

    There is a 5% performance gap between the 5700 XT and the 2070 Super, so I am thinking the anniversary edition may be able to catch the 2070 Super, and at a lower price it may be worth considering. Other reviews are saying they have driver issues, so there may be more performance to be had.
  • GreenMeters - Sunday, July 7, 2019 - link

    Wow, AMD has really done it on both CPU and GPU fronts. Looks like next system will be 3700X + RX 5700, Linux only, open source drivers. Only catch is the need to wait for 3rd party GPU cooler, just for quieter operation.
  • rolfaalto - Sunday, July 7, 2019 - link

    Would be interesting to run tests using the new AMD CPUs ... taking full advantage of PCIe-4!
  • Kevin G - Sunday, July 7, 2019 - link

    I was hoping for a last minute surprise that when paired together the link between a RX 5700 and Ryzen 3000 series chip would negotiate to an Infinity Fabric link with even more bandwidth and more importantly memory coherency. This would be more of an efficiency play than shifting peak performance higher. Compute work loads should love this arrangement.
  • mapesdhs - Sunday, July 7, 2019 - link

    There will be no benefit for games with PCIe 4.0 and current Navi products. Maybe Navi 20 at 8K, but not at the moment. Benefits from 4.0 are far more related to storage just now, which for most users again is largely irrelevant.
  • rahvin - Monday, July 8, 2019 - link

    Storage speed is never a non-factor. It affects everything you do. Sure it's not like going from HD to SSD but any increase in speed of the disk system has an impact because it's the slowest part of the whole computer.
  • msroadkill612 - Monday, July 8, 2019 - link

    We are a rare breed, sir. I very much agree, but it is heresy to say it out loud. It's not just raw speed either - even the lag & processing overhead of a chipset SATA SSD vs a native PCIe NVMe drive is significant, especially on an underpowered rig like an APU.
  • ballsystemlord - Sunday, July 7, 2019 - link

    Thanks for your hard work, Ryan. I'll read this after you flesh it out a bit, as its sparsity makes checking it for typos rather pointless.
    I look forward to your post on the compute benchmarks in the coming weeks (months?).
  • ballsystemlord - Sunday, July 7, 2019 - link

    @ryan , what's the die size?
  • Ryan Smith - Sunday, July 7, 2019 - link

    251mm2.
  • ballsystemlord - Sunday, July 7, 2019 - link

    Thanks!
  • eastcoast_pete - Sunday, July 7, 2019 - link

    @Ryan: Thanks for the initial review! If I read it correctly, AMD is at risk of (again) screwing up on the driver side. I really hope that they get on this pronto, and put their collective backs into it. Even great hardware can suck if the software is buggy.
  • moozoo - Sunday, July 7, 2019 - link

    "In short, OpenCL doesn't have any good champions right now."
    Actually Intel, especially with their new discrete graphics card coming up. They would be mad to start a new GPU compute API, and I'd expect it's unlikely they would join AMD with ROCm or Nvidia with CUDA.
    Intel's FPGAs also have OpenCL, and it's an obvious direction for them.
    They have been pushing Intel ISPC lately (Unreal physics) for the CPU, so it is not impossible they could push a GPU-based version of that, but I think that's unlikely. ISPC is more about pushing their CPUs' AVX-512 against AMD.

    Also, about FP64: I didn't see it mentioned anywhere. For the minority like me that care, according to TechPowerUp it is 1:16, which would be great, as I feared they would drop it altogether as Intel has with Gen11. The TechPowerUp 5700 XT entry is at https://www.techpowerup.com/gpu-specs/radeon-rx-57...
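    To put a 1:16 rate in perspective, a quick calculation; the ~9.75 TFLOPS FP32 peak for the 5700 XT is an assumption from commonly quoted specs, not a figure from this article:

```python
# Rough FP64 throughput implied by a 1:16 FP64:FP32 rate.
# The 9.75 TFLOPS FP32 peak is an assumed spec for the 5700 XT.
fp32_tflops = 9.75
fp64_tflops = fp32_tflops / 16  # 1:16 rate
print(f"~{fp64_tflops:.2f} TFLOPS FP64")
```

    Roughly 0.6 TFLOPS of FP64 is modest, but far better than dropping the capability entirely.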
  • FourEyedGeek - Sunday, July 7, 2019 - link

    How effective is ray tracing on an RTX 2060 Super? It is all well and good to have a feature, but if most people who purchase the card do not enable it due to the massive performance hit, is the price premium worth it?

    To me it seems like the 5700 is a fantastic price to performance option, and the 2070 Super is the realistic minimum GPU you'd want if utilising ray tracing.
  • webdoctors - Thursday, July 11, 2019 - link

    I don't have a card or the games, but I think on a 2060 Super you'd be able to do really well at 1080p with RT. Honestly I'd rather have ray tracing at 1080p than non-RT at 4K.

    https://wccftech.com/geforce-gtx-ray-tracing-1080p...

    This link shows the non-Super 2060 hitting 60 FPS in the Metro and Justice demos, so the 2060 Super should be great. I've seen demos like Bioshock with ray tracing on and it's just amazing. People complaining about RT just don't make sense to me; the idea and tech have been around for decades and used by movie studios, because it's not the hacky way of visualization and it's what your eyes perceive.

    I'm glad we're having some evolution in gaming graphics after DX11 that's in real image quality and not just the polygon increases of the last few years. Sure it's not cheap, but Intel and AMD are also doing it, so the prices should come down when it's standard. Look at how much SSD and DDR4 prices have dropped.
  • Bensam123 - Monday, July 8, 2019 - link

    No testing of input delay or anti-lag? That's definitely one of the things I think a lot of gamers are looking forward to the most if they've heard about it, and if they haven't, they'll be interested once they do.

    That aside, one of the most interesting things in the benchmarks is that the 5700 XT is beating the 2070S in some 1080p tests, sometimes by a large margin. Given that most gamers, especially competitive ones, play at 1080p, that's a pretty big deal.
  • isthisavailable - Monday, July 8, 2019 - link

    I want AMD to make a 2080 Ti Super competitor (Navi 20?), and stick two of those together on a single PCB, putting PCIe 4.0 to good use and taking the performance crown from Nvidia.
  • jjj - Monday, July 8, 2019 - link

    lol going backwards in terms of perf per dollar vs Vega 56.
    Both Nvidia and AMD should be squashed by regulators in a sane world as this is ridiculous.
  • eva02langley - Monday, July 8, 2019 - link

    Idiot...

    https://static.techspot.com/articles-info/1870/ben...
  • eastcoast_pete - Tuesday, July 9, 2019 - link

    Actually, I like the idea used in the graph you linked to: $ per fps, averaged from 18 games, all at 1080p very high settings. It allows a value comparison all the way from lower to high-end cards.
  • Meteor2 - Monday, July 8, 2019 - link

    C'mon jjj you're better than that.
  • sgkean - Monday, July 8, 2019 - link

    How does enabling the various advanced features (Ray Tracing, AMD Fidelity FX, AMD Image Sharpening) affect the game scores? With the performance being so close, and these new features/technologies being the main difference, would be nice to see what effect they have on performance.
  • Wardrop - Monday, July 8, 2019 - link

    I assume the noise of these is due to the use of a blower? I'm guessing we'll have to wait for custom PCBs and coolers to get something quieter, or otherwise go with water cooling.
  • xrror - Tuesday, July 9, 2019 - link

    Argh... yet again, it seems like AMD is pushing beyond the sweet spot of the process node to try and force as much raw performance out as they can.

    I really don't want to be yet another person bashing on Raja. He probably did get a bit short-changed on personnel resources at AMD, as Ryzen really DID need to succeed or else AMD would have died. And he did deliver good compute-oriented GPU cores for the higher-margin workstation markets.

    But... it just feels like AMD needs to come to terms with their fabrication node and how to get GPU cores to "kickith the butt" beyond beating Intel IGP graphics.

    Which... feels unfair in a way. The only reason AMD "sucks" is that Nvidia right now is so stupidly dominant in discrete graphics (and major kudos to Nvidia for mastering that on an "older node", even). I mean, even Intel had really bad problems porting its IGP graphics to 10nm Cannon Lake.

    But all that said, the RX 5700 really feels like it's fighting against the process node to not suck. Intel may (hopefully, might) actually get its s**t together and bring forth a competitive discrete card (and if they "fail", guess what, that failure will hammer the lower-end market), and Nvidia...

    well, like, Nvidia even two process nodes behind at this rate would probably still be faster. Which is stupid. All credit to Nvidia; it's just that I really hoped for a few more process "rabbits out of the hat" before GPUs slammed into the silicon stagnation wall.

    I just wish we could have gotten maybe a doubling of graphics performance for VR before "market forces" determined that a VR/4K-capable video setup is going to cost you over $1000.
  • Meteor2 - Tuesday, July 9, 2019 - link

    "RX 5700 really feels like it's fighting against the process node to not suck." -- what are you talking about?
  • peevee - Thursday, July 11, 2019 - link

    Actually, for GPUs, with their practically linear scaling of performance with ALU count, using the densest nodes is the right approach. They probably should have used a denser, low-power variant (libraries) of TSMC's "7nm" process and added more ALUs in the same space at the expense of frequency, but that would be different from what Ryzen 3000 uses, so add the extra expense to R&D.
  • CiccioB - Tuesday, July 9, 2019 - link

    In few words, AMD just used a lot of transistors and W just to get near Pascal efficiency.
    Thanks to the new 7nm process they managed to create something that looks acceptable.
    But as we already saw in the past, they somewhat closed the gap only because Nvidia is still waiting for the new process to become cheaper.
    Once it does, Nvidia's new architecture is going to leave these useless pieces of engineering in the dust, even if it were just a Turing shrink with no other enhancements.
    10 billion transistors to improve IPC by about 1.25x and save just a few watts thanks to the 7nm process. And to end up on par with Pascal. 10 billion transistors without support for a single advanced feature that Turing has, such as VRS, which is going to improve performance a lot in future games and is going to be the real trump card for Nvidia against this late Pascal; no mesh shading or similar, no FP+INT, no RT, and no tensors that can be used for many things, including advanced AI.
    10 billion transistors that simply give evidence that GCN is problematic and really needs a lot of workarounds to perform well. 4.4 billion transistors used to improve GCN efficiency, and that resulted in a mere 1.25x.
    10 billion transistors spent on fixing a crap architecture, and that would not be enough to make it look good even if the frequency/W curve had not been ignored completely, making this chip consume the same as its rival on an older process. Like all the previous failing architectures starting from Tahiti.

    In the end this architecture is an attempt to fix an unfixable GCN, and it relies only on Nvidia's delay in adopting 7nm. On the same node it would have been considered the same as Polaris or Vega: big, hot, worthless to the point of being sold with no margins.
    As we can see, this is equally a waste of transistors and watts, and it has been discounted even before launch. A worthless piece of engineering that will be "steamrolled" by the next Nvidia architecture, which will set the basic path for all the coming graphics evolution while extending what is already available today through Turing.
    AMD has still to add all those missing features, and it already has a really big transistor budget to handle today. 7nm, through some revisions, is here to stay for a long time. If AMD is not going to change RDNA completely, they won't be able to compete except by skipping support for the more advanced features in the coming years, and they are going to enjoy this matching of performance for just a few months. Of course the missing features will be considered useless until they eventually catch up. And they still have the console weapon to help them keep the market at a stall, as they are quite behind what the market can provide in the next years. RT is just the tip of the iceberg. But also advanced geometry, like mesh shading features that could already boost scene complexity to the moon. But we just learnt that with Navi, AMD only managed to match Maxwell's geometry capacity. A worthless piece of silicon, already discounted before launch.
  • Meteor2 - Tuesday, July 9, 2019 - link

    "In few words, AMD just used a lot of transistors and W just to get near Pascal efficiency." -- that makes no sense at all.

    Didn't bother reading the rest of your comment, sorry not sorry.
  • CiccioB - Wednesday, July 10, 2019 - link

    I just wonder what you have seen.
    Navi gets the same perf/W that Pascal has, and the same exact features.
    No RT, no tensors, no VRS, no mesh shading, no voxel acceleration (which was already in Maxwell), no double projection (for VR).
    7nm and 10 billion transistors to be just a bit faster than a 1080, which is based on a 5.7-billion-transistor chip. And using more power to do so.

    Don't bother reading. It is clear you can't understand what's written.
  • Zoolook13 - Friday, July 19, 2019 - link

    The 1080 has 7.2 billion transistors, and the 1080 Ti has 11.7B IIRC, so your figures are all wrong, and there are a number of features on Navi that aren't in Pascal, not to mention it's vastly superior in compute.
  • ajlueke - Wednesday, July 10, 2019 - link

    At the end of the day, the 5700 is on the identical performance-per-watt and performance-per-dollar curve as the "advanced" Turing GPUs. From that we can infer that the "advanced" Turing features really don't amount to much in terms of performance.
    Also, the AMD RDNA GPUs are substantially smaller in die area than their Nvidia counterparts. More chips per wafer, and thus lower production costs. AMD likely makes more money on each sale of Navi than Nvidia does on Turing GPUs.
  • CiccioB - Wednesday, July 10, 2019 - link

    So "having fewer features is better" is now the new AMD fanboy motto?
    The advanced Turing expresses its power when you use those features, not when you test games that:
    1. Have been optimized exclusively for the GCN architecture
    2. Use few polygons, as AMD's GPUs' geometry capacity is crap
    3. Turn off Turing-exclusive features

    Moreover, games are going to use those new features as AMD adds them in RDNA2, so you already know these pieces of junk are going to have zero value in a few months.

    Despite this, size is not everything, as 7nm wafers do not cost the same as 12nm ones, and being small is a need more than a choice: in fact AMD is not going to produce bigger GPUs with all the advanced features (or just the part of them that fits in a certain die size, as they are going to be fatter than these) on this process until it comes down in cost; the same goes for Nvidia, which does not need 7nm to create better and less power-hungry cards.
    These GPUs are just a fill-in while waiting for the next real ones that will have the features the next consoles will enjoy. And that will surely include VRS and RT acceleration of some sort, as AMD cannot go without them without being seen as left in the stone age.
    You know these pieces of junk will soon be forgotten, as they are good for nothing but filling the gap, as Vega tried in the past. The time to create a decent architecture was not enough. They are still behind, and all they are trying is to make a disturbance move by placing the usual discounted cards with no new features, exploiting TSMC's 7nm allocation granted for Ryzen and EPYC.

    It does not cost AMD anything to still make a new round of zero-margin cards as they have done all these years, but they gain in visibility and put pressure on Nvidia with its feature-rich big dies.
  • SarruKen - Tuesday, July 9, 2019 - link

    Last time I checked, the Turing cards were on 12nm, not 16...
  • CoachAub - Tuesday, July 9, 2019 - link

    The article states in the graphic that Navi has PCI-e 4.0 Support (2x bandwidth). Now, if this card is paired with the new X570 mobo, will this change benchmark results? I'd really like to see this card paired with a Ryzen 3000 series on an X570 mobo and tested.
  • CiccioB - Wednesday, July 10, 2019 - link

    No, it won't. Today's GPUs can't even saturate PCIe 3.0 x8 bandwidth, so having more does not help at all. Not in the consumer market, at least.
  • peevee - Thursday, July 11, 2019 - link

    They absolutely do saturate everything you give them, but only for the very short periods necessary to load textures etc. from main memory, which is dwarfed by loading them from storage (even NVMe SSDs) first.

    BUT... the drivers might have been optimized (at compile time and/or manual ASM snippets) for AMD CPUs. And that makes significant difference.
  • CiccioB - Friday, July 12, 2019 - link

    Loading time is not the bottleneck of the GPU's PCIe bandwidth, nor the critical part of its usage. Loading textures and shader code in 0.3s instead of 0.6s does not make any difference.
    You need more bandwidth only when you saturate the VRAM and the card starts using system memory.
    But that being much slower than VRAM, having PCIe 2, 3 or 4 does not change much: you'll have big stuttering and frame drops.
    And in SLI/CrossFire mode, PCIe 3.0 x8 is still not fully saturated. So in the end, PCIe 4 is useless for GPUs. It is a big boost for NVMe disks, and for increasing the number of available connections by using half the PCIe 3 lanes for each of them.
  • msroadkill612 - Thursday, July 25, 2019 - link

    "Today GPUS can't even saturare 8x PCIe-3 gen bandwidth, do having more does not help at all. "

    It is a sad reflection on the prevailing level of logic that this argument has gained such ~universal credence.

    It presupposes that "today's" software is set in stone, which is absurd. ~Nothing could be less true.

    Why would a sane coder EVEN TRY to saturate 8 GB/s, which is what PCIe 2 x16 offered not so long ago, when many current games evolved their DNA?

    The only sane compromise has been to limit a game's resource usage to mainstream GPU cache size.

    32 GB/s, though, is a real heads-up.

    It presents a competitive opportunity to discerningly use another tier in the GPU cache pool: 32 GB/s of relatively plentiful and cheap, multi-purpose system RAM.

    We have historically seen a progression in GPU cache size, and coders eager to use it. 6 GB is getting to the point where it doesn't cut it in modern games at high settings.
  • ajlueke - Wednesday, July 10, 2019 - link

    In the Radeon VII review, the Radeon VII produced 54.5 dB in the FurMark test. This time around, the 5700 produced 54.5 dB, so I would expect that the Radeon VII and 5700 produce identical levels of noise.
    Except for one caveat. The RX Vega 64 is the only GPU present in both reviews. In the Radeon VII review it produced 54.8 dB, nearly identical to the Radeon VII. In the 5700 review, it produced 61.9 dB, significantly louder than the 5700.
    So are the Radeon VII and 5700 identical in noise? Would the Radeon VII also have run at 61.9 dB in this test with the 5700? Why the discrepancy with the Vega result? A 13% gain in noise in the identical test with the identical GPU makes it difficult to determine how much noise is actually being generated here.
  • CiccioB - Wednesday, July 10, 2019 - link

    The difference in gain is not 13%. dB are not linear but logarithmic.
    Each 3 dB increase roughly doubles the sound power, so 61.9 − 54.5 is a difference of 7.4 dB, meaning the loudest measurement carries roughly five and a half times the sound power of the quieter one.
    I don't know about the discrepancy in the measurements, but it is big enough to warrant a deeper look by the reviewer reporting those numbers.
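    The power-ratio arithmetic behind this is easy to check directly; a minimal sketch using the Vega 64 figures quoted from the two reviews above:

```python
def power_ratio_from_db(delta_db: float) -> float:
    """Sound power ratio implied by a level difference in dB (10*log10 scale)."""
    return 10 ** (delta_db / 10)

# Vega 64 as measured in the two reviews being compared above
quiet, loud = 54.8, 61.9
delta = loud - quiet                # ~7.1 dB
ratio = power_ratio_from_db(delta)  # ~5x the sound power
print(f"{delta:.1f} dB difference -> ~{ratio:.1f}x the sound power")
```

    So whichever pair of measurements you compare, the gap works out to roughly five to six times the sound power, not 13%.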
  • Silma - Thursday, July 11, 2019 - link

    Too bad, I would have been interested in the RX 5700 XT but it's way too noisy.
  • CiccioB - Friday, July 12, 2019 - link

    Wait for the custom version. In a couple of months. Probably.
  • tomc100 - Friday, July 12, 2019 - link

    I'll probably wait for the water-cooled version, since this card is too hot and a fan will just make my room 80 degrees Fahrenheit within 30 minutes. Sick of all the price gouging from Nvidia, and the fact that they released a Super before AMD's launch proves it. Despicable company.
  • MDD1963 - Saturday, July 13, 2019 - link

    Nice review; I think the 5700Xt stacks up against the 2060 Super and 2070 Super quite nicely.....; Hell, I'd hit it if I could! :)
  • coolrock2008 - Saturday, July 13, 2019 - link

    The cards were tested on an Intel CPU platform, and I understand why. But I was just wondering if the PCIe 4 switch brought any benefit at all to the cards? Any specific workload/benchmark that can benefit from the increased bandwidth? What are your thoughts on it, @Ryan?
  • Mason1232 - Saturday, July 27, 2019 - link

    Aimed at what these days is the midrange segment of the video card market, AMD is looking to carve out a new place for the company
  • ballsystemlord - Saturday, July 27, 2019 - link

    I love coffee, I love tea, I love Ryan's article on Navi! (Even though it's not finished yet...)
  • viivo - Thursday, August 1, 2019 - link

    The 5700s losing in all the Vulkan benchmarks is making me consider hitting that "cancel order" button and going with a 2060/2070 Super. AMD should never perform worse in Vulkan against supposedly equally powered competition.
  • cvearl - Tuesday, September 10, 2019 - link

    I have the XFX 5700 non-XT model, reference cooler. It was less the noise that was the issue and more the revving of the fan; it was never sure of a speed to settle at. So I went into Wattman and locked the fan speed to 30% (~1700 RPM) for anything above 50C across the board. Clocks hover between 1650-1700 MHz during long gaming sessions. Temperature-wise it settles in at about 85C and is a bit quieter than my old ASUS ROG Strix RX 580 OC model. I wish they had just shipped it like that rather than leaving me to tweak it myself.
  • pcgpus - Thursday, September 19, 2019 - link

    Nice review and very good video cards from AMD, finally!

    If you want to compare RX5700 and RX5700XT please check this link:
    http://warmbit.blogspot.com/2019/08/rx5700-i-rx570...

    There are results from 22 games in 3 resolutions from a few sites (AnandTech too). To translate, just use Google Translate on the right side of the site.
  • ballsystemlord - Monday, September 23, 2019 - link

    @Ryan , please flesh this out; it's been months since the review.
  • catavalon21 - Thursday, October 10, 2019 - link

    That
