60 Comments

  • 0ldman79 - Tuesday, January 29, 2019 - link

    "You guys don't really have an Atom compliment to Ryzen"

    Has he forgotten about Jaguar? Y'know, the CPU in the consoles and a pretty large percentage of the laptops on the market...
  • jaju123 - Tuesday, January 29, 2019 - link

    I think he was referring to a Jaguar-style follow-up. Jaguar is beyond slow by this point.
  • levizx - Wednesday, January 30, 2019 - link

    It's not that slow; the point is to save power on light tasks, and a Puma+ quad-core module on 12nm running at 2.4 GHz would be more than adequate.
    But sure, if they were to do it, they would need to match Zen's instruction set (AVX2) for it to work, and along with those redesigns we would expect IPC growth.

    If they could incorporate a quad-core "Puma++" and a 192-SP Vega into the I/O die, coupled with CPU and GPU chiplets on the package, they could potentially make a very efficient SoC.
  • 0ldman79 - Thursday, January 31, 2019 - link

    Gotta wonder if the PS4/XB1 have AVX2.

    Jaguar was a decent CPU. My wife's quad-core, a Beema I think, is pretty damn close to a first-gen Phenom X4.

    That's not monstrous, but it's a 15w chip that rarely hits 15w and matches the performance of a previous generation's 95w chip. It does have a few limitations that prevent it from really doing well, but in pure number crunching it's decent. It'll play Crysis, even with the integrated video.

    It's funny how it's a useless, horrible, slow CPU, yet it's still in the consoles today. Admittedly the console games lower some details versus the PC to ease the CPU load, but still...
  • IGTrading - Thursday, January 31, 2019 - link

    I don't think Intel has any Atom that can beat an AMD Jaguar at the same form factor and the same price while offering at least the same iGPU performance.
  • sing_electric - Tuesday, January 29, 2019 - link

    That's not a "Ryzen part" - it's a Dozer-era CPU with all the baggage that brings. The question was actually a pretty good one - Intel has decided to create a big.LITTLE-style x86 chip, which has some interesting applications where x86 wouldn't necessarily have been people's first thought.

    When you look at Intel's 10nm process, it's actually a great idea - there are high-density libraries that allow for very low power usage in minimal space, and lower-density libraries that allow for higher performance, and this architecture lets Intel mix them.

    And Su's answer was also interesting - it seems like it's not really a focus at the moment, though she hedged her bets significantly.
  • Byte - Wednesday, January 30, 2019 - link

    big.LITTLE has had a hard road and still isn't as effective in practice as in theory. Nvidia failed hard with it; Samsung has stuck with it and has it working only decently in their phones. Apple is also working hard on it and getting close, though their first iteration was not very good for battery life. Bottom line: it doesn't bring as much power savings as the theory promises, and it brings lots of complexity. On phones and always-on devices it makes sense; on desktops, almost none; on laptops, maybe. We are already seeing normal ultrabooks with 12+ hour battery life, which I think is enough for just about anyone compared to the old days of 4-6 hours. The LG Gram can hit 14 hours watching Netflix at under two and a half pounds! As you can see, AMD doesn't feel we need big.LITTLE, and I agree. We need more speed!
  • piroroadkill - Wednesday, January 30, 2019 - link

    Actually, Jaguar has nothing to do with Bulldozer.
  • Oxford Guy - Thursday, January 31, 2019 - link

    As far as I recall, Jaguar has worse IPC than Piledriver.
  • levizx - Wednesday, January 30, 2019 - link

    What baggage? Jaguar/Puma (Family 16h) cores are very much traditional OOO CPU cores, very similar to Silvermont from Intel.
  • Death666Angel - Tuesday, January 29, 2019 - link

    It's a 2013 microarchitecture with hardly any relevance in modern-day laptops. There are a total of 14 laptops on sale with Jaguar SoCs in them (A6-5200 to E1-2100), maybe another handful not listed there, but they each have only one seller, and some are listed but not in stock. So, citation needed regarding "a pretty large percentage of the laptops on the market...". And they all trail competing laptops on screens, performance, size, weight, etc. Consoles still ship with Jaguar-style CPUs, but those are also on their way out by all accounts (rumor has it that 2020 might bring a new generation from MS and Sony with Ryzen CPUs), and that is not in any way relevant to a PC or laptop audience. Or was the use of a PowerPC architecture in one of the most successful consoles of all time (Wii) in any way relevant to the industry at large being discussed here?
    The Atom microarchitecture isn't that modern either, but it is shipping in laptops from €200 to €500, with 1080p IPS screens and 10+ hours of battery life. It isn't a great experience when you come from a high-clocked 4+ core desktop PC, but when people just browse Facebook and YouTube and write an email or two, it is pretty decent together with eMMC or even SSD storage.
    So his question regarding a current Atom competitor is very valid, and it's a shame there is none.
  • 0ldman79 - Tuesday, January 29, 2019 - link

    "On the market" was probably a poor choice of words.

    I used to run an IT shop, still advise, just not involved in it directly any more.

    90% of the laptops that come through are either Atom or Jaguar based. Probably a 50/50 split between AMD and Intel.

    Honestly, unless something has changed recently, Jaguar is quicker than Atom, so old architecture or not, it's at least on par. They're not made for performance.

    Jaguar isn't on 14nm or 12nm, I'll grant you that, but it's still out there. No reason they can't rehash the Jaguar.
  • 0ldman79 - Tuesday, January 29, 2019 - link

    If you can find a better comparison, by all means, I'm listening, but...

    https://cpu.userbenchmark.com/Compare/Intel-Celero...
  • wumpus - Tuesday, January 29, 2019 - link

    Basically, the Jaguar replacement is the V1000 series. The real question is whether they will ever cut the die down to "just the working bits" of the V1000 series, or have enough broken Raven Ridges to scavenge enough V1000s (I'm guessing that a proper "Jaguar replacement" should have a much higher volume than available scavenged parts, but the current V1000 might have enough).

    Zen is designed to qualify as either big or LITTLE (probably because AMD couldn't afford to design both), but it should do the job well for either (and even Intel can't really compete in mobile).
  • edzieba - Tuesday, January 29, 2019 - link

    Intel are still the dominant force in mobile anywhere above phone and phablet power levels. Apart from the fruit ecosystem, the tablet space has moved mainly from Android-on-ARM to Windows-on-Intel, especially for notionally-convertible devices.
  • Dijky - Tuesday, January 29, 2019 - link

    I wouldn't say Ryzen Embedded V1000 is the counterpart to the Atom that is being talked about.
    Atom is a brand but also Intel's smaller, less powerful architecture next to the Core family, just like the cat cores were next to the construction cores.
    AFAIR, AMD has said in the past - and repeated now - that they don't currently see the need for a "little" core next to Zen. It scales nicely across all the segments AMD cares about, all the way down to competing with Atom.
    With Zen and Zen+, this may have coincided with the choice of 14LPP, which is designed for, and performs really well at, low power. Zen 2 is currently being deployed on 7nm HPC, so it will be really interesting to see whether Zen 2 for mobile might be deployed on the lower-power, higher-density 7FF instead.
    Also, now that AMD has significantly widened the core (specifically wider SIMD and data pipes), there might be an opportunity to spin off a "slim" Zen 2 variant as well - effectively addressing the role of the cat cores/Atom.

    We will see, it's going to be really interesting regardless.
  • Zizy - Tuesday, January 29, 2019 - link

    Well, V1000 doesn't get quite as low as Atoms do. It is more like the Y series, except without that implicit question mark.
    I would say AMD simply knows they can't beat ARM in this market and isn't bothering to push the x86 architecture that low. I doubt you can make an ARM/x86 big.LITTLE, so I believe they would glue it together some other way (master ARM / slave x86).
  • 0ldman79 - Thursday, January 31, 2019 - link

    As much as I like Ryzen, its idle and base-level power draw is too high to be meaningfully compared to Atom/Jaguar.

    Performance per watt is excellent, but the idle wattage is about the same as some Jaguar systems under load.
    Last I checked, the Zen platform actually idled quite high in mobile, pretty much the same as its desktop counterparts. Excellent idle for desktops, not so much for laptops.
  • wolfesteinabhi - Wednesday, January 30, 2019 - link

    It's a shame, yes... but unlike Intel, AMD has very limited resources, so designing a CPU around ultra-low power specifically to address a market that wants cheaper parts is sort of a waste of money on AMD's side: the R&D spend would make such parts even more expensive, and there is no guarantee they would be in demand by the time they come out... the market for them is small to begin with.

    big.LITTLE is fine for mobile/tablets, but on the PC side I think one core architecture is enough. They might come up with a technique to scale the frequency of some cores so they are highly efficient in tasks needing lower performance. So even if they are not 100% there, they can still cater to that need without a "special" new core.
  • Jorgp2 - Tuesday, January 29, 2019 - link

    Have you seen Gemini Lake's performance?
  • sing_electric - Tuesday, January 29, 2019 - link

    The "but it's got to have minimal performance loss because we're not trying to be a tablet" line was interesting when paired with Dr. Su's answer to the question about a "Ryzen answer to Atom."

    I can't decide how coy she was trying to be, given the rumors of an AMD-powered Surface device. One reading is that she's saying "we're not going into tablets," while another is "we're not going into low-performing tablets, so if you see us in the next Surface, you know it'll have workstation-laptop-level performance."
  • HStewart - Tuesday, January 29, 2019 - link

    To me, the rumor of an AMD-powered Surface is a fan-generated rumor.
  • Hifihedgehog - Tuesday, January 29, 2019 - link

    @HStewart: Hardly a fan rumor. Expert Microsoft blogger Brad Sams wrote it in his latest book “Beneath a Surface.”
  • SydneyBlue120d - Tuesday, January 29, 2019 - link

    It's always a pleasure to read a Q&A with Dr. Lisa Su; she is the greatest woman nerd in the industry :D
  • del42sa - Tuesday, January 29, 2019 - link

    @Ian Cutress: Why didn't you ask such an important question as the current state of Primitive Shader and NGG development with Vega 7? Seriously, I don't get it...
  • Ian Cutress - Tuesday, January 29, 2019 - link

    Everyone asks for different things. Some people want roadmaps, some people want technical stuff, some people want to know about the person, some people only care about financials. To those people, the questions on those topics 'are the important questions' and they don't care about primitive shaders in the slightest.

    I only had a small number of questions; this was a round-table with other press. I asked what I felt was most relevant given AMD's situation and hardware. A LOT of people were asking about Navi, the Wafer Supply Agreement, or where the new money is going.
  • StevoLincolnite - Tuesday, January 29, 2019 - link

    I have to agree. Primitive Shaders/NGG/the Draw Stream Binning Rasterizer are features that were promised what feels like years ago but really haven't come to fruition. It would have been good to get more clarification on the state of those technologies.
  • darkswordsman17 - Wednesday, January 30, 2019 - link

    AMD has killed NGG for Vega as far as I know; they won't enable it in drivers. I seem to recall that Primitive Shaders were enabled, but the feature is tied to NGG, so it effectively isn't working on Vega either and thus likely never will. (In the video card subforum there's a thread titled "No NGG for Vega", with a link to an e-mail exchange where a developer tried to enable NGG just to play with it; the AMD rep's response was that it won't be enabled in GFX9, which is Vega, but supposedly will be in GFX10, which I believe is Navi.)
  • del42sa - Wednesday, January 30, 2019 - link

    Yes, it seems so... but that is just what we're piecing together from various hints, like that developer e-mail exchange. In fact, AMD remains silent about this and never responds directly to the issue. They are also silent about a PS/NGG SDK, as they never unveiled one to my knowledge. In fact, they play possum...
  • Opencg - Tuesday, January 29, 2019 - link

    Well, she's good at not getting into the shit talk. It will be interesting to see where Radeon VII performs. Hopefully it's cheaper than the 2080; those prices are stupid. Not sure why they think it's OK to go backwards in terms of performance/price. I haven't seen ANYTHING impressive from DLSS/RTX. If it's cheaper than the 2080 with about the same performance, that's a big win for everyone (consumers).
  • iwod - Tuesday, January 29, 2019 - link

    The thing I noticed is that she is getting a lot more white hair, and she is only 49. I guess the AMD CEO job is very taxing; she did manage to turn so many things around and generate excitement in the community.

    I hope she can stay with AMD for a long time to come.
  • iwod - Tuesday, January 29, 2019 - link

    And I forgot to mention: why hasn't anyone asked when AMD CPUs will get into Apple machines?
  • RSAUser - Wednesday, January 30, 2019 - link

    Because Apple is trying to create their own in-house.

    If the performance is good enough, most Mac users will take them, as they don't need the performance and like the Apple ecosystem.

    The higher-end Pros may swap in 2021, since right now Ryzen 2000 mobile is not IPC-competitive with Intel and the mobile Ryzen 3000 series is just a rehash of the 2000 series. We'll have to wait for the 4000 series to see if AMD mobile beats Intel (and it should, if Intel doesn't manage to shrink their architecture by then).
  • Oxford Guy - Thursday, January 31, 2019 - link

    "most Mac users will get them as they don't need the performance"

    The same thing is true of most "PC" users. Only a small portion of the computing market needs cutting-edge performance. Of course, what was once adequate performance becomes inadequate with increasing bloat and/or workload complexity.
  • Dcoca83 - Friday, February 1, 2019 - link

    That's not true; most Mac users I know are designers, and applications like Cinema 4D and After Effects can bring down any computer...
  • willis936 - Tuesday, January 29, 2019 - link

    What a charming CEO. Take notes, Jensen.
  • HStewart - Tuesday, January 29, 2019 - link

    I find it interesting that in a lot of her responses she said they're not ready to respond. Meaning, essentially, they have no response.
  • Hifihedgehog - Tuesday, January 29, 2019 - link

    Meaning they do not want to tell yet. Unlike Raja Koduri, Lisa Su has a put-up-or-shut-up mentality: she does not throw her cards down until absolutely everything is primed and ready for release, and no sooner. The littlest of things, even a mere month out, can gum up a product release, which is why she holds her cards close to her chest. That is one skill Intel would do well to learn, especially after their 28-core 5 GHz flub last year, which turned out to be non-stock-clocked, lab-concocted unobtainium.
  • Rudde - Wednesday, January 30, 2019 - link

    Intel's 28-core 5 GHz demo was deliberate. It was meant to steal AMD's Threadripper thunder. Intel didn't want to release a 28-core 'consumer' CPU, but they had too much pride to let AMD stomp on them.
  • Guspaz - Wednesday, January 30, 2019 - link

    Was it actually effective, though? The 5 GHz turned out to be a lie (the actual product has a base clock of only 3.1 GHz, and 5 GHz was an extreme overclock requiring a nearly two-kilowatt water-chilling setup), and the actual shipping product turned into the Xeon W-3175X, a $3,000 CPU that must be paired with a $1,500 motherboard, for a total of $4,500.

    Meanwhile, a 32-core Threadripper with motherboard is significantly less than *half* the cost, at $1,800 for the CPU and $300 for the motherboard.

    Yeah, the Xeon W-3175X is a little bit faster in multi-threaded workloads and moderately faster in single-threaded, but at more than double the price, that's not saying much.
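
    For what it's worth, the "less than half" claim checks out on simple arithmetic - a throwaway Python sketch using the launch prices quoted above:

    ```python
    # Platform cost comparison using the launch prices quoted above (USD).
    intel_platform = 3000 + 1500   # Xeon W-3175X + compatible motherboard
    amd_platform = 1800 + 300      # 32-core Threadripper + motherboard

    ratio = intel_platform / amd_platform
    print(intel_platform, amd_platform)              # 4500 2100
    print(f"Intel platform costs {ratio:.2f}x more")  # ~2.14x, i.e. more than double
    ```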
  • maroon1 - Tuesday, January 29, 2019 - link

    "We win some, we lose some"

    So, what is the point of Vega 7?! Five months late, with much higher power consumption, lacking ray tracing and tensor cores, for the same cost as the RTX 2080?!

    It needs to be noticeably faster; otherwise the RTX 2080 is the better deal.
  • jtd871 - Tuesday, January 29, 2019 - link

    It's a stopgap until Navi is ready.

    Tensor cores are not ready for primetime in mainstream applications, hence Lisa's comment that the entire ecosystem needs to be ready for ray tracing to take off.

    Go ahead and waste your money on NVidia's 14nm cash grab. I'm waiting on 7nm tech where the power budget can really be used effectively.
  • darkswordsman17 - Wednesday, January 30, 2019 - link

    Navi is not a high-end GPU (it's intended for the same market as Polaris), so I hope your expectations are in line with that. It might not even match Radeon VII in performance (but it will be much faster than what has so far been available in its intended market).

    I don't think the Tensor cores are doing the ray tracing. They're just running de-noising algorithms so the GPU can render at lower resolution (or, with ray tracing, with fewer rays, which helps performance) without losing image quality.

    Er, RTX is 12nm? I'm waiting too, and hopefully 7nm will help get perf/$ back in line. I'm not impressed by Radeon VII because it seems fairly mediocre; while he's acerbic, I think JHH is right about this. It's largely just Vega 10 on 7nm, with full ECC support so that it can offer improved DP performance, plus some extra ops for machine learning. Slightly higher clock speeds and double the memory bandwidth (likely the main reason for the improvement over Vega 10) lift performance somewhat (it seems to average about 25%). I think that's because Vega was so compute-focused (exacerbated by them never getting NGG working on it). Navi will almost certainly be more graphics-focused, so hopefully we see big efficiency gains.
  • shompa - Thursday, January 31, 2019 - link

    7nm is not cheaper per transistor, so it will not help with price.
  • Oxford Guy - Thursday, January 31, 2019 - link

    As far as I know, it also costs more to design for smaller nodes, due to increasing design-rule complexity. So that cost has to be recouped.
  • Rudde - Wednesday, January 30, 2019 - link

    Erm, ray tracing is handled on the ray-tracing cores, not the tensor cores. Tensor cores are for AI math (FP16/INT8). The ray-tracing cores are optimised for BVH (bounding volume hierarchy) traversal - essentially a hardware-accelerated tree search.
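
    If you've never seen it, a BVH walk is simple to sketch in software. This is an illustrative Python toy - the structure and names are hypothetical, not NVIDIA's actual implementation, which is fixed-function hardware and also does ray-triangle intersection at the leaves:

    ```python
    # Toy BVH traversal: the kind of tree walk RT cores accelerate in hardware.
    # Purely illustrative; real implementations are typically iterative and do
    # ray-triangle intersection tests at the leaves.
    from dataclasses import dataclass
    from typing import Optional, List

    @dataclass
    class Node:
        bbox_min: tuple                   # axis-aligned bounding box corners
        bbox_max: tuple
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        triangles: Optional[List] = None  # only leaf nodes hold geometry

    def ray_hits_box(origin, inv_dir, bbox_min, bbox_max) -> bool:
        """Slab test: does the ray intersect the axis-aligned box?
        inv_dir is the per-axis reciprocal of the ray direction."""
        t_near, t_far = 0.0, float("inf")
        for axis in range(3):
            t1 = (bbox_min[axis] - origin[axis]) * inv_dir[axis]
            t2 = (bbox_max[axis] - origin[axis]) * inv_dir[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far

    def traverse(node, origin, inv_dir, hits):
        """Walk the tree, pruning any subtree whose box the ray misses."""
        if node is None or not ray_hits_box(origin, inv_dir,
                                            node.bbox_min, node.bbox_max):
            return
        if node.triangles is not None:    # leaf: candidate geometry found
            hits.extend(node.triangles)
            return
        traverse(node.left, origin, inv_dir, hits)
        traverse(node.right, origin, inv_dir, hits)
    ```

    The point being that this is pointer-chasing and branching, not matrix math - which is why it gets its own fixed-function units rather than running on the tensor cores.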
  • Smell This - Tuesday, January 29, 2019 - link

    Faster at what? Gimping the RTX 2080 to protect the Titan?
  • Topweasel - Tuesday, January 29, 2019 - link

    Complaining about ray tracing at this stage is the most hilarious thing ever. Same for those who say similar things about AVX-512. Ray tracing is potentially a useful tool in the future, and kudos to Nvidia for including it first. It's not the first time Nvidia has thrown a tech out there before it's useful. The hardware is now there, and developers can start to use it.

    But ray tracing is useless as far as the 2k series goes. It's a dog now and only going to get worse. In the future, when it's not a razor's-edge feature, Nvidia and maybe AMD will have cards that handle it with little or no perceptible loss in fidelity or performance - who knows, maybe even an increase. But like SS and T&L, when the time comes that you really want to use ray tracing, you certainly won't be running it on the launch hardware (i.e. the 2k series).

    Tensor cores are just a marketing term for a proprietary arch choice Nvidia made. It's possible that AMD may develop similar technology or more than likely they don't take part in the Machine learning business because it's so heavily tied to Cuda right now. No reason to spend extra resources on a tech that only affects a market they can't really participate in. Those transistors are better served somewhere else.

    The question is why are you so absorbed with marketing checkboxes and not things that actually apply to the average user who would purchase these (whether for gaming or something else)?

    I'm not saying the Radeon VII is the card to get, but I'm guessing the increased memory and higher bandwidth will have a better long-term effect on gaming than tensor cores and ray tracing.
  • Smell This - Tuesday, January 29, 2019 - link

    "...Tensor cores are just a marketing term for a proprietary arch choice Nvidia made. It's possible that AMD may develop similar technology or more than likely they don't take part in the Machine learning business because it's so heavily tied to Cuda right now."
    ___ _____ _____ _____

    AMD's MIOpen: support for OpenCL- and HIP-enabled frameworks for optimized GEMMs in deep-learning acceleration -- TensorFlow 1.8 (latest release -?-)

    Mr. Maroon may feel free to compare the RTX 2080's specs to what, presumably, will be in the range of the Radeon VII / Instinct MI50...

    Peak Half Precision (FP16) Performance: 26.8 TFLOPs
    Peak Single Precision (FP32) Performance: 13.4 TFLOPs
    Peak Double Precision (FP64) Performance: 6.7 TFLOPs
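
    Those peaks fall straight out of shaders x 2 FLOPs per FMA x clock. A quick Python sanity check - assuming the 3840 stream processors and ~1.75 GHz boost of Vega 20, and the MI50's 1:2 FP64 rate (my assumptions, not confirmed retail specs):

    ```python
    # Back-of-the-envelope check of the peak throughput figures above,
    # assuming a Vega 20 part: 3840 stream processors at ~1.75 GHz boost.
    SHADERS = 3840
    CLOCK_GHZ = 1.75
    FLOPS_PER_FMA = 2                  # one fused multiply-add = 2 FLOPs

    fp32 = SHADERS * FLOPS_PER_FMA * CLOCK_GHZ / 1000   # in TFLOPs
    fp16 = fp32 * 2                    # Rapid Packed Math: 2x rate at FP16
    fp64 = fp32 / 2                    # assuming the MI50's 1:2 FP64 ratio

    print(f"FP16 ~ {fp16:.1f} TFLOPs")   # ~26.9
    print(f"FP32 ~ {fp32:.1f} TFLOPs")   # ~13.4
    print(f"FP64 ~ {fp64:.1f} TFLOPs")   # ~6.7
    ```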

    Content creation, compute, GEMM, gaming ... what's not to like? The Radeon VII may well turn out to be a winner ... especially, dare I say, in a CrossFire "cluster" with PCIe 4.0 and Infinity Fabric linkage.

    Make it so, Dr Su!
  • just4U - Tuesday, January 29, 2019 - link

    Hmmm... so in a roundabout way she did confirm Ryzen's third gen will feature more cores at some point...
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019 - link

    Duh
  • FireSnake - Wednesday, January 30, 2019 - link

    "performance is and so you will us push single threaded performance."
    A "see" is missing; it should read:
    "performance is and so you will see us push single threaded performance."
  • BigMamaInHouse - Wednesday, January 30, 2019 - link

    Do you know what new technologies Lisa could be referring to?
    Quote: "There are lots of other important technologies you will hear more about".

    Q: NVIDIA has ray tracing, what has AMD?
    "What I will say is [that] ray tracing is an important technology – it is one of [a number of] important technologies. There are lots of other important technologies you will hear more about, [as well as] what we're doing with ray tracing."
  • darkswordsman17 - Wednesday, January 30, 2019 - link

    Considering how open ended that is, it could mean literally anything.
  • del42sa - Friday, February 1, 2019 - link

    It probably means nothing.
  • Hul8 - Wednesday, January 30, 2019 - link

    Why is Dr. Su so bashful about the actual maximum core count - which everyone automatically assumes will be 16?

    - Maybe they haven't yet decided whether they can fit 16 cores within the thermal and power envelope; or

    - maybe... https://imgur.com/ZdSipbw ; by moving the I/O die downwards, there would be enough room for a third core chiplet, for a maximum of 24 cores.
  • Mahoumatic - Wednesday, January 30, 2019 - link

    Great interview. It's just that many of the words in brackets are actually unnecessary and redundant, and some of them even change the speakers' meaning.
  • Hul8 - Wednesday, January 30, 2019 - link

    "Literal meaning of a sentence taken out of context" does not equate "the meaning of the sentence".

    The context of each question and answer has been altered or lost due to shuffling - and, I assume, leaving out the boring bits - so the writers need to fill in that information. That's where the brackets come in.

    If you weren't present in the Q&A session, I trust the writers of this Q&A piece more in both knowing what the context and meaning was, and to accurately convey it.
  • HollyDOL - Thursday, January 31, 2019 - link

    It looks weird. Ofc I wasn't there at the interview, but through all that laughing it seemed they said, confirmed, or denied very little... and in quite ambiguous statements. Or is it just me?
  • Oxford Guy - Thursday, January 31, 2019 - link

    It seems a bit pointless, really. If AMD has something to announce it will do so.
