46 Comments
brakdoo - Monday, June 3, 2019 - link
Instead of making stuff up about Samsung's own development, read the press release: "AMD will license custom graphics IP based on the recently announced, highly-scalable RDNA graphics architecture to Samsung for use in mobile devices, including smartphones, and other products that complement AMD product offerings."
brakdoo - Monday, June 3, 2019 - link
+ "Samsung will pay AMD technology license fees AND royalties"ksec - Monday, June 3, 2019 - link
>complement AMD product offerings.

I think the key here is that none of this IP, whether it covers patents or actual IP usage, will compete directly against AMD's own offerings.
BigMamaInHouse - Monday, June 3, 2019 - link
I smell an Nvidia Shield competitor.

Irata - Tuesday, June 4, 2019 - link
They could integrate this in their Smart TVs directly.

yannigr2 - Monday, June 3, 2019 - link
AMD knows how to create high-performance GPUs, and it builds GPUs too.

Samsung knows how to create high-efficiency hardware, including GPUs, and it also owns factories where those AMD GPUs can be built. TSMC doesn't have infinite capacity to support both AMD's CPUs and GPUs.

In the end this is a win-win partnership for both companies. An additional IP licensing deal, if one proves necessary, would probably just be the icing on the cake.
ZolaIII - Monday, June 3, 2019 - link
So now ARM loses 90% (Huawei + Samsung) of the Mali GPU market.

RaduR - Monday, June 3, 2019 - link
MediaTek?

arnd - Monday, June 3, 2019 - link
Both Mediatek and Unisoc (Spreadtrum) seem to be picking PowerVR over Mali graphics recently, so the phone market does move away from Mali. In other markets, companies like Allwinner, Amlogic, and Rockchip still seem to go exclusively with Mali, and there are still lots of royalties for the older chips.

mode_13h - Monday, June 3, 2019 - link
Mediatek should've bought Imagination.

Teckk - Monday, June 3, 2019 - link
Can someone please clarify this for me: unlike x86, is graphics IP independent of the CPU architecture? I mean, will the same graphics design, scaled down, work with ARM CPUs in a phone, and will the same design work as a graphics card alongside an x86 processor?

brakdoo - Monday, June 3, 2019 - link
As long as it can be accessed through OpenGL/Vulkan/DirectX drivers and offers all the necessary operations, YES.
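To illustrate brakdoo's point, here is a minimal sketch (untested, error handling trimmed) of why the host CPU's instruction set is invisible to graphics code: the very same Vulkan calls compile for ARM or x86, and it's the installed driver that maps them onto whatever GPU is present.

```c
/* Sketch: the application only talks to the Vulkan API; whether the
 * device below it is an Adreno, Mali, Radeon or GeForce is the
 * driver's problem, not the application's. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS)
        return 1;

    uint32_t n = 0;
    vkEnumeratePhysicalDevices(inst, &n, NULL);   /* count GPUs */
    if (n > 8) n = 8;
    VkPhysicalDevice devs[8];
    vkEnumeratePhysicalDevices(inst, &n, devs);   /* fetch handles */

    for (uint32_t i = 0; i < n; i++) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(devs[i], &p);
        printf("GPU %u: %s\n", i, p.deviceName);  /* identical code on a
                                                     phone or a desktop */
    }
    vkDestroyInstance(inst, NULL);
    return 0;
}
```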
Teckk - Monday, June 3, 2019 - link

Thanks! One more thing, though: why hasn't this been tried so far, or was it tried but didn't work out? (Nvidia?) Do GPU designs really scale so easily across power envelopes?

Death666Angel - Monday, June 3, 2019 - link
Nvidia put GeForce "cores" in their Tegra SoCs and it worked well. Tegra 2 was a good SoC at the time; Tegra 3 was fine but suffered from being a bit power hungry. And then they didn't iterate fast enough to compete with basic ARM CPU and GPU cores or Qualcomm's offerings. It was also a rather low-margin business, so not much money was invested into R&D. Nvidia SoCs were very capable but really needed that extra 5W of headroom (10W, compared to 5W on smartphones) to shine. So only Nvidia and Nintendo used them, in stationary or non-standard tablet form factors. AMD used to own Adreno and sold it off to Qualcomm. I guess they see another shot at bringing in some money by doing this. And judging by the success of their semi-custom business model, it might work out well for everyone involved (AMD, Samsung and the consumer).

tk11 - Monday, June 3, 2019 - link
Between process shrinks and a more scalable architecture, the constraints that have prevented more powerful GPUs from being practical in mobile devices are slowly disappearing. If an RDNA-based GPU does find its way into a phone, it'll likely offer leading performance but still come at the expense of battery life relative to traditional mobile GPUs.

evolucian911 - Monday, June 3, 2019 - link
Adreno on Qualcomm used to be AMD graphics for mobile devices as well, so it has been done before.

eastcoast_pete - Monday, June 3, 2019 - link
Yup. Let's remember that "Adreno" was "Radeon" with the letters reshuffled. The question now is whether AMD's GPU team can catch up with Qualcomm's efforts quickly enough. It's been a long time since Radeon on mobile...

Wardrive86 - Monday, June 3, 2019 - link
Very true, but more competition in the market is always a good thing.

dromoxen - Tuesday, June 4, 2019 - link
The question now is whether Samsung's GPU team can catch up with Qualcomm's efforts quickly enough. They are buying the basic building blocks, but it's up to them to build and engineer the silicon.
1llumi - Wednesday, June 5, 2019 - link
Actually, it's not so simple. AMD acquired a company named Bitboys in 2006, and that's where Adreno comes from. Bitboys had committed itself to mobile graphics computing as a complementary market, so Adreno was born for mobile.

tekniknord - Monday, June 3, 2019 - link
Mobile GPUs mostly support and use OpenGL ES and Vulkan.

webdoctors - Monday, June 3, 2019 - link
Yup, but there are other standards, like the programming APIs. On Windows you have DirectX, which is obviously not on Apple or Android OSes. But stuff like Vulkan and OpenGL is already running on Linux, so it would run fine on Apple/Android stuff.

mode_13h - Monday, June 3, 2019 - link
Apple uses Metal, which AMD already supports on macOS.

Android uses Vulkan, OpenGL, and RenderScript - not sure if AMD supports the last one, but I guess it'd be a requirement to have AMD APUs in Chromebooks.
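As a hypothetical sketch of how applications cope with each OS exposing a different graphics API, a portable renderer typically hides the choice behind a single init path selected at compile time; the stub functions below are invented for illustration, not any real engine's API.

```c
#include <stdio.h>

/* Invented stub backends: in a real engine each would talk to the
 * platform's native graphics API. */
static int init_d3d12(void)  { puts("init Direct3D 12"); return 0; }
static int init_metal(void)  { puts("init Metal");       return 0; }
static int init_vulkan(void) { puts("init Vulkan");      return 0; }

int main(void)
{
    /* Only the OS-level API differs; the GPU vendor's driver sits
     * beneath whichever backend gets compiled in. */
#if defined(_WIN32)
    return init_d3d12();    /* Windows ships DirectX */
#elif defined(__APPLE__)
    return init_metal();    /* Apple platforms ship Metal */
#else
    return init_vulkan();   /* Linux and Android ship Vulkan/GLES */
#endif
}
```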
ChrisGX - Tuesday, June 4, 2019 - link
Support for RenderScript could be flagging at Google. There have been large performance regressions in RenderScript code with the latest versions of Android on the latest mobile SoCs. While I don't know the reason for this, it is possible that Google is thinking about the virtues of industry standards for Android - and the standards promulgated by AMD are open and about as universal as you can get at this point - not so much as an advertising point but to remove resistance from developers. Still, no announcements yet. I am speaking only about possibilities here. But those regressions do need to be explained.

mode_13h - Monday, June 3, 2019 - link
You can use both AMD and Nvidia GPUs on POWER systems, FWIW.

ChrisGX - Tuesday, June 4, 2019 - link
A note of caution: I am not someone with the proper qualifications to be answering this question, but I too would like to see it answered. Hopefully, someone more qualified will comment. Here is my take on it.

Deep within an x86 CPU there are architectural elements that aren't directly reliant on the x86 instruction set to do what they are designed to do. Or, to generalise the point: processors that are rather similar from an architectural point of view do in fact support different instruction sets. Notably, Intel CPUs are RISC-age processors with a CISC instruction set tacked on for the sake of backward compatibility. Now, good GPU designers who know what it takes to make a good GPU are not thinking about the CPU instruction set all that much while developing their design. Their designs are certainly affected by such considerations insofar as the GPU has to be adapted to work with the CPU, but those things are not the dominating considerations. Probably more thought goes into how software, very often designed with a specific family of CPUs in mind, will benefit from the GPU. That this is somewhat achievable is pretty amazing.

CiccioB - Tuesday, June 4, 2019 - link
CiccioB - Tuesday, June 4, 2019 - link
It's quite simple: the CPU and GPU work in separate domains with respect to the instructions and data they process.

All they need to work together is a "bus", that is, a communication channel of whatever type, with whatever protocol (of course both pieces of hardware have to support it). Through this channel the CPU sends the GPU the commands and the basic data the GPU needs to start processing the image, fetching further data autonomously from storage (be it shared RAM or dedicated graphics RAM), textures for example.

Any CPU can instruct a GPU to run a certain job if you know exactly what the GPU is expecting, where and how, and how it communicates back the results of its operations. That's the work of the driver, which translates high-level commands (like those of the above-mentioned graphics libraries) into simple lists of commands and data (the draw calls) for the GPU.

So in the end you can have CPUs X, Y and Z work with GPU A (for example, x86, ARM and POWER all work with Nvidia's Pascal architecture) exactly as you can have GPUs A, B and C work with CPU X (that is, GCN, Maxwell, Pascal and Turing all work with x86). It's all based on standards (the type of bus used to communicate) and the provision of the right drivers (which in the PC market both AMD and Nvidia write, to translate standard or mainstream graphics library functions into the GPU's own instruction stream).
Hope it is clear.
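As a toy illustration of CiccioB's description (the opcodes, struct layout, and function names below are all invented, not any real driver's interface), a driver is essentially code that encodes API-level requests into GPU-specific command words that the GPU later fetches over the bus:

```c
/* Sketch of a driver's core job: turn an API-level draw call into
 * vendor-specific command words in a ring buffer the GPU consumes.
 * Everything here is invented for illustration. */
#include <stdint.h>
#include <stddef.h>

enum { CMD_SET_VERTEX_BUF = 0x01, CMD_DRAW = 0x02 };  /* invented opcodes */

typedef struct {
    uint32_t words[256];   /* command ring shared with the GPU */
    size_t   head;
} cmd_ring;

static void push(cmd_ring *r, uint32_t w) { r->words[r->head++ % 256] = w; }

/* The CPU never executes the GPU's work itself; it only encodes
 * commands. Any CPU ISA that can write memory can do this. */
void drv_draw(cmd_ring *r, uint64_t vtx_gpu_addr, uint32_t vertex_count)
{
    push(r, CMD_SET_VERTEX_BUF);
    push(r, (uint32_t)(vtx_gpu_addr & 0xffffffffu));  /* address, low half */
    push(r, (uint32_t)(vtx_gpu_addr >> 32));          /* address, high half */
    push(r, CMD_DRAW);
    push(r, vertex_count);
    /* A real driver would now write a "doorbell" register so the GPU
     * starts fetching and executing the new commands. */
}
```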
Wardrive86 - Monday, June 3, 2019 - link
So what exactly did Qualcomm buy all those years ago if AMD is still able to build smartphone GPUs?

brakdoo - Monday, June 3, 2019 - link
They bought the team that built mobile GPUs, and they licensed whatever they needed, but now AMD is developing "scalable" architectures with their main team...

NixZero - Monday, June 3, 2019 - link
The Imageon team from ATI, their IP, plus several licenses enabling them to use OpenGL ES 2.0 and OpenVG 1.0 in their products.

mode_13h - Monday, June 3, 2019 - link
So, did Imageon basically become the new name of the ArtX team? Or did they get fully integrated into the core of ATI's organization before that team was formed?

Wardrive86 - Monday, June 3, 2019 - link
Would like to know this also.

eldakka - Tuesday, June 4, 2019 - link
They _probably_ bought a non-compete agreement for a set period of time as well. Maybe that has finally expired.

ToTTenTranz - Monday, June 3, 2019 - link
Qualcomm bought the architecture and IP for the Xbox's Xenos graphics chip, as well as the team behind its development.

RaduR - Monday, June 3, 2019 - link
They could easily buy Imagination (PowerVR), as Apple managed to almost destroy the company!

eastcoast_pete - Monday, June 3, 2019 - link
I wondered about Imagination's graphics IP, too. It was/is a reasonably capable architecture, and likely available for pennies on the dollar, given the shape the company is in. The one (big) fly in the ointment is probably that many/most/all of the key people there have since left, and IP w/o the people who know it and can make sense of it can be borderline useless.

mode_13h - Monday, June 3, 2019 - link
According to Wikipedia: "on 25 September 2017, the board of directors announced that the company was being acquired by Canyon Bridge, a China-aligned private equity fund."
So, it's not necessarily on the market.
mode_13h - Monday, June 3, 2019 - link
And, to your point about the talent departing, private equity funds do have a reputation for firing lots of people and milking existing contracts and IP. So it could be the case that Imagination Technologies is already little more than a husk of its former self.

tkSteveFOX - Monday, June 3, 2019 - link
Makes sense. AMD did help engineer Samsung's M4 core, and I think we all know by now that Samsung's GPU department hasn't produced anything that even reaches ARM's Mali in perf-per-watt, let alone Adreno or Apple's iGPU.

Licensing AMD's technology might put them back on the table. Plus, 8K TVs will require immense GPU power, which current mobile tech can't handle.
Human000 - Monday, June 3, 2019 - link
Can you link the source for "AMD did help engineer Samsung's M4 core"?

Eris_Floralia - Monday, June 3, 2019 - link
Former AMD employees now employed by SLSI are counted as "AMD helping engineer Samsung's architecture"?

ET - Monday, June 3, 2019 - link
That's promising. It implies that RDNA scales down to very low power with good performance. That's great not only for mobile but for future APUs.

mode_13h - Monday, June 3, 2019 - link
I wonder if this makes it any more likely we'll see HBM2 in mobile phones.

ChrisGX - Tuesday, June 4, 2019 - link
This could also signal a shift back to ARM-designed CPU cores and away from the custom Samsung variants. I am not suggesting any direct connection between electing to use AMD GPUs and abandoning the in-house Mongoose cores, but Samsung will certainly have its work cut out for it if it decides to work on custom elements across the whole SoC. And there is the fact that, with the exception of posting impressive Geekbench peak performance scores, the Mongoose-based Exynos SoCs have been disappointing.

Rudde - Tuesday, June 4, 2019 - link
AnandTech reported some years ago that the problem with Samsung's Mongoose is mostly about software.

ChrisGX - Thursday, June 6, 2019 - link
Yes, but Samsung hasn't managed to make the Exynos parts stand out in any way in all the time since Andrei looked into that. No, wait, the Exynos processors do stand out - they are unnecessarily power hungry relative to the modest performance that they offer. Notwithstanding what looks to be a processor development project that never quite got there, if Samsung eventually delivers a chip that proves the virtue of its custom ARM endeavour, I will welcome it.