We don’t typically write about what hardware vendors aren’t going to do, if only because most of those decisions are internal and never make it to the public eye. When they do make it to the public eye, however, they are often a big deal, and today’s press release from Imagination is especially so.

In a bombshell of a press release issued this morning, Imagination has announced that Apple has informed their long-time GPU partner that they will be winding down their use of Imagination’s IP. Specifically, Apple expects that they will no longer be using Imagination’s IP for new products in 15 to 24 months. Furthermore, the GPU design that replaces Imagination’s designs will be, according to Imagination, “a separate, independent graphics design.” In other words, Apple is developing their own GPU, and when that is ready, they will be dropping Imagination’s GPU designs entirely.

This alone would be big news, but the story doesn’t stop there. As Apple’s long-time GPU partner – and the provider of the designs that have formed the basis of the GPU in every Apple SoC going back to the very first iPhone – Imagination is also making a case to investors (and the public) that while Apple may be dropping Imagination’s GPU designs in favor of a custom design, Apple can’t develop a new GPU in isolation: any GPU developed by the company would still infringe on some of Imagination’s IP. As a result, Imagination is continuing to sit down with Apple to discuss alternative licensing arrangements, with the intent of defending their IP rights. Put another way, while any Apple-developed GPU will contain far less of Imagination’s IP than the current designs, Imagination believes that it will still have elements based on their IP, and as a result Apple would need to make reduced royalty payments to Imagination for devices using the new GPU.

An Apple-Developed GPU?

From a consumer/enthusiast perspective, the big change here is of course that Apple is going their own way in developing GPUs. It’s no secret that the company has been stocking up on GPU engineers, and from a cost perspective money may as well be no object for the most valuable company in the world. However, this is the first confirmation that Apple has been putting those significant resources towards the development of a new GPU. Prior to this, what little we knew of Apple’s development process was that they were taking a hybrid approach to GPU development: designing GPUs based on Imagination’s core architecture, but increasingly divergent from and customized relative to Imagination’s own designs. The resulting GPUs weren’t just stock Imagination designs – which is why we’ve stopped naming them as such – but to the best of our knowledge, they also weren’t new designs built from the ground up.

What’s interesting about this, besides confirming something I’ve long suspected (what else are you going to do with that many GPU engineers?), is that Apple’s trajectory on the GPU side very closely follows their trajectory on the CPU side. In the case of Apple’s CPUs, they first used more-or-less stock ARM CPU cores, started tweaking the layout with the A-series SoCs, began developing their own CPU core with Swift (A6), and then dropped the hammer with Cyclone (A7). On the GPU side the path is much the same: after tweaking Imagination’s designs, Apple is now at the Swift stage of the program, developing their own GPU.

What this amounts to for Apple and their products could be immense, or it could end up as little more than a footnote in the history of Apple’s SoC designs. Will Apple develop a conventional GPU design? Will they try for something more radical? Will they build bigger discrete GPUs for their Mac products? On all of this, only time will tell.


Apple A10 SoC Die Shot (Courtesy TechInsights)

However – and these are words I may end up eating in 2018/2019 – I would be very surprised if an Apple-developed GPU has the same market-shattering impact that their Cyclone CPU did. In the GPU space some designs are stronger than others, but A) there is no “common” GPU design like there was with ARM’s Cortex CPUs, and B) there isn’t an immediate and obvious problem with current GPUs that needs to be solved. What spurred the development of Cyclone and Apple’s other high-performance CPUs was that no one was making what Apple really wanted: an Intel Core-like CPU design for SoCs. Apple needed something bigger and more powerful than anyone else could offer, and they wanted to go in a direction that ARM was not, pursuing deep out-of-order execution and a wide issue width.

GPUs, on the other hand, are far more scalable. If Apple needs a more powerful GPU, Imagination’s IP can scale from a single cluster up to 16, and the forthcoming Furian architecture can go even higher. And to be clear, unlike CPUs, adding more cores/clusters does help across the board, which is why NVIDIA is able to put the Pascal architecture in everything from a 250-watt card to an SoC. So whatever is driving Apple’s decision, it’s not just about raw performance.
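
As a rough illustration of why cluster count is such a powerful lever, peak shader throughput scales more or less linearly with the number of clusters, with real-world performance then gated by bandwidth, thermals, and drivers. The sketch below is a back-of-the-envelope calculation only; the per-cluster ALU count, clockspeed, and FLOPS-per-clock figures are assumptions for illustration, not Apple’s or Imagination’s numbers.

    # Back-of-the-envelope sketch: how peak GPU throughput scales with cluster count.
    # All per-cluster figures below are assumed for illustration only.
    ALUS_PER_CLUSTER = 32          # assumed FP32 lanes per cluster
    FLOPS_PER_ALU_PER_CLOCK = 2    # a fused multiply-add counts as two FLOPS
    CLOCK_GHZ = 0.65               # assumed shader clockspeed

    def peak_gflops(clusters: int) -> float:
        """Peak FP32 throughput (GFLOPS) for a given cluster count."""
        return clusters * ALUS_PER_CLUSTER * FLOPS_PER_ALU_PER_CLOCK * CLOCK_GHZ

    for clusters in (1, 2, 4, 8, 16):
        print(f"{clusters:2d} clusters -> ~{peak_gflops(clusters):6.1f} GFLOPS peak")

Scaling out this way is why adding clusters is usually the first answer to “we need more GPU performance” – the shader array is embarrassingly parallel in a way that a CPU core simply is not.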

What is still left on the table is efficiency – both area and power – and cost. Apple may be going this route because they believe they can develop a more efficient GPU internally than they can following Imagination’s GPU architectures, which would be interesting to see as, to date, Imagination’s Rogue designs have done very well inside of Apple’s SoCs. Alternatively, Apple may just be tired of paying Imagination $75M+ a year in royalties, and wants to bring that spending in-house. But no matter what, all eyes will be on how Apple promotes their GPUs and their performance later this year.
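
For some rough perspective on what that royalty bill works out to per device, here is a hedged back-of-the-envelope calculation. The $75M+ figure comes from above, but the annual unit volume is my own assumption for illustration – Apple’s iPhone and iPad shipments in 2016 were on the order of a quarter-billion units – and not a disclosed royalty rate.

    # Rough per-device royalty estimate. The royalty total comes from the article;
    # the device volume is an assumption for illustration, not a disclosed figure.
    annual_royalties_usd = 75_000_000    # "$75M+ a year in royalties"
    assumed_annual_units = 250_000_000   # assumed iPhone + iPad volume, circa 2016

    royalty_per_device = annual_royalties_usd / assumed_annual_units
    print(f"~${royalty_per_device:.2f} per device")  # roughly $0.30

Whether $0.30 or so per device is worth replacing with the fixed cost of a large in-house GPU team is, of course, a question only Apple can answer.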

Speaking of which, the timetable Imagination offers is quite interesting. According to Imagination’s press release, Apple has told them that they will no longer be using Imagination’s IP for new products in 15 to 24 months. As Imagination is an IP company, this is a critical distinction: it doesn’t mean that Apple is going to launch their new GPU in 15 to 24 months, it means that within the next two years Apple expects to be done rolling out new products that use Imagination’s IP altogether.

Apple SoC History
SoC    First Product        Discontinued
A7     iPhone 5s (2013)     iPad Mini 2 (2017)
A8     iPhone 6 (2014)      Still In Use: iPad Mini 4, iPod Touch
A9     iPhone 6s (2015)     Still In Use: iPad, iPhone SE
A10    iPhone 7 (2016)      Still In Use

And that, in turn, means that Apple’s new GPU could be launching sooner rather than later. I hesitate to read too much into this because there are so many other variables at play, but the obvious question is what this means for the (presumed) A11 SoC in this fall’s iPhone. Apple has tended to sell most of their SoCs for a few years – trickling down from the iPhone and high-end iPads to their entry-level equivalents – so it could be that Apple needs to launch their new GPU in the A11 in order to have it trickle down to lower-end products inside that 15 to 24 month window. On the other hand, Apple could stick with Imagination for the A11, and then simply forgo the usual trickle-down, using new SoC designs for entry-level devices instead. The only thing that’s safe to say right now is that with this revelation, an Imagination GPU design is no longer a lock for the A11 – anything is possible.

But no matter what, this does make it very clear that Apple has passed on Imagination’s next-generation Furian GPU architecture. Furian won’t be ready in time for A11, and anything after that is guaranteed to be part of Apple’s GPU transition. So Rogue will be the final Imagination GPU architecture that Apple uses.

Comments

  • Meteor2 - Tuesday, April 4, 2017

    What name99 said. Which is awfully like what Qualcomm is doing, isn't it? A bunch of conceptually-different processor designs in one 'platform'. Software uses whichever is most appropriate.
  • peevee - Tuesday, April 18, 2017

    It is certainly easier to design your own ISA than to build your own core for somebody else’s ISA. And ARM64 is FAR from perfect. So 1980s.
  • quadrivial - Monday, April 3, 2017

    Very unlikely. They gave up that chance a couple of years ago (ironically, to Imagination).

    Consider, it takes 4-5 years from initial architecture design to final shipment. No company is immune to this timeframe no matter how large. Even more time is required for new ISAs because there are new, unexpected edge cases that occur.

    Consider, ARM took about 4 years to ship from the time the ISA was announced. Most of their licensees took closer to 5 years. Apple took a bit less than 2 years.

    Consider, Apple was a front-runner to buy MIPS so they could have their own ISA, but they backed out instead. The new ARM ISA is quite similar to MIPS64.

    My thought: Apple started designing a uarch that could work well with either MIPS or the new ARMv8. A couple of years in (about the time nailing down the architecture would start to become unavoidable), they showed ARM a proposal for a new ISA and recommended that ARM adopt it, otherwise they would buy MIPS and switch. ARM announced a new ISA and immediately had teams start working on it, but Apple had a couple of years’ head start. Apple won big because they shipped an extremely fast CPU a full two years before their competitors, and it took even more years for those competitors to catch up.

    Maybe imperfect, but it’s the best explanation I can come up with for how events occurred.
  • TheMysteryMan11 - Monday, April 3, 2017

    Computing still heavily relies on the CPU for all the things that matter to power users. ARM is a long way away from being powerful enough to actually be useful for power users and creators.
    It is good enough for consumption, and getting better.

    But then again, Apple hasn’t been doing well catering to creators anyway. Still no refresh for the Mac Pro. So you might be right. But that means Apple is OK with ignoring that segment, which they probably are.
  • lilmoe - Monday, April 3, 2017

    Single-purpose equipment isn’t mainly CPU dependent. This is my point. Relying on the CPU for general-purpose functionality is inherently the least efficient approach, especially for consumer workloads.

    Outside the consumer market, applications such as engineering and video production software are still very CPU dependent because the software isn’t written efficiently. It’s done that way for the sole purpose of supporting the widest range of currently available hardware. I’d argue that if a CAD program were re-written from the ground up to be hardware dependent and GPU accelerated ONLY, then it would run faster and more fluidly on an iPad than on a Core i7 with integrated graphics, if the storage speed were the same.

    This leaves only niche applications that are inherently dependent on a CPU and can’t be offloaded to hardware accelerators. With more work on efficient multi-threaded coding, Apple’s own CPU cores, in a quad/octa configuration, can arguably suffice. Single-threaded applications are also arguably good enough, even on A72/A73 cores.

    Again, this conversation is about consumer/prosumer workloads. It's evident that Apple isn't interested in Server/corporate workloads.

    This has been Apple's vision since inception. They want to do everything in-house as a single package for a single purpose. They couldn't in the past, and almost went bankrupt, because they weren't _big enough_. This isn't the case now.

    The future doesn’t look very bright for the personal computing industry as we know it. There has been talk and rumors that Samsung is taking a VERY similar approach. Rumors started hitting 2 years ago that they were also building their own in-house GPU, and are clashing with Nvidia and AMD graphics IP in the process. It also led Nvidia to sue Samsung for reasons only known behind the scenes.
  • ddriver - Monday, April 3, 2017

    Yeah let's make the x chip that can only do one n task. And while we are at it, why not scrap all those cars you can drive anywhere there is an open road, and make cars which are best suited for one purpose. You need a special car to do groceries, a special car to go to work, a bunch of different special cars when you go on vacation, depending on what kind of vacation it is.

    Implying prosumer software isn't properly written is laughable, and a clear indication you don't have a clue. That's the only kind of software that scales well with cores, and can scale linearly to as many threads as you have available.

    But I'd have to agree that crapple doesn't really care about making usable hardware, their thing is next-to-useless toys, because it doesn't matter how capable your hardware is, what matters is how much of that lacking and desperately needed self esteem you get from buying their branded, overpriced toy.

    Back in the day, Apple made good products and struggled to make a profit, until genius Jobs realized how profitable it can be to exploit dummies and make them even dumber, giving rise to crapple, and to the shining beacon of an example that the rest of the industry is now following, effectively ruining technology and reducing it to a fraction of its potential.
  • lilmoe - Monday, April 3, 2017

    Chill bro.
    I said current software is written to support the largest number of hardware combinations possible. And yes, that’s NOT the most efficient way to write software, but it _is_ the most accessible for consumers.

    I wasn’t implying that one way is better than the other. But it’s also true that a single $200 GPU runs circles around a $1500 10-core Intel CPU in rendering.
  • steven75 - Monday, April 3, 2017

    How amazing that an "overpriced toy" still shames all Android manufacturers in single-threaded performance. The brand new S8 (with a price increase, no less) can’t even beat the nearly 2-year-old iPhone 6s.

    I wish all "toys" were superior like that!
  • fanofanand - Monday, April 3, 2017

    How many people are working (like actual productivity) on an S8? Cell phones are toys 99% of the time.
  • FunBunny2 - Monday, April 3, 2017

    -- How many people are working (like actual productivity) on an S8? Cell phones are toys 99% of the time.

    I guess Mao was right.
