65 Comments
kron123456789 - Tuesday, September 23, 2014 - link
I knew it was a GX6450 GPU and not a GX6650)) But maybe the GX6650 will be present in the A8X chip, who knows.
GC2:CS - Tuesday, September 23, 2014 - link
The main focus was on power efficiency here. That's why they went with a smaller, lower-powered GPU, and you can feel it a lot. Actually, it has been said that the A8 is the first Apple A-series chip with lower peak power than its predecessor. In GFXBench's battery life test it pushes 38% more pixels than the iPhone 5S, 50% faster and for 25% longer, on a battery only 16% bigger. I call that insane... because even with the same battery capacity as the 5S and the same 4.7" display as the 6, it would still last somewhat longer.
With the A7, iPads got a nice battery life bump. The A7 is already a very low-powered chip for an iPad and the A8 draws even less, so I don't think the impact on battery life will be big; the display is by far the biggest bottleneck in iPad battery life. Something this efficient in a phone definitely leaves headroom for improvement in an iPad: an "A8X" could get the G6650, which is about 50% faster as well. That would finally balance the performance (iPads have ~50% more pixels than the 1080p 6+) without much battery life impact at all. It would pull about as much power as the A7, which isn't a lot at all in an iPad.
We will see if Apple decides to put in that ultra-efficient iPhone chip like in 2013, or a bumped-up chip to utilize the platform's resources like in 2012.
name99 - Tuesday, September 23, 2014 - link
IF the iPad Pro is real, that makes the existence of an A8X even more likely, because you then have even more pixels to drive.
Of course, another way Apple could handle this is by more aggressive binning --- iPads get 1.5GHz chips, iPad Pros get 1.6GHz, and Apple considers that good enough --- but that seems like it would not be good enough...
Krysto - Wednesday, September 24, 2014 - link
Except the iPhone 6 Plus pushes almost 3x as many pixels as the iPhone 5S/A7 combo. So in fact, the weaker GPU should be LESS efficient for that phone than a more powerful GPU would have been.
akdj - Thursday, September 25, 2014 - link
What 'can't' you do on your iPad? How is it underpowered, and finally, how in the WORLD aren't you getting the same battery life I'm getting? 12-14 hours easily, always on. I do a lot of sound and video work, and in the field with Lightroom it's easy to manipulate, correct and upload shots to the base camp in seconds!
Sorry, I thought that was a very bizarre comment as an owner of each (still have an original and it's still ten hours away from the wall) & current owner of the second-gen minis, an Air and several iPad 4s (we've got a business).
One thing I've NEVER considered the iPad to be is 'underpowered'. I'm floored by its efficiency and its ability to run an amazing amount of apps/software, more than at any point in history, WITH such power! Photo and video manipulation, music and audio recording and mastering as well; my mini is now my kneeboard providing Jepp charts/plates ...weather, terrain and traffic in real time, flight planning, filing and a 'gas calculator' ;) ...I guess I've yet to find anything on the iPad it couldn't 'handle'. The new MS suite is incredible, as are most of the tools/apps provided (gratis) by Apple: iLife and iWork, GarageBand and iMovie, instead of the bloat I've got on my AT&T Note 3 (from both parties, Samsung and AT&T).
Anyway, as an iPhone, iPad mini 2 and 5s owner, I can honestly say the 'speed' and 'power', 'efficiency' and, yes, battery life are still, four years later, a real treat.
That said, I'm 43 this weekend and remember my brick Motorola at work lasting about 100 minutes & my early laptops weighing 12 pounds with a three-pound wart that would maybe do an hour and a half. Today's battery life on these phones, iPads and both our 2012 & 2014 rMBPs is a different world.
Have you 'honestly' found software on the iPad that runs slow?
akdj - Thursday, September 25, 2014 - link
Sorry, the third-to-last sentence was supposed to be iPad Air, Mini 2 and 5s. Just to show and reflect on my experiences with the run of A7 devices. Apologies. Still. Need. Edit. :-)
subflava - Tuesday, September 23, 2014 - link
So how long will it be before Apple (and other companies will surely follow) starts suing people for disassembling their chips and divulging all their proprietary IP?
designerfx - Tuesday, September 23, 2014 - link
Never? They don't own images other people generate of a chip's die.
barleyguy - Tuesday, September 23, 2014 - link
That would be like car manufacturers suing people for taking the engine apart...
Fergy - Wednesday, September 24, 2014 - link
Stop putting everything into a car metaphor. Cars are constantly copied and nobody sues anybody. In the phone industry they want to own everything, and you may copy nothing.
barleyguy - Wednesday, September 24, 2014 - link
First of all, it was an analogy, not a metaphor.
Also, Chipworks is doing nothing but taking a picture of a physical object. Which is not illegal, regardless of whether Apple approves of it or not. If Chipworks did get sued for this, I'd definitely contribute to their legal fund.
And, the car companies DID at one time go around suing people for stuff like this. There were some famous legal cases dealing with the large service manuals like Chilton's. But auto manufacturing is a mature industry; most of these types of things were resolved 40 years ago or so (much of the legal action was in the 1970s). Smartphones on the other hand are a brand new industry, and there need to be some pissing matches before companies figure out that stupid lawsuits are a waste of time and money.
ancientarcher - Wednesday, September 24, 2014 - link
How many car manufacturers sued competitors for having 4 wheels and the same shape as their cars??
BrooksT - Wednesday, September 24, 2014 - link
I'm aware of 5 (Google for specifics):
BMW sued Shuanghuan over an X3 copy
Mercedes-Benz sued Shuanghuan over a Smart Car copy
Ford sued JAC over an F150 copy
GM sued Chery over a Matiz copy
Fiat sued Great Wall over a Panda copy
CharonPDX - Wednesday, September 24, 2014 - link
Um, yes, actually: http://www.automoblog.net/2011/01/14/100-years-ago... (Or, more basically, for putting an engine on a chassis with wheels.)
tipoo - Tuesday, September 23, 2014 - link
Any chip IP in there will be far smaller than what this picture can resolve. We can count SRAM blocks and registers all we want, but the low-level optimizations are far deeper.
Think about it: we've been looking at AMD and Intel and Nvidia die shots for years (well, some bullshots too). They even divulge them freely. If it were a competitive concern, would they?
BrooksT - Wednesday, September 24, 2014 - link
Besides, reverse engineering is perfectly legal. If Chipworks took super-high-resolution images and shipped them to a fab for manufacture, that would be illegal. Merely taking things apart, and even publishing detailed analysis, is not illegal. IP law does not recognize a right to secrecy (except trade secrets, but it's not illegal to discover / reverse engineer a trade secret).
allanmac - Tuesday, September 23, 2014 - link
If you don't actually know for sure that it's a 4-cluster GPU, then write a trivial Metal kernel that occupies just under 16KB of local memory per workgroup and performs a fixed amount of work.
Launch a grid of (12*N) workgroups. If it's a 4-cluster GPU it will take (3*N) time units to execute; if it's a 6-cluster, it will take (2*N). Adjust for clock frequency and architectural differences.
This is the only way to be sure.
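Here is a minimal Swift/Metal sketch of allanmac's experiment. It is a sketch only: the kernel name, the 4000-float scratch array (just under an assumed 16KB threadgroup-memory limit), the busy-loop length and the CPU-side wall-clock timing are illustrative assumptions, not a verified A8 test.

```swift
import Metal
import Foundation

// Hypothetical kernel: each threadgroup claims just under 16 KB of threadgroup
// (local) memory, so at most one threadgroup should be resident per cluster at a time.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;

kernel void occupancyKernel(device float *out [[buffer(0)]],
                            uint gid  [[thread_position_in_grid]],
                            uint tgid [[threadgroup_position_in_grid]])
{
    threadgroup float scratch[4000];            // 4000 * 4 B = 16000 B < 16 KB
    float acc = 0.0f;
    for (int i = 0; i < 100000; ++i) {          // fixed amount of busy work
        acc = fma(acc, 1.0000001f, float(gid & 7u));
    }
    scratch[gid % 4000] = acc;
    threadgroup_barrier(mem_flags::mem_threadgroup);
    out[tgid] = scratch[0] + acc;
}
"""

let device   = MTLCreateSystemDefaultDevice()!
let library  = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "occupancyKernel")!)
let queue    = device.makeCommandQueue()!

let n = 64
let groups = 12 * n                              // 12*N divides evenly by 4 and by 6 clusters
let output = device.makeBuffer(length: groups * MemoryLayout<Float>.size, options: [])!

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(output, offset: 0, index: 0)
enc.dispatchThreadgroups(MTLSize(width: groups, height: 1, depth: 1),
                         threadsPerThreadgroup: MTLSize(width: 32, height: 1, depth: 1))
enc.endEncoding()

let start = DispatchTime.now()
cmd.commit()
cmd.waitUntilCompleted()
let ms = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1e6
// A 4-cluster GPU has to run ~3*N "waves" of these fat threadgroups, a 6-cluster
// GPU only ~2*N, so the 4-cluster case should be roughly 1.5x slower at equal clocks.
print("\(groups) threadgroups took \(ms) ms")
```

The ~1.5x runtime gap between the two hypotheses is the signal to look for, after adjusting for clock frequency as the comment notes.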
Homeles - Tuesday, September 23, 2014 - link
Or, you know, look at the die. That is, if you know what to look for.
allanmac - Wednesday, September 24, 2014 - link
True, you're right! The die reveals everything once you know what to look for.
I think everyone assumed Apple's on-stage "50%" claim *had* to be accomplished with GX6650 and didn't realize that ImgTec had already stated on their blog that improvements in the XT series could result in 50% performance gains.
A couple of those early iPhone 6/6+ benchmarks show solid improvement over the 5s despite the new devices having 1.4x / 3.8x (6/6+) more pixels (real/virtual).
ltcommanderdata - Tuesday, September 23, 2014 - link
The proportion of the die taken up by the CPU, GPU, and L3 cache has decreased between the A7 and A8. Any thoughts on where the transistors are now going?
Ryan Smith - Tuesday, September 23, 2014 - link
One would assume more fixed-function hardware: H.265 (HEVC) capabilities, NFC, maybe a better ISP. There is admittedly a lot of plumbing in these SoCs that we can't readily identify right now.
Metaluna - Wednesday, September 24, 2014 - link
Still, that's a huge number of transistors to chalk up to miscellaneous support functions. About 50% of that die shot is unidentified logic.
TiGr1982 - Wednesday, September 24, 2014 - link
Most mobile SoCs have a lot of additional functionality, so that 50% of other stuff (besides the CPU and GPU) shouldn't actually come as a surprise. Other mobile SoCs these days (e.g. from Qualcomm and MediaTek) should be similar in this respect, I suppose.
hpglow - Tuesday, September 23, 2014 - link
You are aware that the die shrank between the A7 and A8? 28nm to 20nm.
psychobriggsy - Wednesday, September 24, 2014 - link
He did write "proportion", and he's correct. There is definitely more additional stuff on the die now.
amarcus - Tuesday, September 23, 2014 - link
Why the obsession that the A8 GPU has to be part of the Imagination PowerVR series and not a derivative or even a unique Apple creation?
kron123456789 - Tuesday, September 23, 2014 - link
Because Apple would have mentioned it in their presentation, like their "64-bit desktop-class architecture".
hojnikb - Tuesday, September 23, 2014 - link
Because Apple hasn't bought any companies lately that can make GPUs...
asendra - Tuesday, September 23, 2014 - link
But, if I recall correctly, they have hired quite a bit of GPU talent these past few years through ex-AMD guys.
name99 - Tuesday, September 23, 2014 - link
I'd argue: because the GPU is not HSA (if it were, that would be visible in the developer material).
My guess is that when Apple slides in their own GPU (we pretty much all agree that's coming), they will use the fact that they own both sides of the connection to provide not just a GPU (i.e. a fancy peripheral) but an HSA GPU (i.e. a device that looks to the OS like additional CPUs, shares a memory map with the main CPU via the OS, shares interrupts with the OS, etc.).
Until they're ready to provide all that, there's no reason not to just continue with their existing solution.
Fleeb - Tuesday, September 23, 2014 - link
A fact is not an obsession.
amarcus - Tuesday, September 23, 2014 - link
I don't take it as a given that Apple would highlight this in an already packed presentation. Apple has been hiring ex-AMD engineers for some time now and already has mobile design experience, as evidenced by their recent CPU advances. Some food for thought: Apple lists the GPU of the A8 as "Apple A8 GPU" and not SGX as with older devices; see the official developer documentation (https://developer.apple.com/library/ios/documentat...
amarcus - Tuesday, September 23, 2014 - link
Correct link here: https://developer.apple.com/library/ios/documentat...
(If only there was an edit button #firstworldproblems)
kron123456789 - Tuesday, September 23, 2014 - link
And there is an "Apple A7 GPU", but it's also a PowerVR GPU.
Homeles - Tuesday, September 23, 2014 - link
Everything about the GPU floorplan is distinctively PowerVR.
nicolapeluchetti - Tuesday, September 23, 2014 - link
Is it true, as suggested on Sealed Abstract, that the size of the A8 prevented Apple from making a 4" phone? http://sealedabstract.com/rants/on-the-apple-watch...
nicolapeluchetti - Tuesday, September 23, 2014 - link
No, I just saw the die size, it can't be :D
name99 - Tuesday, September 23, 2014 - link
Come on, the comment there was CLEARLY sarcastic.
Have we all become so stupid on the internet that even Onion articles need to have an explicit /sarcasm tag?
mkozakewich - Tuesday, September 23, 2014 - link
It was sarcastic, but it wasn't ironic. He meant what he said, and he was wrong.
If it had kept the same screen size, it would have used enough less power for battery life to stay right where it was. Instead, it's got better battery life.
ltcommanderdata - Tuesday, September 23, 2014 - link
So is the memory interface still 2x32-bit? With the same LPDDR3-1600 and 4 MB L3 cache, it seems strange that Apple would forgo improvements in the memory subsystem when they've been pretty consistent and aggressive in improvements in this area.
kron123456789 - Tuesday, September 23, 2014 - link
Until the A7. The A7 was a step back in this area compared to the A6X.
ltcommanderdata - Tuesday, September 23, 2014 - link
With the A7, Apple did attempt to compensate for the decreased system memory bandwidth compared to the A6X by introducing the L3 cache. This time the memory bus, memory speed, and L3 cache size appear unchanged, so improvements look to be more subtle, coming from things like improved memory controller efficiency and a re-organized cache structure.
Homeles - Tuesday, September 23, 2014 - link
The interface remains the same size.
Kevin G - Tuesday, September 23, 2014 - link
It looks like the L2 cache is no longer shared but allocated on a per-core basis. This has some coherency implications and could also mean an increase in aggregate L2 cache size.
PorkayM - Tuesday, September 23, 2014 - link
A utility app on my 6 Plus says the GPU is a G6650. Possibly a slightly different GPU for the Plus?
kron123456789 - Tuesday, September 23, 2014 - link
More likely the utility app is wrong)
PorkayM - Tuesday, September 23, 2014 - link
An obvious possibility, but the app reads what iOS is telling it (I think?). It's weird that that's the GPU it's mistaken for.
kron123456789 - Tuesday, September 23, 2014 - link
iOS doesn't say anything about the GPU except "Apple A8 GPU" (look at GFXBench, for example).
PorkayM - Tuesday, September 23, 2014 - link
https://www.dropbox.com/s/h9f6xa1aevg95dp/Photo%20...
I'm not disagreeing with you, just saying what the app says. If someone has a regular 6, maybe they can see if there's a difference?
ryanthered - Tuesday, September 23, 2014 - link
My 6 reports the same 6650.
mkozakewich - Tuesday, September 23, 2014 - link
Is that some app someone made? If so, I bet they scan it to see which iPhone it is and then put up a generic datasheet that they compiled themselves (probably with AnandTech's old guess data?)
Jorgio - Tuesday, September 23, 2014 - link
What is the name of this app?
PorkayM - Tuesday, September 23, 2014 - link
System status
Achtung_BG - Tuesday, September 23, 2014 - link
http://www.dailytech.com/Die+Shots+Confirm+A8+Pack...
This article says GX6650, with a graphics die shot?!?
ltcommanderdata - Tuesday, September 23, 2014 - link
Anyone can take a die shot and draw some boxes and put labels on it. You just need to decide who is the more knowledgeable analyst.
DanNeely - Tuesday, September 23, 2014 - link
All that does is go to prove Jason Mick missed the clue train again. Not the first time DT's desire to be able to scream "FRIST!!!" like a 4-year-old on a comment thread has resulted in them greatly exceeding their technical skills and looking like idiots as a result.
name99 - Tuesday, September 23, 2014 - link
Regarding the SRAM block, my guess is that it's still an L3 cache, and that it was moved closer to the CPU. Recall the graph on
http://www.anandtech.com/show/7460/apple-ipad-air-...
which tells us we have an L3 cache of about 4MB and a latency of around 90 cycles or so.
That's obviously not great, and if moving the core physically closer to it (along with, presumably, improving the interface to it, which may well have been a rushed and half-assed job in Cyclone) can bring that down to maybe 50 cycles, that's a nice improvement.
IF the L3 is also being used as a staging area to move data to the GPU, then even if that transfer operation is slowed as a consequence, that's a sensible tradeoff, since GPUs are by design latency tolerant in a way that CPUs are not.
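For reference, latency-versus-size curves like the one name99 cites are typically produced with a pointer-chase microbenchmark. A minimal Swift sketch follows; the working-set sizes, hop count and `sink` variable are illustrative assumptions, and multiplying the nanoseconds-per-load result by the clock in GHz converts it to cycles per load.

```swift
import Foundation

var sink = 0   // global sink so the optimizer can't discard the chase loop

// Classic pointer-chase: walk a randomly permuted index chain so every load
// depends on the previous one, then average the time per hop. Sweeping the
// working-set size produces a latency curve with steps at each cache boundary.
func averageLatencyNs(workingSetBytes: Int, hops: Int = 10_000_000) -> Double {
    let count = workingSetBytes / MemoryLayout<Int>.stride
    var order = Array(0..<count)
    order.shuffle()                                  // random order defeats prefetchers
    var next = [Int](repeating: 0, count: count)
    for i in 0..<count {                             // stitch one cycle through all slots
        next[order[i]] = order[(i + 1) % count]
    }

    var idx = 0
    let start = DispatchTime.now()
    for _ in 0..<hops { idx = next[idx] }            // dependent loads, one per hop
    let elapsedNs = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds)
    sink = idx
    return elapsedNs / Double(hops)                  // multiply by GHz to get cycles/load
}

// Expect a step once the working set spills the L2, and another past the ~4 MB L3.
for mb in [1, 2, 4, 8, 16] {
    print("\(mb) MB working set: \(averageLatencyNs(workingSetBytes: mb << 20)) ns per load")
}
```

In practice you would build with full optimizations so bounds checks don't inflate the numbers; the shape of the curve, not the absolute values, is what identifies the cache levels.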
tipoo - Tuesday, September 23, 2014 - link
Kind of interesting: if the SRAM is making up for the main memory bandwidth deficit (it was reduced from the A6 to the A7, I think?), why is it so far from the GPU cores?
adityarjun - Wednesday, September 24, 2014 - link
If it's not a 6-cluster GPU, how is Apple claiming a 50% increase in performance?
AnandTech, do benchmarks actually show a 50% increase?
Krysto - Wednesday, September 24, 2014 - link
Wait, you still thought it was a GX6650 AFTER Apple announced that it's ONLY 50 percent faster?
I'm disappointed in you, AnandTech. That was as obvious as daylight. A GX6650 can be 2-4x faster than the old A7 GPU, depending on configuration. OF COURSE it wasn't the GX6650, but something weaker.
kron123456789 - Wednesday, September 24, 2014 - link
What configuration are you talking about? The frequency? Well, theoretically, the GX6650 is 50% faster than the G6430 at the same frequency (because it has 50% more clusters).
GC2:CS - Monday, September 29, 2014 - link
No, theoretically it's around 2.25x faster.
You get a 50% performance improvement from the new architecture (going from the G6430 to the GX6450), since Series 6XT promises 50% higher performance at about the same power level.
Then you get another 50% from 50% more clusters (going from the GX6450 to the GX6650), and possibly 2x the texture units as well if we follow the pattern.
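For what it's worth, that estimate is simply the two claimed 1.5x factors compounded; a trivial sketch, where the factors are the claims quoted above rather than measured numbers:

```swift
let xtArchitectureGain = 1.5    // claimed Series 6XT uplift over Series 6 at similar power
let clusterGain = 6.0 / 4.0     // six clusters (GX6650) vs four (G6430), at equal clocks
print("Theoretical GX6650 vs G6430: \(xtArchitectureGain * clusterGain)x")   // 2.25x
```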
jameskatt - Thursday, September 25, 2014 - link
What is so interesting about the chip analysis of the A8 is how much is completely unknown about it. For example, fully half of the A8's regions are unlabeled and have mysterious functions.
Inventor14 - Saturday, September 27, 2014 - link
https://secure.myBookOrders.com/order/zvonko-pavlo...
Sliderpro - Tuesday, October 21, 2014 - link
Wonderful. Truly wonderful engineering. Shame that the new iPhones' battery life IRL sucks so much compared to the competition in both size classes (4.7" and 5.5"), which isn't even on a 20nm process yet.