Original Link: https://www.anandtech.com/show/6330/the-iphone-5-review
The iPhone 5 Review
by Anand Lal Shimpi, Brian Klug & Vivek Gowri on October 16, 2012 11:33 AM EST
The last significant redesign of the iPhone platform came in 2010 with the iPhone 4. It was an update that literally touched all aspects of the device, from SoC to display to baseband and of course, chassis. Last month’s launch of the iPhone 5 is no different in magnitude. The sixth generation iPhone makes some of the biggest changes to the platform since its introduction in 2007.
Visually the device begins by evolving the design language of the iPhone 4/4S chassis. From the launch of the iPhone 4 it was obvious that Apple had picked a design it was proud of, so it's not too surprising that, from a distance, the iPhone 5 resembles the previous two iPhone models. We'll get into material differences shortly, but what makes the iPhone 5 design such a radical departure is its larger display.
All previous iPhones have maintained the same 3.5-inch, 3:2 aspect ratio display. With the rest of the industry quickly moving to much larger displays, and with 16:9 the clear aspect ratio of choice, the question of how to modernize the iPhone platform had an obvious answer.
The iPhone 5 embraces a taller 4-inch, 16:9, 1136 x 640 display, opting to lengthen the device instead of growing it in both dimensions. The result is a device that is distinctly an iPhone, albeit a modern one. Because the width stays the same, the taller display doesn't do much to make desktop web pages any easier to read. Those longing for an HTC One X or Galaxy S 3 sized device running iOS are out of luck. Reading emails and typing are both improved though, as there's now more room for lists and the keyboard no longer occupies as much of the display. The taller device can be more awkward to use if you have smaller hands, but the added screen real estate is honestly worth it. Once you get used to the iPhone 5's display, going back to the older models is tough.
The taller chassis went on a diet as well. The iPhone 5 is now considerably thinner and lighter than its predecessor, which is yet another factor that contributes to it feeling more modern.
Internally the device changes are just as significant, if not more, than those on the outside. The iPhone 5 includes LTE support, which in areas where LTE networks are deployed can be enough reason alone to warrant an upgrade.
The iPhone 5 also includes a brand new SoC from Apple: the A6. For the first time since the introduction of the iPad, Apple has debuted a major branded SoC on an iPhone first. The iPhone 4 used the A4 after it debuted on the iPad, and the 4S picked up the A5 months after the iPad 2 launched with it. The A6 however arrives first on the iPhone 5, and with it come Apple's first two custom designed CPU cores. We've always known Apple as a vertically integrated device and software vendor, but getting into CPU design takes that to a new level.
Physical Comparison

| | Apple iPhone 4S | Samsung Galaxy S 3 (USA) | HTC One S | Apple iPhone 5 |
|---|---|---|---|---|
| Height | 115.2 mm (4.5") | 136.6 mm (5.38") | 130.9 mm (5.15") | 123.8 mm (4.87") |
| Width | 58.6 mm (2.31") | 70.6 mm (2.78") | 65 mm (2.56") | 58.6 mm (2.31") |
| Depth | 9.3 mm (0.37") | 8.6 mm (0.34") | 7.8 mm (0.31") | 7.6 mm (0.30") |
| Weight | 140 g (4.9 oz) | 133 g (4.7 oz) | 119.5 g (4.21 oz) | 112 g (3.95 oz) |
| CPU | Apple A5 @ ~800MHz Dual Core Cortex A9 | 1.5 GHz MSM8960 Dual Core Krait | 1.5 GHz MSM8260A Dual Core Krait | 1.3 GHz Apple A6 (Dual Core Apple Swift) |
| GPU | PowerVR SGX 543MP2 | Adreno 225 | Adreno 225 | PowerVR SGX 543MP3 |
| RAM | 512MB LPDDR2-800 | 2 GB LPDDR2 | 1 GB LPDDR2 | 1 GB LPDDR2 |
| NAND | 16GB, 32GB or 64GB integrated | 16/32 GB NAND with up to 64 GB microSDXC | 16 GB NAND | 16, 32, or 64 GB integrated |
| Camera | 8 MP with LED Flash + Front Facing Camera | 8 MP with LED Flash + 1.9 MP front facing | 8 MP with LED Flash + VGA front facing | 8 MP with LED Flash + 1.2 MP front facing |
| Screen | 3.5" 960 x 640 LED backlit LCD | 4.8" 1280 x 720 HD SAMOLED | 4.3" 960 x 540 Super AMOLED | 4" 1136 x 640 LED backlit LCD |
| Battery | Internal 5.3 Whr | Removable 7.98 Whr | Removable 6.1 Whr | Internal 5.45 Whr |
There’s a lot to talk about when it comes to the new iPhone. Whether it is understanding the architecture of the A6 SoC or investigating the improved low light performance of the iPhone 5’s rear facing camera, we’ve got it here in what is easily our most in-depth iPhone review to date. Let’s get started.
Design
Section by Vivek Gowri
The iPhone 4, when it launched, represented a clean break for Apple's industrial design. It replaced the soft organic curvature of the iPhone 3G/3GS with a detailed sandwich of metal and glass, something that arguably brought the feel of a premium device to a new level. Obviously, Apple had their fair share of issues with the design initially, and nothing could match the sinking feeling of dropping one and shattering the glass on the front and back simultaneously, but it was a small price to pay for the jewel-like feel of the device. Combined with the (at the time) incredible pixel density of the then-new Retina Display, the iPhone 4 was a revolution in hardware design. The chassis has aged remarkably well over the last two-plus years, so naturally it's a hard act to follow.
The 5 keeps a similar design language to the 4, keeping roughly the same shape as before but with a taller and thinner form factor. At first glance, the 5 actually looks almost the same as the 4, with an unbroken glass front face, prominent corner radiuses, the familiar home button, a rectangular cross-section, and metallic sides with plastic antenna bands. However, those metallic sides are part of an anodized aluminum frame that makes up a majority of the body, and that's where the industrial design diverges from the 4 and 4S.
In contrast to the predominantly glass body of the previous generation iPhone, the 5 is almost entirely aluminum other than the glass front face and two small glass windows at the top and bottom of the back. It's a return to the original iPhone/3G/3GS-style of construction, with the front glass clipping into a unibody chassis. It's a significant departure from the 4 and 4S, where the stainless steel band in the center was the main housing that the front and rear panels clipped into. That was a pretty radical way of doing things, so it's not all that surprising to see Apple revert to a more conventional and less complex method for the 5.
The aesthetic is actually pretty awesome, especially in the black version. The combination of black glass and off-black aluminum (Apple is calling it slate) gives the 5 an almost murdered out look that's three parts elegant and one part evil. The white and silver model has a classy look that's much friendlier in appearance than the black one. The color schemes and overall design aesthetic remind me of the Dell Adamo, one of my favorite notebook designs of all time. The similarities may be purely coincidental, but it's interesting to note nonetheless and should give you an idea of how premium the industrial design is.
All three previous iPhone body styles had very similar dimensions, so the biggest question with the 5 was how much the larger display would do to change that. Unlike many Android manufacturers, Apple still believes in things like small pockets, small hands, and one-handed smartphone usage. With the 5 being vertically stretched but no wider than the previous iPhones, the biggest impact on in-hand feel is actually the thinner body. If you're used to a larger Android or Windows device, the change seems radical, but even compared to the 22% thicker iPhone 4S, it feels a good deal smaller.
It's not just the reduced z-height though; the 20% weight reduction is definitely also a factor. Even a few weeks later, I still find it striking how much less substantial it feels than the 4 and 4S. The densely-packed glass body just had a reassuring weight to it that the 5 simply lacks. But as you get used to the new form factor, you realize how far Apple is pushing the boundaries of ultrathin design. When the 4th generation iPod touch came out, I told Brian that I wanted an iPhone with that form factor - well, the 5 is essentially there (0.3mm thicker and 11 grams heavier, but close enough). It's pretty impressive to think about. If you thought the 4S was one of the best phone designs on the market in terms of aesthetics and build quality, the iPhone 5 just pushes that advantage further.
Build Quality Issues, Scuffgate
Section by Vivek Gowri
Despite all of the effort put into the iPhone 5, Apple has had its fair share of growing pains with the 5 design. The main thing here is definitely Scuffgate, which we'll get to in a moment, but it's not just that. My personal unit had an issue with the front glass not being properly mounted into the frame, something I tried to correct by just applying pressure until it clipped in, but it ended up unclipping again after some time. I'm not sure how widespread it is, but worth noting nonetheless. Add in the litany of other issues with the 5, including a fair number of nitpick-level complaints that SNL chose to poke fun at last weekend, and it's clear that this isn't a perfectly smooth launch.
Which brings us to Scuffgate, a two-fold issue that relates to the scratchability (scuffability?) of the new iPhone. iPod users have been used to devices that are near-impossible to keep in decent condition for quite some time now. Any iPod with a chrome back (the first four iPod touches, all classic iPods, 1st and 3rd gen nano) is liable to scratch just by looking at it wrong, and there was actually a class-action lawsuit filed about this some time ago. But an iPhone that scratches easily is a pretty new phenomenon, which is why this is becoming a big deal.
These surface defects are commonly occurring on both black and white iPhone 5s, with the key difference being that the silver metal doesn't show imperfections nearly as much. The raw aluminum color is a similar enough silver that unless you go hunting for it, in most cases you won't notice the texture difference. The black/slate 5s tend to show it pretty clearly though - the bright silver of the raw metal contrasts quite a bit with the dark finish, so even small imperfections tend to be highly visible. If you like to keep your phones naked (without a case or protective skin), I recommend going with the white 5. As sexy as the dark metal casing is, it starts looking a bit more low rent with a couple of scratches in it.
The other problem? People are having iPhone 5s delivered with noticeable scratches and dents, straight out of the box. Mine came with a couple of very minor ones that I only noticed after hunting for them, no big deal, but I've seen some aggrieved owners posting unboxing pictures showing relatively major surface flaws in the metal. In my opinion, this is the more concerning part of the "Scuffgate" equation. It's just not acceptable for significant surface defects to exist on brand new phones out of the box. With that said, I can understand how the 5 bodies are getting scratched in the factories. Let me explain, starting with the electrochemical anodization process for aluminum.
It works like this: the raw aluminum is submerged in an electrolyte through which a direct current is applied, growing an oxide at the anode electrode (the aluminum) and hydrogen at the cathode. Essentially, this is just a controlled electrochemical corrosion reaction. It results in the production of Al2O3, which we know as aluminum oxide (alumina), along with a bunch of hydrons (H+) at the anode, plus dihydrogen gas (H2) at the cathode. The anode reaction looks something like this: 2 Al (s) + 3 H2O (l) → Al2O3 (s) + 6 H+ + 6 e-. As this process continues, a porous alumina film is created at the surface. This gives the slightly rough texture we're used to seeing on anodized aluminum products, but also allows colored dye to be poured in. The dye is then sealed into the material by putting the aluminum in boiling water. Professor Bill Hammack from the University of Illinois at Urbana-Champaign's chemical and biomolecular engineering department gives a pretty solid rundown of the basics in the video below, if you want a more visual explanation of the process.
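For completeness, here are all three reactions balanced out. This is textbook electrochemistry rather than anything Apple has disclosed; the cathode and overall reactions are simply filled in to match the anode reaction above:

$$
\begin{aligned}
\text{anode:}\quad & 2\,\mathrm{Al\,(s)} + 3\,\mathrm{H_2O\,(l)} \rightarrow \mathrm{Al_2O_3\,(s)} + 6\,\mathrm{H^+} + 6\,e^- \\
\text{cathode:}\quad & 6\,\mathrm{H^+} + 6\,e^- \rightarrow 3\,\mathrm{H_2\,(g)} \\
\text{overall:}\quad & 2\,\mathrm{Al\,(s)} + 3\,\mathrm{H_2O\,(l)} \rightarrow \mathrm{Al_2O_3\,(s)} + 3\,\mathrm{H_2\,(g)}
\end{aligned}
$$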
Basically, the key to all this is the porous aluminum oxide layer. Based on the voltage, anodization time, and the specific electrolyte solution used, the depth of aluminum oxide created and the size of the pores can vary. It's actually also possible to create a non-porous, barrier-type alumina if an insoluble electrolyte is used at the anode, but that's a different story for a different time.

Also, since this came up during the podcast and in the comments later, it's worth mentioning that aluminum reacts with air naturally to create a very thin oxide layer that protects the bare metal, a spontaneous mechanism known as passivation. By very thin, I'm talking on the ångström level - 50 of them, give or take. That's five nanometers, which is almost negligible, but more importantly, the surface is nonreactive to air beyond that so there is essentially no further corrosion. This makes perfect sense if you think about how bare aluminum or any other raw metal reacts to air in purely physical terms, but it's always good to relate real-world observations to the chemical reactions taking place.

Now, back to the various factors that dictate the properties of the anodization process - we don't have access to any of that information, beyond knowing that the specific aluminum being used is a 6000-series alloy. My digging suggests that it is likely some form of 6061, which is composed of 95.85%-98.56% aluminum, along with some combination of silicon, iron, copper, magnesium, manganese, chromium, zinc, and titanium, amongst other elements. It's hard to know exactly what Apple is doing, but we're in a pretty good position to make educated guesses as to their methods and intentions.
A diagram showing the four steps of pore formation during the aluminum anodization process. The blue indicates the electrolyte solution, the light gray is the aluminum oxide, and the dark gray is the base aluminum. E indicates the flow of electrons. (Source: University of Halle-Wittenberg)
Apple has been anodizing handheld devices since the iPod mini debuted almost a decade ago, but obviously the process has been updated in the intervening years. The last notable change was a switch to an anodization process that resulted in denser pores around two years ago - it first showed up in the 2011 MacBook Pros and the iPad 2, eventually spreading to the rest of the lineup. The iPhone 5 takes that to a whole new level, with even finer and denser pores than I've seen on any Apple product in the past (pore density is inversely proportional to pore size). The metal itself is also thinner than we've seen Apple use before: the iPhone 5's shell is significantly thinner than what Apple uses on iPads or MacBooks, or even the old iPods that used anodized shells (iPod mini; 2nd, 4th, and 6th generation nano; the last few iPod shuffles).
Which brings us to the next key detail of the anodization process: the oxide layer typically adds about half of its own thickness to the overall material thickness (the rest grows down into the metal). So if you took an aluminum plate that was 1mm thick and grew a 0.2mm oxide on it, post-anodization you would end up with a total material thickness of 1.1mm. With Apple trying to maintain as slim a profile as possible, it's in their best interests to have a relatively thin anodization. Given the graining of the anodization and based on what I've seen from scratching up my own iPhone 5, I think Apple's anodization process results in a super-thin alumina, something on the order of less than a hundred microns at most; I'm estimating around 50-75µm. (I'd also just like to note that in the process of this review, I took a jeweler's screwdriver to the back of my previously pristine iPhone 5. I love you guys, don't ever forget it.)
The oxide is even thinner on the bands, particularly the chamfers, which are just painted metal. So while the entire thing is easy to nick, it seems easiest to scratch off lots of paint on the bands, as well as the various metal edges. The soft anodized surface is just a magnet for scratches. And the thing is, I'm not even sure Apple has the material thickness to oxidize more of the surface and get a more durable finish. The entire phone is so thin, especially at the bands, that I can't see a way for them to corrode any more of the aluminum than they already have without raising questions about structural integrity. So, without very special care inside the factories, it's pretty easy to see how defects could occur. The rumors of Apple tightening down on quality control inside the iPhone 5 assembly factories come as no surprise, since the 5 really does need extra attention to make it out of the factory unscathed.
So, are there any solutions to Scuffgate? Not really, or at least no more than there were with Antennagate. If you owned an AT&T 4, your only options were to put a case on it or just deal with the potential dropped calls. Here, your only options are to put a case on it, or be very careful and deal with the potential scratches. If your phone came with defects out of the box, I'd just try returning it in hopes of getting a closer-to-perfect replacement unit. In the meantime, Apple needs to implement some controls internally to ensure that shipping devices don't contain any major surface-level defects. Maybe put some of those 29-megapixel cameras to good use. If Apple really wanted to fix it, they could put some sort of scratch-resistant polymer coating over the bare metal, but that would absolutely ruin the surface feel. If I were Jony Ive, there's no way I'd let that happen, so until Apple changes up the design (like building antenna diversity into the CDMA 4 and 4S), we've just got to deal with it.
The A6 SoC
Section by Anand Shimpi
All great tech companies have their "showing up to the fight" moment. I borrow this phrasing from former ATI/AMDer, current Qualcomm-er Eric Demers. While at ATI/AMD, Eric came to the conclusion that the best way to lose market share was by simply not showing up to the fight. Customers tend to spend their money at key points throughout the year (holidays, back to school, etc.). If you don't have something shiny and new when those upticks in spending happen, you're not going to win. Eric called it showing up to the fight. By showing up to the fight every year, you at least had a chance of selling whatever it is that you're trying to hawk.
Intel came to a similar realization after the Pentium 4, which eventually resulted in its famous tick-tock cadence. Every year you get evolutionary improvements, either in power or performance (sometimes in both). Over the course of several years, especially if your competition isn't as aggressive, you end up with a series of products that look downright revolutionary.
Apple learned from the best and quickly adopted a similar approach after it released the iPhone in 2007. Like clockwork, Apple brought out a new iPhone every year at around the same time. The summer launch cycle was pushed back to fall with last year's 4S, but since then Apple has continued its roughly 12-month cadence for the iPhone.
The smartphone SoC space is still operating on this hyper Moore's Law curve which allows for significant innovation on a yearly cadence rather than a big update every 18 - 24 months. Even Intel recognized this fact as it will shift Atom to a yearly update cadence starting towards the end of next year.
The fast pace of changes on the smartphone side combined with the similarly aggressive release schedules from its competitors explain the difference in Apple's approach to iPhone/iPad vs. new Mac releases. The former are launched with much more pomp and circumstance, and are on a 2-year chassis redesign cadence. There's also the fact that devices running iOS make up the largest portion of Apple's yearly revenue. At some point I would expect the innovation/release cadence to slow down, but definitely not for the next few years.
The first few iPhones largely leveraged Samsung designed and manufactured silicon. Back then I heard Samsung was paying close attention to Apple's requirements and fed that experience into its own SoC and smartphone design.
With a couple of successful iPhone generations under its belt, Apple set its sights much higher. Steve Jobs hired some of the brightest minds in CPU and GPU design and kept them close by. They would influence silicon supplier roadmaps as well as help ensure Apple was on the forefront of performance. Remember that CPU and GPU makers don't just set their own roadmaps, they ask their biggest customers and software vendors what they would like to see. As Apple grew in size, Apple's demands carried more weight.
Unlike the desktop/notebook CPU space, there was no truly aggressive SoC provider. The why is easy to understand. Mobile SoCs sell for $14 - $30, while the desktop and notebook CPUs that Intel invests so heavily in sell for around 10x that, despite being only 1 - 4x the physical die size of their cheaper mobile counterparts. In short, most SoC providers felt that no one would be willing to pay for a big, high performance chip, so no one made them. Ultimately this led to a lot of embarrassment, with companies like NVIDIA, known for their graphics prowess, losing when it came to SoC GPU performance.
Realizing the lack of an Intel-like player in the mobile SoC space, Apple took it upon itself to build the silicon it needed to power the iPhone and iPad. By controlling its own SoC destiny it could achieve a level of vertical integration that no OEM has enjoyed in recent history. Apple would be able to define the experience it wanted, then work with the device, OS, application and SoC teams to deliver that experience. It's a very tempting thing to strive for, the risks are plentiful but the upside is tremendous.
The A4 SoC was Apple's first branded solution, although internally it still leveraged licensed IP blocks from ARM (Cortex A8) and Imagination Technologies (PowerVR SGX 535). Its replacement, the A5, moved to a dual-core Cortex A9 setup with a much beefier GPU from Imagination (PowerVR SGX 543MP2). For the 3rd generation iPad, Apple doubled up GPU core count and built the largest ARM based mobile SoC we've seen deployed.
When I first looked at the A4, I wrote the following:
Apple is not a microprocessor company, nor does Apple want to toss its hat in with the likes of Intel, NVIDIA, Qualcomm and TI as an SoC maker. History has shown us that the only way to be a successful microprocessor company is to be able to subsidize the high cost of designing a powerful architecture over an extremely large install base. That's why x86 survived, and it's why the ARM business model works.
Designing high performance SoCs just for use in the iPad and iPhone just doesn't make sense. In the short term, perhaps, but in the long run it would mean that Apple would have to grow the microprocessor side of its business considerably. That means tons of engineers, more resources that aren't product focused, and honestly re-inventing the wheel a lot.
The fact that the A4 appears to be little more than a 45nm, 1GHz Cortex A8 paired with a PowerVR SGX GPU tells me that Apple isn't off its rocker. I don't exactly know what Apple is doing with all of these CPU and GPU engineers in house, but licensing tech from the companies who have experience in building the architectures is still on the menu.
While I still believe that, long term, Apple will either have to commit to being a full-blown chip company or buy processors from whoever ends up dominating the mobile SoC industry, it's clear that for the foreseeable future Apple will be a device company that also makes mobile SoCs. Given the state of the mobile SoC space at this point, I can't blame Apple for wanting to build its own chips.
Apple SoC Evolution

| | Apple A4 | Apple A5 | Apple A5r2 | Apple A5X | Apple A6 |
|---|---|---|---|---|---|
| Intro Date | 2010 | 2011 | 2012 | 2012 | 2012 |
| Intro Product | iPad | iPad 2 | iPad 2 | iPad 3 | iPhone 5 |
| Product Targets | iPad/iPhone 4 | iPad 2/iPhone 4S | iPad 2/iPhone 4S | iPad 3 | ? |
| CPU | ARM Cortex A8 | 2 x ARM Cortex A9 | 2 x ARM Cortex A9 | 2 x ARM Cortex A9 | 2 x Apple Swift |
| CPU Frequency | 1GHz/800MHz (iPad/iPhone) | 1GHz/800MHz (iPad/iPhone) | 1GHz/800MHz (iPad/iPhone) | 1GHz | 1.3GHz |
| GPU | PowerVR SGX 535 | PowerVR SGX 543MP2 | PowerVR SGX 543MP2 | PowerVR SGX 543MP4 | PowerVR SGX 543MP3 |
| Memory Interface | 32-bit LPDDR2 | 2 x 32-bit LPDDR2 | 2 x 32-bit LPDDR2 | 4 x 32-bit LPDDR2 | 2 x 32-bit LPDDR2 |
| Manufacturing Process | Samsung 45nm LP | Samsung 45nm LP | Samsung 32nm LP HK+MG | Samsung 45nm LP | Samsung 32nm LP HK+MG |
Apple's A6 is the next step in the company's evolution. Although it continues to license graphics IP from Imagination Technologies (PowerVR SGX 543MP3) and it licenses the ARMv7 instruction set from ARM, it is the first SoC to feature Apple designed CPU cores. The A6 is also the second Apple SoC to be built using Samsung's 32nm LP High-K + Metal Gate transistors. Thanks to UBM Tech Insights and Chipworks we have some great die shots of A6 as well as an accurate die size.
I've updated our die size comparison to put the A6 in perspective:
The new SoC is smaller than the A5 used in the iPhone 4S, but it's built on a newer process which will have some added costs associated with it (at least initially). Over time I would expect A6 pricing to drop below that of the A5, although initially there may not be much (if any at all) cost savings. Note that Apple's 32nm A5r2 is very close in size to the A6, which made it a great test part for Samsung's 32nm process. Apple likely caught the bulk of its process issues on A5r2, making an aggressive ramp for A6 on 32nm much easier than it would have been previously. It's clear that the Apple SoC team benefitted from the practical experience of its members.
Putting the A6 in perspective, we have the usual table we throw in our CPU reviews:
CPU Specification Comparison

| CPU | Manufacturing Process | Cores | Transistor Count | Die Size |
|---|---|---|---|---|
| Apple A6 | 32nm | 2 | ? | 97mm² |
| Apple A5X | 45nm | 2 | ? | 163mm² |
| Apple A5r2 | 32nm | 2 | ? | 71mm² |
| Apple A5 | 45nm | 2 | ? | 122mm² |
| Intel Ivy Bridge HE-4 (GT2) | 22nm | 4 | 1.4B | 160mm² |
| Intel Ivy Bridge HM-4 (GT1) | 22nm | 4 | ? | 133mm² |
| Intel Ivy Bridge H-2 (GT2) | 22nm | 2 | ? | 118mm² |
| Intel Ivy Bridge M-2 (GT1) | 22nm | 2 | ? | 94mm² |
| Intel Sandy Bridge 4C | 32nm | 4 | 995M | 216mm² |
| Intel Sandy Bridge 2C (GT1) | 32nm | 2 | 504M | 131mm² |
| Intel Sandy Bridge 2C (GT2) | 32nm | 2 | 624M | 149mm² |
| NVIDIA Tegra 3 | 40nm | 4+1 | ? | ~80mm² |
| NVIDIA Tegra 2 | 40nm | 2 | ? | 49mm² |
Although the A6 is significantly smaller than the mammoth A5X, it's still quite large by mobile SoC standards. At 97mm² Apple's A6 is slightly larger than a dual-core Ivy Bridge with GT1 graphics. Granted that's not a very impressive part, but it's still a modern chip that Intel sells for over $100. I'm still not sure what the die size sweet spot is for a smartphone/tablet SoC, perhaps something around 120mm²? I just can't see the 200mm² chips we love on the desktop being the right fit for ultra mobile.
A6 die photo courtesy UBM Tech Insights
Looking at the A6 die we clearly see the two CPU cores, three GPU cores and 2 x 32-bit LPDDR2 memory interfaces. The Chipworks photo shows the GPU cores a bit better:
Apple A6 die photo courtesy Chipworks
Chipworks was first to point out that Apple's custom CPU cores appeared to be largely laid out by hand vs. using automated tools. Not using automated layout for all parts of a CPU isn't unusual (Intel does it all the time), but it is unusual to see in an ARM based mobile SoC. Shortly after the iPhone 5's launch we confirmed that the A6 SoC featured Apple's first internally designed ARM CPU cores. As a recap there are two types of ARM licensees: architecture and processor. A processor license gives you the right to take an ARM designed CPU core and integrate it into your SoC. Apple licensed ARM's Cortex A9 design in the A5/A5X SoCs for example. An architecture license gives you the right to design your own core that implements an ARM instruction set. Marvell and Qualcomm are both examples of ARM architecture licensees.
For years it's been rumored that Apple has held an ARM architecture license. With the A6 we now have conclusive proof. The question is, what does Apple's first custom ARM CPU core look like? Based on Apple's performance claims we know it's more than a Cortex A9. But to find out what the architecture looks like at a high level we had to do a lot of digging.
Decoding Swift
Section by Anand Shimpi
Apple's A6 provided a unique challenge. Typically we learn about a new CPU through an architecture disclosure by its manufacturer, tons of testing on our part and circling back with the manufacturer to better understand our findings. With the A6 there was no chance Apple was going to give us a pretty block diagram or an ISSCC paper on its architecture. And on the benchmarking side, our capabilities there are frustratingly limited as there are almost no good smartphone benchmarks. Understanding the A6 however is key to understanding the iPhone 5, and it also gives us a lot of insight into where Apple may go from here. A review of the iPhone 5 without a deep investigation into the A6 just wasn't an option.
The first task was to know its name. There's the old fantasy that knowing something's name gives you power over it, and the reality that it's just cool to know a secret code name. A tip from a reader across the globe pointed us in the right direction (thanks R!). Working backwards through some other iOS 6 code on the iPhone 5 confirmed the name. Apple's first fully custom ARM CPU core is called Swift.
Next we needed to confirm clock speed. Swift's operating frequency would give us an idea of how much IPC has improved over the Cortex A9 architecture. Geekbench was updated after our original iPhone 5 performance preview to more accurately report clock speed (previously we had to get one thread running in the background then launch Geekbench to get a somewhat accurate frequency reading). At 1.3GHz, Swift clearly runs at a higher frequency than the 800MHz Cortex A9 in Apple's A5, but not nearly as high as solutions from Qualcomm, NVIDIA, Samsung or TI. Despite the comparatively modest 62.5% increase in frequency, Apple was promising up to a 2x increase in performance. It's clear Swift would have to be more than just a clock bumped Cortex A9. Also, as Swift must remain relevant through the end of 2013 before the next iPhone comes out, it had to be somewhat competitive with Qualcomm's Krait and ARM's Cortex A15. Although Apple is often talked about as not being concerned with performance and specs, nothing could be farther from the truth. Shipping a Cortex A9 based SoC in its flagship smartphone through the end of 2013 just wouldn't cut it. Similarly, none of the SoC vendors would have anything A15-based ready in time for volume production in Q3 2012, which helped force Apple's hand in designing its own core.
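A quick back-of-the-envelope calculation shows just how big a claim that is. Clock speed alone only buys a 1.625x speedup, so hitting 2x requires a healthy IPC improvement on top of the frequency bump:

$$\text{required IPC gain} = \frac{2.0}{1.3\ \text{GHz} / 0.8\ \text{GHz}} = \frac{2.0}{1.625} \approx 1.23$$

In other words, Swift needs to average roughly 23% more work per clock than the Cortex A9 to make good on Apple's claim.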
With a codename and clock speed in our hands, we went about filling in the blanks.
Some great work by Chipworks gave us a look at the cores themselves, which Chipworks estimated to be around 50% larger than the Cortex A9 cores used in the A5.
Two Apple Swift CPU cores, photo courtesy Chipworks, annotations ours
Two ARM Cortex A9 cores, photo courtesy Chipworks, annotations ours
Looking at the die shots you see a much greater logic to cache ratio in Swift compared to ARM's Cortex A9. We know that L1/L2 cache sizes haven't changed (32KB/1MB, respectively) so it's everything else that has grown in size and complexity.
The first thing I wanted to understand was how much low level compute performance has changed. Thankfully we have a number of microbenchmarks available that show us just this. There are two variables that make comparisons to ARM's Cortex A9 difficult: Swift presumably has a new architecture, and it runs at a much higher clock speed than the Cortex A9 in Apple's A5 SoC. For the tables below you'll see me compare directly to the 800MHz Cortex A9 used in the iPhone 4S, as well as a hypothetical 1300MHz Cortex A9 (1300/800 * iPhone 4S result). The point here is to give me an indication of how much performance has improved once we take clock speed out of the equation. Granted the Cortex A9 wouldn't see perfect scaling going from 800MHz to 1300MHz; however, most of the benchmarks we're looking at here are small enough to fit in processor caches and should scale relatively well with frequency.
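To make the methodology concrete, the scaled A5 column in each table below is simply the iPhone 4S result multiplied by the clock ratio, and the advantage column compares the A6 against that scaled figure. Using the Blowfish result from the first table as a worked example:

$$10.7\ \text{MB/s} \times \frac{1300}{800} = 17.4\ \text{MB/s}, \qquad \frac{23.4\ \text{MB/s}}{17.4\ \text{MB/s}} - 1 \approx 34.6\%$$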
Our investigation begins with Geekbench 2, which ends up being a great tool for looking at low level math performance. The suite is broken up into integer, floating point and memory workloads. We'll start with the integer tests. I don't have access to Geekbench source but I did my best to map the benchmarks to the type of instructions and parts of the CPU core they'd be stressing in the descriptions below.
Geekbench 2

| Integer Tests | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| Blowfish | 10.7 MB/s | 17.4 MB/s | 23.4 MB/s | 34.6% |
| Blowfish MT | 20.7 MB/s | 33.6 MB/s | 45.6 MB/s | 35.6% |
| Text Compression | 1.21 MB/s | 1.97 MB/s | 2.79 MB/s | 41.9% |
| Text Compression MP | 2.28 MB/s | 3.71 MB/s | 5.19 MB/s | 40.1% |
| Text Decompression | 1.71 MB/s | 2.78 MB/s | 3.82 MB/s | 37.5% |
| Text Decompression MP | 2.84 MB/s | 4.62 MB/s | 5.88 MB/s | 27.3% |
| Image Compression | 3.32 Mpixels/s | 5.40 Mpixels/s | 7.31 Mpixels/s | 35.5% |
| Image Compression MP | 6.59 Mpixels/s | 10.7 Mpixels/s | 14.2 Mpixels/s | 32.6% |
| Image Decompression | 5.32 Mpixels/s | 8.65 Mpixels/s | 12.4 Mpixels/s | 43.4% |
| Image Decompression MP | 10.5 Mpixels/s | 17.1 Mpixels/s | 23.0 Mpixels/s | 34.8% |
| LUA | 215.4 Knodes/s | 350.0 Knodes/s | 455.0 Knodes/s | 30.0% |
| LUA MP | 425.6 Knodes/s | 691.6 Knodes/s | 887.0 Knodes/s | 28.3% |
| Average | - | - | - | 37.2% |
The Blowfish test is an encryption/decryption test that implements the Blowfish algorithm. The algorithm itself is fairly cache intensive and features a good amount of integer math and bitwise logical operations. Here we see the hypothetical 1.3GHz Cortex A9 would be outpaced by Swift by around 35%, and you'll see similar gains in integer performance across the board.
The text compression/decompression tests use bzip2 to compress/decompress text files. As text files compress very well, these tests become great low level CPU benchmarks. The bzip2 front end does a lot of sorting, and is thus very branch heavy as well as heavy on logical operations (integer ALUs used here). We don't know much about the size of the data set here but I think it's safe to assume that given the short run times we're not talking about compressing/decompressing all of the text in Wikipedia. It's safe to assume that these tests run mostly out of cache. Here we see a 38 - 40% advantage over a perfectly scaled Cortex A9. The MP text decompression test shows the worst scaling out of the group at only 27.3% for Swift over a hypothetical 1.3GHz Cortex A9. It is entirely possible we're hitting some upper bound to simultaneous L2 cache accesses or some other memory limitation here.
The image compression/decompression tests are particularly useful as they just show JPEG compression/decompression performance, a very real world use case that's often seen in many applications (web browsing, photo viewer, etc...). The code here is once again very integer math heavy (adds, divs and muls), with some light branching. Performance gains in these tests, once again, span the 33 - 43% range compared to a perfectly scaled Cortex A9.
The final set of integer tests are scripted LUA benchmarks that find all of the prime numbers below 200,000. As with most primality tests, the LUA benchmarks here are heavy on adds/muls with a fair amount of branching. Performance gains are around 30% for the LUA tests.
On average, we see gains of around 37% over a hypothetical 1.3GHz Cortex A9. The Cortex A9 has two integer ALUs already, so it's possible (albeit unlikely) that Apple added a third integer ALU to see these gains. Another potential explanation is that the 3-wide front end allowed for better utilization of the existing two ALUs, although it's also unlikely that we see better than perfect scaling simply due to the addition of an extra decoder. If it's not more data being worked on in parallel, it's entirely possible that the data is simply getting to the execution units faster.
Let's keep digging.
Geekbench 2

| FP Tests | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| Mandelbrot | 223 MFLOPS | 362 MFLOPS | 397 MFLOPS | 9.6% |
| Mandelbrot MP | 438 MFLOPS | 712 MFLOPS | 766 MFLOPS | 7.6% |
| Dot Product | 177 MFLOPS | 288 MFLOPS | 322 MFLOPS | 12.0% |
| Dot Product MP | 353 MFLOPS | 574 MFLOPS | 627 MFLOPS | 9.3% |
| LU Decomposition | 171 MFLOPS | 278 MFLOPS | 387 MFLOPS | 39.3% |
| LU Decomposition MP | 348 MFLOPS | 566 MFLOPS | 767 MFLOPS | 35.6% |
| Primality | 142 MFLOPS | 231 MFLOPS | 370 MFLOPS | 60.3% |
| Primality MP | 260 MFLOPS | 423 MFLOPS | 676 MFLOPS | 60.0% |
| Sharpen Image | 1.35 Mpixels/s | 2.19 Mpixels/s | 4.85 Mpixels/s | 121% |
| Sharpen Image MP | 2.67 Mpixels/s | 4.34 Mpixels/s | 9.28 Mpixels/s | 114% |
| Blur Image | 0.53 Mpixels/s | 0.86 Mpixels/s | 1.96 Mpixels/s | 128% |
| Blur Image MP | 1.06 Mpixels/s | 1.72 Mpixels/s | 3.78 Mpixels/s | 119% |
| Average | - | - | - | 61.6% |
The FP tests for Geekbench 2 provide some very interesting data. While we saw consistent gains of 30 - 40% over our hypothetical 1.3GHz Cortex A9, Swift behaves much more unpredictably here. Let's see if we can make sense of it.
The Mandelbrot benchmark simply renders iterations of the Mandelbrot set. Here there's a lot of floating point math (adds/muls) combined with a fair amount of branching as the algorithm determines whether or not values are contained within the Mandelbrot set. It's curious that we don't see huge performance scaling here. Obviously Swift is faster than the 800MHz Cortex A9 in Apple's A5, but if the A5 were clocked at the same 1.3GHz and scaled perfectly we'd only see a 9.6% increase in performance from the new architecture. The Cortex A9 only has a single issue port to its floating point hardware that's also shared by its load/store hardware - this data alone would normally indicate that nothing has changed here when it comes to Swift. That would be a bit premature though...
The Dot Product test is simple enough: it computes the dot product of two FP vectors. Once again there are a lot of FP adds and muls here as the dot product is calculated. Overall performance gains are similarly muted if we scale up the Cortex A9's performance: a 9 - 12% increase at the same frequency doesn't sound like a whole lot for a brand new architecture.
The LU Decomposition tests factorize a 128 x 128 matrix into a product of two matrices. The sheer size of the source matrix guarantees that this test has to hit the 1MB L2 cache in both of the architectures we're talking about. The math involved is once again FP adds/muls, but the big change here appears to be the size of the dataset, and performance scales up accordingly. The LU Decomposition tests show 35 - 40% gains over our hypothetical 1.3GHz Cortex A9.
The Primality benchmarks perform the first few iterations of the Lucas-Lehmer test on a specific Mersenne number to determine whether or not it's prime. The math here is very heavy on FP adds, multiplies and sqrt functions. The data set shouldn't be large enough to require trips out to main memory, but we're seeing scaling that's even better than what we saw in the LU Decomposition tests. The Cortex A9 only has a single port for FP operations; it's very possible that Apple has added a second here in Swift. Why we wouldn't see similar speedups in the Mandelbrot and Dot Product tests could boil down to the particular instruction mix used in the Primality benchmark. The Geekbench folks also don't specify whether we're looking at FP32 or FP64 values, which could also be handled at different performance levels by the Swift architecture vs. the Cortex A9.
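For reference, the Lucas-Lehmer test itself is simple to state (this is the standard definition, not Geekbench's actual implementation, which we can't see). For a Mersenne number $M_p = 2^p - 1$:

$$s_0 = 4, \qquad s_{i+1} = (s_i^2 - 2) \bmod M_p, \qquad M_p \text{ is prime} \iff s_{p-2} \equiv 0 \pmod{M_p}$$

Each iteration is dominated by squaring a very large number, which maps nicely onto the FP multiply heavy behavior described above.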
The next two tests show the biggest gains of the FP suite. Both the sharpen and blur tests apply a convolution filter to an image stored in memory. The application of the filter itself is a combination of matrix multiplies, adds, divides and branches. The size of the data set likely hits the data cache a good amount.
We still haven't gained too much insight at this point. Some simple FP operations don't see a huge improvement in performance over a perfectly scaled Cortex A9, while others show tremendous gains. There seems to be a correlation with memory accesses, which makes sense given what we know about Swift's memory performance. Improved memory performance also lends some credibility to the earlier theory about why integer performance goes up by so much: data cache access latency could be significantly improved.
Custom Code to Understand a Custom Core
Section by Anand Shimpi
All Computer Engineers at NCSU had to take mandatory programming courses. Given that my dad is a Computer Science professor, I always had exposure to programming, but I never considered it my strong suit - perhaps me gravitating towards hardware was some passive rebellious thing. Either way I knew that in order to really understand Swift, I'd have to do some coding on my own. The only problem? I have zero experience writing Objective-C code for iOS, and not enough time to go through a crash course.
I had code that I wanted to time/execute in C, but I needed it ported to a format that I could easily run/monitor on an iPhone. I enlisted the help of a talented developer friend who graduated around the same time I did from NCSU, Nirdhar Khazanie. Nirdhar has been working on mobile development for years now, and he quickly made the garbled C code I wanted to run into something that executed beautifully on the iPhone. He gave me a framework where I could vary instructions as well as data set sizes, which made this next set of experiments possible. It's always helpful to know a good programmer.
So what did Nirdhar's app let me do? Let's start at the beginning. ARM's Cortex A9 has two independent integer ALUs; does Swift have more? To test this theory I created a loop of independent integer adds. The variables are all independent of one another, which should allow for some great instruction level parallelism. The code loops many times, which should make for some easily predictable branches. My code is hardly optimal but I did keep track of how many millions of adds were executed per second. I also reported how long each iteration of the loop took, on average.
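Nirdhar's harness isn't public, so here's a minimal sketch of the kind of loop I'm describing; the names, constants and iteration counts are illustrative rather than the actual test code:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    const uint64_t iterations = 100000000ULL;
    uint32_t a = 1, b = 2, c = 3, d = 4;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < iterations; i++) {
        // Four independent add chains: no result depends on another,
        // so a core with enough integer ALUs can retire several adds per cycle.
        a += 5; b += 7; c += 9; d += 11;
        // Empty asm barrier keeps the compiler from strength-reducing
        // the whole loop into four multiplies at -O2.
        __asm__ volatile("" : "+r"(a), "+r"(b), "+r"(c), "+r"(d));
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    // 4 adds per iteration, reported as millions of adds per second
    printf("%.1f M adds/s\n", 4.0 * iterations / seconds / 1e6);
    return 0;
}
```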
Integer Add Code

| | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| Integer Add Test | 207 MIPS | 336 MIPS | 369 MIPS | 9.8% |
| Integer Add Latency in Clocks | 23 clocks | - | 21 clocks | - |
The code here should be fairly bound by the integer execution path. We're showing a 9.8% increase in performance. Average latency is improved slightly by 2 clocks, but we're not seeing the sort of ILP increase that would come from having a third ALU that can easily be populated. The slight improvement in performance here could be due to a number of things. A quick look at some of Apple's own documentation confirms what we've seen here: Swift has two integer ALUs and can issue 3 operations per cycle (implying a 3-wide decoder as well). I don't know if the third decoder is responsible for the slight gains in performance here or not.
What about floating point performance? ARM's Cortex A9 only has a single issue port for FP operations which seriously hampers FP performance. Here I modified the code from earlier to do a bunch of single and double precision FP multiplies:
FP Multiply Code

| | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| FP Mul Test (single precision) | 94 MFLOPS | 153 MFLOPS | 143 MFLOPS | -7% |
| FP Mul Test (double precision) | 87 MFLOPS | 141 MFLOPS | 315 MFLOPS | 123% |
There's actually a slight regression if we look at single precision FP multiply performance, likely because performance wouldn't scale perfectly linearly from 800MHz to 1.3GHz. Notice what happens when we double the width of our FP multiplies though: performance goes up on Swift but stays roughly flat on the Cortex A9. Given the support for ARM's VFPv4 extensions, Apple likely has a second FP unit in Swift that can help with FMAs or improve double precision FP performance. It's also possible that Swift is a 128-bit wide NEON machine and my DP test compiles down to NEON code which enjoys the benefits of a wider engine. I ran the same test with FP adds and didn't notice any changes to the data above.
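Again, a rough sketch of what the FP variant looks like (my reconstruction, not the actual test code). Without -ffast-math the compiler isn't allowed to reassociate floating point math, so the multiply chains survive optimization intact:

```c
#include <stdio.h>
#include <stdint.h>

// Four independent multiply chains, single precision. The multipliers sit
// close enough to 1.0 that the running products never overflow.
static float fp32_mul(uint64_t n) {
    float a = 1.0f, b = 2.0f, c = 3.0f, d = 4.0f;
    for (uint64_t i = 0; i < n; i++) {
        a *= 1.0000001f; b *= 1.0000002f;
        c *= 1.0000003f; d *= 1.0000004f;
    }
    return a + b + c + d;
}

// The identical loop on 64-bit doubles; this is the case where Swift pulled
// well ahead of the scaled Cortex A9 in the table above.
static double fp64_mul(uint64_t n) {
    double a = 1.0, b = 2.0, c = 3.0, d = 4.0;
    for (uint64_t i = 0; i < n; i++) {
        a *= 1.0000001; b *= 1.0000002;
        c *= 1.0000003; d *= 1.0000004;
    }
    return a + b + c + d;
}

int main(void) {
    const uint64_t n = 50000000ULL;
    // Time each call as in the integer test: 4 multiplies per iteration.
    printf("%f %f\n", fp32_mul(n), fp64_mul(n));
    return 0;
}
```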
Sanity Check with Linpack & Passmark
Section by Anand Shimpi
Not completely trusting my own code, I wanted some additional data points to help understand the Swift architecture. I first turned to the iOS port of Linpack and graphed FP performance vs. problem size:
Even though I ran the benchmark for hundreds of iterations at each data point, the curves didn't come out as smooth as I would've liked them to. Regardless there's a clear trend. Swift maintains a huge performance advantage, even at small problem sizes which supports the theory of having two ports to dedicated FP hardware. There's also a much smaller relative drop in performance when going out to main memory. If you do the math on the original unscaled 4S scores you get the following data:
Linpack Throughput: Cycles per Operation

| | Apple Swift @ 1300MHz (iPhone 5) | ARM Cortex A9 @ 800MHz (iPhone 4S) |
|---|---|---|
| ~300KB Problem Size | 1.45 cycles | 3.55 cycles |
| ~8MB Problem Size | 2.08 cycles | 6.75 cycles |
| Increase | 43% | 90% |
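Converting between the cycle counts above and raw throughput is straightforward: cycles per operation is just clock frequency divided by operations per second. Inverting the small-problem numbers from the table gives a feel for the absolute gap:

$$\frac{1.3 \times 10^9\ \text{Hz}}{1.45\ \text{cycles/op}} \approx 897\ \text{MFLOPS (Swift)} \qquad \frac{0.8 \times 10^9\ \text{Hz}}{3.55\ \text{cycles/op}} \approx 225\ \text{MFLOPS (Cortex A9)}$$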
Swift is simply able to hide memory latency better than the Cortex A9. Concurrent FP/memory operations seem to do very well on Swift...
As the last sanity check I used Passmark, another general purpose iOS microbenchmark.
Passmark CPU Performance

| | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A5 Scaled (2 x Cortex A9 @ 1300MHz) | Apple A6 (2 x Swift @ 1300MHz) | Swift / A9 Perf Advantage @ 1300MHz |
|---|---|---|---|---|
| Integer | 257 | 418 | 614 | 47.0% |
| FP | 230 | 374 | 813 | 118% |
| Primality | 54 | 87 | 183 | 109% |
| String qsort | 1065 | 1730 | 2126 | 22.8% |
| Encryption | 38.1 | 61.9 | 93.5 | 51.0% |
| Compression | 1.18 | 1.92 | 2.26 | 17.9% |
The integer math test uses a large dataset and performs a number of add, subtract, multiply and divide operations on the values. The dataset measures 240KB per core, which is enough to stress the L2 cache of these processors. Note the 47% increase in performance over a scaled Cortex A9.
The FP test is identical to the integer test (including size) but it works on 32 and 64-bit floating point values. The performance increase here despite facing the same workload lends credibility to the theory that there are multiple FP pipelines in Swift.
The Primality benchmark is branch heavy and features a lot of FP math and compares. Once again we see huge scaling compared to the Cortex A9.
The qsort test features integer math and is very branch heavy. The memory footprint of the test is around 5MB, but the gains here aren't as large as we've seen elsewhere. It's possible that Swift features a much larger branch mispredict penalty than the A9.
The Encryption test works on a very small dataset that can easily fit in the L1 cache but is very heavy on the math. Performance scales very well here, almost mirroring the integer benchmark results.
Finally, the compression test shows us the smallest gains once you take into account Swift's higher operating frequency. There's not much more to conclude here other than that Swift won't always deliver large clock-for-clock gains over the previous Cortex A9.
Apple's Swift: Visualized
Section by Anand Shimpi
Based on my findings on the previous pages, as well as some additional off-the-record data, this is what I believe Swift looks like at a high level:
Note that most of those blocks are just placeholders as I don't know how they've changed from Cortex A9 to Swift, but the general design of the machine is likely what you see above. Swift moves from a 2-wide to a 3-wide machine at the front end. It remains a relatively small out-of-order core, but increases the number of execution ports from 3 in Cortex A9 to 5. Note the dedicated load/store port, which would help explain the tremendous gains in high bandwidth FP performance.
I asked Qualcomm for some additional details on Krait; unfortunately they are being quite tight-lipped about their architecture. Krait is somewhat similar to Swift in that it has a 3-wide front end, however it only has 4 ports to its 7 execution units. Qualcomm wouldn't give me specifics on what those 7 units were or how they were shared by those 4 ports. It's a shame that Intel will tell me just how big Haswell's integer and FP register files are 9 months before launch, but its competitors in the mobile SoC space are worried about sharing high level details of architectures that have been shipping for half a year.
Apple's Swift core is a wider machine than the Cortex A9, and seemingly on par with Qualcomm's Krait. How does ARM's Cortex A15 compare? While the front end remains 3-wide, ARM claims a doubling of fetch bandwidth compared to Cortex A9. The A15 is also able to execute more types of instructions out of order, although admittedly we don't know Swift's capabilities in this regard. There's also a loop cache at the front end, something that both AMD and Intel have in their modern architectures (again, it's unclear whether or not Swift features something similar). ARM moves to three dedicated issue pools feeding 8 independent pipelines on the execution side. There are dedicated load and store pipelines, two integer ALU pipes, two FP/NEON pipes, one pipe for branches and one for all multiplies/divides. The Cortex A15 is simply a beast, and it should be more power hungry as a result. It remains to be seen how the first Cortex A15 based smartphone SoCs will compare to Swift/Krait in terms of power. ARM's big.LITTLE configuration was clearly designed to help mitigate the issues that the Cortex A15 architecture could pose from a power consumption standpoint. I suspect we haven't seen the end of NVIDIA's companion core either.
At a high level, it would appear that ARM's Cortex A15 is still a bigger machine than Swift. Swift instead feels like Apple's answer to Krait. The release cadence Apple is on right now almost guarantees that it will be a CPU generation behind in the first half of next year if everyone moves to Cortex A15 based designs.
Apple's Swift: Pipeline Depth & Memory Latency
Section by Anand Shimpi
For the first time since the iPhone's introduction in 2007, Apple is shipping a smartphone with a CPU clock frequency greater than 1GHz. The Cortex A8 in the iPhone 3GS hit 600MHz, while the iPhone 4 took it to 800MHz. With the iPhone 4S, Apple chose to maintain the same 800MHz operating frequency as it moved to dual-Cortex A9s. Staying true to its namesake, Swift runs at a maximum frequency of 1.3GHz as implemented in the iPhone 5's A6 SoC. Note that it's quite likely the 4th generation iPad will implement an even higher clocked version (1.5GHz being an obvious target).
Clock speed alone doesn't tell us everything we need to know about performance. Deeper pipelines can easily boost clock speed but come with steep penalties for mispredicted branches. ARM's Cortex A8 featured a 13 stage pipeline, while the Cortex A9 moved down to only 8 stages while maintaining similar clock speeds. Reducing pipeline depth without sacrificing clock speed contributed greatly to the Cortex A9's tangible increase in performance. The Cortex A15 moves to a fairly deep 15 stage pipeline, while Krait is a bit more conservative at 11 stages. Intel's Atom has the deepest pipeline (ironically enough) at 16 stages.
To find out where Swift falls in all of this I wrote two different codepaths. The first featured an easily predictable branch that should almost always be taken. The second codepath featured a fairly unpredictable branch. Branch predictors work by looking at branch history - branches with predictable history should be, well, easy to predict while the opposite is true for branches with a more varied past. This time I measured latency in clocks for the main code loop:
Branch Prediction Code

| | iPhone 3GS (Cortex A8 @ 600MHz) | Apple A5 (2 x Cortex A9 @ 800MHz) | Apple A6 (2 x Swift @ 1300MHz) |
|---|---|---|---|
| Easy Branch | 14 clocks | 9 clocks | 12 clocks |
| Hard Branch | 70 clocks | 48 clocks | 73 clocks |
The hard branch involves more compares and some division (I'm basically branching on odd vs. even values of an incremented variable) so the loop takes much longer to execute, but note the dramatic increase in cycle count between the Cortex A9 and Swift/Cortex A8. If I'm understanding this data correctly it looks like the mispredict penalty for Swift is around 50% longer than for ARM's Cortex A9, and very close to the Cortex A8. Based on this data I would peg Swift's pipeline depth at around 12 stages, very similar to Qualcomm's Krait and just shy of ARM's Cortex A8.
Note that despite the significant increase in pipeline depth Apple appears to have been able to keep IPC, at worst, constant (remember back to our scaled Geekbench scores - Swift never lost to a 1.3GHz Cortex A9). The obvious explanation there is a significant improvement in branch prediction accuracy, which any good chip designer would focus on when increasing pipeline depth like this. Very good work on Apple's part.
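To make the two codepaths concrete, here's a reconstruction under my own naming (illustrative, not the original test code). The easy branch resolves the same way on virtually every iteration; the hard branch's direction depends on odd vs. even values of a divided, accumulating variable rather than a short repeating pattern:

```c
#include <stdint.h>

// Easy branch: taken on every iteration but the last. Any history-based
// predictor nails this almost immediately.
uint64_t easy_branch(uint64_t n) {
    uint64_t sum = 0;
    for (uint64_t i = 0; i < n; i++) {
        if (i < n - 1) sum += i;
        else           sum -= i;
    }
    return sum;
}

// Hard branch: extra compares plus a division, with the branch direction
// driven by the data itself, so the predictor's history is far less useful.
// Each mispredict costs a pipeline flush, which is what the cycle counts
// above are measuring.
uint64_t hard_branch(uint64_t n, uint64_t divisor) {
    uint64_t sum = 0, x = 0;
    for (uint64_t i = 0; i < n; i++) {
        x += i;
        if ((x / divisor) % 2 == 0) sum += x;
        else                        sum -= x;
    }
    return sum;
}
```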
The remaining aspect of Swift that we have yet to quantify is memory latency. From our iPhone 5 performance preview we already know there's a tremendous increase in memory bandwidth to the CPU cores, but as the external memory interface remains at 64-bits wide all of the changes must be internal to the cache and memory controllers. I went back to Nirdhar's iOS test vehicle and wrote some new code, this time to access a large data array whose size I could vary. I created an array of a finite size and added numbers stored in the array. I increased the array size and measured the relationship between array size and code latency. With enough data points I should get a good idea of cache and memory latency for Swift compared to Apple's implementation of the Cortex A8 and A9.
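A minimal sketch of that test follows (my reconstruction; buffer sizes and pass counts are illustrative). The number to watch is time per element as a function of working set size, with the knees in the curve landing roughly at the 32KB L1 and 1MB L2 boundaries:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

// Sum an array of a given size over several passes and report the average
// time per element. Once the working set outgrows a cache level, the
// average starts to reflect the latency of the next level down.
static double ns_per_element(size_t count, int passes) {
    uint32_t *arr = malloc(count * sizeof *arr);
    if (!arr) return 0.0;
    for (size_t i = 0; i < count; i++) arr[i] = (uint32_t)i;

    volatile uint64_t sink = 0;  // keeps the sums live so the loop isn't elided
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < passes; p++) {
        uint64_t sum = 0;
        for (size_t i = 0; i < count; i++) sum += arr[i];
        sink += sum;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(arr);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / ((double)count * passes);
}

int main(void) {
    // Sweep from well inside L1 to well past L2.
    for (size_t kb = 4; kb <= 16384; kb *= 2)
        printf("%6zu KB: %.3f ns/element\n",
               kb, ns_per_element(kb * 1024 / sizeof(uint32_t), 50));
    return 0;
}
```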
At relatively small data structure sizes Swift appears to be a bit quicker than the Cortex A8/A9, but there's near convergence around 4 - 16KB. Take a look at what happens once we grow beyond the 32KB L1 data cache of these chips. Swift manages around half the latency for running this code as the Cortex A9 (the Cortex A8 has a 256KB L2 cache so its latency shoots up much sooner). Even at very large array sizes Swift's latency is improved substantially. Note that this data is substantiated by all of the other iOS memory benchmarks we've seen. A quick look at Geekbench's memory and stream tests shows huge improvements in bandwidth utilization:
Couple the dedicated load/store port with a much lower latency memory subsystem and you get 2.5 - 3.2x the memory performance of the iPhone 4S. It's the changes to the memory subsystem that really enable Swift's performance.
Six Generations of iPhones: Performance Compared
Section by Anand Shimpi
Cross platform smartphone benchmarks are interesting, but they do come with their own sets of issues. Before we get to that analysis however, let's look at how the iPhone's performance has improved over the past six generations. Luckily Brian has a set of all of the iPhones so he was able to run a few tests on all of the devices, each running the latest supported OS.
We'll start with SunSpider 0.9.1, our trusty javascript performance test:
The transition from iPhone to iPhone 3G shows you just how much additional performance you can squeeze out of simply a software change. There's likely even more that could be squeezed out of that ARM11 platform, unfortunately newer versions of Safari/iOS aren't supported on the iPhone 3G so we're left with a runtime that's around 37x the length of a single run on the iPhone 5.
The rest of the devices support and run iOS 6, so we're at least on a level software playing field. The performance boost from one generation to the next is still quite significant. Going by this chart alone, the best balance of minimal upgrades and maximum perceived improvement would be to jump from the original iPhone to the 3GS, then again from the 3GS to the 5.
The BrowserMark results tell a similar story. The jump from the ARM11 based iPhone/iPhone 3G to the 3GS running iOS 6 is huge. Both the 4S and 5 offer doublings in performance, albeit for different reasons. The 4S delivered a doubling thanks to a doubling of core count and a move to the Cortex A9, while the iPhone 5 doubled performance through a much higher clock speed and microarchitectural improvements.
Finally we have Geekbench 2, which only runs on the iOS 6 supported devices so we say goodbye to the original iPhone and iPhone 3G:
None of the jumps looks as dramatic as the move to the iPhone 5, but we already know why. The Swift CPU architecture does a great job improving memory performance, which shows up quite nicely in a lot of the Geekbench 2 subtests.
On the PC side we often talk about 20% performance improvements from one generation to the next being significant. It's clear that the mobile SoC space is still operating along a hyper Moore's Law curve. The rate of progress will eventually slow down, but I don't see that happening for at least another couple generations. The move to ARM's Cortex A15 will be met with another increase in performance (and a similarly large set of power challenges), and whatever comes next will push smartphones into a completely new category of performance.
General Purpose Performance
Section by Anand Shimpi
Apple's philosophy on increasing iPhone performance is sort of a mix between what Microsoft is doing with Windows Phone 7/8 and what the high-end Android smartphone makers have been doing. On the software side Apple does as much as possible to ensure its devices feel fast, although I notice a clear tendency for newer iOS releases to pretty much require the latest iPhone hardware in order to maintain that speedy feel over the long haul. When it comes to hardware, Apple behaves very much like a high-end Android smartphone vendor by putting the absolute fastest silicon on the market in each generation of iPhone. The main difference here is that Apple controls both the software stack and the silicon, so it's able to deliver a fairly well integrated package each year. It's a costly operation to run, one that is enabled by Apple's very high profit margins. Ironically enough, if Apple's competitors were to significantly undercut it on price (it doesn't cost $599 - $799 to build a modern smartphone), I don't know that the formula would keep working in the long run: Apple needs high margins to pay for OS, software and silicon development, all of which are internalized by Apple and none of which burden most of its competitors.
Good cross platform benchmarks still don't really exist on smartphones these days. We're left describing experience with words and trying to quantify performance differences using web based benchmarks, neither of which is ideal but both of which will have to do for now. The iPhone 5 experience compared to the 4S is best explained as just being snappier. Apps launch faster, scrolling around iOS Maps is smoother, web pages take less time to load and the occasional CPU/ISP bound task (e.g. HDR image processing) is significantly quicker. If you're the type of person who appreciates improvements in response time, the iPhone 5 delivers.
How does it compare to the current crop of high-end Android smartphones? I would say that the 5 generally brings CPU performance up to par with the latest and greatest in the Android camp, and in some cases surpasses them slightly. It's difficult to make cross platform comparisons because of huge differences in the OSes, as well as the challenge of separating out tasks that are CPU bound from those that simply benefit from a higher rendered frame rate.
I took a cross section of various web based benchmarks and looked at their performance to help quantify where the iPhone 5 stands in the world. First up are the RIABench focus tests: javascript benchmarks that stress various compute bound tasks. I used Chrome on all Android devices to put their best foot forward.
This first test shows just how slow the 800MHz Cortex A9s in the iPhone 4S were compared to the latest and greatest from Qualcomm and NVIDIA. At roughly half the clock speed of those competitors, the 4S was just much slower at compute bound tasks. Apple was able to mask as much of that as possible with smooth UI rendering performance, but there was obviously room for improvement. The iPhone 5 delivers just that. It modernizes the iPhone's performance and inches ahead of the Tegra 3/Snapdragon S4 platforms. Only Intel's Atom Z2460 in the Motorola RAZR i is able to outperform it.
Next up is Kraken, a seriously heavy javascript benchmark built by Mozilla. Kraken focuses on forward looking applications that are potentially too slow to run in modern browsers today. The result is much longer run times than anything we've seen thus far, and a very CPU heavy benchmark:
The standings don't change much here. The iPhone 4S is left in the dust by the iPhone 5, which steps ahead of the latest NVIDIA/Qualcomm based Android devices. The Apple advantage here is just over 10%. Once again, Intel's Atom Z2460 takes the clear lead.
In our iPhone 5 Performance Preview we looked at Google's V8 javascript test as an alternative to SunSpider. The more data points the merrier:
Here the iPhone 5 manages to hold onto its second place position, but just barely. Once more, the Atom based RAZR i maintains the performance lead.
Google's Octane benchmark includes all 8 of the V8 tests but adds 5 new ones, including a PDF reader, a 3D Bullet physics engine and a portable 3D game console emulator, all built in javascript.
The 5 pulls ahead of the HTC One X here and maintains a healthy 31% lead, but once again falls short of the RAZR i.
We of course included our SunSpider and BrowserMark tests, both of which show the iPhone 5 very favorably:
Performance obviously depends on workload, but it's clear the iPhone 5 is a big step forward from the 4S and tends to outperform the latest ARM based Android smartphones. As the rest of the ARM based SoC players move to Cortex A15 designs they should be able to deliver faster devices in the first half of 2013.
Intel's current position when it comes to CPU performance is interesting. A move to a dual-core design could be enough to remain performance competitive with 2013 ARM based SoCs. Remembering that Atom is a 5 year old CPU core that performs at the level of a 10 year old mainstream notebook CPU puts all of this progress in perspective. Intel's biggest issue going forward (other than getting Atom into more tier 1 phone designs) is going to be improving GPU performance. Luckily it seems as if it has the roadmap to do just that with the Atom Z2580.
GPU Analysis/Performance
Section by Anand Shimpi
Understanding the A6's GPU architecture is a walk in the park compared to what we had to do to get a high level understanding of Swift. The die photos give us a clear indication of the number of GPU cores and the width of the memory interface, while performance and release timing fill in the rest of the blanks. Apple hasn't backed off of driving GPU performance in its smartphones, increasing GPU compute horsepower by 2x here. Rather than double up GPU core count, Apple adds a third PowerVR SGX 543 core and runs all three at a higher frequency than in the A5. The result is roughly the same graphics horsepower as the four-core PowerVR SGX 543MP4 in Apple's A5X, but with a smaller die footprint.
As a recap, Imagination Technologies' PowerVR SGX543 GPU core features four USSE2 pipes. Each pipe has a 4-way vector ALU that can crank out 4 multiply-adds per clock, which works out to 16 MADs, or 32 FLOPs, per clock per core. Imagination lets the customer stick multiple 543 cores together, which scales compute performance linearly.
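Those per-pipe figures make the GFLOPS column in the table below easy to sanity check; a quick illustrative helper using the numbers above (not vendor code):

```c
/* Peak FP throughput for an N-core SGX543, per the figures above:
   4 USSE2 pipes x 4-wide MAD = 16 MADs/clock per core, and each MAD
   counts as 2 FLOPs. */
static double sgx543_gflops(int cores, double mhz) {
    return cores * 16 * 2 * mhz / 1000.0;
}
/* sgx543_gflops(2, 200) == 12.8 (the 543MP2 entry below),
   sgx543_gflops(3, 200) == 19.2, sgx543_gflops(4, 200) == 25.6. */
```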
SoC die size however dictates memory interface width, and it's clear that the A6 is significantly smaller in that department than the A5X, which is where we see the only tradeoff in GPU performance: the A6 maintains a 64-bit LPDDR2 interface compared to the 128-bit LPDDR2 interface in the A5X. The tradeoff makes sense given that the A5X has to drive 4.3x the number of pixels that the A6 has to drive in the iPhone 5. At high resolutions, GPU performance quickly becomes memory bandwidth bound. Fortunately for iPhone 5 users, the A6's 64-bit LPDDR2 interface is a good match for the comparatively low 1136 x 640 display resolution. The end result is 3D performance that looks a lot like the new iPad, but in a phone:
Mobile SoC GPU Comparison | |||||||||||
Adreno 225 | PowerVR SGX 540 | PowerVR SGX 543MP2 | PowerVR SGX 543MP3 | PowerVR SGX 543MP4 | Mali-400 MP4 | Tegra 3 | |||||
SIMD Name | - | USSE | USSE2 | USSE2 | USSE2 | Core | Core | ||||
# of SIMDs | 8 | 4 | 8 | 12 | 16 | 4 + 1 | 12 | ||||
MADs per SIMD | 4 | 2 | 4 | 4 | 4 | 4 / 2 | 1 | ||||
Total MADs | 32 | 8 | 32 | 48 | 64 | 18 | 12 | ||||
GFLOPS @ 200MHz | 12.8 GFLOPS | 3.2 GFLOPS | 12.8 GFLOPS | 19.2 GFLOPS | 25.6 GFLOPS | 7.2 GFLOPS | 4.8 GFLOPS |
We ran through the full GLBenchmark 2.5 suite to get a good idea of GPU performance. The results below are largely unchanged from our iPhone 5 Performance Preview, with the addition of the Motorola RAZR i and RAZR M. I also re-ran the iPad results on iOS 6, although I didn't see major changes there.
We'll start out with the raw theoretical numbers beginning with fill rate:
The iPhone 5 nips at the heels of the 3rd generation iPad here, at 1.65 GTexels/s. Performance is more than double that of the iPhone 4S, and even the Galaxy S 3 can't come close.
Triangle throughput is similarly strong:
Take the iPhone 5's lower native resolution into account and it's actually faster than the new iPad here, but normalize using GLBenchmark's offscreen mode and the A5X and A6 look identical:
The iPhone 5 does very well in the fragment lit texture test; once again, taking into account the much lower resolution of the 5's display, performance is significantly better than on the iPad:
The next set of results are the gameplay simulation tests, which attempt to give you an idea of what game performance based on Kishonti's engine would look like. These tests tend to be compute monsters, so they'll make a great stress test for the iPhone 5's new GPU:
Egypt HD was the great equalizer when we first met it, but the iPhone 5 does very well here. The biggest surprise however is just how well the Qualcomm Snapdragon S4 Pro with Adreno 320 GPU does by comparison. LG's Optimus G, a device Brian flew to Seoul, South Korea to benchmark, is hot on the heels of the new iPhone.
When we run everything at 1080p the iPhone 5 looks a lot like the new iPad, and delivers about 2x the performance of the Galaxy S 3. Here, LG's Optimus G actually outperforms the iPhone 5! It looks like Qualcomm's Adreno 320 is quite competent in a phone. Note just how bad Intel's Atom Z2460 is here; the PowerVR SGX 540 is simply unacceptable for a modern high-end SoC. I hope Intel's slow warming up to integrating fast GPUs on die doesn't plague its mobile SoC lineup for much longer.
The Egypt classic tests are much lighter workloads and are likely a good indication of the type of performance you can expect from many games today available on the app store. At its native resolution, the iPhone 5 has no problems hitting the 60 fps vsync limit.
Remove vsync, render at 1080p and you see what the GPUs can really do. Here the iPhone 5 pulls ahead of the Adreno 320 based LG Optimus G and even slightly ahead of the new iPad.
Once again, looking at GLBenchmark's on-screen and offscreen Egypt tests we can get a good idea of how the iPhone 5 measures up to Apple's claims of 2x the GPU performance of the iPhone 4S:
Removing the clearly vsync limited result from the on-screen Egypt Classic test, the iPhone 5 performs at about 2.26x the speed of the 4S. Include that result and you're still looking at a 1.95x average. As we've seen in the past, these gains don't typically translate into dramatically higher frame rates, but rather into games with better visual quality.
Increased Dynamic Range: Understanding the Power Profile of Modern SoCs
Section by Anand Shimpi
The iPhone 4S greatly complicated the matter of smartphone power consumption. With the A5 SoC Apple introduced a much wider dynamic range of power consumption to the iPhone than we were previously used to. Depending on the workload, the A5 SoC could either use much more power than its predecessor or enjoy decreased overall energy usage. I began our battery life analysis last time with some graphs showing the power savings realized by a more power hungry, faster CPU.
The iPhone 5 doesn't simplify things at all. I believe the days of straightforward discussions about better/worse battery life are long gone. We are now firmly in the era of expanded dynamic range when it comes to smartphone power consumption. What do I mean by that? The best way to explain is to look at some data. The graphs below show total device power consumption over time for a handful of devices running the Mozilla Kraken javascript benchmark. Kraken is multithreaded and hits the CPU cores fairly hard. The power profile of the benchmark ends up being very similar to loading a very js-heavy web page, just for a longer period of time. All of the device displays were calibrated to 200 nits, although obviously larger displays can consume more power.
Let's start out by just looking at the three most recent iPhone generations:
The timescale for this chart is just how long the iPhone 4 takes to complete the Kraken benchmark. The iPhone 4/4S performance gap feels a lot bigger now going back to the 4 than it did when the 4S launched, but that's how it usually seems to work. Note how tight the swings are between min and max power consumption on the iPhone 4 during the test. As a standalone device you might view the iPhone 4 as being fairly variable when it comes to power consumption but compared to the 4S and 5 it might as well be a straight line.
The 4S complicated things by consuming tangibly more power under load than the 4, but being fast enough to complete tasks in appreciably less time. In the case of this Kraken run, the 4S draws more power than the 4; however, it's able to go to sleep quicker and thus consume less energy overall. If we extended the timeline for the iPhone 4 significantly beyond the end of its benchmark run we'd see the 4S eventually come out ahead in battery life, as it's able to race to sleep quicker. The reality is that with more performance comes increased device usage - in other words, it's highly unlikely that with a 50% gain in performance users are simply going to continue to use their smartphone the same way as they would a slower device. Usage (and thus workload) doesn't remain constant; it's somewhat related to response time.
The iPhone 5 stretches device level power consumption even further. With a larger display and much more powerful CPU, it can easily draw 33% more power than the 4S under load, on average. Note the big swings in power consumption during the test. The A6 SoC appears to be more aggressive in transitioning down to idle states than any previous Apple SoC, which makes sense given how much higher its peak power consumption can be. Looking at total energy consumed however, the iPhone 5 clearly has the ability to be more power efficient on battery. The 5 drops down to iPhone 4 levels of idle power consumption in roughly half the time of the iPhone 4S. Given the same workload that doesn't run indefinitely (or nearly indefinitely), the iPhone 5 will outlast the iPhone 4S on a single charge. Keep the device pegged, however, and it will die quicker.
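The distinction between power and energy is the crux of all of this: given power samples over a run, total energy is just the area under the curve, which is how a fast-but-hungry SoC can come out ahead of a slow-but-steady one. A minimal sketch of that comparison, with hypothetical sample traces standing in for real measurements:

```c
#include <stdio.h>

/* Total energy (joules) from device power samples (watts) taken at a
   fixed interval dt (seconds), via trapezoidal integration. A faster
   SoC can post higher peak watts yet lower total joules if it races
   to sleep sooner. */
static double energy_joules(const double *watts, int n, double dt) {
    double e = 0.0;
    for (int i = 1; i < n; i++)
        e += 0.5 * (watts[i - 1] + watts[i]) * dt;
    return e;
}

int main(void) {
    /* Hypothetical traces: the fast device finishes and drops to idle,
       the slow one stays loaded for nearly the whole window. */
    double fast[] = { 3.0, 3.1, 3.0, 0.3, 0.3, 0.3, 0.3, 0.3 };
    double slow[] = { 2.0, 2.1, 2.0, 2.1, 2.0, 2.1, 2.0, 0.3 };
    printf("fast: %.2f J, slow: %.2f J\n",
           energy_joules(fast, 8, 1.0), energy_joules(slow, 8, 1.0));
    return 0;
}
```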
Out of curiosity I wanted to toss in a couple of other devices based on NVIDIA and Qualcomm silicon to see how things change. I grabbed both versions of the HTC One X:
The Tegra 3 based One X actually performs very well in this test, but its peak power consumption is significantly worse than everything else. That makes sense given Tegra 3's four ARM Cortex A9 cores, built on a 40nm G process and running at high clock speeds.
The 28nm Snapdragon S4 (dual-core Krait) based One X gives us some very interesting results. Peak power consumption looks identical to the iPhone 5; however, Apple is able to go into deeper sleep states than HTC can with its S4 platform. Performance is a little worse here, but that could be a combination of SoC and software/browser. I used Chrome for all of the tests so it should be putting Android's best foot forward, but the latest update to Safari in iOS 6 really did boost javascript performance to almost untouchable levels.
At the end of the day, the power profile of the iPhone 5 appears to be very close to that of a modern Snapdragon S4 based Android smartphone. Any battery life gains that Apple sees are strictly the result of software optimizations that lead to better performance, the ability to push aggressively to lower idle power states, or both. It shouldn't be very surprising that these sound like a lot of the same advantages Apple has when talking about Mac battery life as well. Don't let the CPU cores go to sleep and Apple behaves similarly to other device vendors; it's really in idle time or periods of lighter usage that Apple is able to make up a lot of ground.
There's one member of the modern mobile SoC market that we haven't looked at thus far: Intel's Medfield. The data below isn't directly comparable to the data above; my measurement methods were a little different, but the idea is similar - we're looking at device level power consumption over time while Kraken runs. Here I'm only focusing on the latest and greatest: the Atom based Motorola RAZR i, the Snapdragon S4 based Droid RAZR M and the iPhone 5. The RAZR i/M are nearly identical devices, making this the perfect power profile comparison of Atom vs. Snapdragon S4. The RAZR i is also the first Atom Z2460 based part to turbo up to 2.0GHz.
Very interesting. Atom is the only CPU that can complete the Kraken benchmark in less time than Apple's Swift. Peak power consumption is definitely higher than both the Qualcomm and Apple devices, although Intel's philosophy is likely that the added power usage is worth it given the quicker transition to idle. Note that Atom is able to drive to a slightly lower idle level than the Snapdragon S4, although the Swift based iPhone 5 can still go lower.
At least based on this data, it looks like Intel is the closest to offering a real competitor to Apple's own platform from a power efficiency standpoint. We're a couple quarters away from seeing the next generation of mobile SoCs so anything can happen next round, but I can't stress enough that the x86 power myth has been busted at this point.
I will add that despite Intel's performance advantage here, I'm not sure it justifies the additional peak power consumption. The RAZR i ends up being faster than the iPhone 5 but it draws substantially more power in doing so, and the time savings may not necessarily offset that. We'll see what happens when we get to our battery life tests.
Battery Life
Section by Anand Shimpi
At the start of our iPhone 4S battery life analysis I mentioned that I wasn't happy with the current state of our battery life benchmarks. The first incarnation of our smartphone battery life suite was actually a port of what I created to test battery life on Mac notebooks years ago. The Mac suite has evolved over time, and we've made similar evolutions to the smartphone suite - just at a less aggressive pace. The data on the previous page showed just how good Apple is at driving down idle power consumption, and through some software optimization Apple got very good at winning our battery life tests. The data was accurate, but it stopped being representative of reality.
Going into the iPhone 5 review I knew we needed to change the suite. After testing a number of options (and using about 16.5GB of cellular data in the process) we ended up on an evolution of the battery life test we deployed last year for our new tablet suite. The premise is the same: we regularly load web pages at a fixed interval until the battery dies (all displays are calibrated to 200 nits as always). The differences between this test and our previous one boil down to the amount of network activity and CPU load.
On the network side, we've done a lot more to prevent aggressive browser caching of our web pages. Some caching is important, otherwise you end up with a baseband test, but it's clear that what we had previously wasn't working. Brian made sure that despite the increased network load, the baseband still had the opportunity to enter its idle state during the course of the benchmark.
We also increased CPU workload along two vectors: we decreased pause time between web page loads and we shifted to full desktop web pages, some of which are very js heavy. The end result is a CPU usage profile that mimics constant, heavy usage beyond just web browsing. Everything you do on your smartphone ends up causing CPU usage peaks - opening applications, navigating around the OS and of course using apps themselves. Our 5th generation web browsing battery life test should map well to more types of smartphone usage, not just idle content consumption of data from web pages.
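The harness itself isn't something we publish, but the control flow is as simple as it sounds. A skeleton of the loop, with load_page() and battery_percent() as hypothetical stand-ins for the real plumbing (the actual suite also handles display calibration, cache control and logging):

```c
#include <unistd.h>

/* Hypothetical stand-ins; neither is a real API. */
extern int  battery_percent(void);
extern void load_page(const char *url);

void rundown(const char **urls, int n_urls, unsigned pause_seconds) {
    int i = 0;
    while (battery_percent() > 0) {
        load_page(urls[i++ % n_urls]);  /* full desktop pages, some js-heavy */
        sleep(pause_seconds);           /* fixed pause, shorter than before */
    }
}
```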
As always we test across multiple air interfaces (3G, 4G LTE, WiFi), but due to the increased network load we actually find that on a given process technology we see an increase in battery life on faster network connections. The why is quite simple to understand: the faster a page is able to fully render, the quicker all components can drive down to their idle power states.
The downside to starting with a new battery life test is that we don't have a wealth of older data to compare to. I did my best to run whatever we had access to at the time, but there simply aren't that many devices in these charts compared to our older ones. The data below may not look like a lot, but it's the result of running over 300 hours of battery life tests to not only understand how these devices do under load but also to find out the best test to use as the cornerstone of our new suite.
We'll start the investigation on WiFi. Where supported we used 5GHz 802.11n, otherwise 2.4GHz:
The iPhone 5 manages to match Apple's estimates, just breaking the 10 hour barrier. HTC's One X based on the Snapdragon S4 comes very close however. Although the One X is equipped with a larger battery, it also has a bigger screen and a slightly more power hungry SoC to feed.
The iPhone 4S is measurably worse here. Keep in mind that the workload is constant across all of the devices here; if you use the iPhone 5's faster performance to browse more web pages or move through your apps quicker, you may not see an improvement. Worst case, you may even see a regression in battery life. That's the downside to the increased dynamic range in power consumption that we've been getting for two generations now.
Although this isn't the place for an Intel/Qualcomm comparison, it is important to note that the Atom Z2460 based RAZR i manages to last 17% longer on a single charge than the nearly identical, but Qualcomm S4 based RAZR M.
Next let's look at battery life when we switch to the cellular networks:
The non-LTE phones see a sharp drop in battery life. At least at 28nm, the slower air interfaces simply have to remain active (and drawing power) for longer, which results in measurably worse battery life. Again, the thing to be careful of here is that there's usually a correlation between network speed and how aggressively you use the device. With a workload that scaled with network speed you might see closer numbers between 3G and 4G LTE.
HTC's One X continues to do very well on LTE, coming the closest to the iPhone 5. I believe what we're seeing here is really Apple's idle power management building up a small but tangible advantage.
On 3G the iPhone 5 actually dies slightly quicker than the iPhone 4S, although run to run variance can cause the two to flip around in standings. Our iPhone 4 datapoint featured an older battery (both the 4S and 5 batteries were < 30 days old) so it's unclear how a brand new 4 would compare.
The RAZR i does quite well here on 3G. Despite being on a slower network, Intel's platform appears to do a good job of aggressively pushing down to idle. Once again Intel maintains about a 19% battery life advantage over the S4 based RAZR M. The RAZR i and the HTC One X do better than the iPhone 5 on 3G, which supports our theory of idle power consumption being a big reason the iPhone 5 does so well on faster networks.
While our new web browsing battery life tests do a good job of showing the impact of network, display and CPU on battery life, they do little to stress the GPU. Thankfully our friends at Kishonti gave us a shiny new tool to use: GLBenchmark 2.5.1 features a GPU rundown battery life test. The standard tests run Egypt and Egypt HD indefinitely until the battery dies. We standardized on Egypt HD at the device's native resolution with a 30 fps cap. All of the displays were calibrated to 200 nits as usual.
Here the iPhone 4S has a tangible advantage in battery life over the 5. The move to 32nm can only do so much; with many more transistors switching at a higher frequency, the A6 SoC ends up drawing tangibly more power than the A5 in the iPhone 4S and delivers a shorter run on battery. The gap isn't huge (the 5 delivers 92% of the battery life of the 4S), but it's still a regression. Against the Android competition, however, the iPhone 5 does quite well; despite being faster it's able to outlast the S4 and Tegra 3 based devices. The explanation is rather simple: capped to 30 fps, the iPhone 5's GPU likely has the ability to drop down to an idle state for a brief period in between rendering frames. The other devices can't hit 30 fps in this test and as a result have to run at full tilt throughout the entire benchmark. The RAZR i is the only exception to the rule, but it is considerably slower than everything else here (averaging below 8 fps), which could explain the very high result.
Moving on we have our WiFi hotspot test, which measures how long the device can last acting as a hotspot for a wirelessly tethered notebook. Our wireless hotspot test is entirely network bound by definition. Here I'm including two sets of results: our most up to date LTE hotspot battery life tests, as well as the chart we included in our latest iPad review. In both cases the iPhone 5 does relatively well, lasting just over 5 hours as an LTE hotspot on a single charge. In these tests, devices with significantly larger batteries come in very handy.
Our final battery life test is our standard call time test. In this test we play audio through two phones (one of which is the phone being tested) and measure the call time until the battery is completely drained. The display is turned off, simulating an actual call.
The iPhone 5 falls just short of the 4S in our call time test. There's really no major improvement here as far as we can tell, although it's not clear how much additional work the iPhone 5 is doing with its additional noise cancelling features. If talk time is of the utmost importance to you, you'll want to look at some of the phones with much larger batteries. The Droid RAZR MAXX remains the king of the hill as far as talk time is concerned.
Battery Life Conclusions
If we take a step back and look at the collection of results from our battery life tests, the iPhone 5 can last anywhere between 3.15 and 10.27 hours on a single charge. Do a lot of continuous data transfers and you'll see closer to 5 hours, but if you've got reasonably periodic idle time you can expect something in the 8 - 10 hour range. In short, if you use your device a lot, don't be too surprised to see it lose about 10 - 15% of its battery life for every hour of use.
Now the question is how does the iPhone 5 compare to other devices? Compared to previous iPhones, the 5 has the ability to use a lot more power. If you're doing the exact same, finite length CPU/network intensive task on the iPhone 5 and any previous iPhone, chances are that the iPhone 5 will be able to complete the task and get to sleep quicker - thus giving you a better overall energy usage profile. Where things get complicated is if you use the faster CPU, GPU and network connectivity to increase your usage of the device. In that case you could see no improvement or even a regression in battery life.
Compared to other modern platforms the iPhone 5 should be competitive in day to day power usage, even against devices with somewhat larger batteries (~7Wh). The trick to all of this, of course, is the iPhone 5's performance advantage coupled with its low idle power. Being able to complete tasks quicker and/or drop to aggressively low idle power states is really the only secret to the iPhone's battery life.
I feel like the days of ever increasing smartphone battery life are behind us. Instead what we'll likely see going forward is continued increase in dynamic range. Battery life could be better or worse and it'll likely depend heavily on your workload. Much like how we saw notebooks cluster around the same 2 - 5 hour battery life for years, I suspect we'll see something similar here with smartphones. The major difference this time around is the burden of a really large battery isn't as big as it is in a notebook. The RAZR MAXX is the perfect example of a still very portable smartphone that comes equipped with a huge (by today's standards) battery.
Lightning 9-pin: Replacing the 30-pin Dock Connector
Section by Brian Klug
With the iPhone 5 and the corresponding iPod lineup refresh, Apple has moved away from the venerable 30-pin dock connector and onto a new 9-pin Lightning connector. The Lightning connector announcement caused a considerable amount of chatter in the Apple ecosystem, primarily because of just how ubiquitous 30-pin accessories became over the years Apple used it as the primary interface for everything iPod, iPad, and iPhone.
Over years of iDevice upgrades I wager most people have built up a considerable inventory of 30-pin dock cables, chargers, and Made for iPhone/iPod/iPad (MFi) accessories. Moving to a completely new interface warrants at the very least the purchase of new cables. Even in my own case this is a friction point, as I managed to snag an extra long 30-pin dock cable Apple uses in their displays for use on my nightstand, and there's no equivalent at the moment for Lightning. At bare minimum I require three cables - one for the nightstand charger, one for the car, and one for connecting to a computer. I'm willing to bet most other users are the same. In the days right after the iPhone 5 launch, Lightning to USB cables were hard to come by at both carrier stores and Apple stores (one Verizon store told me their entire Lightning cable stock had been recalled); by now stock is getting better, but it's still rather limited.
The new connector is both considerably smaller in overall volume than the old 30-pin and fully reversible. On the Lightning male connector there are 8 exposed gold pads, with the metal support serving as the 9th pin and ground. As best I can tell, the pads are mapped rotationally: flip the plug over and the bottom left pin lines up where the top right pin was when looking top down. As an aside, I've seen people refer to the 9-pin as 8-pin because of this ground, which is puzzling, in spite of Apple even calling it a 9-pin internally (e.g. "IOAccessoryDock9Pin"=1). The old 30-pin pinout had 7 pins dedicated to ground, yet everyone resisted calling it a 23-pin, but I digress.
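A trivial way to picture that mapping (my inference from the connector's behavior, not a documented pinout):

```c
/* Number the eight signal pads 1..8 across one face of the plug.
   Flipping the plug 180 degrees presents them in reverse order, so
   pad k lands where pad 9 - k sits; the negotiation chip presumably
   detects orientation and remaps signals accordingly. */
static int flipped_position(int pad) { return 9 - pad; }
```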
Lightning of course does away with lots of the signaling that went unused on the older 30-pin connector: things like 12 volt FireWire charging and data, which went away a long time ago, and legacy analog video out.
Apple calls Lightning an "adaptive" interface, and what this really means is different connectors with different chips inside for negotiating appropriate I/O from the host device. The way this works is that Apple sells MFi members Lightning connectors which they build into their devices, and at present those come in 4 different signaling configurations and 2 physical models. There's USB Host, USB Device, Serial, and charging only modes, and both a cable and a dock variant with a metal support bracket, for a grand total of 8 different Lightning connector SKUs to choose from. Note that USB over Lightning currently means USB 2.0.
With Lightning, Apple officially has no provision for analog audio output, analog video output, or DisplayPort. That said, select 3rd party MFi members will no doubt eventually get (or may already have) access to a Lightning connector for DisplayPort, since video out over a wired interface obviously must continue. For audio output, Lightning implements USB audio output, which looks like the standard USB audio class. This has been supported for a considerable time on the old 30-pin connector, though most accessory makers simply chose to use analog out for cost reasons. I originally suspected that analog line-out would come over the 3.5mm headphone jack at the bottom of the iPhone (thus all the dockable interfaces at the bottom), but the iPod Nano 7G effectively threw that prediction out the window with its headphone jack placement.
Thus, the connector chip inside isn’t so much an “authenticator” but rather a negotiation aide to signal what is required from the host device. Lightning implicitly requires use of one of these negotiation components, and in addition Apple still requires authentication hardware using certificates for every accessory that uses iAP (iPod Accessory Protocol). With Lightning Apple introduced iAP2 which is a complete redesign of iAP, the protocol which allows for playback control, communication with iOS applications, launching corresponding iOS apps, GPS location, iPod out, and so forth.
When it comes to the physical layer of Lightning there’s very little information out there regarding whether the Lightning chip is doing conversion from some other protocol or simply negotiating USB, Serial, or so forth, and then waiting for the host device to route those I/Os over the cable. You can imagine that with DisplayPort there will need to be some active component that multiplexes USB, DisplayPort, and supplies power over the 9 pins, so I suspect some other protocol on top of all this.
The new connector of course necessitates a new cable and new line of accessories. Probably the biggest inconvenience is that with the iPhone 5 there’s now even less of a chance you can snag a quick charge at a friend’s house or in a friend’s car unless they too have an iPhone 5. While that’s not an entirely fair criticism, the reality of smartphone battery life at the moment means that charging whenever or wherever you can is an important factor, and in ecosystems other than iOS land I’m spoiled by the ubiquity of microUSB. Another consideration is what happens in the case where a household has both devices with Lightning and the 30-pin connector — at present it looks like the solution is either multiple cables for the car charger or an adapter.
That brings me to the microUSB to Lightning adapter, which, like the microUSB to 30-pin dock adapter that came before it, isn't available in the USA but is available in Europe and elsewhere. At present the only way to get one of these in the states is to pay a considerable markup on eBay or have a friend ship one from abroad (I opted for the latter, thanks to our own Ian Cutress). It's unfortunate that Apple won't sell you one of these stateside but rather forces you into buying cables. The microUSB to Lightning adapter supports both charging and sync/data functionality. I can't overstate how tiny the adapter is. I thought the microUSB to 30-pin adapter was small and always at risk of becoming lost in the aether; the Lightning equivalent is even smaller.
The reason for this disparity is that the EU mandated a common external power supply standard which implements the USB charging specification and uses microUSB as the connector. To skirt this requirement Apple previously made a microUSB to 30-pin adapter available, and this time around has made the microUSB to Lightning adapter available as well. The somewhat important and oft-overlooked context here is that Apple had standardized on the 30-pin dock connector and its own 5 volt charging signaling before the GSM Association, International Telecommunication Union, or EU decided to implement the USB charging spec, and before even the USB-IF finished the charging spec. There's a tangent here that's worth discussing: how the two schemes differ in signaling that a USB port has more than the standard 500 mA at 5V of a USB 1.x or 2.0 port available.
In the case of the USB Battery Charging 1.2 specification, signaling is superficially pretty simple, and boils down to shorting the D- and D+ pins together through no more than 200 Ohms. You can do this yourself and test with an external power supply; it works with almost every new device intended to be used with USB chargers. Apple however needed a 5V charging specification before the industry implemented one, and went with what boils down to two simple voltage dividers that drive 2.8 and 2.0 volts on D- and D+ respectively. If you go shopping around for USB charging controllers, you'll see this referred to in the open as the Apple voltage divider. Anyways, my long winded point is that the microUSB to 30-pin and Lightning adapters contain a circuit of some kind to accommodate the difference in charging specification and deliver more than the 500 mA at 5V you'd get otherwise. What's curious to me is that this time around, with the Lightning to USB cable plugged into a simple circuit simulating a USB BC 1.2 charger, I get the same maximum current draw (around 0.8 A at 5V) as I do going through the microUSB to Lightning adapter.
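To make the divider scheme concrete, here's a quick sketch of the math; the resistor values are illustrative picks that hit the targets, not Apple's published design:

```c
#include <stdio.h>

/* Unloaded divider output: Vout = Vin * R_low / (R_high + R_low). */
static double divider(double vin, double r_high, double r_low) {
    return vin * r_low / (r_high + r_low);
}

int main(void) {
    /* Illustrative values only: 75k/49.9k lands near the 2.0V target
       and 39k/49.9k near the 2.8V target from a 5V rail. */
    printf("D+ ~ %.2f V\n", divider(5.0, 75e3, 49.9e3));  /* ~2.00 */
    printf("D- ~ %.2f V\n", divider(5.0, 39e3, 49.9e3));  /* ~2.81 */
    return 0;
}
```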
Of course, for accessories with dock connectors that aren't on a fast replacement cycle (for example cars and AV receivers), users can opt to buy the 30-pin adapter for legacy dock accessories. This adapter includes a number of active components to talk Lightning. While I haven't tested this myself due to availability reasons, I've heard that it works fine with devices from the iPod 4th Generation days with serial iAP, no authentication chip, and analog audio. While video out isn't supported on the 30-pin to Lightning adapter, it sounds like the adapter does handle analog and USB audio out alongside charging and USB data.
Finally, the last important angle is what happens for accessories that need to accommodate both older 30-pin devices and those with the new Lightning port. Apple's guidance is pretty simple: it plainly disallows accessories from including both connectors, and instead wants manufacturers to adopt a modular plug assembly that presents one or the other at a time. The other solution is to simply use USB and the corresponding cable, but for docks that isn't really a practical solution.
The reality of the 30-pin dock connector from 2003 is that it has been destined for a more modern, compact replacement for some time now. If you look at the actual pinout, a shocking number of pins are devoted to I/O that simply wasn't used anymore, and inspecting your average dock to USB cable and counting how many pins were actually populated really drove home the reality that Apple was wasting a lot of space at the bottom of its devices. The volume gained from Lightning is really what enabled Apple to redesign the speakerphone acoustic chamber and bottom microphone, and to relocate the headphone jack on the iPhone 5.
Display: Now 16:9 with full sRGB Coverage
Section by Brian Klug
A huge part of the story of what’s new in the iPhone 5 is obviously the display. At a high level what’s different is pretty simple sounding: aspect ratio is now 16:9, resolution is 1136 x 640, gamut coverage is now almost exactly sRGB, and the digitizer is now in-cell as opposed to on-cell. Let’s go through those changes.
iPhone 5 (left), iPhone 4S (right)
Since the original iPhone days, the phone's aspect ratio has been an immutable 3:2, and later on the iPad adopted an aspect ratio of 4:3. All the while the aspect ratio wars for content and media have raged on, and by now it's obvious that 16:9 has won. YouTube changed over to 16:9 just a year after the original iPhone launch, and since then other sources of content have moved that way as well. We saw 16:9 take over as the dominant HD format, and like it or not the same has played out on the PC with a march from 4:3 to 16:10 to 16:9. The move to 16:9 for the iPhone now enables most modern video content to play back without (or with very little) letterboxing, and simultaneously expands the viewport a considerable amount for other applications. A huge number of iOS applications are essentially a list view or a tabbed view with a list down below, and thus are immediately suited in portrait rotation to take advantage of more vertical space. I spend most of my time in portrait mode with rotation locked on iOS, and increasingly it seems as though landscape rotation is a marginalized view for application developers, so this seems to be a sound path forward.
The route Apple chose to get the iPhone to 16:9 is now widely known. They kept the width the same at 640 pixels and roughly 5 cm, and instead opted to make the display taller, at 1136 pixels and roughly 8.85 cm, up from 960 pixels and 7.39 cm on the iPhone 4/4S. Interestingly enough, 1136 isn't exactly 16:9 (640 x 16/9 works out to just under 1138), but we're talking about a pixel or two that simply gets cropped out on video decode and display for most media.
I've talked about how Android is now an almost entirely 16:9 camp, and that really frames my thoughts about the iPhone 5's display size change. In the past, switching back and forth between iOS, which was previously 3:2, and modern Android handsets at 16:9 never felt very extreme. There was a noticeable difference in overall size, sure, but aspect ratio never quite made that big of an impression on me considering the differences in OS. After spending a lot of time with the iPhone 5's 4-inch, 16:9 display, switching back to the original iPhone format's 3.5-inch, 3:2 display is a downright jarring experience. It's readily apparent just how much the platform needed this change in both aspect ratio and size, if nothing else to keep up with ever larger consumer expectations for display size. It's interesting as well how discussion about the thumb's radius sweeping a semicircle out from the bottom corners of the display also quieted down with the change. We've talked in the past about how the typical smartphone grip isn't really centered around the bottom but rather shifted up slightly. I don't find that the iPhone 5's larger display changes or diminishes one-handed use significantly at all.
Apple has of course made changes to iOS to accommodate the change in aspect ratio, and those first party applications take advantage of the 176 extra vertical pixels. For starters, the landscape keyboard gets wider keys but doesn’t quite fill up the whole 1136 wide canvas. There’s also another row for applications on homescreens, and another row inside folders.
For third party applications, however, the road to 16:9 on the iPhone 5 and newest iPod Touch requires some tweaking and a trip through the App Store approval process; otherwise you end up with letterboxing. There's really nothing else Apple could've done besides letterboxing to accommodate older apps that either aren't updated or will never be updated, but the downside of this centered letterboxed experience is that it shifts portrait apps and the keyboard up by 88 pixels (half of the 176 pixel difference).
A great example of where this is jarring is the IM application I use, imo.im, which hasn't been updated as of this writing to take advantage of the new viewport size. As a result, typing on this 88-pixel-shifted keyboard requires repositioning one's grip. This is a temporary grievance that will go away in time as developers update their apps, but it still warrants mentioning. It's similar to the friction we saw on the path to retina-enabled apps around the iPhone 4 launch.
In-Cell
The next major improvement is in-cell touch. The iPhone 5 isn’t the first smartphone to include an in-cell touch LCD, but perhaps the first where we’ve seen lots of talk about it. Part of getting to even thinner form factors is either eliminating or reducing the thickness of everything in the z direction. In addition, increasing the light throughput of the display stack (which means both filters and everything between the backlight LEDs and your eye) is a huge driver for overall battery life, since the display is still by far the largest consumer of precious milliwatts in a smartphone.
One of those things is the digitizer, which previously sat on top of the LCD as a separate layer incurring both additional thickness and back reflections. While successive generations of both iPhone display stacks (and the smartphone platform in general) have eliminated a lot of back reflections by reducing the number of air-glass interfaces with optical-grade adhesive lamination (and thus 4 percent Fresnel reflections that go along with each of them), ultimately these glass-adhesive interfaces still incur some path loss and still have a z profile. The only way to reduce these further is to go to in-cell touch, which really is a fancy way of saying that the digitizer is then integrated into the LCD-TFT gating itself, and thus into the cells of each pixel, rather than as a discrete layer atop the stack after color filters.
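That 4 percent figure falls straight out of the normal-incidence Fresnel equation for an air-glass boundary; a quick check:

```c
#include <stdio.h>

/* Normal-incidence Fresnel reflectance: R = ((n1 - n2) / (n1 + n2))^2.
   Air (n = 1.0) against display glass (n ~ 1.5) gives R = 0.04, the
   ~4 percent per-interface loss mentioned above. */
static double fresnel_normal(double n1, double n2) {
    double r = (n1 - n2) / (n1 + n2);
    return r * r;
}

int main(void) {
    printf("air-glass: %.1f%%\n", 100.0 * fresnel_normal(1.0, 1.5));
    return 0;
}
```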
Getting to this level of integration requires cooperation between both the display driver and the touch sensor, and herein lies the challenging engineering problem that in-cell touch poses. Touch sensing has to be time multiplexed with display driving, otherwise the touch signal might be entirely lost in noise. At the same time, touch sensing often runs at around double the frequency (120-175 Hz) of display drawing (60 Hz), so it has to be slotted carefully into quiet periods, hence the required communication and integration. The iPhone 5 uses a combination of TI and Broadcom controllers for display control and touch sensing, where previous generations of iPhone used a single chip TI solution. In future generations this will come back down to just being a single-chip solution.
Subjectively thus far I haven’t detected any change in tracking quality or performance with the iPhone 5’s in-cell solution, which is great. To end users the difference seems to be totally transparent.
In addition to the air-adhesive interfaces introducing thickness and unavoidable Fresnel reflections, there are also the traces from transparent conductors in the digitizer to think about. At present that material is Indium Tin Oxide (ITO), which is one of very few known transparent conductors and is used inside every LCD. Because indium is a relatively scarce and expensive metal, ITO traces are only laid down where they need to be on top of and below the glass substrate (for both transmit and receive layers of the digitizer), and the areas in between those traces are then filled with an index-matching space fill material to diminish their visibility. How well this space fill is done and how close its index is to ITO's is one of the quality metrics of a digitizer to begin with, and often these rows and columns are visible under direct illumination, either outdoors or with good eyes indoors. You can often tell a lot about how much value an OEM placed on its digitizer just by how distracting these are outdoors, but the big benefit with in-cell is that they go away entirely, which is a huge gain I rarely see people talking about in the context of in-cell improvements.
Horizontal lines on the iPhone 4S (right) from the digitizer (easier to see at full size)
That change leads to what I would consider a huge improvement in outdoor visibility, since these lines are now totally gone on the iPhone 5. In addition, there’s no longer a contrast-diminishing set of back reflections from the extra glass layer when outdoors. This is very visible in the photos I’ve taken showing outdoor viewing behavior on both the iPhone 4S and iPhone 5.
Significantly less blue haze on the iPhone 5 (left) than iPhone 4S (right)
Display Quality
Our own Chris Heinonen already did an excellent job characterizing the iPhone 5 display using our new CalMAN 5 based test suite he put together, and I’d encourage everyone to read that for a much more comprehensive version of an iPhone 5 display analysis. There’s really not a whole lot for me to add other than some results from the two other main smartphone displays I’ve tested with this new workflow, and some graphs with data from other phones. Chris has better instrumentation than I do with an i1Pro, but we’ve tweaked the workflow slightly so I ran the iPhone 5 and 4S through the test. In addition Apple has multiple suppliers for the iPhone 5 display so there are bound to be some differences in devices.
Subpixel geometry and size is still the same on the iPhone 5, meaning this is still a “retina” display and all the usual discussion about angular subtense and visual acuity still applies. You can see this under the microscope (all these images are at the same magnification, focus is a bit different though given the different optical path length thanks to that in-cell touch) — both the geometry and pixel pitch are unchanged between the 4S and 5.
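As a refresher on what "retina" means in angular terms, the math is simple; the viewing distance below is an assumption for illustration, not an Apple figure:

```c
#include <math.h>
#include <stdio.h>

/* Angular size of one pixel at a given viewing distance, in arcminutes:
   theta = 2 * atan(pitch / (2 * distance)). At 326 PPI the pitch is
   25.4/326 ~ 0.078 mm; at an assumed 30 cm viewing distance that works
   out to ~0.9 arcminutes, under the ~1 arcminute threshold usually
   quoted for 20/20 acuity. */
static double pixel_arcmin(double pitch_mm, double distance_mm) {
    return 2.0 * atan(pitch_mm / (2.0 * distance_mm)) * (180.0 / M_PI) * 60.0;
}

int main(void) {
    printf("%.2f arcmin\n", pixel_arcmin(25.4 / 326.0, 300.0));
    return 0;
}
```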
To start are our brightness and contrast graphs, which are measured at 100 percent brightness. The iPhone 5 is even brighter than the iPhone 4 and 4S displays, at just over 600 nits on my unit. I saw some brightness variance between the iPhone 4 and 4S, so depending on where you're coming from this can be a noticeable jump. Apple started off with good consistency when the iPhone 4 came out, but I saw lots of white point and luminance variance with that form factor display as time progressed and we moved from the 4 GSM to the 4 CDMA to the 4S.
When it comes to gamut, Apple announced that the iPhone 5 display has full sRGB coverage. The iPad 3 was actually the first in Apple's lineup to advertise roughly full sRGB coverage, but it appears that the iPhone 5 is even closer to being spot on. Using our new CalMAN workflow we can easily measure and compare the overall saturations for primaries and secondaries, the ideal values of which are represented by white boxes. There's a whole lot of measuring required for each phone, so I pared it down to just the iPhone 5, 4S, HTC One X, and Galaxy S 3 for the moment.
Saturations and Gamut
The iPhone 5 is, as Chris said already, the closest smartphone display to sRGB I've seen to date. It's really clear to me that Apple pushes its suppliers to deliver a display capable of hitting that gamut, and then bothers to do some factory level calibration to get reasonably close. I've seen this drift over time, but for the time being the iPhone 5 is quite close to ideal, all things considered.
The GretagMacbeth ColorChecker card test colors are next up, and it isn’t surprising here to see some variance, but the values from the iPhone 5 are very close to the intended colors compared to the competition and its predecessor.
GMB Color Checker
Grayscale and gamma represent our sweep of 5 percent grey steps from 0 to 100 percent; here we get a report for contrast, white and black levels, color temperature, gamma, and average Delta E 2000.
Grayscale and Gamma
My values differ slightly from Chris', but my instrumentation and phone are both different from his, which may explain some of the differences. The high level story is the same though: the iPhone 5 tracks closer to the ideal than any of the other devices. I've also gone ahead and made a table with the average Delta E from each step.
CalMAN Display Comparison | |||||||
Metric | iPhone 5 | iPhone 4S | HTC One X | Samsung Galaxy S 3 | |||
Grayscale 200nits Avg dE2000 | 3.564 | 6.162 | 6.609 | 4.578 | |||
CCT Avg (K) | 6925 | 7171 | 5944 | 6809 | |||
Saturation Sweep Avg dE2000 | 3.591 | 8.787 | 5.066 | 5.460 | |||
GMB ColorChecker Avg dE2000 | 4.747 | 6.328 | 6.963 | 7.322 |
Last up is the indoor viewing angles comparison between the iPhone 5 and the 4S, which are essentially unchanged. Even at extreme angles I can’t detect any major differences in viewing angle between the 4S and the 5, which is a good thing since there isn’t really anything to complain about.
Camera: Thinner, Faster, Better Low-Light
Section by Brian Klug
The iPhone 4S represented probably the single biggest leap in camera performance in the iPhone's progression. The combination of an 8 MP CMOS with 1.4 micron square pixels, F/2.4 optics, and Apple's own ISP resulted in a great overall performer for its class. We made the prediction early on that optical performance would remain roughly the same with the next iPhone, and that largely turned out to be the case. With the iPhone 5, Apple's major design guideline seems to have been reducing z-profile, and probably one of the biggest obstacles on the way to that goal was the thickness of the entire camera system. Getting to a thinner optical system with the same performance characteristics is quite a challenge.
Superficially the iPhone 5 camera specifications are almost unchanged. We’re still talking about 1.4 micron pixels (roughly two waves in the red), not the smaller 1.1 micron pixels that are in the cards for the future. F-number remains at 2.4, and total pixel count is still 8 MP. Focal length is shorter, as expected (this is a thinner system, after all), resulting in a slightly wider field of view. I’ve made a table with the relevant specifications for the iPhone 5 cameras.
iPhone 4, 4S, 5 Cameras | ||||
Property | iPhone 4 | iPhone 4S | iPhone 5 Rear | iPhone 5 Front |
CMOS Sensor | OV5650 | IMX145 | IMX145-Derivative | OmniVision |
Sensor Format | 1/3.2" (4.54 x 3.42 mm) | 1/3.2" (4.54 x 3.42 mm) | 1/3.2" | ~1/6" (~2.6 x 1.6 mm) |
Optical Elements | 4 Plastic | 5 Plastic | 5 Plastic | ? |
Pixel Size | 1.75 µm | 1.4 µm | 1.4 µm | 1.75 µm |
Focal Length | 3.85 mm | 4.28 mm | 4.10 mm | 2.2 mm |
Aperture | F/2.8 | F/2.4 | F/2.4 | F/2.4 |
Image Capture Size | 2592 x 1936 (5 MP) | 3264 x 2448 (8 MP) | 3264 x 2448 (8 MP) | 1280 x 960 (1.2 MP) |
Average File Size | ~2.03 MB (AVG) | ~2.77 MB (AVG) | ~2.3 MB (AVG) | ~420 KB (AVG) |
That's not to say the rear facing camera is unchanged, however. Apple talked about dramatically improved low light sensitivity thanks to a low light boost mode. As we'll show later, this does make a big difference in overall sensitivity thanks to the combination of 2x2 pixel binning at ISO 3200 to keep noise under control and better fixed pattern noise rejection in the ISP.
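As a rough illustration of what binning buys (a generic sketch, not Apple's ISP pipeline, and simplified to a grayscale frame):

```c
/* Average each 2x2 block into one output pixel: quarter the resolution
   in exchange for averaging down read noise, which is the basic trade
   behind the low light mode described above. Grayscale simplification;
   a real ISP bins same-color sites in the Bayer mosaic. */
static void bin2x2(const unsigned short *in, int w, int h,
                   unsigned short *out) {
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++) {
            int s = in[(2 * y) * w + (2 * x)]
                  + in[(2 * y) * w + (2 * x + 1)]
                  + in[(2 * y + 1) * w + (2 * x)]
                  + in[(2 * y + 1) * w + (2 * x + 1)];
            out[y * (w / 2) + x] = (unsigned short)(s / 4);
        }
}
```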
Apple claims that this is an entirely new ISP; oddly enough, I found that the interface is still named the same (AppleH4CamIn ISP) as what I found on the 4S. I don't doubt that there have been at least some tweaks, though unfortunately this is still relatively opaque without lots of digging. During the keynote, Apple claimed that image capture is now 44 percent faster than the 4S thanks to this improved ISP. The improvement is actually hard to measure; the iPhone 5 doesn't have a burst mode, so capturing quickly requires tapping as fast as you can. Shot to shot latency is essentially zero on the iPhone 5, gated only by how fast I can tap. I put together a short video comparing the 4S and 5.
The camera launches faster, HDR images capture quicker, all around the iPhone 5 camera experience is just smoother and faster, which isn't a surprise.
Camera Performance Comparison | ||||
Property | iPhone 3GS | iPhone 4 | iPhone 4S | iPhone 5 |
Camera Launch Time (seconds) | 2.8 | 2.3 | 1.4 | 1.2 |
HDR Capture Time (seconds) | - | 4.9 | 3.2 | 1.6 |
Working Distance (cm) | - | 7.0 | 6.5 | 6.1 |
Apple has once again gone with a Sony CMOS for the rear camera, though this time (thanks to Chipworks) we know it ditched the IMX145 markings. Apple is frequently able to have its suppliers make specific one-offs just for itself, and it's highly likely that's the case here. I have almost no doubt that the changes made to the IMX145 accommodate this extremely high ISO (3200 is almost unheard of for pixels this size due to noise) with some tweaks to the amplifier in each active pixel (which is what we really mean when we talk about a CMOS versus a CCD). Either way, I have no doubt time will tell that this is an IMX145 derivative with tweaks to aid the low light boost mode, and possibly to get to Apple's desired chief ray angle if the IMX145 couldn't do it already. Unfortunately, like so much in the smartphone space, there's very little in the way of open documentation for the IMX145.
Apple claims that it’s aligning optical elements with even tighter tolerances now, which makes a real difference in the kinds of optical designs that become feasible. There’s also a sapphire front window in place of what was previously just optical glass. Sapphire has an extremely high surface hardness second only to diamond, in addition to excellent chemical resistance and good transmittance. The real advantage here is again one of thickness: you can run thinner sapphire windows in place of standard glass windows and get better transmittance. Sapphire windows in optical systems are colorless and chemically composed of single crystal aluminum oxide (Al2O3). Upon inspection, the iPhone 5 sapphire does indeed appear to have an antireflection coating as well.
The story on the front facing camera is one of dramatic improvement. The iPhone 4 was the first iPhone to include a VGA front facing camera, which remained unchanged on the iPhone 4S. That system was arguably good enough for FaceTime, which seemed to be its original reason for existing, but it finally gets updated to 1280 x 960 on the iPhone 5. That CMOS is an OmniVision part with 1.75 micron pixels topped with an F/2.4 optical system. Images captured on the front facing camera are dramatically better, and video is now 720p.
Before we talk about image quality, I’d like to make brief mention of the user experience on the iPhone 5 camera. I touched on how the interface is even faster than the 4S thanks in part to the faster A6 silicon and improved ISP onboard, and Apple continues to keep things very minimalist, with virtually no options for changing shooting modes manually or configuring ISO. In addition, every image taken on any Apple camera is captured at full resolution with a fixed compression setting, always. Essentially all of this functionality is abstracted away from the user, leaving the shooting experience fully automatic all the time. This includes the low light boost mode, which kicks in below a certain threshold.
What’s puzzling about the iPhone 5 user experience is that the aspect ratio of the viewport no longer matches the aspect ratio of the CMOS or the end images. This is done purely for aesthetic reasons as far as I can tell, because of the extreme letterboxing that would happen with a 16:9 viewport and 4:3 image size. Instead of going ahead and giving you that letterboxed but 100 percent field of view preview, Apple instead crops off the top and bottom and presents a roughly 3:2 image in the preview screen.
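To put a rough number on how much of the frame that preview hides, here’s a quick back-of-the-envelope sketch in Python (the exact preview aspect ratio is my estimate from observation, since Apple doesn’t document it):

```python
# Back-of-the-envelope look at how much of the 4:3 sensor frame the
# roughly 3:2 preview hides. Capture size is from the iPhone 5 spec
# sheet; the preview aspect ratio is my estimate.
FULL_W, FULL_H = 3264, 2448          # 4:3 capture size (8 MP)
PREVIEW_AR = 3 / 2                   # observed preview aspect ratio

preview_h = FULL_W / PREVIEW_AR      # height of the 3:2 crop, in pixels
hidden_rows = FULL_H - preview_h     # rows cropped off top + bottom
print(f"preview shows {FULL_W} x {preview_h:.0f} of {FULL_W} x {FULL_H}")
print(f"hidden: {hidden_rows:.0f} rows ({hidden_rows / FULL_H:.1%} of frame height)")
# -> preview shows 3264 x 2176 of 3264 x 2448
# -> hidden: 272 rows (11.1% of frame height)
```

In other words, roughly a tenth of the vertical field of view never appears in the preview.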
I encountered this somewhat unexpectedly while taking images of the ISO 12233 chart and trying to align the chart with the 4:3 CMOS, only to be thoroughly confused to the point of questioning my basic math skills until I realized the preview was a crop of the real image area. As of this writing, in iOS 6.0 there is no way to double tap on the preview and get an aspect-correct, 100 percent preview with letterboxing, like you could do with 16:9 video in video capture mode. Instead, you’re always locked to an absurd 3:2 center crop of a 4:3 image. This makes absolutely no sense to me and will always result in image composition in the preview screen that looks nothing like the end result. I sincerely hope an update adds a double tap gesture that gives an actual 4:3 preview. Another new thing I noticed on the iPhone 5 is that if you let the phone get too hot, it will disable the LED flash until the device cools down. I never saw this behavior on the 4 or 4S.
Lastly, iOS 6 adds a panorama mode to both the iPhone 5 and iPhone 4S, a feature which has actually been lurking around hidden in iOS for some time now. Panorama mode in iOS continually integrates over the field of view, first for the full field, later over a small center strip, until you reach the end. The mode produces results that are at most 10800 pixels wide and around 2590 pixels tall, depending on whether you swept through the horizontal field of view without any vertical shift. In addition, the mode supports portrait panoramas if you rotate the camera 90 degrees and scan upwards.
I stuck my iPhone 4S, iPhone 5, HTC One X and Samsung Galaxy S 3 in the dual camera bracket and took a number of panoramas for comparison purposes. There’s a surprising amount of difference between the approaches I see handset vendors taking for panorama. The One X takes a few exposures and stitches them together, Galaxy S 3 does continual integration but produced strangely blocky results. iOS continually stitches a small center strip together as I mentioned already.
Still Image Quality Evaluation
To evaluate still image quality we turned to our standard set of tests which seems to keep growing. That consists of a scene in a lightbox with constant controlled illumination of 1000 lux taken using the front and rear cameras with as close to the same field of view as possible, images of a distortion grid, GretagMacbeth ColorChecker card for white balance checking, and an ISO12233 test chart for gauging spatial resolution in an even more controlled manner. Because I’ve moved houses and lighting will never ever be exactly the same, I have decided to move the three test charts into my lightbox as opposed to putting them on a wall and illuminating them with studio lights. This warrants a completely new set of comparison images, hence the smartphone 2012 camera bench for the three charts and front facing camera.
Let’s start with what’s most objective first: the tangential and sagittal spatial frequency crops. You can really see here that Apple’s camera design team kept performance roughly the same between the 4S and 5; I can count up to roughly 16.5 (in hundreds of lines per picture height) on both devices. The Samsung Galaxy S 3 appears to also be around 16, along with the HTC One X. The iPhone 4 and Galaxy Nexus are at a huge disadvantage with their 5 MP CMOSes; I can see up to 15 or so before there’s a contrast reversal from crossing through an MTF of 0. The PureView 808 actually outresolves the test chart at full size and at the 8 MP on-device oversample; obviously you can’t beat that device with just an 8 MP CMOS.
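As a sanity check on those readings, the theoretical ceiling is easy to compute, assuming the chart is labeled in hundreds of line widths per picture height as these charts conventionally are:

```python
# Sanity check on the resolution figures: for an ISO 12233-style chart the
# theoretical ceiling (Nyquist) in line widths per picture height equals
# the sensor's vertical pixel count, since one line width per pixel row is
# the most a sensor can resolve. Chart readings are in hundreds of LW/PH.
def nyquist_reading(vertical_pixels):
    """Nyquist limit expressed in the chart's units (100x LW/PH)."""
    return vertical_pixels / 100

for name, v_px, measured in [("iPhone 4S/5 (8 MP)", 2448, 16.5),
                             ("iPhone 4 (5 MP)", 1936, 15.0)]:
    limit = nyquist_reading(v_px)
    print(f"{name}: limit {limit:.1f}, measured ~{measured} "
          f"({measured / limit:.0%} of Nyquist)")
# iPhone 4S/5 (8 MP): limit 24.5, measured ~16.5 (67% of Nyquist)
# iPhone 4 (5 MP):    limit 19.4, measured ~15.0 (77% of Nyquist)
```

The 5 MP devices reading so close to their own Nyquist limit is exactly why they alias into a contrast reversal just past that point.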
The tangential frequency crops tell basically the same story, which isn’t a surprise. It’s striking how close the iPhone 4S and 5 are here. I strongly suspect that team was basically ordered to keep MTF the same and just reduce the thickness of the 4S-era optical system. You can look at the 100 percent size versions of the tangential and sagittal crops as well, rather than these versions which are resized to a maximum of 600 pixels wide to fit online.
From the rest of the test charts we can see the iPhone 5 has slightly more pincushion distortion than the 4S in the distortion chart, but not a whole lot. It is also evident from the GMB chart and other photos that I’ve taken over my time with the iPhone 5 that the revised ISP also has better auto white balance.
The remainder of the well-lit tests tell much the same story. In outdoor lighting (with both cameras automatically selecting the same ISO and exposure time) I can’t find any major difference in camera performance between the 5 and 4S; they’re very close. Apple also changed the LED diffuser design with the iPhone 5; it is visibly different and now results in a much more even field of illumination in the lights-off lightbox test.
On the front facing camera the increase in resolution and overall quality is dramatic, however.
It is in low light performance that the 4S and 5 radically diverge, thanks to the low light boost mode which kicks in automatically at a preset threshold on the iPhone 5. You can tell when this happens just by looking at the preview, since there’s a sudden, dramatic shift in exposure. The iPhone 5 does a 2x2 pixel bin, then upscales that image to the same full size resolution as normal 8 MP capture (3264 x 2448). The result trades spatial resolution for lower noise and an exposure that doesn’t require an inordinately long integration time. According to EXIF, the iPhone 4 will do a maximum of ISO 1000 at 1/15th of a second, the 4S a maximum of ISO 800 at 1/15th of a second, and the iPhone 5 between ISO 1600 and ISO 3200 at 1/15th of a second. The difference is quite dramatic, as expected.
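For anyone unfamiliar with binning, here’s a minimal numpy sketch of the tradeoff; this is generic image math rather than Apple’s actual pipeline:

```python
import numpy as np

# A minimal sketch of what 2x2 binning plus upscale does to a frame.
# Averaging four pixels cuts random noise roughly in half (sqrt(4)) at
# the cost of halving linear resolution, and the result is then resized
# back up to the full 3264 x 2448 output size.
def bin2x2(img):
    """Average each 2x2 block of a (H, W) image; H and W must be even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(img):
    """Nearest-neighbor 2x upscale back to the original size."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
scene = np.full((2448, 3264), 50.0)             # flat, dimly lit scene
noisy = scene + rng.normal(0, 10, scene.shape)  # additive sensor noise
binned = upscale2x(bin2x2(noisy))

print(f"noise before: {noisy.std():.1f}, after binning: {binned.std():.1f}")
# noise drops by about 2x, which is exactly the tradeoff described above
```

That halving of noise is what makes an otherwise unusable ISO 3200 exposure tolerable.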
I took samples in the lightbox at a controlled 4 lux, with the phones shooting the test target and low light modes selected wherever the camera UI offered them. On the PureView 808 I manually forced the maximum ISO of 1600, since there is no low light preset. The resulting image from the PureView isn’t a mistake; that’s what it actually looks like in both full and PureView modes. It’s interesting to see how the different cameras handle this extreme low light. The Samsung Galaxy S 3 shoots at ISO 640 and integrates over a full half second to produce its result, the PureView 808 takes the ISO 1600 I set and integrates over a full second (according to EXIF), and the One X goes to ISO 1200 without reporting how long it integrates. Meanwhile all three iPhones select a maximum exposure time of 1/15th of a second and their respective maximum ISOs. Considering the exposure times of some of those cameras are far too long to hand hold (I use a tripod for these comparisons), I would say that Apple setting a maximum of 1/15th of a second makes a lot of sense.
Before we depart still image quality entirely, I think it’s worth visiting the evolution of all the iPhones, from the original generation to the latest and greatest iPhone 5. We’ve come a long way in a short time since 2007, from 2 MP cameras that basically crammed a webcam module into a smartphone to 8 MP shooters with custom optics and an ISP that are now arguably good enough to take the place of a point and shoot. Things haven’t entirely plateaued yet, either.
iPhone Cameras | ||||||
Property | iPhone | iPhone 3G | iPhone 3GS | iPhone 4 | iPhone 4S | iPhone 5 |
Focal Length | ? | ? | 3.9 mm | 3.85 mm | 4.28 mm | 4.10 mm |
Aperture | F/2.8 | F/2.8 | F/2.8 | F/2.8 | F/2.4 | F/2.4 |
Image Capture Size | 1600 x 1200 (2 MP) | 1600 x 1200 (2 MP) | 2048 x 1536 (3.1 MP) | 2592 x 1936 (5 MP) | 3264 x 2448 (8 MP) | 3264 x 2448 (8 MP) |
Average File Size | ~650 KB | ~700 KB | ~1.2 MB | ~2.03 MB (AVG) | ~2.77 MB (AVG) | ~2.3 MB (AVG) |
Test Image Full Size | Link | Link | Link | Link | Link | Link |
Purple Haze
The final thing I’d like to talk about regarding still image capture on the iPhone 5 is the so-called “purple haze” glare which sometimes appears with a bright light source placed just outside the field of view of the camera. When this started getting public attention, many assumed that the light was somehow picking up purple from the sapphire cover glass. I suspect people settled on this assumption quickly because gem sapphires are colored (classically blue). Regardless, the reality, as I touched on earlier, is that optical grade sapphire windows, whether for expensive wristwatches or camera systems, impart no color on light passing through them. In fact, when I saw this I immediately tweeted that this was merely a matter of some stray light bouncing around inside the camera module, probably picking up a purple cast from a magnesium fluoride (MgF2 is a very common antireflection coating choice that looks purple) or other antireflection coating. Note that these coatings are designed to work over a limited range of acceptance angles; from some angles they can indeed reflect, in spite of the name.
The iPhone 5 does exhibit this a bit more than the 4S, but that’s to be expected given the wider field of view and larger chief ray angle. Most photographers are used to using either a lens hood or simply shielding the camera with a hand to block stray light from reflecting around inside an optical system and creating this type of glare; obviously in the case of a smartphone nobody is going to attach baffles or a lens hood (maybe there’s a market for that, though). Note that it is not correct to call this a chromatic aberration, insofar as it is just light that has picked up some color.
The two circular purple artifacts are clearly reflections
I captured two great photos which to me conclusively prove this is an internal reflection of some kind (in case you don’t believe the Apple support statement which parrots what I’ve said already). The first photo shows the purple flare that most see, in the second photo I’ve tilted the phone down slightly and the purple now shows up as two circles which to my optical engineering eyes instantly look like two visible reflections. I was actually going to set up an optical bench and track the angle until I stumbled upon this while playing with the camera during a late night trip to CVS. Again, all of this is easily mitigated by blocking the stray light.
Video: Finally High Profile H.264
Section by Brian Klug
A few things are different with video capture on the iPhone 5, thanks to improvements to both the ISP inside Apple’s A6 SoC and software UI changes. First off, because the iPhone 5 display is now 16:9, there’s no need for a cropped default view or an aspect-correct view with letterboxing for video capture. Instead, the iPhone 5 video capture window takes an iPad-like approach, with transparent UI elements for preview and shooting video.
What’s new is the ability to take still images at 1920x1080 while recording video, by tapping a still image capture button that appears while recording. This is a feature we’ve seen onboard a ton of other smartphones, and it works the same way here. Note that you can’t magically get a wider field of view or the whole CMOS area while shooting video; it’s essentially dumping one frame from video capture as a JPEG instead of into an H.264 container.
In addition the iPhone 5’s tweaked Sony CMOS still uses a smaller center region for video capture. The difference in field of view is pretty big, but nothing that users haven’t already dealt with in the past.
The iPhone 5 brings two main things to video capture. The first is improved electronic image stabilization tweaks and improvements to ISP. The difference is visible but not too dramatic unless you know what you’re looking for. I would wager most users won’t notice a huge step forward from the 4S but if you’re using an iPhone 4 this will be a marked improvement.
The other improvement is video encoding. The iPhone 5 now shoots rear facing 1080p30 video at 17 Mbps H.264 high profile with CABAC. This is a huge step up in encoding from the relatively absurd 22-24 Mbps baseline H.264 that the iPhone 4S would shoot at 1080p30. The result is vastly more quality per bit on the iPhone 5 and a big reduction in storage space per minute of video. I did some digging around and found that the A6 uses an Imagination Technologies PowerVR VXE380 for encoding and VXD390 for decoding, which is what I thought was in the previous SoC as well, but perhaps it wasn’t clocked high enough there for high profile encode. On paper this brings the iPhone 5’s encoder in line with what I see other smartphones using for their 1080p video (17 Mbps high profile).
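The storage savings are easy to quantify with some quick arithmetic (using the midpoint of the 4S bitrates measured above; audio at 64 kbps is small enough to ignore):

```python
# Quick arithmetic on what the encoder change means for storage.
# Video bitrates are the figures discussed above; 23 Mbps is the
# midpoint of the 22-24 Mbps range measured on the 4S.
def mb_per_minute(video_mbps):
    return video_mbps * 1e6 * 60 / 8 / 1e6   # Mbps -> MB per minute

for label, mbps in [("iPhone 4S, 1080p30 baseline", 23.0),
                    ("iPhone 5, 1080p30 high profile", 17.0)]:
    print(f"{label}: ~{mb_per_minute(mbps):.0f} MB/min")
# iPhone 4S: ~173 MB/min, iPhone 5: ~128 MB/min
```

That’s roughly 45 MB saved per minute of video, with better quality per bit on top.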
On the front facing camera, Apple is shooting 720p30 at 11 Mbps H.264 baseline, as opposed to the VGA at 3.5 Mbps that the 4S shot. Interestingly enough, both front and rear shooting modes are still just mono audio at 64 kbps AAC. I would’ve liked to see stereo here, since almost all the competition is shooting stereo, and it’d put those three microphones to use.
To get a feel for video quality, I stuck my iPhone 4S and iPhone 5 in my dual camera bracket with pistol grip and made a series of three videos. I then combined them and put them side by side for ease of comparison. I’ve uploaded the result to YouTube, but you can also grab the original videos (548 MB zip) if you’d like from the site directly without the transcode.
Overall the most dramatic improvement is the front facing camera, which is obviously night and day. Better image stabilization is noticeable while I’m walking around being intentionally shaky, but nothing hugely dramatic. The main rear facing video improvement seems to be an increase in sharpness (watch the power lines and wires in the native resolution version) and slightly wider field of view. That’s to say nothing of the fact that this quality comes at a bitrate that’s lower than the previous version but with better encode settings.
Apple's First LTE iPhone
Section by Brian Klug
If a third of the iPhone 5 story is the A6 SoC, and another third is the change in display and industrial design, the final third is undoubtedly cellular connectivity, specifically 4G LTE (Long Term Evolution). The iPad 3 was Apple’s first LTE-enabled consumer electronics device, and the iPhone 5 is now Apple’s first LTE-enabled smartphone. The road to LTE for Apple’s iPhone is a long and interesting one, and unsurprisingly Apple waited for a second generation of LTE-enabled, 28nm basebands before making the move.
If you’ve already read our piece on the iPhone 5 SVLTE and SVDO situation a lot of this will already be familiar, as I pretty much gave all the background for what I wanted to talk about regarding the iPhone 5 and LTE in that piece at a high level.
MDM9615 and RTR8600 in iPhone 5 - Courtesy iFixit
At the core of the iPhone 5’s cellular architecture is Qualcomm’s MDM9615, which we had been predicting would be Apple’s solution for some time. This is a 28nm second generation LTE baseband that has at its core the same modem IP block as MSM8960 and has already shipped in a number of phones. MDM9615 supports a host of air interfaces: Category 3 LTE FDD/TDD (Frequency and Time Division Duplexing), 3GPP Release 8 DC-HSPA+ (42.2 Mbps HSDPA/5.76 Mbps HSUPA), TD-SCDMA (4.2/2.2 Mbps), GSM/GPRS/EDGE, 1x-Advanced, and EVDO Rev.A and B. Of course, it’s up to the individual OEM to implement the appropriate RF path for these features, but that’s the maximum of what MDM9615 supports.
Apple iPhone - Cellular Trends
Model | Release Year | Industrial Design | Cellular Baseband | Cellular Antennas
iPhone | 2007 | 1st gen | Infineon S-Gold 2 | 1
iPhone 3G | 2008 | 2nd gen | Infineon X-Gold 608 | 1
iPhone 3GS | 2009 | 2nd gen | Infineon X-Gold 608 | 1
iPhone 4 (GSM/WCDMA) | 2010 | 3rd gen | Infineon X-Gold 618 | 1
iPhone 4 (CDMA) | 2011 | 3rd gen | Qualcomm MDM6600 | 2 (Rx diversity, no Tx diversity)
iPhone 4S | 2011 | 3rd gen | Qualcomm MDM6610 (MDM6600 w/ ext. trans) | 2 (Rx/Tx diversity)
iPhone 5 | 2012 | 4th gen | Qualcomm MDM9615 w/ RTR8600 ext. trans | 2 (Rx/Tx diversity, 2x1 MIMO for LTE)
In addition, MDM9615 is the first of Qualcomm’s LTE basebands to be natively voice enabled. MDM9600/9200 was designed primarily as a data card solution, and could only handle voice if paired with a Qualcomm SoC in a so-called “Fusion” scenario. This was the major design caveat that made MDM9x00 an unlikely choice for anything but a smartphone platform built around a Qualcomm SoC, but also why it suited a platform that doesn’t need voice, like the iPad 3. With MDM9x15 these barriers have come down, and along with them we get a smaller overall package (from 13x13 mm down to 10x10 mm) and lower power consumption thanks to the move from 45nm TSMC to 28nm TSMC. The reality is that to implement cellular connectivity with any baseband you also need a PMIC (power management IC) and a transceiver, which downconverts the filtered RF to I/Q data that gets fed into the appropriate port on the baseband itself. In this case the PMIC that works with MDM9x15 is PM8018, and the transceiver is either the 65nm RTR8600 in the case of the iPhone 5, or the 28nm WTR1605 that is just now emerging in some other phones. More on that last part in a minute.
Apple iPhone 5 Models | ||||||
iPhone 5 Model | GSM/EDGE Bands | WCDMA Bands | CDMA 1x/EVDO Rev.A/B Bands | LTE Bands (FCC+Apple) | ||
A1428 "GSM" | 850/900/1800/1900 MHz | 850/900/1900/2100 MHz | N/A | 2/4/5/17 | ||
A1429 "CDMA" | 850/900/1800/1900 MHz | 850/900/1900/2100 MHz | 800/1900/2100 MHz | 1/3/5/13/25 | ||
A1429 "GSM" | 850/900/1800/1900 MHz | 850/900/1900/2100 MHz | NA | 1/3/5 (13/25 unused) |
Apple has already announced two hardware models for the iPhone 5: A1428 and A1429, of which one has two different provisioning configurations (A1429 comes in both a “CDMA” and “GSM” flavor). There are physical hardware differences between the two handsets, specifically differences in both the LTE power amplifiers and switches. The two hardware variants support different LTE bands, but the same set of WCDMA and GSM/EDGE bands. All three configurations of iPhone 5 support WCDMA with HSDPA Cat. 24 (DC-HSPA+ with 64QAM for up to 42 Mbps on the downlink) and HSUPA Cat. 6 (5.76 Mbps up). Only the A1429 “CDMA” configuration supports CDMA2000 1x and EVDO, and interestingly enough even supports EVDO Rev.B which includes carrier aggregation, though no carrier in the USA will ever run it. In addition the FCC reports include 1xAdvanced testing and certification for CDMA Band Classes 0 (800 MHz), 1 (1900 MHz), and 10 (Secondary 800 MHz).
Apple iPhone LTE Band Coverage
E-UTRA (LTE) Band Number | Applicable iPhone Model | Commonly Known Frequency (MHz) | Bandwidths Supported
1 | A1429 | 2100 | 20, 15, 10, 5 MHz (?)
2 | A1428 | 1900 | 20, 15, 10, 5, 3, 1.4 MHz
3 | A1429 | 1800 | 20, 15, 10, 5, 3, 1.4 MHz (?)
4 | A1428 | 1700/2100 | 20, 15, 10, 5, 3, 1.4 MHz
5 | A1428, A1429 | 850 | 10, 5, 3, 1.4 MHz
13 | A1429 | 700 Upper C | 10, 5 MHz
17 | A1428 | 700 Lower B/C | 10, 5 MHz
25 | A1429 | 1900 | 20, 15, 10, 5, 3, 1.4 MHz
The difference in LTE bands is a bit more complicated, and both models appear to support more LTE bands than laid out on the iPhone 5 specs page. If we turn to the FCC documentation (which is concerned only with transmitters on regulated bands in the USA) we can glean that there are indeed more LTE bands supported. What’s interesting is that Apple did the same thing with the iPad 3, supporting a number of LTE bands above and beyond what was given on the spec page. I’m willing to bet that’s both a function of Apple wanting to cover as many possible configurations with as few hardware models as possible, and partly because with the right set of filters and PAs it’s entirely possible, thanks to the fact that Qualcomm’s transceivers have ports that are created equal. That doesn’t explain why we don’t have WCDMA on AWS on A1428 considering its LTE support for band 4, but I’ll admit I don’t know every exacting detail there. Anyhow, I’m presenting the two tables I made for the previous piece with the bands each model covers.
It was touched on in the keynote, but the iPhone 5 likewise inherits the two-antenna cellular design that was touted with the 4S. This is the original mitigation for the iPhone 4 “deathgrip” issue, introduced somewhat quietly in the iPhone 4 (CDMA) and carried over to the 4S with one additional improvement: a double pole, double throw switch which allowed the phone to change which antenna was used for transmit as well, completely quashing any remaining unwarranted attenuation. While receive diversity was a great extra on the 4S that drastically improved cellular performance at cell edges, in LTE two-antenna receive diversity is mandatory, making the base LTE antenna configuration a two-antenna setup (two Rx, one shared for Tx). Thankfully, Apple already had that antenna architecture worked out with the 4S, and carried it over to the iPhone 5.
The two iPhone models have slightly different antenna gains
Apple mentioned that it actually improved this even further, and after a lot of discussions with the right people and digging, I’ve learned that the iPhone 5 is actually a 3 Rx, 1 Tx design. There are two antennas, but three Rx paths, required both for when the iPhone 5 is in an LTE MIMO or combining diversity mode and for listening to the CDMA 1x paging channel for an incoming call. This is the interesting edge case that needed to be tackled in a design without two transmit chains, for CDMA and LTE networks like Verizon and Sprint.
The other repercussion, of course, is that without that second transmit chain there’s no simultaneous voice and data on CDMA2000 and LTE networks that aren’t running VoLTE. VoLTE is absolutely the way of the future, and Verizon has repeatedly stated a 2013 target for it. The big question is whether the iPhone 5 will be updated at some point to support VoLTE, which it doesn’t today. The simplest way to state things involves a bit of speculation: while it’s entirely possible to update the platform to do it, there is a fair amount of overhead required (another trip through the FCC, more carrier testing, and a reworked software stack), but MDM9x15 supports it. It definitely isn’t impossible, but at the same time it’s always unwise to buy a piece of hardware on the unmade promise of some future feature (both Apple and Verizon won’t comment on any VoLTE updates for the iPhone 5).
The part I didn’t address in my VoLTE piece was the 2.6 GHz and TD-SCDMA China situation. This is partly speculation but I still suspect we will see at least one more hardware model surface. Already we’ve seen rumors of an A1442 for China, and clearly TD-SCDMA support has to be in the cards at some point.
The second part of the situation is the transceiver. MDM9x15’s recommended configuration, from what I can tell, is with WTR1605, the 28nm flagship replacement for the 65nm RTR8600, which is what’s inside the iPhone 5 as it exists today. So much of Apple’s component choice is driven by sheer volume, and I suspect that both design cycle and availability concerns forced Apple to use RTR8600 instead of WTR1605, which is only just starting to show up in other MSM8960 and MDM9x15 based devices. The difference is in the number of “ports” (paired up and down) supported by each. RTR8600 has 5 total: 2 below 1 GHz and 3 above 1 GHz. WTR1605 moves to a different RF lithography (28nm) and adds two more ports: 3 below 1 GHz, 3 above 1 GHz, and 1 very high frequency port around 2.5 or 2.6 GHz. Supporting 2.6 GHz LTE would no doubt require that extra port, and I suspect the phones we’re seeing advertising 2.6 GHz LTE support today include that transceiver.
Implementation and Testing
At present the iPhone 5 gracefully does the hard handover from LTE to CDMA 1x for calls, and then quickly hands back up to LTE on the Verizon handset I tested. It happens extremely quickly and I’ve yet to see it glitch out or refuse to hand back up. In addition testing verifies that there’s no simultaneous voice and data for LTE or EVDO, as expected.
Verizon iPhone 5 showing no simultaneous call and data
Outside of the glitch with errant LTE data use while connected to a WiFi network, which was patched with the 13.1 carrier bundle, I haven’t seen any unexpected behavior on the Verizon iPhone 5.
On WCDMA/GSM and LTE carriers, the iPhone 5 implements circuit-switched fallback (CS-FB). Quite literally, the phone hands down from 4G LTE to 3G WCDMA for the call (where voice and data are already multiplexed) and then back up to LTE when the call is over. In practice, like all the LTE handsets I’ve seen implementing CS-FB, there can be a wait on the order of minutes before the handset hands back up from WCDMA to LTE, depending on signal and the network. There’s nothing one can really do to expedite this process but toggle airplane mode or LTE in settings and hope for the best.
Thankfully, Apple included an LTE toggle with iOS 6 on the iPhone 5, which I saw on both Verizon and AT&T. I managed to unlock my personal AT&T-provisioned iPhone 5 and see a 3G toggle with a T-Mobile SIM inserted as well, and I briefly tested the iPhone 5 on T-Mobile UMTS1900 in my market, which worked perfectly.
iOS 6 on the iPhone 5 also includes the familiar FieldTest.app, which can be accessed using the ever familiar dialer code (*3001#12345#*). Oddly enough, FieldTest.app is clearly updated for the iPhone 5 but includes letterboxing. I’m overjoyed that Apple didn’t remove it like it inexplicably did with the iPhone 4. In addition to the same EVDO and UMTS engineering menus that I saw on the iPhone 4S, the iPhone 5 adds what is undeniably the best set of LTE field test informatics on any handset right now. Under the appropriate menus are the LTE channel bandwidth, band number, RSRP, and RSRQ. RSRQ is the Reference Signal Received Quality (dB), and RSRP is the Reference Signal Received Power (dBm). RSRQ ranges from -3 dB in excellent conditions to around -20 dB in poor conditions, while RSRP will generally range from -75 dBm in excellent conditions to -120 dBm in poor conditions. If you switch your signal indicator to numerics in FieldTest.app, the value that gets reported on LTE is RSRP, not RSSI.
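For readers who want to interpret those readouts, here’s a small sketch; the bucket boundaries are my own interpolation between the endpoints above, not an Apple or 3GPP definition:

```python
# A small helper mapping FieldTest.app's LTE readouts onto the rough
# quality ranges given above. The intermediate bucket boundaries are my
# own interpolation between the "excellent" and "poor" endpoints.
def classify_rsrp(rsrp_dbm):
    if rsrp_dbm >= -85:  return "excellent"
    if rsrp_dbm >= -95:  return "good"
    if rsrp_dbm >= -110: return "fair"
    return "poor"

def classify_rsrq(rsrq_db):
    if rsrq_db >= -6:  return "excellent"
    if rsrq_db >= -10: return "good"
    if rsrq_db >= -15: return "fair"
    return "poor"

print(classify_rsrp(-78), classify_rsrq(-4))    # excellent excellent
print(classify_rsrp(-118), classify_rsrq(-19))  # poor poor
```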
Performance Testing
So the iPhone 5 includes what boils down to the latest and greatest cellular connectivity available at the moment, and naturally we wanted to put this to the test. To do that we turned to our usual methodology, which consists of running lots of tests in various channel conditions using Ookla’s speedtest.net application, then batching up the data and making some pretty histograms from it. I had Anand test in his AT&T 5 MHz FDD LTE market and a few others during his travels; I tested in an AT&T 10 MHz FDD LTE market and my own AT&T market, which is still just running WCDMA; and finally I borrowed a Verizon iPhone 5 and ran as many tests as I could.
Before we talk about all the results, let’s touch on the maximum achievable LTE performance for a moment. LTE supports a variety of channel bandwidths, and total throughput both up and down goes as a function of the total number of resource blocks available for coding data, which in turn is a function of channel bandwidth. Stated another way: the wider the LTE channel, the more resource capacity. The limiting factor is how many resource blocks your modem can handle, and for the UE Category 3 MDM9615 that translates to a maximum downlink throughput of 100 Mbps on 20 MHz channels. In the USA, AT&T runs 10 MHz channels in some markets and 5 MHz in others, for a maximum downstream throughput of 73 Mbps and 37 Mbps respectively. Verizon runs a solid 10 MHz everywhere, thus 73 Mbps down at maximum. I say maximum achievable performance since these numbers already include overhead; I’ve seen some readers hitting close to them.
Note 20 MHz speeds are shown for Category 4 UEs
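If you’re curious where those ceilings come from, here’s a rough sketch; the resource block counts come from the LTE spec, while the ~25 percent overhead fraction is my own simplifying assumption:

```python
# Back-of-the-envelope LTE downlink ceiling per channel bandwidth.
# Resource block counts are from the LTE spec; the overhead fraction
# for control channels and reference signals is a rough assumption,
# and the Category 3 cap is the ~100 Mbps limit mentioned above.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def dl_ceiling_mbps(bw_mhz, overhead=0.25, cat3_cap=100.0):
    rb = RESOURCE_BLOCKS[bw_mhz]
    # 12 subcarriers x 14 OFDM symbols per ms, 64QAM (6 bits), 2x2 MIMO
    raw_mbps = rb * 12 * 14 * 6 * 2 / 1000
    return min(raw_mbps * (1 - overhead), cat3_cap)

for bw in (5, 10, 20):
    print(f"{bw} MHz: ~{dl_ceiling_mbps(bw):.0f} Mbps")
# 5 MHz: ~38 Mbps, 10 MHz: ~76 Mbps, 20 MHz: ~100 Mbps
# close to the 37 / 73 / 100 Mbps figures quoted above
```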
The iPhone 5 also supports DC-HSPA+ as I touched on earlier, although only T-Mobile is running it in the US, so there’s no way for us to test that at this point. AT&T has no plans to run DC-HSPA+ at all, and in my market only runs up to 16QAM HSDPA 14.4 Mbps down.
First off, Anand’s 5 MHz FDD LTE results get impressively close on the downstream to the maximum realizable throughput of 37 Mbps, at 32.77 Mbps. Upstream also comes pretty close to the maximum of 18 Mbps, at 14.6 Mbps. In my 10 MHz testing in Phoenix I tried but couldn’t get as close to 73 Mbps as I would’ve liked; nevertheless, an average of 18.41 Mbps is nothing to sneeze at. I’ve been very impressed with WCDMA throughput on the iPhone 5 as well, which regularly gets me results just above 12 Mbps on HSDPA 14.4 in my area; these are numbers I couldn’t see on the 4S even with my APN configured for the 4G Unlimited plan. Verizon LTE is also still speedy in my home market where I tested, though I would’ve enjoyed the opportunity to run even more data than the 66 tests I have for Verizon 4G LTE.
At present these speeds should just give an idea of LTE throughput and how huge a leap this is over the iPhone 4S. For CDMA subscribers, especially on Verizon, getting off of EVDO and onto LTE will be a night and day difference in performance, and it’s really that changeover that will be the most dramatic.
GNSS: Subtle Improvements
Section by Brian Klug
Like the iPhone 4S and the iPhone 4 CDMA before it, Apple has gone with a GNSS (Global Navigation Satellite System) solution leveraging both GPS and Russian GLONASS, which lives entirely on the Qualcomm baseband. In the case of the iPhone 4S and 4 CDMA, that was onboard MDM6610 and MDM6600 respectively, both of which implemented Qualcomm’s gpsOneGen 8 with GLONASS tier. Going to on-baseband GNSS is really the way of the future, and partially the reason why so many of the WLAN, BT, and FM combos don’t include any GNSS themselves (those partners know it as well). In this scheme, GNSS simply uses a dedicated port on the transceiver for downconversion and additional filtering (on RTR8600), with processing on the baseband. The advantage of doing it all here is that it often eliminates the need for another dedicated antenna for GNSS, and all of the assist and seed information traditionally needed to speed up getting a GPS fix already exists basically for free on the baseband. We’re talking about a basic location seed and precision clock data, in addition to ephemeris. With all of this already on the baseband, every GPS start is effectively a hot start.
There was a considerable bump in both tracking accuracy and time to an assisted GPS fix from the iPhone 4 which used a monolithic GPS receiver to the 4 CDMA and 4S MDM66x0 solution. I made a video last time showing just how dramatic that difference is even in filtered applications like Maps.app. GLONASS isn’t used all the time, but rather when GPS SNR is either low or the accuracy of the resulting fix is poor, or during initial lock.
With MDM9615 now being the baseband inside the iPhone 5, not a whole lot changes when it comes to GNSS. MDM9615 implements gpsOneGen 8A instead of just 8, and I dug around to figure out what has changed in this version. In version 8A, Qualcomm has lowered power consumption and improved LTE coexistence with GPS and GLONASS, but otherwise functionality remains the same. MDM9x25 will bring about gpsOneGen 8B with GLONASS, but there aren’t any details about what changes in that particular bump.
I spent a lot of time playing with the iPhone 5 GNSS to make sure there aren’t any issues, and although iOS doesn’t expose direct NMEA data, things look to be implemented perfectly. Getting good location data is now even more important given Apple’s first party turn by turn maps solution. Thankfully fix times are fast, and getting a good fix even indoors with just a roof between you and clear sky is still totally possible.
WiFi Now 2.4 and 5 GHz with 40 MHz Channels
Section by Brian Klug
WiFi connectivity on mobile devices is something that has steadily moved forward, along with continual iterative inclusion of the latest Bluetooth standards for pairing with accessories. For a while now, we’ve seen more and more smartphones include 5 GHz connectivity alongside 2.4 GHz. Apple famously started the 5 GHz mobile device push with the iPad 1, but has taken its time bringing dual band WiFi to the iPhone while numerous other smartphones have included it. Thankfully, the wait is over and the iPhone 5 now includes single stream 2.4 and 5 GHz WiFi support. On 2.4 GHz, Apple continues to only allow 20 MHz channels to improve Bluetooth coexistence, but has this time enabled short guard interval rates for a PHY rate of up to 72 Mbps. On the 5 GHz side, the iPhone 5 can use up to 40 MHz channels for a PHY rate of 150 Mbps. We will touch on real-world performance testing in a minute.
150 Mbps rate showing for 802.11n on 5 GHz
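Those two PHY rates fall straight out of 802.11n’s single-stream MCS 7 arithmetic, which is easy to verify:

```python
# How the 72 and 150 Mbps PHY rates fall out of 802.11n single-stream
# MCS 7 (64QAM with rate-5/6 coding): bits carried per OFDM symbol
# divided by symbol duration including the guard interval.
def phy_rate_mbps(data_subcarriers, guard_interval_us):
    bits_per_symbol = data_subcarriers * 6 * (5 / 6)  # 64QAM, 5/6 code
    symbol_time_us = 3.2 + guard_interval_us          # OFDM symbol + GI
    return bits_per_symbol / symbol_time_us

print(f"20 MHz, short GI: {phy_rate_mbps(52, 0.4):.1f} Mbps")   # 72.2
print(f"40 MHz, short GI: {phy_rate_mbps(108, 0.4):.1f} Mbps")  # 150.0
print(f"20 MHz, long GI:  {phy_rate_mbps(52, 0.8):.1f} Mbps")   # 65.0
```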
As we originally predicted, this connectivity comes courtesy of Broadcom’s BCM4334 802.11a/b/g/n, Bluetooth 4.0 + HS, FM radio combo chip which is built on the 40nm RF CMOS process. There are different ways you can buy a BCM4334, and for smartphones one of the most common is a ceramic package with the RF front end, all the filters, all the power amplifiers, and so forth in one ready-to-use package, which is what we see with the iPhone 5.
Apple iPhone - WiFi Trends
Model | Release Year | WiFi + BT Support | WiFi Silicon | Antenna Gain
iPhone | 2007 | 802.11 b/g, BT 2.0+EDR | Marvell W8686, CSR BlueCore | -
iPhone 3G | 2008 | 802.11 b/g, BT 2.0+EDR | Marvell W8686, CSR BlueCore | -
iPhone 3GS | 2009 | 802.11 b/g, BT 2.1+EDR | Broadcom BCM4325 | -
iPhone 4 | 2010 | 802.11 b/g/n (2.4GHz), BT 2.1+EDR | Broadcom BCM4329 | -1.89 dBi
iPhone 4S | 2011 | 802.11 b/g/n (2.4GHz), BT 4.0+EDR | Broadcom BCM4330 | -1.5 dBi
iPhone 5 | 2012 | 802.11 b/g/n (2.4+5 GHz), BT 4.0+LE | Broadcom BCM4334 | 2.4 GHz: -1.4 dBi; 5 GHz: 0.14 to -2.85 dBi
I’ve written before about the BCM4334 versus the 65nm BCM4330 which came before it and was in the iPhone 4S and numerous other devices. For a while now Apple has used Broadcom combos exclusively for iPhones and iPads, so BCM4334 isn’t a big surprise at all. The new module again offers a significant reduction in power consumption over the previous generation, all while making dual-band compatibility a baseline feature. We’ve already seen BCM4334 in a host of other smartphones as well. A lot of people had asked about BCM4335 and 802.11ac support, but it’s simply too soon for that part to have made it into this iPhone.
Adding 5 GHz WiFi support might sound like a minor improvement to most people; however, its inclusion dramatically improves the reliability of WiFi in challenging environments where 2.4 GHz is either completely overloaded or full of other interferers. There have been many times at conferences and in crowded urban locales where I’ve seen 2.4 GHz congested to the point of being unusable, and that will only continue getting worse. The far greater number of non-overlapping channels on 5 GHz, and the propagation characteristics of that band, mean on average less interference, at least for the time being.
As expected, the WiFi Settings pane makes no mention of what channel the SSID you’re about to connect to is on. This follows Apple’s minimalist configuration modus operandi that has always existed for iOS: there’s no three-option band preference toggle to be found like the one I’m used to seeing in Android for selecting Automatic, 2.4 GHz Only, or 5 GHz Only. Apple’s ideal WiFi use case is, unsurprisingly, exactly what the Airport base stations guide you into during standard setup: a single SSID for both the 2.4 and 5 GHz networks. This way the client WiFi device uses its own handover thresholds to decide which one is best. If you’re running a dual band access point and intend to use an iPhone 5 with it, this is the band plan Apple is not-so-subtly nudging you towards for the best user experience.
Nailing those thresholds is a hugely important implementation detail, one that I’ve seen many smartphones get wrong. Set incorrectly, the client WiFi device will endlessly chatter between 2.4 and 5 GHz at some places in the coverage profile, resulting in an extremely frustrating experience and a lack of connectivity. Thankfully, Apple has implemented this threshold very well, based on lots of prior experience with 2.4 and 5 GHz WiFi in the iPad 1, 2, and 3. There’s enough hysteresis that the iPhone 5 isn’t constantly chattering back and forth, and in my testing the handover point, as you move from near the AP on 5 GHz to spots far away where 2.4 GHz gets you better propagation, is virtually impossible to detect.
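To illustrate why hysteresis matters here, consider this toy sketch; the thresholds and the size of the gap between them are illustrative values I picked, not anything Apple has published:

```python
# A toy sketch of band steering with hysteresis. The point is that the
# switch-to-5GHz and switch-away levels must differ, so a client whose
# signal hovers near a single threshold doesn't flap between bands.
class BandSelector:
    JOIN_5GHZ_DBM = -68    # hand up to 5 GHz only above this
    LEAVE_5GHZ_DBM = -76   # hand down to 2.4 GHz only below this

    def __init__(self):
        self.band = "2.4 GHz"

    def update(self, rssi_5ghz_dbm):
        if self.band == "2.4 GHz" and rssi_5ghz_dbm > self.JOIN_5GHZ_DBM:
            self.band = "5 GHz"
        elif self.band == "5 GHz" and rssi_5ghz_dbm < self.LEAVE_5GHZ_DBM:
            self.band = "2.4 GHz"
        return self.band

sel = BandSelector()
# a signal hovering around -72 dBm sits inside the hysteresis window,
# so the client stays put instead of chattering between bands
for rssi in (-60, -72, -71, -73, -80, -72):
    print(rssi, "->", sel.update(rssi))
```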
iPhone 5's WiFi+BT antenna, encircled in red
Before we finally get to throughput testing, it’s worth noting the evolution in both the combo solution and the antenna design and gain that the iPhone has gone through over the last three generations. The iPhone 4 used the leftmost external notch antenna for WiFi, Bluetooth, and GPS. Famously, this wasn’t an ideal design, as capacitive loading detuned the whole thing for both cellular and WiFi. Thus, with the Verizon iPhone 4 and the 4S this changed to an internal planar inverted F antenna (PIFA), which is extremely common in the smartphone space. The iPhone 5 continues with an internal PIFA but redesigns it once more. Apple is required to report gain and output power as part of its FCC filing, and we can see that 2.4 GHz gain is slightly improved on the iPhone 5, while gain on the 5 GHz band varies wildly across the various sub-bands (which have different regulatory constraints).
In my not especially scientific testing watching the numeric signal strength reported in the place of the WiFi bar indicator, I saw the iPhone 5 routinely report the same number in the same place alongside the 4S when on 2.4 GHz. This isn’t surprising considering how close gains are between the two. Both also finally dropped my WiFi network at almost the same spot walking away from my dwelling.
When it comes to actual throughput, I tested WiFi using an iOS port of iPerf, measuring throughput from my server to the iPhone. I tested the iPhone 4, 4S, and 5 in this manner at five locations in my dwelling, or six if you count the return to my office, where 5 GHz is strongest.
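For the curious, the measurement itself is conceptually simple; here’s a minimal Python stand-in for what the iPerf client does (this is not the actual tool used for testing, and the address and port are placeholders):

```python
import socket
import time

# Minimal stand-in for an iPerf-style measurement: pull bytes over TCP
# for a fixed window and divide by elapsed time. Point it at any host
# that streams bytes on the given port; the defaults are placeholders.
def measure_mbps(host, port=5001, seconds=10, chunk=64 * 1024):
    sock = socket.create_connection((host, port))
    total, start = 0, time.monotonic()
    while time.monotonic() - start < seconds:
        data = sock.recv(chunk)
        if not data:
            break
        total += len(data)
    elapsed = time.monotonic() - start
    sock.close()
    return total * 8 / elapsed / 1e6   # bits per second -> Mbps

# Example (hypothetical server address):
# print(f"{measure_mbps('192.168.1.10'):.1f} Mbps")
```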
Starting in my office, where I have my Airport Extreme (5th generation) set up, throughput is 95.7 Mbps on the new iPhone 5. This is on a 40 MHz channel on 5 GHz, whereas the other iPhones are obviously on 2.4 GHz 20 MHz channels and both show almost the same throughput. The second location is my smaller hallway slash connecting room, where the iPhone 5 already hands over from 5 GHz to 2.4 GHz; from here on out, results are on 2.4 GHz. As we move away (living room couch, bedroom, and in the kitchen on my lightbox) throughput decreases, but the iPhone 5 still improves on the previous generation thanks to improvements made each generation to the entire stack. I immediately ran a test upon returning to the office to illustrate the difference in adaptation time for each iPhone generation as they change MCS (Modulation and Coding Scheme) for 802.11n. The iPhone 5 takes quite a while (on the order of minutes) to hand back up to 5 GHz upon returning to a region with a strong 5 GHz signal.
If we look at how the iPhone 5 fares in the best case testing graph I run for all smartphones that cross my desk (using iPerf), we can see that the iPhone 5 does pretty favorably. It still can’t unseat the MSM8960 based devices which use the onboard WLAN baseband in conjunction with WCN3660 (EVO 4G LTE and One X AT&T), but it does beat other BCM4334 devices like the two Galaxy S IIIs.
The story here is almost entirely one of which interface is used. Obviously MSM8960 has an advantage in being entirely on-chip. Meanwhile the iPhone 5 connects BCM4334 over HSIC, which is analogous to USB 2.0, and the other BCM4334 devices use SDIO from what I’ve learned. This is primarily why we see such a strong clustering of results around certain values.
Overall the iPhone 5 offers an even bigger improvement over its predecessor than the 4S did when it comes to the WiFi and Bluetooth connectivity side. Inclusion of 5 GHz WiFi support has essentially become the new baseline for this current crop of smartphones, and I’m glad to see the iPhone include it.
Speakerphone and Noise Suppression
Section by Brian Klug
When Apple changed up the dock connector with the smaller Lightning port, it afforded a chance to also redesign the speakerphone at the bottom of the iPhone 5. Between the iPhone 4 and 4S we saw minimal changes, and performance was acceptable if a bit on the quiet side. With the iPhone 5 Apple advertised a big gain in audio quality with both changes to the speaker and a 5 magnet transducer design.
We normally test speakerphones by calling the local ASOS test number at max volume in front of an Extech sound data logger sampling continually and then averaging over one readout of the weather report. This gives a good feel for the overall loudness of the speakerphone on a call, but doesn’t tell us anything about quality unless you’re standing in front of it. In this case, the iPhone 5 is quite loud and comes in near the top of our scale at 81.8 dBA. It isn’t chart topping but a definite improvement over the relatively quiet 4/4S speakerphone design.
I decided to do something different though after getting a lot of questions and emails asking for a better quality comparison between the iPhone 5 speaker and its predecessor. People are interested in using their smartphones to listen to music when speakers or a dock aren’t available, and I thought this worth investigating. To get to the bottom of this I used my Blue Yeti USB microphone and arm which I use for podcasting to record the speakerphone output at 90 percent volume on the iPhone 4S and iPhone 5.
I started with the test call, which sounds surprisingly different between the iPhone 4S and iPhone 5. The 4S saturates and has obvious distortion even at 90 percent volume, which is quite noticeable in the recording. It then occurred to me that because I’m calling over AT&T (narrowband AMR) to a PSTN (public switched telephone network) endpoint, the call is limited to 4 kHz thanks to filters along the route; anything above 4 kHz is therefore undesirable. I took a spectral analysis (which shows power spectral density), and it instantly became obvious just how much energy the 4S puts above that 4 kHz limit from overmodulation or saturation/clipping of its speakerphone design. Meanwhile the iPhone 5 has much less energy above the 4 kHz maximum for a PSTN call.
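If you want to run the same check on your own recordings, a sketch of the analysis looks something like this (the filename is a placeholder for your own capture):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# A sketch of the spectral check described above: estimate the power
# spectral density of a speakerphone recording and compute what fraction
# of the energy lands above the ~4 kHz ceiling of a narrowband PSTN call.
fs, audio = wavfile.read("speakerphone.wav")   # placeholder filename
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # fold stereo down to mono
audio = audio.astype(np.float64)

freqs, psd = welch(audio, fs=fs, nperseg=4096)
above = psd[freqs > 4000].sum() / psd.sum()
print(f"fraction of power above 4 kHz: {above:.1%}")
# a clipping speaker (like the 4S at high volume) pushes harmonic
# energy well above 4 kHz; a cleaner one keeps this fraction small
```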
I did the same for two songs recorded start to finish, and the differences are quite noticeable even in my rather quick and dirty recording plus spectrogram comparison. I cropped the two recordings of the songs quite a bit and stuck them on SoundCloud.
My overall impression of the iPhone 5 speakerphone is that it has much less of a tendency to clip and saturate than the 4S, all while being noticeably louder. It’s still a smartphone speakerphone, but to my ears the difference is pretty dramatic.
Noise Suppression
We’ve known for a while now that Apple changed up the noise cancellation slot in the iPhone 5. The story goes something like this: Audience was a discrete chip on the iPhone 4, an IP block on the A5 SoC for the iPhone 4S, and also an IP block onboard the A6 SoC for the iPhone 5. The difference is that with the iPhone 5 the Audience block isn’t enabled, and the company knows this because final characterization against the final industrial design and microphones was never performed. Audience claims it met all of its deliverables and targets with the new IP block it built and gave Apple for inclusion on the A6. There’s obviously a lot of speculation about exactly why Apple chose to go with its own solution instead of Audience’s.
We’ve seen Audience earSmart noise suppression processors in a number of smartphones to date, and carriers often have their own noise suppression specification that devices must meet to get certified. In addition, with wideband voice (AMR-WB in the case of the iPhone 5) noise suppression is even more important, to say nothing of how big a topic this is for voice recognition based features like Siri. While we’re on the subject of AMR-WB, this feature is enabled on the iPhone 5, but I’m unable to test it on any of the carriers in the USA at the moment.
Anyhow, the iPhone 5 uses an Apple-specific solution with three total microphones around the device: a primary microphone at the bottom of the device where you’d expect it to reside, a secondary microphone between the LED flash and the camera module, and a third right next to the earpiece for use with earpiece active noise cancellation. Apple’s solution is a beamformer (Apple said so in the keynote) and thus, from what I can tell, works entirely in the time domain. I’ve spent a lot of time playing around with the iPhone 5 trying to characterize its noise rejection, and the system appears to have a number of modes it will kick into after a 5-10 second adaptation time.
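For context, here’s what the textbook time-domain approach looks like: a minimal two-microphone delay-and-sum beamformer. This is a generic illustration, not Apple’s implementation; the mic spacing and steering angle are made-up values:

```python
import numpy as np

# A minimal two-microphone delay-and-sum beamformer, the textbook
# time-domain technique. Sound from the steered direction adds
# coherently across mics while noise from other directions adds
# incoherently and is attenuated.
def delay_and_sum(mic_a, mic_b, fs, spacing_m=0.1, angle_deg=0.0):
    """Steer a two-mic array toward angle_deg (0 = broadside)."""
    c = 343.0                                  # speed of sound, m/s
    delay_s = spacing_m * np.sin(np.radians(angle_deg)) / c
    delay_samples = delay_s * fs
    n = np.arange(len(mic_b))
    # fractional delay of mic B via linear interpolation, then average
    shifted_b = np.interp(n - delay_samples, n, mic_b)
    return 0.5 * (mic_a + shifted_b)

fs = 48000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 300 * t)            # stand-in for the talker
rng = np.random.default_rng(1)
mic_a = voice + rng.normal(0, 1, fs)           # same voice on both mics,
mic_b = voice + rng.normal(0, 1, fs)           # independent noise on each
out = delay_and_sum(mic_a, mic_b, fs)
print(f"signal+noise variance: {mic_a.var():.2f} -> {out.var():.2f}")
# uncorrelated noise power is roughly halved while the voice is preserved
```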
Earpiece noise rejection is something I talked about on the podcast right after the iPhone 5 hands on, and is one of those modes that the iPhone 5 will kick into if you’re in a loud environment. It isn’t active all the time, but when it does transition into working it provides a noticeable suppression of ambient noise and the same kind of pressure I’ve felt with other active noise canceling closed ear headphones. The improvement isn’t overwhelming, but it’s a sensation and feature I haven’t experienced on any other smartphones to date.
The rest of the noise suppression story is really one of testing. To test, I ran our normal test suite, which consists of me ramping ambient music to 94 dBA while speaking into the device under test and recording the other side of the call. I’ve gotten hold of the industry standard babble and distractor tracks used by carriers for testing noise suppression and will move to those in the future; for now, though, the test track I play is something I understand pretty well.
The comparison here is between the iPhone 4S and iPhone 5, to see just how ambient noise suppression has changed with this change in solution. The difference in technique is pretty apparent, and again indicates to me that the iPhone 5 is working in the time domain; you can see visible cut-in and cut-out on the waveform. The Audience solution on the 4S develops some hiss at the absolute highest background noise volume, but it also does an excellent job suppressing noise. The Apple solution doesn’t have this hiss, but it passes both background noise and vocals from the music through the call at maximum volume. I hate to call this a regression, but the difference in technique means that there’s audible background noise that gets passed on while I’m talking on the iPhone 5. I think for normal callers the difference won’t be readily apparent, but under close inspection it is dramatic.
Final Words
With a device vendor that also happens to moonlight as an SoC vendor and software developer we really need three parts to this conclusion. I’ll begin with the silicon, as that’s ultimately what enables the overall experience.
A6 SoC Conclusions
With the iPhone 4S, Apple made a conscious decision to shift its annual smartphone release to the latter half of the year. At the same time, Qualcomm’s 28nm schedule put its new SoCs in phones in the earlier part of the year. All indications point to the first Cortex A15 based designs showing up similarly in the first half of 2013. All of this conspires to set the market up for some very interesting leapfrogging. Barring a major delay that impacts either Apple or its Android competitors, it looks like in the spring we’ll see Android devices leapfrog the iPhone, with Apple responding in kind in the fall. The staggered release cadence won’t continue forever, but at least for the next generation it appears that’s how it will play out.
I don’t know which refresh cycle makes the most sense (new architectures earlier in the year vs. later) from a business standpoint. I suspect there’s something to be said about hitting the holiday buying season but I’m not much of a financial analyst.
Apple’s decision to introduce a custom ARM core in the A6 SoC is quite unique. I would normally have expected Apple to go for a Cortex A15 based design in the next round; however, I’m no longer sure. Investing in your own CPU design isn’t usually something you do once and then drop the next generation. I would assume Apple has plans to continue evolving this architecture in a very power focused way. We’ve mentioned time and time again that ARM’s Cortex A15 was first conceived as an architecture for more power hungry systems (servers and other PC competitors). The Cortex A15 has since moved down the totem pole and will likely rely on smaller companion cores to keep power consumption in check when running lighter workloads. It wouldn’t be a stretch to envision a slightly larger (~20%?) Swift core next generation, built at Samsung or TSMC, adding some architectural features to drive performance up.
Swift appears to offer better power/performance efficiency than any other currently available ARM architecture. Its architecture, performance and power profile most closely resemble Qualcomm’s 28nm Krait/Snapdragon S4, although through hardware or software optimizations it appears to be able to come out slightly ahead. Apple complicates the gains in performance by increasing clock speed dramatically over the iPhone 4S, which makes the architecture look more revolutionary than evolutionary in nature. A good part of the 2x increase in CPU performance compared to the iPhone 4S boils down to clock speed, however there are significant improvements to the memory subsystem that really help things. ARM’s Cortex A15 will likely pull ahead of Swift in CPU performance, although it remains to be seen how it will compare in power and what Apple will respond with in late 2013.
Intel’s Atom core remains very competitive with the best of the ARM world. A single core Atom still ends up being the only CPU that can regularly outperform Apple’s Swift, however it does so while seemingly consuming more power. Whether the performance advantage can make up for the peak power deficit remains to be seen as we lack a good, high-end Atom based LTE smartphone to really test that theory against.
On the GPU side Apple continues to be at the forefront of innovation, however Qualcomm appears to have learned quickly as the Adreno 320 manages to give the PowerVR SGX 543MP3 a run for its money. I suspect next year we’ll see similarly competitive performance from Intel and NVIDIA as well (finally!). Having an aggressive player in the silicon space always makes for good competition once everyone adjusts their release cadence.
Overall the silicon story in the iPhone 5 is a very good one. Performance increases by the biggest margin since the move from the iPhone 3G to 3GS. If you are the type of user who can appreciate improved response time, the iPhone 5 definitely delivers.
The market is becoming increasingly competitive however. Just as Google responded aggressively to close the UI performance gap between Android and iOS, the SoC vendors appear to be doing the same with their silicon. Going forward it is going to become more difficult to maintain significant performance advantages similar to those Apple has enjoyed in previous generations. It’s really going to boil down to software and ecosystem in the not too distant future.
iPhone 5 Device Conclusions
As a device, the iPhone 5 is a solid evolution of its predecessors. The larger 4-inch screen doesn’t fundamentally change the device, but it definitely modernizes it. Going back to the old 3:2 aspect ratio iPhones feels extremely claustrophobic now. After using the iPhone 5 for weeks, picking up an older iPhone feels a lot like switching between the iPhone 4 and iPhone 3G did.
The new device feels appreciably thinner and lighter, although a tradeoff is the delicate feel of the anodized aluminum on the 5. Although you’d normally assume moving away from glass would make the device feel more rugged, the fear of scratching the very thin anodized coating quickly supplants any relief you might have had. The iPhone 5 lacks the heft that gave the 4/4S that jewel-like feel, but it does so without feeling overly cheap and ends up being an improvement in overall portability. Design is subjective, but I do believe that Apple has managed to move forward in this regard as well.
The iPhone 5 marks Apple’s first integration of LTE into a smartphone, and the process went relatively smoothly. Handovers between 3G and LTE are relatively seamless, although we did notice the occasional dropped hotspot connection when calls came in while we were tethered over WiFi and on LTE. For those of you who haven't yet enjoyed the world of LTE on a smartphone, LTE support delivers one of the bigger performance improvements you’ll see with the iPhone 5.
At a high level, the iPhone 5’s cameras appeared to be among the least changed elements of the new device; in practice, however, the improvements are significant. The front facing camera now delivers much higher quality photos, and the rear facing camera's low light performance is a major step forward compared to the iPhone 4S. Given the amount of smartphone photography that happens in poorly lit conditions, I suspect the improvements Apple made to the 5's rear facing camera will be quite noticeable in regular use.
Although the bulk of the discussion on display had to do with its taller aspect ratio and mildly increased resolution, it's really the move to in-cell touch (reducing reflections/glare) and the increase in quality that are most revolutionary here. The iPhone 5's display now offers full sRGB coverage and much better accuracy than its predecessor. Although a subtle improvement in day to day use, it's good to see Apple continuing to push the envelope here.
Battery life, as we mentioned earlier, is really a mixed bag. Depending on your workload, you'll either see improved battery life thanks to the faster platform racing to sleep more quickly, or a regression if you put the new silicon's speed to heavier, sustained use. The days of relatively static battery life progressions are behind us at this point; welcome to the world of increased dynamic range in power consumption. It's a tradeoff that unfortunately has to be made in pursuit of ever increasing performance. To Apple's credit, idle power characteristics remain good in spite of everything.
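To make the race-to-sleep argument concrete, here's a back-of-the-envelope sketch; the power figures below are hypothetical, chosen only to illustrate the shape of the tradeoff, not measured values. Over a fixed window of length T, the energy consumed is:

E = P_{active} \cdot t_{active} + P_{idle} \cdot (T - t_{active})

For a bursty task in a 10 second window, a slower SoC at 1.0 W for 8 s plus 0.05 W idle for the remaining 2 s burns 8.1 J, while a faster SoC at 1.6 W for 4 s plus 0.05 W idle for 6 s burns just 6.7 J: the faster chip wins despite its higher peak power. Keep both pegged for the full 10 s, however, and it's 16 J versus 10 J, which is exactly the kind of regression heavy users will see.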
The big question is of course whether or not you should upgrade to the iPhone 5. The move to LTE alone is a big enough reason to upgrade for any heavy user of mobile data. The larger/improved display, much faster SoC and 5GHz WiFi support are all icing on the cake - and this is one well iced cake. If you have a subsidized upgrade available via your carrier, I'd say the upgrade is a no-brainer. If, however, you've got to pay full price, you have to take into consideration what's on the horizon. A faster version will likely hit in late 2013, and we'll potentially see a move to 20nm silicon in late 2014 (paving the way for an improvement in power profile). If you're on a two year upgrade cycle, buying the 5 now and upgrading again in 2014 wouldn't be a bad idea.
iOS 6 Conclusions
Finally, we have to conclude our look at the iPhone 5 with a discussion of its operating system. As I’ve mentioned in articles/videos past, Apple and Microsoft find themselves in somewhat similar positions when it comes to the role their smartphones play in the market. Both companies run quite profitable businesses selling Macs and PCs, respectively, and as a result their smartphones can act as companion devices rather than primary computing devices. Unless there’s a dramatic change in Chrome OS, Google has to rely on Android smartphones and tablets to get its share of computing dollars (and search results). Apple and Microsoft differ a bit when it comes to tablets, as Apple uses the iPad to compete with low-cost PCs.
I believe understanding these motives helps explain the software decisions that get made. Because the iPad has to serve as a cheap PC alternative, iOS ends up inheriting some productivity-focused enhancements - just not as many as a full blown OS would offer.
As it applies to the iPhone, iOS serves as a companion platform, a true pocket digital assistant. Its goal is to be an appliance, not a full blown computing device, and as such it looks and feels very different than Android.
With the latest update to iOS, Apple continues to evolve the platform. Similar to the iterative improvement in iPhone hardware, Apple’s yearly iOS cadence is largely responsible for the platform’s solid footing in the market. Although a clear regression in iOS Maps quality plagued this latest update, the platform remains a good one for those it resonates with.
iOS remains very appliance-like in its behavior. Despite the added complexity that iOS has inherited over the years, Apple has been able to retain much of the simplicity that drew users to the platform in the early days. The iPhone/iOS platform is unique in that, unlike OS X, it started as a mainstream consumer platform rather than a computing platform that had to try to become more consumer friendly over time.
If you’re after something fundamentally different, you’ll have to look elsewhere as I don’t believe iOS is destined for a dramatic departure from the current model anytime soon. Thankfully there are good alternatives from both Google and Microsoft if iOS isn’t what you’re looking for. Pressure from Apple on both the hardware and software side will help ensure that those platforms continue to progress as well - competition is alive and well in the mobile industry, that’s for sure.
For existing iOS fans and/or iPhone users, the latest iteration of iOS on the iPhone 5 (perhaps with the exception of Maps) does nothing to alienate you. Apple has done a good job of dutifully maintaining the platform and remaining true to its roots. Given that the iPhone is such a significant source of Apple’s revenue, you would expect no less.