Section by Ryan Smith

M1 GPU Performance: Integrated King, Discrete Rival

While the bulk of the focus from the switch to Apple’s chips is on the CPU cores, and for good reason – changing the underlying CPU architecture of your computers is no trivial matter – the GPU aspects of the M1 are not to be ignored. Like their CPU cores, Apple has been developing their own GPU technology for a number of years now, and with the shift to Apple Silicon, those GPU designs are coming to the Mac for the very first time. And from a performance standpoint, it’s arguably an even bigger deal than Apple’s CPU.

Apple, of course, has long held a reputation for demanding better GPU performance than the average PC OEM. Whereas many of Intel’s partners were happy to ship systems with Intel UHD graphics and other baseline solutions even in some of their 15-inch laptops, Apple opted to ship a discrete GPU in their 15-inch MacBook Pro. And when they couldn’t fit a dGPU in the 13-inch model, they instead used Intel’s premium Iris GPU configurations with larger GPUs and an on-chip eDRAM cache, becoming one of the only regular customers for those more powerful chips.

So it’s been clear for some time now that Apple has long wanted better GPU performance than what Intel offers by default. By switching to their own silicon, Apple finally gets to put their money where their mouth is, so to speak, by building a laptop SoC with all the GPU performance they’ve ever wanted.

Meanwhile, unlike the CPU side of this transition to Apple Silicon, the higher-level nature of graphics programming means that Apple isn’t nearly as reliant on developers to immediately prepare universal applications to take advantage of Apple’s GPU. To be sure, native CPU code is still going to produce better results, since a workload that’s purely GPU-limited is almost unheard of, but the fact that existing Metal (and even OpenGL) code can be run on top of Apple’s GPU today means that it immediately benefits all games and other GPU-bound workloads.

As for the M1 SoC’s GPU, unsurprisingly it looks a lot like the GPU from the A14. Apple will have needed to tweak their design a bit to account for Mac sensibilities (e.g. various GPU texture and surface formats), but by and large the difference is abstracted away at the API level. Overall, with M1 being A14-but-bigger, Apple has scaled up their 4-core GPU design from that SoC to 8 cores for the M1. Unfortunately we have even less insight into GPU clockspeeds than we do CPU clockspeeds, so it’s not clear if Apple has increased those at all; but I would be a bit surprised if the GPU clocks haven’t at least gone up a small amount. A14’s 4-core GPU design was already quite potent by smartphone standards, so an 8-core design is even more so. M1’s integrated GPU isn’t just designed to outpace AMD and Intel’s integrated GPUs, but it’s designed to chase after discrete GPUs as well.

An Educated Guess At Apple GPU Specifications
ALUs                 1024 (128 EUs / 8 Cores)
Texture Units        64
ROPs                 32
Peak Clock           1278MHz
Throughput (FP32)    2.6 TFLOPS
Memory Clock         LPDDR4X-4266
Memory Bus Width     128-bit
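The FP32 throughput figure follows directly from the other two estimates in the table: each ALU can retire one fused multiply-add (two floating-point operations) per clock. As the table’s title says, these inputs are educated guesses rather than confirmed Apple specifications, so treat this as a sanity check, not a datasheet:

```python
# Back-of-the-envelope check of the estimated FP32 throughput.
# All inputs are the table's guesses, not confirmed Apple specs.
alus = 1024             # estimated ALU count (8 GPU cores)
peak_clock_hz = 1278e6  # estimated peak clock, 1278 MHz
flops_per_alu = 2       # one FMA = 2 floating-point ops per cycle

tflops = alus * flops_per_alu * peak_clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # -> 2.62 TFLOPS
```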

Finally, it should be noted that Apple is shipping two different GPU configurations for the M1. The Mac Mini and MacBook Pro get chips with all 8 GPU cores enabled. Meanwhile for the MacBook Air, it depends on the SKU: the entry-level model gets a 7-core configuration, while the higher-tier model gets 8 cores. This means the entry-level Air gets the weakest GPU on paper – trailing a full M1 by around 12% – but it will be interesting to see how the shut-off core influences thermal throttling on that passively-cooled laptop.
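The ~12% figure comes straight from the core counts, under the simplifying assumption that GPU performance scales linearly with enabled cores and that clocks are unchanged between the two SKUs:

```python
# Estimated on-paper deficit of the 7-core MacBook Air GPU versus the
# full 8-core M1, assuming performance scales linearly with core count.
full_cores, binned_cores = 8, 7
deficit = 1 - binned_cores / full_cores
print(f"{deficit:.1%}")  # -> 12.5%
```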

Kicking off our look at GPU performance, let’s start with GFXBench 5.0. This is one of our regular benchmarks for laptop reviews as well, so it gives us a good opportunity to compare the M1-based Mac Mini to a variety of other CPU/GPU combinations inside and outside the Mac ecosystem. Overall this isn’t an entirely fair test since the Mac Mini is a small desktop rather than a laptop, but as M1 is a laptop-focused chip, this at least gives us an idea of how M1 performs when it gets to put its best foot forward.

GFXBench 5.0 Aztec Ruins Normal 1080p Offscreen

GFXBench 5.0 Aztec Ruins High 1440p Offscreen

Overall, the M1’s GPU starts off very strong here. At both Normal and High settings it’s well ahead of any other integrated GPU, and even a discrete Radeon RX 560X. Only once we get to NVIDIA’s GTX 1650 and better does the M1 finally run out of gas.

The difference compared to the 2018 Intel Mac Mini is especially night-and-day. The Intel UHD Graphics (Gen 9.5) GPU in that system is vastly outclassed to the point of near-absurdity, with the M1 delivering over 6x its performance. And even other configurations such as the 13-inch MBP with Iris graphics, or a PC with a Ryzen 4700U (Vega 7 graphics), are all handily surpassed. In short, the M1 in the new Mac Mini is delivering discrete GPU-levels of performance.

As an aside, I also took the liberty of running the x86 version of the benchmark through Rosetta, in order to take a look at the performance penalty. In GFXBench Aztec Ruins, at least, there is none. GPU performance is all but identical with both the native binary and with binary translation.

Futuremark 3DMark Ice Storm Unlimited - Graphics

Taking one last quick look at the wider field with an utterly silly synthetic benchmark, we have 3DMark Ice Storm Unlimited. Thanks to the ability of Apple Silicon Macs to run iPhone/iPad applications, we’re able to run this benchmark on a Mac for the first time by using the iOS version. This is a very old benchmark, built for the OpenGL ES 2.0 era, but it’s interesting to see that the M1 fares even better here than in GFXBench. The Mac Mini performs just well enough to slide past a GTX 1650-equipped laptop here, and while this won’t be a regular occurrence, it goes to show just how potent the M1 can be.

BaseMark GPU 1.2.1 - Medium

BaseMark GPU 1.2.1 - High

Another GPU benchmark that’s been updated for the launch of Apple’s new Macs is BaseMark GPU. This isn’t a regular benchmark for us, so we don’t have scores for other, non-Mac laptops on hand, but it gives us another look at how M1 compares to other Mac GPU offerings. The 2020 Mac Mini still leaves the 2018 Intel-based Mac Mini in the dust, and for that matter it’s at least 50% faster than the 2017 MacBook Pro with a Radeon Pro 560 as well. Newer MacBook Pros will do better, of course, but keep in mind that this is an integrated GPU with the entire chip drawing less power than just a MacBook Pro’s CPU, never mind the discrete GPU.

Rise of the Tomb Raider - Value

Rise of the Tomb Raider - Enthusiast

Finally, putting theory to practice, we have Rise of the Tomb Raider. Released in 2016, this game has a proper Mac port and a built-in benchmark, allowing us to look at the M1 in a gaming scenario and compare it to some other Windows laptops. This game is admittedly slightly older, but its performance requirements are a good match for the kind of performance the M1 is designed to offer. Note as well that this is an x86 game – it hasn’t been ported over to Arm – so the CPU side of the game is running through Rosetta.

At our 768p Value settings, the Mac Mini is delivering well over 60fps here. Once again it’s vastly ahead of the 2018 Intel-based Mac Mini, as well as every other integrated GPU in this stack. Even the 15-inch MBP and its Radeon Pro 560 are still trailing the Mac Mini by over 25%, and it takes a Ryzen laptop with a Radeon 560X to finally pull even with the Mac Mini.

Meanwhile cranking things up to 1080p with Enthusiast settings finds that the M1-based Mac Mini is still delivering just shy of 40fps, and it’s now over 20% ahead of the aforementioned Ryzen + 560X system. This does leave the Mini well behind the GTX 1650 here – with Rosetta and general API inefficiencies likely playing a part – but it goes to show what it takes to beat Apple’s integrated GPU. At 39.6fps, the Mac Mini is quite playable at 1080p with good image quality settings, and it would be fairly easy to knock down either the resolution or image quality a bit to get that back above 60fps. All on an integrated GPU.

Update 11-17, 7pm: Since the publication of this article, we've been able to get access to the necessary tools to measure the power consumption of Apple's SoC at the package and core level. So I've gone back and captured power data for GFXBench Aztec Ruins at High, and Rise of the Tomb Raider at Enthusiast settings.

Power Consumption - Mac Mini 2020 (M1)
                 Rise of the Tomb Raider (Enthusiast)   GFXBench Aztec Ruins (High)
Package Power    16.5 Watts                             11.5 Watts
GPU Power        7 Watts                                10 Watts
CPU Power        7.5 Watts                              0.16 Watts
DRAM Power       1.5 Watts                              0.75 Watts

The two workloads are significantly different in what they're doing under the hood. Aztec is a synthetic test that's run offscreen in order to be as pure of a GPU test as possible. As a result it records the highest GPU power consumption – 10 Watts – but it also leaves the CPU cores virtually untouched (and for that matter other elements like the display controller). Meanwhile Rise of the Tomb Raider is a workload from an actual game, and we can see that it's giving the entire SoC a workout. GPU power consumption hovers around 7 Watts, and while CPU power consumption is much more variable, it tops out just a bit higher than that.
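As a sanity check, the individual rails in the table above roughly sum to the package figure; the small remainder is the rest of the SoC – fabric, memory controllers, display controller, and so on – which isn't broken out separately. A quick sketch using the measured values:

```python
# Component power vs. package power for the two workloads (Watts),
# values copied from the measurements above. The gap between the sum
# of the rails and the package figure is the rest of the SoC.
workloads = {
    "Rise of the Tomb Raider": {"gpu": 7.0, "cpu": 7.5, "dram": 1.5, "package": 16.5},
    "GFXBench Aztec (High)":   {"gpu": 10.0, "cpu": 0.16, "dram": 0.75, "package": 11.5},
}
for name, w in workloads.items():
    rails = w["gpu"] + w["cpu"] + w["dram"]
    print(f"{name}: rails {rails:.2f} W, package {w['package']} W, "
          f"rest of SoC ~{w['package'] - rails:.2f} W")
```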

But regardless of the benchmark used, the end result is the same: the M1 SoC is delivering all of this performance at ultrabook-levels of power consumption. Delivering low-end discrete GPU performance in 10 Watts (or less) is a significant part of why M1 is so potent: it means Apple is able to give their small devices far more GPU horsepower than they (or PC OEMs) otherwise could.

Ultimately, these benchmarks are very solid proof that the M1’s integrated GPU is going to live up to Apple’s reputation for high-performing GPUs. The first Apple-built GPU for a Mac is significantly faster than any integrated GPU we’ve been able to get our hands on, and will no doubt set a new high bar for GPU performance in a laptop. Based on Apple’s own die shots, it’s clear that they spent a sizable portion of the M1’s die on the GPU and associated hardware, and the payoff is a GPU that can rival even low-end discrete GPUs. Given that the M1 is merely the baseline of things to come – Apple will need even more powerful GPUs for high-end laptops and their remaining desktops – it’s going to be very interesting to see what Apple and its developer ecosystem can do when the baseline GPU performance for even the cheapest Mac is this high.

Comments

  • Spunjji - Tuesday, November 17, 2020 - link

    @halo37253 I suspect you're largely correct based on what we're seeing in the benchmarks here.

    Of course, the answer to why Apple would do it is clear: they love vertical integration. They'll eventually be able to translate this into power/performance advantages that will be difficult to assail with apps written specifically for their platform.
  • mdriftmeyer - Friday, November 20, 2020 - link

    Apple will have to modify their future M1s to accommodate PCIe because a large portion of the Audio Video Professional world needs it--in fact we all rely on DMA over PCIe for Thunderbolt to reduce latency. There's nothing like throwing away a $5k-$25k stack of Audio Interface, Mic Pres and more just because Apple wants to drop that, or simply dumping Apple and moving back to Windows and dealing with DLLs. I hate Windows but I sure as hell won't drop expensive gear tied with Dante Ethernet and TB3 interfacing with various Audio Interfaces and rack mount hardware because Apple thinks the Pro market only needed the Mac Pro one-off before dropping us off a cliff.

    No one in the world of Professional Music uses Logic Pro stock plugins, and the average track has anywhere between 80-200 channel strips to manage in one mix. If you think the M1 or its successors with this type of tightly joined unified memory system will satisfy these users, you're just not familiar with how many resources professional music or film production requires.

    Let's not even talk about 3D Modeling for F/X in Films or full blown PIXAR style film shorts, never mind full length motion pictures. Working in 8k and soon 16k film to have real-time scrubbing will demand new versions of the Mac Pro's Afterburner and upgraded Xeons [or if they were smart, Zens] but definitely not M series SoCs.
  • Spunjji - Monday, November 23, 2020 - link

    @mdriftmeyer - I don't see that any of the requirements you've mentioned here would preclude Apple producing an M1 successor that would be capable of fulfilling them. In particular you mentioned 8K video scrubbing, which the M1 can already do better than the average Xeon. I doubt they'd throw away the audio market entirely over this switch - I guess we'll just have to wait and see what the next chips look like.
  • varase - Wednesday, November 25, 2020 - link

    Most people are looking at these first Apple Silicon Macs wrong - these aren't Apple's powerhouse machines: they're simply the annual spec bump of the lowest end Apple computers with DCI-P3 displays, Wifi 6, and the new Apple Silicon M1 SoC.

    They have the same limitations as the machines they replace - 16 GB RAM and two Thunderbolt ports.

    These are the machines you give to a student or teacher or a lawyer or an accountant or a work-at-home information worker - folks who need a decently performing machine who don't want to lug around a huge powerhouse machine (or pay for one for that matter). They're still marketed at the same market segment, though they now have a vastly expanded compute power envelope.

    The real powerhouses will probably come next year with the M1x (or whatever), rumored to have eight Firestorm and four Icestorm cores. Apple has yet to decide on an external memory interconnect and multichannel PCIe scheme, if they decide to move in that direction.

    Other CPU and GPU vendors and OEM computer makers take notice - your businesses are now on limited life support. These new Apple Silicon models can compete up through the mid-high tier of computer purchases, and if as I expect Apple sells a ton of these many will be to your bread and butter customers.

    In fact, I suspect that Apple - once they recover their R&D costs - will be pushing the prices of these machines lower while still maintaining their margins - while competing computer makers will still have to pay Intel, AMD, Qualcomm, and NVIDIA for their expensive processors, whereas Apple's cost per SoC goes down the more they manufacture. Competing computer makers may soon be squeezed by Apple Silicon price/performance on one side and high component prices on the other. Expect them to be demanding lower processor prices from the above manufacturers so they can more readily compete, and processor manufacturers may have to comply because if OEM computer manufacturers go under or stop making competing models, the processor makers will see a diminishing customer base.

    I believe the biggest costs for a chip fab are startup costs - no matter what processor vendors would like you to believe. Design and fab startup are _expensive_ - but once you start getting decent yields, the additional costs are silicon wafers and QA. The more of these units Apple can move, the lower the per unit cost and the better the profits.

    The real threat to OEM computer and processor makers are economic - and that fact that consumer publications like Consumer Reports will probably _gush_ over the improvements in battery life and performance.

    Most consumers are not Windows or macOS or ChromeOS fanboys - they just want a computer which is affordable and has decent build quality and gets the job done. There are aspirational aspects of computer purchases, and M1 computers shoot waaayyy above their peers. This can mean a potential buyer _doesn't_ have to buy way up the line for capabilities he or she may want sometime during their ownership window, and these computers will last a long long time and will not suffer slowdowns due to software feature creep.
  • Eric S - Tuesday, November 17, 2020 - link

    Remember that this is designed to be Apple’s lowest end Mac chip. Their Intel i3. Wait until the big chips come out next year.
  • BushLin - Wednesday, November 18, 2020 - link

    ... Your speculation may or may not be correct but next year will see 5nm zen 4 which is actually announced rather than rumors.
  • jospoortvliet - Wednesday, November 18, 2020 - link

    Sure, and 3nm m2. Different generation with different processes etc. But today, M1 has the best single core and at lower power comes close to octacores despite only 4 fast and 4 slow cores. I wish I could buy it with Linux on it...
  • dysonlu - Sunday, February 21, 2021 - link

    "makes we wonder why Apple is so willing to fracture their already pretty small Mac OS fanbase"

    You have it upside down. It is exactly BECAUSE it has a small fanbase that it can afford to do this kind of migration. (The large and heterogeneous "fanbase" in Windows is the big Achilles' heel for Microsoft when it comes to making any significant change.) There will be very little "fracture" of Apple's fanbase, if any at all. The fans will gladly move to Mx CPUs given the advantages over Intel.
  • adriaaaaan - Thursday, November 19, 2020 - link

    People are giving Apple too much credit here; this is only impressive because of the process advantage, which has nothing to do with Apple.

    People are forgetting that Macs have a tiny market share and that's not likely to change any time soon. You wouldn't know it, because journos tend to use Macs and therefore think everyone does.

    If anything I hope this kicks AMD into gear; they are still releasing GCN designs. Let's see who's boss when they release 5nm RDNA 2.
  • Spunjji - Thursday, November 19, 2020 - link

    "this is only impressive because of the process advantage"

    False. A crap core on a high-tech process will still produce bad results; you only have to look at the last bunch of Zhaoxin CPUs based on the old Via tech.

    If this were just about process node you'd expect to see lower power but with limited performance. As it is, they manage both extremely low power *and* very competitive performance. Beating Intel is no small feat, even in their current incarnation.
