Hexagon 780: A Whole New IP for AI & DSP

Every year Qualcomm likes to talk about its new Hexagon DSPs, with the last few generations also adding new Tensor Accelerators dedicated to ML inferencing. This year’s Snapdragon 888 also hypes up the new Hexagon 780, with the difference being that this time around the generational improvements are truly humongous.

The new Hexagon 780 accelerator IP truly deserves a large increment in its IP numbering scheme, as it’s essentially a ground-up redesign of the company’s existing DSP, combining the scalar and vector execution engines with the more recent Tensor Accelerators. Previously, all of these execution engines acted as discrete, independent blocks within the Hexagon 600 series family, but that’s now changed with the new IP design.

The new IP block fuses together all the scalar, tensor, and vector capabilities into a single monolithic IP, vastly increasing the performance and power efficiency of workloads that make use of all of the design’s mixed capabilities.

In terms of performance uplifts, scalar execution capabilities are said to be increased by 50%, while tensor execution throughput has doubled. The vector extension units seem to have remained the same this generation, but actual workload performance should still increase thanks to the new memory architecture of the IP block.

Qualcomm states that it has increased the on-chip SRAM dedicated to the block 16-fold, allowing larger machine-learning inference models to fit within the block’s memory and greatly accelerating their performance. This larger memory pool is also coherent between the scalar, vector, and tensor units, allowing for vastly faster workload handoffs between the different execution engines. I asked about the actual size of this new memory, but the company wouldn’t disclose any further details, only stating that it’s significant.

The company’s engineers were extremely enthusiastic about the new design, stating that its performance and flexibility are well beyond what other companies can achieve with disaggregated DSP and ML inference engines, sometimes sourced from different IP vendors.

The most important figure for the new design is the claimed 3x improvement in performance per watt, a massive generational leap of the kind you rarely see in the industry.

As is usual for Qualcomm, the company doesn’t actually state the per-block performance increases, instead opting to showcase an aggregate computational throughput figure shared amongst all of the SoC’s IP blocks, including the CPU, GPU, and the new Hexagon accelerator block. This new figure lands at 26TOPS for the Snapdragon 888, which is 73% higher than the 15TOPS figure of the Snapdragon 865. Given that we’ve seen significant changes in all IP blocks this generation, I won’t attempt a per-block breakdown estimate, as it’s likely to be off the mark anyhow.

The Adreno 660 - A 35% faster GPU

Amongst the improvements that lead up to that 26TOPS figure is a vastly improved GPU in the form of the new Adreno 660.

Qualcomm still holds architectural details of its GPUs very close to its chest and thus doesn’t disclose very much about the new GPU design and what has actually changed, but one thing the company did talk about is the addition of new mixed-precision dot product as well as FP16/FP32 wave matrix-multiply instructions, which allow the new GPU to increase AI performance by up to 43%.
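
To illustrate what the mixed-precision part means numerically, here’s a minimal NumPy sketch (purely conceptual, not Adreno ISA): the inputs stay in a narrow FP16 format while the products are accumulated into a wider FP32 value, which is what such dot-product instructions do per lane in hardware.

    import numpy as np

    # Conceptual model of a mixed-precision dot product: FP16 inputs,
    # FP32 accumulation. GPU dot-product / wave matrix-multiply instructions
    # do this per lane in hardware; this only models the numerics.
    def mixed_precision_dot(a_fp16, b_fp16):
        acc = np.float32(0.0)
        for x, y in zip(a_fp16, b_fp16):
            # Widen each FP16 operand to FP32 before multiplying and
            # accumulating, avoiding FP16 rounding loss in the running sum.
            acc += np.float32(x) * np.float32(y)
        return acc

    a = np.random.randn(256).astype(np.float16)
    b = np.random.randn(256).astype(np.float16)
    print(mixed_precision_dot(a, b))                            # FP16 in, FP32 accumulate
    print(np.dot(a.astype(np.float64), b.astype(np.float64)))   # high-precision reference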

We’re also seeing the addition of variable rate shading (VRS) to the Adreno GPU architecture, allowing coarser shading across larger pixel blocks for objects and screen areas where native-resolution shading isn’t needed or wouldn’t be noticeable. This is a major feature also being introduced in the new consoles and new-generation PC GPUs, and it should bring greater performance uplifts in new gaming titles that take advantage of it. It’s great to see Qualcomm bringing this to the mobile space along with the rest of the industry.
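
As a rough mental model of what VRS saves, here’s a toy NumPy sketch (illustrative only, not any actual graphics API): the “shader” is evaluated once per 2x2 block instead of once per pixel, and the single result is broadcast to the block’s remaining pixels.

    import numpy as np

    # Toy model of 2x2 variable rate shading: evaluate the "shader" for only
    # one sample per 2x2 block and broadcast it, cutting shader invocations
    # by 4x for that region of the screen.
    def shade(x, y):
        # Stand-in for an expensive per-pixel shader.
        return np.sin(x * 0.1) * np.cos(y * 0.1)

    H, W = 8, 8
    ys, xs = np.mgrid[0:H, 0:W]

    full_rate = shade(xs, ys)                        # one invocation per pixel
    coarse = shade(xs[::2, ::2], ys[::2, ::2])       # one invocation per 2x2 block
    vrs_2x2 = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

    print("max error vs. full-rate shading:", np.abs(full_rate - vrs_2x2).max())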

For graphics workloads, the new GPU is advertised as being able to increase performance by up to 35%, which is a very major generational performance leap.

Such a performance jump would actually signify that Qualcomm may very well regain the gaming performance crown this generation, having lost it to Apple’s SoCs over the last two generations. Apple’s latest A14 has seen rather conservative gains on the GPU side this year, so a 35% performance gain over the Snapdragon 865 should very much allow the new Snapdragon 888 to retake the leadership position.

A 35% performance increase alongside a 20% power efficiency increase indicates that the new SoC achieves its higher performance at the cost of slightly higher power consumption (roughly 12% more, since 1.35 / 1.20 ≈ 1.13), but given the Snapdragon 865’s excellent power characteristics of below 4W, Qualcomm does have a little leeway to increase power this generation.
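
As a quick sanity check on that reading, the two headline claims taken together (assuming they both refer to the same peak workload and operating point) work out as follows:

    # Back-of-the-envelope check of the headline claims: +35% performance at
    # +20% performance-per-watt implies roughly 12-13% higher power draw.
    perf_gain = 1.35          # claimed peak performance vs. the Snapdragon 865
    efficiency_gain = 1.20    # claimed performance-per-watt improvement

    power_ratio = perf_gain / efficiency_gain
    print(f"relative power: {power_ratio:.3f}")                # ~1.125
    print(f"e.g. from ~4.0 W to ~{4.0 * power_ratio:.1f} W")   # illustrative, using the S865's ~4W peak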

3200MHz LPDDR5

The new Snapdragon 888 moves from a hybrid memory controller to one that focuses on LPDDR5, and also increases the supported LPDDR5 frequency to 3200MHz (or LPDDR5-6400).
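
For context, that works out to the following peak bandwidth, assuming the usual 4x16-bit (64-bit total) mobile memory interface:

    # Peak-bandwidth calculation for LPDDR5-6400 on a 64-bit mobile memory interface.
    transfer_rate = 6400e6    # transfers per second (3200MHz, double data rate)
    bus_width_bits = 4 * 16   # four 16-bit channels

    peak_bw_gbps = transfer_rate * bus_width_bits / 8 / 1e9
    print(f"peak bandwidth: {peak_bw_gbps:.1f} GB/s")   # 51.2 GB/s, up from ~44 GB/s with LPDDR5-5500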

For the Snapdragon 865, Qualcomm was rather unenthusiastic about the LPDDR5 switch, saying that it didn’t bring all that great improvements to performance or power efficiency – something we actually tested and came to the same conclusion on in our review of the two OnePlus 8 phones, where the LPDDR4X variant ended up being no slower and seemingly more efficient. Apple this year also kept using LPDDR4X on its A14 and M1 SoCs, pointing to the benefits not being all that great.

For the Snapdragon 888, however, Qualcomm’s engineers seemed more upbeat about LPDDR5, with the new SoC actually being able to utilise the increased memory bandwidth this generation. Without going into details, the company also stated that it has improved the overall design of the memory subsystem, improving aspects such as latency.

As part of the memory subsystem, Qualcomm still employs a 3MB system-level cache in front of the memory controllers, which all of the SoC’s IP blocks are able to take advantage of.

Comments

  • jaj18 - Thursday, December 3, 2020 - link

    It will come with adreno 7××🤔
  • StormyParis - Wednesday, December 2, 2020 - link

    "This year although we’re not reporting from Hawaii". Heh heh. I'd feel sorry for you if I wasn't jealous for all the other years ? ;-p
  • Krysto - Wednesday, December 2, 2020 - link

    No AV1 decode support in 2021? Really?
  • tuxRoller - Thursday, December 3, 2020 - link

    I'm more interested in accelerated encode at this point.
    We've not had industry-wide buy-in of a new lossy codec since jpeg, and hevc hasn't quite achieved the ubiquity that h.264 managed after the same time in market.
  • GeoffreyA - Thursday, December 3, 2020 - link

    While hardware AV1 encode would be quite nice to see, there's a possibility it will lose much of software AV1's gains over software HEVC (that is, one might encode quickly but end up with less compression than x265). Also, leaving aside the Slough of Patents for a moment, VVC will have to be taken into account once x266 comes out. If the studies are right, the reference VVC encoder (not x266) already shows better compression and speed than AV1. Hopefully, it won't inherit HEVC's less than pleasing picture too (to my eyes at least).
  • tuxRoller - Friday, December 4, 2020 - link

    That's a great point. In my haste to mention the lack of encoding ability I'd forgotten about the actual implementation of such a complicated codec. Which of the 30 or so tools, and their combinations, provide the most bit savings per mm²?
    Iirc, vvc owes a lot of its gains to integration with ml (there's at least one commercial av1 implementation that does this as well to, supposedly, great effect). IOW, I'm uncertain how much easier vvc will be to implement in hardware. Otoh, EVC looks quite interesting.
  • GeoffreyA - Friday, December 4, 2020 - link

    Oh, yes, it will probably make their heads spin implementing this thing in hardware, and when they do, which they will, they're going to make it a marketing point (even if, in practice, it fell behind x265).

    Yesterday I was experimenting with libaom-av1 on FFmpeg and discovered a useful parameter: -cpu-used. It controls compression/encoding speed and takes values between 0 and 8, with 0 being the slowest, 1 the default, and 8 the fastest. To my surprise, 8 brought encoding speed to reasonable levels: about 10x slower than x265, if I remember right, which isn't half bad. I was using a video shrunk to 360p though.
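
    (For anyone wanting to try it, a libaom-av1 invocation along those lines looks roughly like this; the -crf value is just an example:)

        ffmpeg -i input.mp4 -c:v libaom-av1 -cpu-used 8 -crf 32 -b:v 0 -c:a copy output.mkv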

    As for VVC, can't wait to give it a go. Hopefully, it'll deliver and be of AVC's calibre. I wasn't familiar with EVC but took a look at it now, and it does appear to be quite an interesting concept.
  • tuxRoller - Saturday, December 5, 2020 - link

    You might be interested in the doom9 forums (https://forum.doom9.org/forumdisplay.php?f=17). In the av1 thread you'll often see people posting updates about the various av1 en/decode implementations, new settings and, in general, some interesting thoughts from folks in the industry.
    BTW, starting from this post (https://forum.doom9.org/showthread.php?p=1929560#p... there's an interesting discussion regarding qcom & their interest in not pushing av1.
    Regarding fast encoders, I'm assuming you've tried svt-av1? That's supposed to have nearly caught up with aom's encoder quality but is still a good deal faster.
    Lastly, thanks for the paper. It looks interesting and a quick skim didn't reveal any mention of ml enhanced transform, or even a new entropy code(!); they seem to be continuing to iterate on h.264->h.265. However, only started reading it and realized I'm not getting through that tonight:)
  • GeoffreyA - Saturday, December 5, 2020 - link

    Thanks for those doom9 threads. Looks like a treasure trove of information on AV1 there. As for SVT-AV1, yes, I have tried it. While the speed was good, the picture didn't seem that impressive. Anyhow, I'll have a crack at it again and see how it stacks up against libaom, now that I've got the latter running faster.

    You're right. I remember getting the impression that this was similar to how HEVC improved over H.264. Mostly, extending techniques already laid down. Yet another reason to tip one's hat to the MP3 of video.
  • GeoffreyA - Saturday, December 5, 2020 - link

    I found this some weeks ago. It goes into some lower-level details of VVC.

    https://www.cambridge.org/core/services/aop-cambri...
