Qualcomm revealed the name of its newest SoC, the Snapdragon 835, at its Snapdragon Technology Summit in New York today. The new SoC replaces the Snapdragon 820/821 at the top of its lineup. While Qualcomm is not yet ready to disclose the specifics about what’s inside the Snapdragon 835, it did confirm one important detail.


Keith Kressin (left) and Ben Suh (right) holding Snapdragon 835, the first 10nm SoC

Keith Kressin, Senior Vice President of Product Management at Qualcomm, took the stage with Ben Suh, Senior Vice President of Foundry Marketing at Samsung System LSI, to announce that the Snapdragon 835 will use Samsung’s 10nm "10LPE" FinFET manufacturing node. We do not know the Snapdragon 835’s power or performance numbers yet, but according to Samsung, its 10nm process “allows up to a 30% increase in area efficiency with 27% higher performance or up to 40% lower power consumption.” The switch from 14nm to 10nm, along with other changes, gives the Snapdragon 835 a smaller die than the Snapdragon 820 and should also help improve battery life.
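For a rough sense of what Samsung’s quoted figures would mean relative to a 14nm part, here is a small back-of-the-envelope sketch. The 14nm baseline values are placeholders invented purely for illustration (they are not Snapdragon 820 measurements); only the percentage deltas come from Samsung’s statement, and the 27% performance gain and the 40% power reduction are alternatives rather than simultaneous gains.

```python
# Back-of-the-envelope look at Samsung's quoted 10nm "10LPE" figures.
# The 14nm baseline values are hypothetical placeholders, not measured
# Snapdragon 820 numbers; only the percentage claims come from Samsung.

baseline_area_mm2 = 100.0   # hypothetical 14nm die area
baseline_perf = 1.00        # normalized 14nm performance
baseline_power_w = 5.0      # hypothetical 14nm peak SoC power

# "Up to a 30% increase in area efficiency": reading this as 30% more logic
# per unit area, the same design would fit in roughly 1/1.3 of the area.
area_10nm = baseline_area_mm2 / 1.30

# "27% higher performance or up to 40% lower power consumption":
# one or the other, not both at once.
perf_10nm_iso_power = baseline_perf * 1.27
power_10nm_iso_perf = baseline_power_w * (1.0 - 0.40)

print(f"Die area:    {baseline_area_mm2:.0f} mm^2 -> {area_10nm:.0f} mm^2")
print(f"Performance: {baseline_perf:.2f}x -> {perf_10nm_iso_power:.2f}x (at the same power)")
print(f"Power:       {baseline_power_w:.1f} W -> {power_10nm_iso_perf:.1f} W (at the same performance)")
```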

The Snapdragon 835 is already in mass production and on schedule to appear in commercial devices during the first half of 2017.

Comments

  • lilmoe - Friday, November 18, 2016 - link

    "but what we've seen is that bog standard arm cores are the fastest (per clock) and most energy efficient mobile core that android can use"

    Easy there, cowboy. Don't take this as a personal insult, but you (and lots of others in comment sections, and authors of many blogs, sadly) are what's wrong with this community.

    "My feeling regarding mongoose"
    How many do you believe care about your gut feeling? What matters is facts, numbers and real life scenarios. Not your (and others') personal hate relationship with Samsung.

    The Linux community is pissed at Sammy because they don't release sources and drivers. I hope they would, but oh well.

    "the a72 made Qualcomm look particularly bad"
    Huh? Didn't Qualcomm look particularly bad when they DID use ARM reference designs? SD 810 anyone? What Qualcomm "didn't know" at the time (which Sammy "did") is that you need to FIX ARM reference designs AND fix Android before you release an ARM reference design chip, either by software or brute force (process node).

    Objectively speaking, ARM's reference designs have had many flaws (Exynos 5410, SD 810, lots of Tegras, etc...), only fixed by Samsung's custom designs (that being custom cores, fabric, or inter-connects), and lately mitigated by ARM itself on Chinese silicon. I won't give ARM a pass because they JUST recently fixed their issues. You shouldn't either.

    Kirin (and Helio) SoCs don't compete in the same segment because they've always lacked in key areas, the most prominent being the GPU. When you have such a small GPU, then OF COURSE you'll have more thermal headroom for the CPU. That CPU-first design (which happens to suit Android's rendering inefficiencies) falls short in lots of key aspects. That being said, even with these "handicaps" (combined with considerably larger batteries, lower-resolution displays, and more *physical* thermal headroom due to larger, less feature-dense phones), Exynos-powered devices have always reigned supreme "overall". Samsung has just as much access, if not more, to ARM reference designs, just like they do with Imagination's PowerVR.

    It's EXTREMELY difficult to have fast CPUs AND GPUs both pushing lots of compute in mobile. Even Apple caps the CPU when you're running an intense 3D game (proof being their physics scores in games), but you can't do that on Android because 1) it isn't as efficient, and 2) they don't have the same control over hardware and software that Apple does.

    Unlike Chinese manufacturers, Samsung and Qualcomm have to deal with a LOT of pro-Apple media and individuals (such as yourself), or former ones, who have no idea how Apple designs work (nor what works better with Android vs iOS). They have to load their SoCs PACKED with features that Android can never use, unless they customize the crap out of it. Their devices have to have double the screen resolution (at least) to "compete". But then again, people like you call them out because they're not using "pure" Android. Try powering a 1440p screen on these Kirin processors. Yea, they'll fall flat on their face. Try comparing photos and videos processed by a Kirin processor with those processed by Exynos... yea....

    "[performance] per clock"
    Pfft. What matters in mobile is performance PER WATT, and efficiency in *average* workloads. Would love to see all those SoCs compared on a common real-life-scenario workload, and Apple's SoCs thrown in the mix for the heck of it (though non-indicative). Oh wait, Anandtech dropped that featured article............. darn it...

    You can't tout "benchmarks", then quickly retract when these benchmarks don't present data that back your petty arguments. You shouldn't constantly look around the web for blogposts and other BS articles to support your claims. Just stop. Look at things more objectively and put your bias aside.

    Geekbench (the benchmark you personally tout all the time) was never a legitimate "cross platform" benchmark, but its numbers are digestible when comparing CPUs on the same platform (Android). Same goes with browser benchmarks. Go to their website and tell me which is the fastest ANDROID smartphone SoC in BOTH single and multi-threaded performance in their latest Geekbench 4 benchmark.

    Here's the link:
    https://browser.primatelabs.com/android-benchmarks

    You keep replying to my comments on many threads and asking me for sources to back my arguments. Are you really interested in the truth or any form of objective facts? I'm pissed because I NO ONE wants to write a decent article objectively. I'm pissed because Anandtech that's not how the "business" works.
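As a rough sketch of the performance-per-watt framing in the comment above: every number below is invented purely for illustration and does not describe any real SoC; the point is only how the metric is computed.

```python
# Toy perf-per-watt comparison. All scores and power figures are made up
# for illustration only; they do not describe any real SoC.

socs = {
    "SoC A": {"benchmark_score": 1800, "avg_power_w": 2.4},
    "SoC B": {"benchmark_score": 2100, "avg_power_w": 3.6},
}

for name, s in socs.items():
    perf_per_watt = s["benchmark_score"] / s["avg_power_w"]
    print(f"{name}: score={s['benchmark_score']}, "
          f"power={s['avg_power_w']} W, perf/W={perf_per_watt:.0f}")

# SoC B posts the higher raw score, but SoC A does more work per watt,
# which is the distinction being drawn above.
```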
  • lilmoe - Friday, November 18, 2016 - link

    "I'm pissed because I NO ONE wants to write a decent article objectively. I'm pissed because Anandtech that's not how the "business" works"

    *** correction:
    I'm pissed because no one wants to write a decent article objectively. I'm pissed because that's not how the business works, even for Anandtech.
  • Andrei Frumusanu - Friday, November 18, 2016 - link

    Not sure what warrants your rant here. The A72 is both higher IPC than Kryo and lower power. The A72 is about equal in IPC to Mongoose, but Mongoose draws something like 50% more power and is thus correspondingly less efficient at peak performance.

    Kirin does not get higher CPU headroom from having a smaller GPU; in fact it's the opposite: they have the smallest CPU headroom out of all vendors.

    And quit the nonsense about GB4 not being a legitimate cross-platform benchmark; it's like saying cross-platform SPEC is illegitimate, because technically there are exactly zero differences between the two in terms of how things are being ported.
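To put rough numbers on the comparison above, here is a toy calculation; the absolute figures are placeholders, and only the "about equal IPC at roughly 50% more power" relationship is taken from the comment.

```python
# Toy illustration of "equal IPC but ~50% more power".
# Absolute values are placeholders; only the ratios come from the comment.

a72 = {"relative_perf": 1.00, "relative_power": 1.00}
mongoose = {"relative_perf": 1.00, "relative_power": 1.50}  # ~50% more power

for name, core in (("A72", a72), ("Mongoose", mongoose)):
    efficiency = core["relative_perf"] / core["relative_power"]
    print(f"{name}: perf={core['relative_perf']:.2f}, "
          f"power={core['relative_power']:.2f}, perf/W={efficiency:.2f}")

# Equal performance at 1.5x the power works out to roughly two thirds
# the energy efficiency at peak performance.
```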
  • lilmoe - Friday, November 18, 2016 - link

    "Not sure what warrants your rant here".

    I don't know. Misinformation, inconsistency, lack of data, lack of valid comparisons. You name it. Back when I argued that higher-res screens were bad for performance and battery life, you all blew in my face. Now you're trying to tell me that Geekbench is legit when they even admitted that and ""
  • lilmoe - Friday, November 18, 2016 - link

    Hit submit by mistake.

    And "updated their datasets" to match the workload for all platforms in the latest version. We should all accept this as true now? Can you check and test that ported code?

    Would you also be so kind as to check on the compiler optimizations?

    "in fact it's the opposite, they have the smallest CPU headroom out of all vendors"
    Do elaborate on this, please. I'm very interested to see just HOW a smaller GPU wouldn't give the CPU more power and thermal headroom to work with on a limited 2-4 watt chip.
  • Andrei Frumusanu - Friday, November 18, 2016 - link

    GB4 info is publicly available in the whitepaper.

    It's in the Mate 8 review. The 950's CPU TDP is much less than, say, the 8890's, even though the latter has double the GPU perf. Just having a smaller GPU doesn't magically allow the CPU to clock higher and use more power, and having a bigger GPU doesn't limit it either; you just limit the CPU to a lower frequency in those scenarios.
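The shared-budget point above can be sketched with a deliberately simplified model; the watt figures are made up and the proportional-throttling policy is an assumption for illustration, not a description of how any particular SoC's power management actually works.

```python
# Simplified shared-power-budget model with made-up numbers: a smaller GPU
# does not by itself raise the CPU's own power limit; under combined load
# the governor simply scales both blocks back to fit the SoC budget.

SOC_BUDGET_W = 3.0   # hypothetical sustainable SoC power budget
CPU_LIMIT_W = 2.0    # hypothetical per-block limits set at design time
GPU_LIMIT_W = 2.5

def allocate(cpu_demand_w, gpu_demand_w):
    """Clamp each block to its own limit, then scale both to fit the budget."""
    cpu = min(cpu_demand_w, CPU_LIMIT_W)
    gpu = min(gpu_demand_w, GPU_LIMIT_W)
    total = cpu + gpu
    if total > SOC_BUDGET_W:              # combined load: throttle both
        scale = SOC_BUDGET_W / total
        cpu, gpu = cpu * scale, gpu * scale
    return round(cpu, 2), round(gpu, 2)

print(allocate(cpu_demand_w=2.0, gpu_demand_w=0.5))  # CPU-heavy: CPU stays at its 2.0 W cap
print(allocate(cpu_demand_w=2.0, gpu_demand_w=2.5))  # gaming: both get scaled down
```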
  • tuxRoller - Saturday, November 19, 2016 - link

    Just wanted you to know I read this and nodded right along.
    I'm not sure how you are determining some of these things, and I certainly don't see how that first paragraph can be read as not being a personal insult, but I'll respond to this, and I want to do so quite carefully.
    Btw, I'm not sure what point you are trying to address with that GB link. Maybe you have me confused with someone else? I have serious issues with any closed-source benchmark, as should any serious hardware site with an interest in comparative performance. However, it's one of the better, native, cross-platform tests we have that is also widely used and not GPU-focused.
    Nearly as good though, imho, are the JavaScript tests as long as you compile the browser (from the same commit number) for each device. This won't be cross-platform, however.
  • skavi - Friday, November 18, 2016 - link

    /r/FuckQualcomm
  • watzupken - Thursday, November 17, 2016 - link

    Makes me wonder how they come up with the naming convention. It's been 800, 810, 820, then all of a sudden you get an 835... I know there was an 805, but those are typically mid-cycle refreshes, i.e. clock speed bumps.
  • Meteor2 - Thursday, November 17, 2016 - link

    I think random model numbers are an East Asian thing, e.g. incomprehensible TV model numbers.
