Comments Locked

19 Comments


  • HardwareDufus - Friday, December 6, 2019 - link

    This would be a very interesting review. You should request one, especially if you approach it as a developer platform and don't go nuts criticizing its IPC etc. relative to Xeons and EPYC in server duty.
  • extide - Friday, December 6, 2019 - link

    Oh, but please do. This would be a rather unique opportunity to see some SPECint/fp comparisons between the two platforms...
  • jospoortvliet - Friday, December 6, 2019 - link

    Sadly it is last gen... not that impressive, I would expect.
  • rahvin - Monday, December 9, 2019 - link

    There hasn't been an impressive ARM server grade CPU produced yet.
  • Wilco1 - Monday, December 9, 2019 - link

    Besides ThunderX2, Neoverse N1, and A64FX, you mean? Each of these beats whatever Intel has.
  • voiceofunreason - Wednesday, December 11, 2019 - link

    Much as I too might want ARM to be a viable competitor in the server space, don't let that blind you to its faults: single-threaded performance is still way behind x86 (yes, even on servers single-thread performance still matters if you care about latency).

    The only place ARM definitively "beats whatever Intel has" is in cost/core. But even then, when heavy effort has gone into x86 optimisation it looks even worse for ARM (e.g. x264 encoding on a 96-core ThunderX is bested by a 4-core Ryzen 3 1300). It doesn't matter if ARM is theoretically capable of more; I'm going to take the software ecosystem as it is right now when looking at TCO.
  • ameliajessi - Monday, December 9, 2019 - link

    Sounds good, it feels great to read this review of the latest ARM server CPU. I sell coins online and my website https://rpseitzancientcoins.com/ has a huge amount of data, so I am thinking of buying a server PC, maybe this ARM server CPU.
  • ProDigit - Saturday, December 14, 2019 - link

    You're much better off buying two Xeon E5-2650L v2 systems.
    Together they deliver about 75% of the performance of Ampere's single CPU at 120% of the power consumption.
    While those numbers may seem large, you'll get 40 threads at 1.9GHz instead of 32 threads at 3GHz, and your electric bill will be about $20 more expensive per year.
    But the purchase cost of a Xeon E5-2650L v2 is only $60, and a Chinese motherboard that supports it goes for $80-100. DDR3 is also much cheaper.
    You'll easily pocket $1.5-2k at the initial purchase!
    It'll be hard to justify buying Ampere; and if you need extra computing power, just add a few more Xeon systems.
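    A quick sketch of the arithmetic above (every dollar figure is the commenter's rough estimate, and the $2000 Ampere system price is a hypothetical implied by "pocket $1.5-2k"):

```python
# Rough cost comparison from the figures in the comment above;
# none of these numbers are verified prices.
xeon_cpu_usd = 60              # used Xeon E5-2650L v2
xeon_board_usd = 90            # midpoint of the $80-100 motherboard range
num_systems = 2
extra_power_usd_per_year = 20  # claimed extra electricity cost per year

xeon_total_usd = num_systems * (xeon_cpu_usd + xeon_board_usd)

ampere_system_usd = 2000       # hypothetical, implied by "pocket $1.5-2k"
years_to_erase_saving = (ampere_system_usd - xeon_total_usd) / extra_power_usd_per_year

print(f"Xeon CPUs + boards: ${xeon_total_usd}")
print(f"Years of extra power to erase the saving: {years_to_erase_saving:.0f}")
```

    On these (unverified) numbers the higher power bill would take decades to cancel out the purchase saving, which is the commenter's point.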
  • GreenReaper - Friday, December 6, 2019 - link

    To be honest I think they've rather been overtaken by events in the x86 sphere; however, it is good to see options, and it may well be relevant to those developing for embedded or low-power platforms.
  • yannigr2 - Friday, December 6, 2019 - link

    TDP seems too high for an ARM-based CPU, even at 16nm. Aren't ARM cores much more energy efficient?
  • Dolda2000 - Friday, December 6, 2019 - link

    There's very little that would make ARM inherently more efficient than x86, it's all about the implementation and where on the voltage/frequency curve they are used in the end.

    Even then, though, this seems quite efficient at 125 W for 32 cores, so I'm not really sure what you'd have to complain about.
  • Kangal - Saturday, December 7, 2019 - link

    No, he's right.
    We don't know what type of microarchitecture they're using, but at the least efficient end we would be looking at a Cortex-A72 (at best a Cortex-A76). We know it isn't using a Cortex-A32, A53, or A55 simply based on the clockspeed. And it doesn't have anything like a monstrously large cache or anything else to hint it's a larger custom core like Samsung's Mongoose or Apple's Lightning.

    And I'd be willing to bet they're using the 16nm node, whilst they could be using the latest 7nm fab or anything in-between like 10nm. The power characteristics of 16nm are pretty decent. So we can safely assume that in a "worst case scenario" they're running 32 × 900mW × (3GHz / 2GHz) = 43.2 Watts. That's still ~45W, which is in the ballpark of a thicker laptop or a micro desktop/console.
    https://www.anandtech.com/show/9878/the-huawei-mat...

    Remember, the Ryzen 3950X is running 16 cores, half as many, in the same TDP range. And that's a big, fat x86 design, not like ARM CPUs.

    So either this is using an outdated design (28nm? Cortex-A57?) and it's running unoptimised... or the company has purposely chosen a VERY conservative TDP for ventilation/warranty purposes. For the 125W TDP range, I would instead expect a 14nm Cortex-A73 design running at 3.0GHz... with a whopping 128 CORES!!
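    The back-of-envelope estimate above, written out as a sketch (it assumes, as the comment does, that per-core power scales linearly with frequency; real dynamic power scales closer to V² × f, so this is optimistic):

```python
# Linear frequency scaling of the ~900 mW/core figure cited above.
# All inputs are the commenter's assumptions, not measured values.
cores = 32
power_per_core_w = 0.9     # ~900 mW per core at 2 GHz (assumed A72-class)
base_freq_ghz = 2.0
target_freq_ghz = 3.0

total_w = cores * power_per_core_w * (target_freq_ghz / base_freq_ghz)
print(f"{total_w:.1f} W")  # 43.2 W
```

    The gap between this ~43W core estimate and the 125W TDP is what the later replies attribute to the uncore: memory controllers, PCIe lanes, and RAS hardware.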
  • milli - Friday, December 13, 2019 - link

    This is using a custom core designed for server usage, so you can't compare it to any Cortex core used in a phone.
    On top of that, you're conveniently forgetting the huge memory controller, the server-class RAS features, the dozens of PCIe lanes... and the list goes on. These things alone take a big chunk of the power envelope.
    They're using 16nm because this chip was launched last year and was sampling in 2016.
  • MrSpadge - Friday, December 6, 2019 - link

    Probably by not much if they're as fast as x86 cores and are part of a bigger system (many cores, big caches, several memory channels, many I/O ports etc.).
  • Santoval - Saturday, December 7, 2019 - link

    The CPU has 32 cores, and these are not mobile ARM cores. They are designed for servers, so they are fatter and deeper than ARM's reference designs. That node is also very long in the tooth by now. If it was fabbed at 7nm its TDP would probably be in the 65W to 70W region at the same clock.
  • Santoval - Saturday, December 7, 2019 - link

    Last gen (or two+ gens behind) in just about everything: fabrication node (one and a half gens), PCIe version, ARM ISA (two and very soon three gens; this is the original ARMv8 ISA), and USB version (two gens). The DRAM frequency is also somewhat low, but that is not an issue thanks to the 8 memory channels (4 cores per memory channel is quite healthy).
  • jwittich - Sunday, December 8, 2019 - link

    The CPU in this is the eMAG 8180, a 16nm 32-core Armv8 custom core that was released last year. With GCC 8.2, -Ofast, LTO, and jemalloc, SPECint_rate2017 = 59.
  • ProDigit - Saturday, December 14, 2019 - link

    Intel never really released their 128-core CPU until after Larrabee came out. They named it the 'Xeon Phi'.
    Their Xeon Phis would run in Larrabee's PCIe form factor too, and were, I would say, the closest thing to deep learning, AI, and massive computing (up to 64 cores, 256 threads).
    ARM has lost the battle against GPUs.
    Even the most powerful $2k+ ARM CPU can't compete in data crunching with a $160 GPU running 1280 cores at 1.4-1.9GHz.

    Nvidia and AMD basically stomped out ARM. ARM will remain small, and data center CPUs should remain big. Since GPUs are very efficient now on 7nm nodes (and getting smaller), it makes no sense to try to make a CPU that imitates a GPU.

    Instead, CPU makers should focus on increasing frequency per core, as smaller nodes usually force those cores to operate at lower frequencies.
    This is important for feeding GPUs the data they need. Nvidia tries to lock one CPU core per GPU, as currently you'll need 3+GHz per core for most of the top-end GPUs (RTX-series GPUs).

    We've recently made a huge leap in computing performance, and GPUs will now combine multiple dies on one GPU card, removing the need to occupy more than 2 or 3 PCIe 16x slots.
  • speculatrix - Tuesday, December 17, 2019 - link

    I'd like to see a full review of this, with benchmarks.
    However, if you want native Arm development, you'd be hard pressed to find better value than the PineBook Pro.
