Nvidia’s Orin SoC has been on the company’s roadmaps for over 2 years now, and last December we got the first details of the new automotive-oriented silicon, revealing characteristics such as its 12 CPU cores based on Arm’s newest “Hercules” microarchitecture (the Cortex-A77 successor).

Orin is meant to be the heart of Nvidia’s upcoming DRIVE automotive platforms, and today the company is ready to reveal a few more important details such as the scalability of the SoC and the different DRIVE solutions.

NVIDIA ARM SoC Specification Comparison

                        Orin                      Xavier                           Parker
CPU Cores               12x Arm "Hercules"        8x NVIDIA Custom ARM "Carmel"    2x NVIDIA Denver +
                                                                                   4x Arm Cortex-A57
GPU Cores               Ampere iGPU (?? cores)    Volta iGPU (512 CUDA cores)      Pascal iGPU (256 CUDA cores)
INT8 DL TOPS            200 TOPS                  30 TOPS                          N/A
FP32 TFLOPS             ?                         1.3 TFLOPS                       0.7 TFLOPS
Manufacturing Process   7nm?                      TSMC 12nm FFN                    TSMC 16nm FinFET
TDP                     ~5-45W                    30W                              15W

Specifications-wise, the newest revelation about the Orin design is that it features Nvidia’s newest Ampere architecture as its integrated GPU. Generally, this shouldn’t come as too much of a surprise given the timeline of the SoC.

Nvidia still isn’t disclosing the exact configuration of the GPU, but if the mock-up die shot of the chip is anything to go by, we’ll be seeing a 32 SM configuration – which fits nicely with the peak 200 INT8 DL TOPS that Nvidia claims for the chip.
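
As a quick sanity check on that claim, here’s a back-of-envelope calculation in Python. The tensor-core layout and clock below are assumptions borrowed from GA100-class Ampere parts, not disclosed Orin specifications:

```python
# Back-of-envelope: can a 32-SM Ampere iGPU hit ~200 INT8 TOPS?
# All per-SM figures are GA100-style assumptions, not confirmed Orin specs.

SMS = 32                         # SM count suggested by the mock-up die shot
TENSOR_CORES_PER_SM = 4          # GA100-style layout (assumption)
INT8_OPS_PER_TC_PER_CLK = 1024   # 512 INT8 MACs = 1024 ops per clock, dense (assumption)
CLOCK_HZ = 1.5e9                 # hypothetical ~1.5 GHz GPU clock

tops = SMS * TENSOR_CORES_PER_SM * INT8_OPS_PER_TC_PER_CLK * CLOCK_HZ / 1e12
print(f"~{tops:.0f} INT8 TOPS")  # ~197 TOPS, right in the ballpark of the claimed 200
```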

Manufacturing-wise, we again don’t have exact details, but we’re assuming a 7nm-class process node. One interesting disclosure today, however, was that Orin is supposed to scale from 5W up to 45W platforms, which is a very wide range.

The 5W platform claims up to 10 TOPS of inference performance, and it’s meant for ADAS solutions as depicted above, designed to fit behind a windshield. Nvidia being able to scale Orin down to a 5W TDP is extremely interesting, but the chip will undoubtedly have to disable much of its capabilities, or clock down to very low frequencies, to achieve this power envelope.
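
To illustrate why downclocking is such an effective lever here, the classic dynamic-power relation (P ∝ C·V²·f) can be sketched in a few lines of Python; the frequency and voltage ratios below are made-up illustrations, not Orin DVFS figures:

```python
# Illustrative dynamic-power scaling: P_dynamic ~ C * V^2 * f.
# The ratios are invented to show the trend, not actual Orin operating points.

def relative_dynamic_power(f_ratio: float, v_ratio: float) -> float:
    """Dynamic power relative to baseline for given frequency/voltage ratios."""
    return f_ratio * v_ratio ** 2

# Halving the clock and dropping voltage by 25% leaves ~28% of baseline dynamic power
print(relative_dynamic_power(0.5, 0.75))   # 0.28125
```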

The chip is also offered in an L2+ automotive solution, enabling the full power of Orin at up to 45W. Here we see the full 200 TOPS of inference performance that Nvidia had disclosed back in December. We’re seeing 8 DRAM chips on the depicted board, likely pointing to a 256-bit memory controller setup (assuming the usual 32-bit LPDDR packages).
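
Nvidia hasn’t disclosed the memory type or speed, but a 256-bit bus would put peak bandwidth in a healthy range. A minimal sketch, assuming LPDDR5-class transfer rates (purely an assumption):

```python
# Peak theoretical memory bandwidth = bus width (bytes) x transfer rate.
# The 256-bit bus is inferred from the board shot; LPDDR5 speeds are assumptions.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Peak bandwidth in GB/s for a given bus width and transfer rate (MT/s)."""
    return bus_width_bits / 8 * transfer_rate_mt_s * 1e6 / 1e9

print(peak_bandwidth_gb_s(256, 5500))   # ~176.0 GB/s at LPDDR5-5500
print(peak_bandwidth_gb_s(256, 6400))   # ~204.8 GB/s at LPDDR5-6400
```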

Finally, Nvidia is bringing out the biggest guns in its DRIVE line-up: an L5 solution meant to power fully autonomous robotaxi vehicles.

The platform here has two Orin SoCs paired with two Ampere GPUs for a total power envelope of 800W and up to 2000 TOPS of performance. The GPU here, judging by its size and form-factor with HBM memory, is seemingly the newest GA100 Ampere GPU. Nvidia disclosed that this GPU alone scales up to 400W in the SXM form-factor. Clocking two of these slightly lower and adding two 45W Orin SoCs gets us to the massive 800W power envelope.
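
The arithmetic works out if each GPU runs a bit below its 400W ceiling; the per-GPU figures below are assumptions chosen to make Nvidia’s disclosed totals add up:

```python
# Reconstructing the robotaxi platform's power/performance budget from disclosed totals.
# Per-GPU power and TOPS are assumptions that make the stated 800W / 2000 TOPS line up.

ORIN_TDP_W, ORIN_TOPS = 45, 200    # disclosed per-SoC figures
GPU_TDP_W = 355                    # assumed: "clocked slightly lower" than the 400W SXM ceiling
GPU_TOPS = 800                     # assumed per-GPU INT8 throughput at reduced clocks

total_power = 2 * ORIN_TDP_W + 2 * GPU_TDP_W   # 90 + 710 = 800 W
total_tops = 2 * ORIN_TOPS + 2 * GPU_TOPS      # 400 + 1600 = 2000 TOPS
print(total_power, total_tops)
```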

Comments

  • Alistair - Thursday, May 14, 2020

    I hope this forms the basis for the next Nintendo home console. I'll take that 45W version for the home, and the 15W form for a portable version. :)
  • Sttm - Thursday, May 14, 2020

    Would be very interesting if Nvidia could turn out a high performance ARM/Ampere console chip for Nintendo. RTX Mario Kart!
  • nandnandnand - Thursday, May 14, 2020

    The next Nintendo Switch could use Samsung + AMD graphics.

    https://hothardware.com/news/nintendo-samsung-exyn...
  • Alistair - Thursday, May 14, 2020

    That would be cool also. I def. think Nintendo was the first to move to modern hardware: nVidia, ARM, solid state storage.
  • FaaR - Thursday, May 14, 2020

    The Switch was anything but modern though, even when it launched; it's based on an outdated SoC (which they further nerfed/disabled IIRC).

    Why would having flash and ARM CPU cores and an old NV GPU arch make it "modern" anyway? The ARM architecture dates back to the mid-1980s, and phones have used ARM chips and flash since the 1990s. Other consoles also had onboard flash game storage before the Switch launched, including models from Sony, MS and...Nintendo themselves. :P Also, NV's Maxwell GPU was old hat by the time the Switch launched.

    Strange reasoning! :)
  • BenSkywalker - Friday, May 15, 2020

    You realize x86 came out in the 70s, right?

    Also, what SoC made the X1 outdated in 2016? It throttled the A10, and it wasn't until the 845 in '18 that Qualcomm was competitive. For a portable system the Switch was quite powerful for when it came out.
  • Tams80 - Friday, May 15, 2020

    I mean, if you're going to be really picky, the Game Boy Advance used an ARM SoC.
  • Alistair - Thursday, July 30, 2020

    ARM enables higher performance at low power. nVidia's GPUs are top. Flash storage is a must and Nintendo was first to switch to it. And Nintendo was first to release a NO CD DRIVE, NO SPINNING RUST HARD DRIVE system. All places that Microsoft and Sony are heading to. Hence, MODERN.
  • Tams80 - Friday, May 15, 2020

    Well considering what was available and how prices of many components had fallen at the time of the Switch's development, of course it was going to have modern hardware. It wouldn't have made sense not to.

    And of course the fact that the body of the Switch is pretty much an Nvidia Shield tablet... I think it's pretty likely Nvidia were eager to clear some inventory and make something back from their investment into the X1.
  • Yojimbo - Thursday, May 14, 2020

    It could, but would Nintendo really want to give up all the nice NVIDIA support they've included that has brought in 3rd party developers to the system? The support and optimizations from Samsung for a scaled-down RDNA architecture are just not likely to have anywhere near the same maturity. Besides, the initial plan was for a long-term partnership between NVIDIA and Nintendo. I also doubt that Samsung will achieve the same energy efficiency at a similar power threshold by using the RDNA architecture, not to mention that AMD is still behind in memory bandwidth efficiency as well. It's just not easy to replace NVIDIA in that space at the moment.
