Test Bed and Setup - Compiler Options

For the rest of our performance testing, we’re disclosing the details of the various test setups:

Intel - Dual Xeon Platinum 8380

For our new Ice Lake test system, based on the Whitley platform, we’re using Intel’s SDP (Software Development Platform) S2W3SIL4Q, featuring a 2-socket Intel server board (Coyote Pass).

The system is an airflow-optimised 2U rack unit with otherwise little fanfare.

Our review setup solely includes the new Intel Xeon 8380 with 40 cores, a 2.3GHz base clock, 3.0GHz all-core boost, and 3.4GHz peak single-core boost. What’s unusual about this part, as noted in the intro, is that it’s running at a default 270W TDP, which is above what we’ve seen from previous-generation non-specialised Intel SKUs.

CPU: 2x Intel Xeon Platinum 8380 (2.3-3.4 GHz, 40c, 60MB L3, 270W)
RAM: 512 GB (16x32 GB) SK Hynix DDR4-3200
Internal Disks: Intel SSD P5510 7.68TB
Motherboard: Intel Coyote Pass (Server System S2W3SIL4Q)
PSU: 2x Platinum 2100W

The system came with several SSDs, including Optane SSD P5800X’s; however, we ran our test suite on the P5510 – not that our current benchmarks are I/O-bound anyhow.

As per Intel’s guidance, we’re using the latest BIOS available, with the 270 release microcode update.
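
As a quick sanity check, the microcode revision the OS actually picked up can be read back from /proc/cpuinfo; a minimal sketch in Python (x86 Linux only):

    # Minimal sketch: read back the microcode revision the kernel reports.
    # x86 Linux only; /proc/cpuinfo exposes a "microcode" field per logical CPU.
    def microcode_revisions(path="/proc/cpuinfo"):
        revisions = set()
        with open(path) as f:
            for line in f:
                if line.startswith("microcode"):
                    revisions.add(line.split(":", 1)[1].strip())
        return revisions

    if __name__ == "__main__":
        print("Microcode revision(s):", ", ".join(sorted(microcode_revisions())))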

Intel - Dual Xeon Platinum 8280

For the older Cascade Lake Intel system, we’re also using a test-bench setup with the same SSD and OS image as on the EPYC 7742 system.

Because these Xeons only have six memory channels per socket, their maximum capacity is limited to 384GB of the same Micron memory, running at a default 2933MHz to remain in-spec with the processor’s capabilities.

CPU: 2x Intel Xeon Platinum 8280 (2.7-4.0 GHz, 28c, 38.5MB L3, 205W)
RAM: 384 GB (12x32 GB) Micron DDR4-3200 (running at 2933MHz)
Internal Disks: Crucial MX300 1TB
Motherboard: ASRock EP2C621D12 WS
PSU: EVGA 1600 T2 (1600W)

The Xeon system was similarly run with BIOS defaults on an ASRock EP2C621D12 WS with the latest firmware available.

AMD - Dual EPYC 7763 / 7713 / 75F3 / 7662

In terms of testing the new EPYC 7003 series CPUs, unfortunately, due to our malfunctioning Daytona server, we weren’t able to get first-hand experience with the hardware. AMD graciously gave us remote access to one of their server clusters – we had full control of the system in terms of BMC as well as BIOS settings.

CPU: 2x AMD EPYC 7763 (2.45-3.50 GHz, 64c, 256 MB L3, 280W) /
2x AMD EPYC 7713 (2.00-3.675 GHz, 64c, 256 MB L3, 225W) /
2x AMD EPYC 75F3 (2.95-4.00 GHz, 32c, 256 MB L3, 280W) /
2x AMD EPYC 7662 (2.00-3.30 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Varying
Motherboard: Daytona reference board: S5BQ
PSU: PWS-1200

Software-wise, we ran Ubuntu 20.10 images with the latest release 5.11 Linux kernel. Performance settings both in the OS as well as in the BIOS were left at their defaults, including the regular schedutil-based frequency governor, with the CPUs running in performance determinism mode at their respective default TDPs unless otherwise indicated.
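
For reproducibility, the kernel release and the active frequency governor are worth confirming before each run; a minimal sketch, assuming the standard Linux sysfs cpufreq layout:

    # Minimal sketch: confirm the kernel release and cpufreq governor(s)
    # before a benchmark run. Assumes the standard Linux sysfs layout.
    import glob
    import platform

    def active_governors():
        paths = glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")
        return {open(p).read().strip() for p in paths}

    if __name__ == "__main__":
        print("Kernel:", platform.release())       # expect a 5.11 release here
        print("Governor(s):", active_governors())  # expect {'schedutil'}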

AMD - Dual EPYC 7742

Due to the aforementioned issues with the Daytona hardware, our local AMD EPYC 7742 system is running on a SuperMicro H11DSI Rev 2.0.

CPU: 2x AMD EPYC 7742 (2.25-3.4 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Crucial MX300 1TB
Motherboard: SuperMicro H11DSI0
PSU: EVGA 1600 T2 (1600W)

As an operating system we’re using Ubuntu 20.10 with no further optimisations. In terms of BIOS settings, we’re using complete defaults, including retaining the default 225W TDP of the EPYC 7742s, as well as leaving further CPU configurables on auto, except for NPS settings, where we explicitly state the configuration in the results.
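
The NPS configuration in effect is directly visible to the OS as the number of exposed NUMA nodes; a minimal sketch that counts them (on a 2-socket system, NPS1 should report 2 nodes and NPS4 should report 8):

    # Minimal sketch: count the NUMA nodes the kernel exposes, which
    # reflects the NPS (nodes-per-socket) BIOS setting on EPYC systems.
    import glob

    def numa_node_count():
        return len(glob.glob("/sys/devices/system/node/node[0-9]*"))

    if __name__ == "__main__":
        print("NUMA nodes:", numa_node_count())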

The system has all relevant security mitigations activated against speculative store bypass and Spectre variants.
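
The kernel’s view of the active mitigations can be enumerated from sysfs; a minimal sketch:

    # Minimal sketch: list the kernel's reported status for each known
    # CPU vulnerability (Spectre variants, speculative store bypass, ...).
    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    def mitigation_status():
        return {name: open(os.path.join(VULN_DIR, name)).read().strip()
                for name in sorted(os.listdir(VULN_DIR))}

    if __name__ == "__main__":
        for name, status in mitigation_status().items():
            print(f"{name:28s} {status}")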

Ampere "Mount Jade" - Dual Altra Q80-33

For the Ampere Altra system, we’re using the Mount Jade server as provided and configured by Ampere. The system features two Altra Q80-33 processors on Ampere’s Mount Jade DVT motherboard.

In terms of memory, we’re using the bundled 16 DIMMs of 32GB Samsung DDR4-3200, for a total of 512GB – 256GB per socket.

CPU: 2x Ampere Altra Q80-33 (3.3 GHz, 80c, 32 MB L3, 250W)
RAM: 512 GB (16x32 GB) Samsung DDR4-3200
Internal Disks: Samsung MZ-QLB960NE 960GB, Samsung MZ-1LB960NE 960GB
Motherboard: Mount Jade DVT Reference Motherboard
PSU: 2000W (94%)

The system came preinstalled with CentOS 8, and we continued using that OS. Note that the server is Arm SBSA compliant, so it can run any standard Linux distribution.

The only other note to make about the system is that the OS is running with 64KB pages rather than the usual 4KB pages. This can be seen either as a testing discrepancy or as an advantage for the Arm system, given that the next page-size step for x86 systems is 2MB pages – which aren’t feasible for general use-case testing, and are something deployments would have to explicitly enable.
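
The base page size the OS is running with is easy to confirm at runtime; a minimal sketch:

    # Minimal sketch: report the base page size of the running kernel.
    # Expect 65536 (64KB) on this Altra setup and 4096 on the x86 systems.
    import os

    if __name__ == "__main__":
        page_size = os.sysconf("SC_PAGE_SIZE")
        print(f"Page size: {page_size} bytes ({page_size // 1024}KB)")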

The system has all relevant security mitigations activated, including SSBS (Speculative Store Bypass Safe) against Spectre variants.

Compiler Setup

For compiled tests, we’re using the release version of GCC 10.2. The toolchain was compiled from scratch on both the x86 systems as well as the Altra system. We’re using shared binaries linked against the system’s libc libraries.
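
A quick way to verify both the toolchain version and the shared linkage is to compile a trivial binary and inspect it; a minimal sketch, assuming gcc and ldd are on the PATH (the probe.c file name is just illustrative):

    # Minimal sketch: confirm the gcc version in use and that a compiled
    # test binary links against the system libc rather than a static copy.
    # Assumes gcc and ldd are available on the PATH.
    import os
    import subprocess
    import tempfile

    def check_toolchain():
        gcc = subprocess.run(["gcc", "--version"], capture_output=True, text=True)
        print(gcc.stdout.splitlines()[0])  # expect a GCC 10.2 release string
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "probe.c")
            binary = os.path.join(tmp, "probe")
            with open(src, "w") as f:
                f.write("int main(void) { return 0; }\n")
            subprocess.run(["gcc", src, "-o", binary], check=True)
            # A "libc.so" line in the ldd output confirms shared linkage.
            ldd = subprocess.run(["ldd", binary], capture_output=True, text=True)
            print(ldd.stdout)

    if __name__ == "__main__":
        check_toolchain()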

Comments

  • TomWomack - Wednesday, April 7, 2021 - link

    Is it known whether there will be an IceLake-X this time round? The list of single-Xeon motherboard launches suggests possibly not; it would obviously be appealing to have a 24-core HEDT without paying the Xeon premium.
  • EthiaW - Wednesday, April 7, 2021 - link

    Boeings and Airbuses are never actually sold at their nominal prices; they cost far less – a non-disclosed number – for big buyers after gruesome haggling, sometimes less than half the “catalogue” price.
    I think this is exactly what Intel is doing now: set the catalogue price high to avoid losing face, and give huge discounts to avoid losing market share.
  • duploxxx - Wednesday, April 7, 2021 - link

    Well, easy conclusion.

    The EPYC 75F3 is the clear winner SKU, and a must-have for most workloads.
    This is based on price, performance, core count, and the related 3rd-party SW licensing...

    I wonder when Intel will be able to convince VMware to move from a 32-core licensing schema to a 40-core one :)
    They used to get all the dev favor when Pat was still in the house; I had several senior engineers in escalation calls stating that the hypervisor was optimised for Intel... guess what, optimised or not, if you're looking for a VM farm in 2020-2021 you are way better off with an AMD build.
  • WaltC - Wednesday, April 7, 2021 - link

    If you can't beat the competition, then what? Ian seems to be impressed that Intel was finally able to launch a Xeon that's a little faster than its previous Xeon, but not fast enough to justify the price tag in relation to what AMD has been offering for a while. So here we are congratulating Intel on burning through wads more cash to produce yet-another-non-competitive result. It really seems as if Intel *requires* AMD to set its goals and to tell it where it needs to go--and that is sad. It all began with x86-64 and SDRAM from AMD beating out Itanium and RDRAM years ago. And when you look at what Intel has done since it's just not all that impressive. Well, at least we can dispense with the notion that "Intel's 10nm is TSMC's 7nm" as that clearly is not the case.
  • JayNor - Wednesday, April 7, 2021 - link

    What about the networking applications of this new chip? Dan Rodriguez's presentation showed gains of 1.4x to 1.8x for various networking benchmarks. Intel's entry into 5G infrastructure, NFV, vRAN, ORAN, hybrid cloud is growing faster than they originally predicted. They are able to bundle Optane, SmartNICs, FPGAs, eASIC chips, XeonD, P5900 family Atom chips... I don't believe they have a competitor that can provide that level of solution.
  • Bagheera - Thursday, April 8, 2021 - link

    Patr!ck Patr!ck Patr!ck?
  • evilpaul666 - Saturday, April 10, 2021 - link

    It only works in front of a mirror. Donning a hoodie helps, too.
  • Oxford Guy - Wednesday, April 7, 2021 - link

    There is some faulty logic at work in many of the comments, with claims like it's cheating to use a more optimized compiler.

    It's not cheating unless:

    • the compiler produces code that's so much more unstable/buggy that it's quite a bit more untrustworthy than the less-optimized compiler

    • you don't make it clear to readers that the compiler may make the architecture look more performant simply because the other architectures may not have had compiler optimizations on the same level

    • you use the same compiler for every architecture when a different compiler for one or more of the other architectures would produce more optimized code for them as well

    • the compiler sabotages the competition, via things like 'genuine Intel'

    Fact is that if a CPU can accomplish a certain amount of work in a certain amount of time, using a certain amount of watts under a certain level of cooling — that is the part's actual performance capability.

    If that means writing machine code directly (not even assembly) to get to that performance level, so what? How practical/economical/profitable/effortful it is to produce enough code to measure all of the different aspects of the part's maximum performance capability is an entirely different matter.

    The only time one can really cite that as a deal-breaker is if one has hard data demonstrating that, by the time the hand-tuned/optimized code is written, changes to the architecture (and/or support chips/hardware) will have obsoleted the advantage – making the effort utterly fruitless beyond intellectual curiosity about the part's ability. For instance, if one knows that Intel is going to integrate new instructions (very soon) that will make various types of hand-tuned assembly obsolete in short order, it can be argued that it's not worth the effort to write the code. People made this argument about some of AMD's Bulldozer/Piledriver instructions, on the basis that enough industry adoption wasn't going to happen. But frankly, if you're going to make claims about the part's performance, you really should do what you can to find out what it is.
  • Oxford Guy - Wednesday, April 7, 2021 - link

    One can, though, of course... include a disclaimer that 'it seems clear enough that, regardless of how much hand-tuned code is done, the CPU isn't going to deliver enough to beat the competition, if the competition's code is similarly hand-tuned' — if that's the case. Even if a certain task is tuned to run twice as fast, is it going to be twice as fast as tuned code for the competition's stuff? Is its performance per watt deficit going to be erased? Will its pricing no longer be a drag on its perceived competitiveness?

    For example, one could have wrung every last drop of performance out of Bulldozer, but it wasn't going to beat Sandy Bridge E – a chip with the same number of transistors. Piledriver could beat at least the desktop version of Sandy in certain workloads when clocked well outside of the optimal (for the node's performance per watt) range, but that's where it's very helpful to have tests at the same clock. It was discovered, for instance, that the Fury X and Vega had basically identical performance at the same clock. Since desktop Sandy could easily clock at the same 4.0 GHz that Piledriver initially shipped with, it could be tested at that rate, too.

    Ideally, CPU makers would release benchmarks that demonstrate every facet of their chip's maximum performance. The concern about those being best-case and synthetic is less of a problem in that scenario because all aspects of the chip's performance would be tested and published. That makes cherry-picking impossible.
  • mode_13h - Thursday, April 8, 2021 - link

    The faulty logic I see is that you seem to believe it's the review's job to showcase the product in the best possible light. No, that's Intel's job, and you can find plenty of that material at intel.com, if that's what you want.

    Articles like this should focus on representing the performance of the CPUs as the bulk of readers are likely to experience it. So, even if using some vendor-supplied compiler with trick settings might not fit your definition of "cheating", that doesn't mean it's a service to the readers.

    I think it could be appropriate to do that sort of thing in articles that specifically analyze some narrow aspect of a CPU, for instance to determine the hardware's true capabilities or whether it was just over-hyped. But not in this sort of overall review.
