Frequency Ramp, Latency and Power

Frequency Ramp

One of the key attributes of a modern processor is its ability to go from an idle state up to a peak turbo state. For consumer workloads this is important for the responsiveness of a system, such as opening a program or interacting with a web page, but for the enterprise market it becomes more relevant when each core can control its own turbo, as happens with multi-user instances or database accesses. For these systems, saving power obviously helps with the total cost of ownership, but being able to offer a low-latency transaction is often a key selling point.

For our 7F52 system, we measured a jump up to peak frequency within 16.2 milliseconds, which lines up well with the other AMD systems we have tested recently.

In a consumer system we would normally point out that 16 milliseconds is equivalent to a single frame on a 60 Hz display; for enterprise it means that any transaction normally completed within 16 milliseconds is a very light workload that might not kick up the turbo at all.
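The exact harness behind our ramp number isn't reproduced here, but one way to measure time-to-peak is to start a busy loop from idle, count work done in short fixed windows, and record how long it takes for per-window throughput to stabilize near its maximum. A minimal Python sketch of that idea (the window size, settle count, and tolerance are illustrative assumptions, not the values used in our test):

```python
import time

def spin(n):
    # fixed busy-work unit; a faster clock completes more units per window
    x = 0
    for _ in range(n):
        x += 1
    return x

def measure_ramp(window_s=0.001, settle_windows=5, tol=0.02, max_windows=2000):
    """Return seconds from first work until per-window throughput stabilizes
    within `tol` of the peak seen so far, for `settle_windows` in a row."""
    rates = []
    stable = 0
    start = time.perf_counter()
    while len(rates) < max_windows:
        t0 = time.perf_counter()
        iters = 0
        while time.perf_counter() - t0 < window_s:
            spin(1000)
            iters += 1000
        rates.append(iters / (time.perf_counter() - t0))
        peak = max(rates)
        stable = stable + 1 if rates[-1] >= (1 - tol) * peak else 0
        if stable >= settle_windows:
            break
    return time.perf_counter() - start
```

On a core that ramps slowly, the first windows report visibly lower throughput than the later ones; the elapsed time to the first run of stable windows approximates the ramp latency.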

Cache Latency

As we’ve discussed in the past, the key element of cache latency on AMD’s EPYC systems is the L3 cache – the way these cores are designed, with quad-core core complexes (CCXes), means that the only L3 each core can access is the slice within its own CCX. That means for every EPYC CPU, whether four cores per CCX are enabled or only one, each core only has access to 16 MB of L3. The fact that there is 256 MB across the whole chip is just a function of repeating units. As a result, we get the following cache latency graph:

This structure mirrors what we’ve seen in AMD CPUs in the past. What we get for the 7F52 is:

  • 1.0 nanoseconds for L1 (4 clks) up to 32 KB
  • 3.3 nanoseconds for L2 (13 clks) up to 256 KB
  • 4.8-5.6 nanoseconds (19-21 clks) at 256-512 KB (accesses start to miss the L1 TLB here)
  • 12-14 nanoseconds (48-51 clks) from 1 MB to 8 MB, inside the first half of the CCX L3
  • Up to 37 nanoseconds (60-143 clks) at 8-16 MB for the rest of the L3
  • ~150 nanoseconds (580-600+ clks) from 16 MB+, moving into DRAM

Compared to one of our more recent tests, Ryzen Mobile, we see the bigger L3 cache structure, but also a sizeable increase in latency beyond the L3: the hop to the IO die and then out to main memory adds considerably to DRAM access time. It means that for those 600 or so cycles, the core needs to keep busy doing other things. As the L3 only takes L2 cache line evictions (it is a victim cache), there has to be a lot of reuse of L3-resident data, or repeated math on the same data, to take advantage of it.
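Latency curves like this are typically produced with a pointer chase: walk a buffer in a random cyclic order so that every load depends on the previous one and hardware prefetchers cannot help, then grow the buffer past each cache level. A Python sketch of the access pattern (interpreter overhead dominates in Python, so this will not resolve real cache levels the way a C version over a large array would – it only illustrates the technique):

```python
import random
import time

def make_chain(n):
    """Build next[i] as a single random cycle over n slots, so a chase
    visits every slot in unpredictable order (defeats stride prefetch)."""
    order = list(range(1, n))
    random.shuffle(order)
    nxt = [0] * n
    cur = 0
    for i in order:
        nxt[cur] = i
        cur = i
    nxt[cur] = 0          # close the cycle back to slot 0
    return nxt

def chase(nxt, steps):
    """Time dependent loads; returns nanoseconds per load."""
    t0 = time.perf_counter()
    p = 0
    for _ in range(steps):
        p = nxt[p]        # each load depends on the previous result
    return (time.perf_counter() - t0) / steps * 1e9
```

Running `chase` over chains sized from a few KB up past 16 MB (in a compiled language) traces out the L1/L2/L3/DRAM plateaus described above.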

Core-to-Core Latency

By only having one core per CCX, the 7F52 takes away one segment of its latency structure.

  • Thread to thread in the same core: 8 nanoseconds
  • Core to core in the same CCX: not applicable (only one core per CCX is enabled)
  • Core to core in a different CCX on the same CPU, same quadrant: ~110 nanoseconds
  • Core to core in a different CCX on the same CPU, different quadrant: 130-140 nanoseconds
  • Core to core in a different socket: 250-270 nanoseconds
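Numbers like these usually come from a ping-pong test: two threads pinned to different cores bounce a flag back and forth, and half the average round-trip time is reported as the one-way latency. A rough Python illustration of the protocol using a pair of events (real harnesses pin threads and spin on a shared cache line; `threading.Event` adds OS wakeup cost, so absolute numbers here will be far higher than the hardware figures above):

```python
import threading
import time

def pingpong(rounds=2000):
    """Estimate one-way thread-to-thread signalling latency in nanoseconds."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        for _ in range(rounds):
            ping.wait()       # wait for the other side's signal
            ping.clear()
            pong.set()        # reply immediately

    t = threading.Thread(target=responder)
    t.start()
    t0 = time.perf_counter()
    for _ in range(rounds):
        ping.set()
        pong.wait()
        pong.clear()
    elapsed = time.perf_counter() - t0
    t.join()
    return elapsed / rounds / 2 * 1e9   # half the round trip, in ns
```

With the two threads pinned to specific cores (e.g. via `os.sched_setaffinity` on Linux), the same structure exposes the same-quadrant / cross-quadrant / cross-socket tiers listed above.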

 

All of the Power

Enterprise systems, unlike consumer systems, often have to adhere to a strict thermal envelope for the server and chassis designs that they go into. This means that, even in a world where there’s a lot of performance to be gained from having a fast turbo, the sustained power draw of these processors is tied to the TDP specification of the processor. The chip may offer sustained boosts higher than this, which different server OEMs can design for and enable in the BIOS; however, the typical expectation when buying a server off the shelf is that if the chip has a specific TDP value, that will be the sustained turbo power draw. At that power, the system will implement the highest frequency it can, and depending on the power delivery implementation, it may be able to move specific cores up and down in frequency if the workload is lighter on other cores.

By contrast, consumer-grade CPUs will often boost well beyond the TDP label, up to the second power limit as set in the BIOS. This limit differs between motherboards, as manufacturers design their boards beyond Intel’s specifications in order to sustain it.
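As a rough mental model, Intel enforces the sustained limit (PL1) over an exponentially weighted moving average of package power with a time constant tau, while the second limit (PL2) caps instantaneous draw. A toy simulation of that accounting – all values here are illustrative, not from any datasheet:

```python
from math import exp

def allowed_power(avg, requested, pl1=240.0, pl2=290.0):
    # PL2 caps instantaneous draw; once the moving average of power
    # reaches PL1, the package is throttled back to PL1
    cap = pl2 if avg < pl1 else pl1
    return min(requested, cap)

def simulate(requested=290.0, pl1=240.0, pl2=290.0, tau=28.0, dt=0.1, t_end=90.0):
    """Return (time, power) samples for a constant heavy load."""
    avg, t, samples = 0.0, 0.0, []
    alpha = 1 - exp(-dt / tau)       # EWMA weight for one time step
    while t < t_end:
        p = allowed_power(avg, requested, pl1, pl2)
        avg += alpha * (p - avg)     # update the rolling average
        samples.append((t, p))
        t += dt
    return samples
```

Under a constant heavy load the model draws PL2 until the rolling average catches up, then settles at PL1 – which is exactly the consumer-versus-enterprise behaviour described above, with enterprise parts effectively configured so that sustained draw equals TDP.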

For our power numbers, we take the CPU-only power draw at both idle and when running a heavy AVX2 load.

Load Power Per Socket

When we pile on the calories, all of our enterprise systems essentially go to TDPmax mode, with every system sitting just under its total TDP. The consumer processors, by contrast, give it a bit more oomph, being anywhere from 5-50% higher.

Idle Power Per Socket

In our high performance power plan, the AMD CPUs idle quite high compared to the Intel CPUs – both of our EPYC setups sit at nearly 70 W per processor, while the 32-core Threadripper is in the 45 W region. Intel seems to idle far more aggressively here.

AMD’s New EPYC 7F52 Reviewed

97 Comments


  • eastcoast_pete - Tuesday, April 14, 2020 - link

    Thanks Ian! Two questions: 1. Could you and some of your readers give specific examples of applications for which these high frequency CPUs are of great interest?
    2. Any recent moves by Intel to make software developers use AVX512 even more, basically whenever it would make any sense?
    The reason I am asking the second question is that this seems to be the last bastion Intel holds, almost regardless of CPU class. Except for AVX512, AMD is beating them in price/performance quite badly, now from servers to workstations to desktop to mobile.
  • schujj07 - Tuesday, April 14, 2020 - link

    DB servers are one place where you want a fast CPU. SAP HANA for example loves frequency and RAM. I've seen PRD systems with all of 16 CPUs but 1.5TB RAM.
  • DanNeely - Tuesday, April 14, 2020 - link

    AVX is a compute feature. Rendering and math heavy scientific/engineering workloads are where it'd shine. Databases, typical webservers, and most other 'conventional' business related software don't care.
  • Shorty_ - Thursday, April 16, 2020 - link

    Web serving is another place where frequency really helps. I run Threadrippers with ECC UDIMM for PHP hosting for this express reason.
  • Mikewind Dale - Tuesday, April 14, 2020 - link

    Unfortunately, this breaks AMD's trend of being cheaper than Intel. A 20-core Xeon Gold 5218R boosts up to 4.0 GHz and costs $1273. This new EPYC is only 16 cores, boosts only up to 3.9 GHz, and costs $3100.

    Usually, AMD is cheaper than Intel, but this seems to be an exception. A pity.
  • Fataliity - Tuesday, April 14, 2020 - link

    That's because it's a specialized processor. If you are buying one of these, you won't be worried about the price.

    To get that much cache, they are using 6-8 chiplets. So as many as their top-of-the-line products. So yeah, it's going to cost more because there's more silicon.
  • schujj07 - Tuesday, April 14, 2020 - link

    The 5218R that you referenced isn't what the 7F52 is competing against. With a base clock of 2.1GHz the 5218R isn't a frequency optimized part. Most of Intel's CPUs have high boost clocks and middle of the road base clocks. The actual competition is the 6246R which has a 3.4GHz base and 4.1GHz boost clock. These high base clocks are for sustained performance in a given scenario.
  • MFinn3333 - Tuesday, April 14, 2020 - link

    That’s also because it has about 12x as much L3 cache per CPU core. A combined 256 MB vs 30 MB cache size speaks for itself.
  • edzieba - Tuesday, April 14, 2020 - link

    It's down to two design choices: process choice, and core choice.

    AMDs hands are somewhat tied when it comes to process choice. They get what TSMC has on offer, and what TSMC has on offer is geared towards mobile devices because that's where the volume market is. The high-performance variants are variants, rather than the baseline.
    But even in general, as you shrink your process from 21nm on down, it gets harder and harder to clock up. Gate oxide thickness hit its limit generations ago, which is why gate voltage has remained near constant (~1.1v) for so long. This is only going to get harder as processes shrink further while being stuck with the constant gate oxide thickness but trying to cram closer together without interfering.

    In AMD's hands is the design goal of cramming as many cores as possible in. Great for multi-core workloads, but not so great for single core speed. Getting CPUs to clock higher means using multiple transistors per gate (2-3 or even more as processes shrink), and AMD figured they may as well use these transistors for more cores instead of faster cores. The obvious downside is the difficulty in getting Zen cores to even approach 5GHz (with Zen 2 being notable for getting above 4GHz without overkill cooling), and that any workloads that do not span beyond one thread leave those transistors sitting idle.
  • twtech - Tuesday, April 14, 2020 - link

    On the 7H12 - Dell offers it in their EPYC servers, such as the R7525. It's currently about a $375 upgrade over the 7742.
