With the launch of its Raven Ridge APUs, the Ryzen 3 2200G ($99) and Ryzen 5 2400G ($169), AMD holds the integrated graphics apex when compared directly against Intel’s current SKU list. One of the most intriguing aspects of AMD’s new Ryzen 2000 series desktop APUs is that they are unlocked, offering a potential performance boost to the cores, the graphics, and the memory. In this article, we examine overclocking both our launch CPUs and a pair of retail CPUs, and go through the overclocking methods offered by each motherboard vendor.

The Benefits of Overclocking an APU

The basic reasoning behind CPU overclocking is to increase the clock frequency of the processor beyond the manufacturer's default speeds, with the aim of increasing the compute performance of the system. This has a couple of advantages, including decreasing video rendering time and improving gaming performance in titles that rely on raw CPU horsepower to drive up frame rates.


A standard APU setup: MSI B350I Pro AC motherboard, AMD stock cooler, G.Skill memory

With APU overclocking, the added element is the integrated graphics, and there is also a focus on the DRAM. For a Ryzen APU, overclocking the Vega graphics cores could really benefit a user struggling to hold steady frame rates in certain titles, and could potentially make the new budget-focused AMD Ryzen 2000 series APUs an even better option than previously thought. The integrated graphics inside a CPU are often crying out for more memory bandwidth - compared to high-end discrete graphics, which have north of 200 GB/s of bandwidth, an APU sometimes has to deal with anywhere from 12.5 GB/s (in a badly configured system) up to around 50 GB/s (dual-channel and overclocked) - so driving up DRAM speeds and decreasing latencies has obvious knock-on effects for the gaming experience.
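
As a rough sketch of where those numbers come from, peak theoretical DDR4 bandwidth is simply the effective transfer rate multiplied by the 8-byte bus width per channel and the number of channels. The configurations below are illustrative, not measurements from our test systems:

# Back-of-the-envelope DDR4 bandwidth: MT/s x 8 bytes per channel x channels
def ddr4_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr4_bandwidth_gbs(1600, 1))   # ~12.8 GB/s: single-channel DDR4-1600, a badly configured system
print(ddr4_bandwidth_gbs(2933, 2))   # ~46.9 GB/s: dual-channel DDR4-2933
print(ddr4_bandwidth_gbs(3600, 2))   # ~57.6 GB/s: dual-channel DDR4-3600, a healthy overclock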

Overclocking Used To Mean A Lot More

Back in the annals of history, when single and dual core systems ruled the desktop, overclocking had major performance advantages. When all the commands from the desktop have to be funneled through a single core, increasing that core's frequency had a direct impact on performance: a 30% boost gave +30% performance - it was all direct and proportional. When desktops migrated to dual core, some of the extra work could be offloaded onto that second core, but there was still competition for resources. In this era, especially with chips like the Pentium E2160, overclocks of +100% on a daily system were not impossible. When throughput is limited by the core, overclocking has an impressive result, and at the time the bus and memory frequencies scaled up alongside the core as well.


An E2160, from 2006. With a 1.8 GHz base frequency,
it was able to reach 4.2 GHz stable 24/7.

Today, overclocking does not generate the same amount of excitement: extra work is offloaded to other cores, and more content is multi-threaded. In order to get the same performance gain, the workload has to be embarrassingly parallel, such as ray tracing or specific video codec transcoding. The days of +100% overclocks are gone as well, with most high-end processors offering +20% at most.

But the issue here isn't really that: a decade ago, it was possible to overclock a $200 processor and make it perform like one that sold for $999. Anyone with a Core i7-920, pushing that 2.66 GHz processor to 4.0+ GHz to act like a Core i7-975 Extreme, knows what this is all about. As we've seen in the Core i3-7350K review or the Pentium G3258 review, even with a chip that can overclock, there is one thing you cannot get: more cores. If a four-core processor gives you the raw performance you need, then no matter how much a dual-core is overclocked (in a 24/7 system), it will have trouble matching the quad-core in pure throughput tests. 


Overclocking the i7-920 to 4.4 GHz, from our 2009 article, using a DFI X58 motherboard

However, the advent of integrated graphics has changed this, albeit at the low end. For a budget gamer that cannot invest in a discrete GPU, a decent integrated graphics system would seem ripe for a nice overclock. Boosting single core frequencies helps with games targeted at this audience, especially those built on DX9 or DX11, while overclocking the integrated graphics allows more pixels to be pushed through the pipeline. For Ryzen APUs, overclocking the DRAM has a double benefit: better memory latency and bandwidth for the integrated graphics, plus a direct impact on AMD's Infinity Fabric, allowing the cores to talk to each other faster.

Overclocking an APU can be a challenge, but as this article will demonstrate, there are substantial real-world benefits. It means that even budget builders, using the stock air coolers that AMD provides (which are substantially better than their Intel counterparts, as shown in our review), can get tangible improvements in user experience. 

A side note: there are professional overclockers - people actually make money from doing this.


The GIGABYTE team at a G.Skill overclocking event at Computex. Dino, HiCookie, and Sofos are all well known in the competitive overclocking community, and work for GIGABYTE.

Just as with car tuning, there are people in the industry (usually tied to motherboard manufacturers or retailers) who use their skills to get the best overclocked systems possible. These people either sell pre-tested chips, build pre-overclocked systems for clients, or help design the components that the rest of us use. Some businesses, such as those in financial trading, also rely on heavily overclocked systems to do their work, and hire overclockers to design and maintain those systems.


An extreme overclocking setup with Liquid Nitrogen to break world records

Most of these individuals have made names for themselves by competing in overclocking tournaments, first as amateurs, and by holding world records. As with any activity, knowing what to do, and experience, are often key elements in succeeding.


Ian in 2011 at an Overclocking Meet, using an i7-2600K at 5.3 GHz on water with -5ºC ambient
to push an old AMD GPU for gold cups

AnandTech has at least three writers on staff (Ian, Joe, and I) that have spent *way* too much time competing in overclocking leagues, using sub-zero coolants, and holding world records.


The Basic Principles of Overclocking

At the entry level of overclocking, we rely on two things: frequency and voltage. For a given component to work at a frequency, it has to have enough voltage to remain stable: switching transistors require power, and transistors that switch faster require more power. As the frequency and voltage are pushed beyond the stock specifications, the component converts that extra electrical energy into thermal energy, which means that it has to be appropriately cooled. For the most part, overclocking comes down to these three factors: frequency, voltage, and cooling.
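
To put rough numbers on that relationship, dynamic power in CMOS logic scales roughly with frequency and the square of voltage (P ≈ C·V²·f), so small voltage bumps have an outsized effect on heat. A minimal sketch, using illustrative clocks and voltages rather than values measured on our chips:

# Dynamic power scales roughly as C * V^2 * f; the capacitance term cancels in a ratio
def power_scale(v_old, f_old, v_new, f_new):
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Hypothetical example: 3.5 GHz at 1.20 V pushed to 3.9 GHz at 1.35 V
print(power_scale(1.20, 3.5, 1.35, 3.9))  # ~1.41x the dynamic power, and roughly that much more heat to remove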

Before overclocking any component, it’s a good idea to first acquaint yourself with the key principles behind overclocking. Not only does this save unnecessary time spent deep-diving into the realms of the unknown, but pushing components past their limits can affect the longevity of the parts and, in extreme cases, cause damage beyond repair. The last thing you really want to do is break something, so here's a handy hardware guide to go with AMD’s new Ryzen 3 2200G ($99) and Ryzen 5 2400G ($169).

Selecting the Right Cooler and Power Supply

For the most part, users will cite the benefit of overclocking, as if it’s the holy grail, as attaining the ultimate performance. However, there are some downsides - the main pitfall of overclocking is that a higher frequency requires an increase in CPU voltage to ensure stability. Increases in CPU voltage put extra pressure on cooling performance, as more volts means more heat, and more heat usually means more noise as the cooling fans have to work harder. If silence is a key attribute of a system, then the extra noise can be annoying.


The new Wraith Prism RGB stock cooler on AMD's high end CPUs

Choosing the right CPU cooler for a 24/7 system is a necessary evil, as a more expensive offering adds to the overall budget of the system. Selecting a sizable air cooler, like a Noctua NH-U14S, can offer better overclocking headroom compared to a stock cooler (the AMD Wraith is very good as a bundled cooler, but there are better options). For extreme air cooling, Ian uses Thermalright TRUE Copper coolers for his open-air testing, which weigh about 2 kg, along with Delta fans that come in at 72 dB - not something for daily use! Aside from air cooling, other options include liquid-based closed loop coolers, such as the newly announced NZXT Kraken 360mm AIO with a large radiator surface area. With the bigger liquid coolers, the fans can often be run at lower speeds, which aids in reducing noise. The extreme end of liquid cooling is building a custom water cooling loop, as Dustin did back in 2013 (read here).

Once you have selected a suitable cooler, getting familiar with the BIOS on the chosen motherboard is paramount to achieving a successful overclock. When it comes to overclocking the Ryzen 3 2200G ($99) or the Ryzen 5 2400G ($169), the concepts are identical. There are some differences, however: the Ryzen 5 2400G ($169) has a 100 MHz higher base clock and a 200 MHz higher boost clock. Other distinctions come in the way of the Vega cores, with the 2400G sporting 192 more streaming processors and a higher graphics frequency of 1250 MHz, as opposed to 1100 MHz on the cheaper 2200G. This comes down to how AMD bins its chips - the ones that perform better after manufacturing are put up for sale as the premium component - and the premium components are typically the better performers when overclocking as well.

When it comes to the Ryzen 2000 series APUs, both components feature a base TDP (thermal design power) of 65W, which relates to the peak power consumption when the processor is at its base frequency - the one that the CPU manufacturer guarantees. This number is also there for cooling companies to design their products to dissipate that much heat. However, overclocking does ratchet up the power consumption - high-end processors can peak at 250W or more - so having a stable power supply is also a key component of an overclocked system. 


Seasonic's PRIME Titanium Series got an AnandTech Recommended award

The easiest thing to remember when selecting the right power supply for the task at hand is quality over quantity. This means that a good quality unit, using quality components and carrying a good 80 PLUS rating, is better than something cheap and cheerful that uses a high wattage figure as a marketing grab. A good quality $150 80 PLUS Titanium 500W unit will outperform and provide better power stability than a very cheap $50 power supply rated at 1000W. The cheap non-branded power supplies are generally best avoided, and often fail to meet their ‘rated’ total power output. For the Ryzen 2000 series APUs, a solid 80 PLUS Bronze 500W branded unit is more than enough, and will allow extra headroom if you wish to add a discrete graphics card to your build down the line.
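
As a quick sanity check on sizing, totting up a worst-case power budget and comparing it against the unit's rating is usually enough. The figures below are rough estimates for an overclocked APU build, not measured values:

# Illustrative power budget for an overclocked Ryzen APU system (estimates, not measurements)
budget_watts = {
    "APU package (65W TDP, overclocked)": 110,
    "Motherboard, memory, SSD":            30,
    "Fans and peripherals":                 20,
}
total = sum(budget_watts.values())
print(total)               # ~160 W estimated peak draw
print(total / 500 * 100)   # ~32% load on a 500 W unit, leaving headroom for a discrete GPU later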

Tracy does some amazing power supply reviews, detailing performance, efficiency, and even thermal losses in stressful environments. They are worth the read, or head over to our Best PSU guide to see our most recent preferred options on the market.

How To Overclock: BIOS or Software (Ryzen Master)?

Back in March 2017, AMD launched its Ryzen 7 1700 ($299), 1700X ($309), and 1800X ($349) CPUs. This brought the company back into the performance desktop segment, which was reaffirmed by its Q4 fiscal year results at the end of last year. Along with this launch, AMD introduced its Ryzen Master software, which allows the user to overclock from within software, creating a universal interface regardless of the motherboard vendor.

Having done various amounts of testing over the last couple of weeks on the Ryzen 3 2200G ($99) and Ryzen 5 2400G ($169), one thing common to most of the motherboards that I’ve used (at the time of writing) is uncertainty over whether the settings applied in the BIOS actually translate to the settings applied in Windows. Ryzen Master arguably fixes that with its unified interface. However, most professionals prefer to use the BIOS.

BIOS issues are often numerous and unique to specific products and firmware versions. For example, we had issues such as specific memory straps on some boards not working at all, some boards not having the required options to tune the iGPU frequency, or even boards with a hard voltage limit. With AMD Ryzen Master, the latest version on hand at the time of testing (ver 1.2.0.0540) allowed for up to 1.55 volts on the CPU core, so limitations in less mature UEFI BIOSes can be bypassed.

I reached out to AMD, and they have stated that all of their ODM partners are updating their AM4 platforms. The updates are being rolled out now, with better fine tuning and improved configurations for overclocking the core frequency, the iGPU, and the memory already underway.

Although the AMD Ryzen Master software is very intuitive, offering a wide variety of options for basic overclocking as well as a user-friendly interface, overclocking through the BIOS is usually the preferred choice. The main reason for this is that the settings in the BIOS are applied at a hardware level before the operating system is loaded - changing settings after the OS has loaded can often introduce instability. That being said, where it was once frowned upon to use software to overclock the processor, the AMD Ryzen Master utility is probably one of the most comprehensive pieces of software for what a general user would want. We go into the software in detail later in this review.

For the final results in this review, the BIOS was used where applicable, with Ryzen Master as a backup in case of issues or missing options, such as voltage limitations or omitted graphics core clock settings.

Testing an Overclock: Is Stability Binary, or a Scale?

Once the system has the desired overclock applied, the next step is ensuring that the overclock is stable. A stability test is a simple way of pressuring your components, usually under maximum load, to see whether or not your settings remain stable. Stability testing often goes above and beyond regular system loading - it is highly unlikely that normal use will reach that level of compute pressure (although workloads like rendering and transcoding video will push those boundaries) - which also makes it a good way to measure component temperatures. Stress testing can also aid in diagnosing certain issues: depending on the system, an unstable system might show compute errors, a blue screen, or shut down immediately. Experience will often dictate what these mean, whether the DRAM frequency is too high, there is not enough voltage, or the thermal performance of the cooler is insufficient. Anyone that has experienced a dying video card, showing green lines across a game, can identify with this almost trial-and-error approach to solving stability issues.

One of the key discussions in overclocking forums is 'how stable is stable?'. After dissecting the issue over several years, users fall into two broad categories: stability is either binary, or stability is a scale.

For users that see stability as binary, a system is either completely stable in 100% of situations, or unstable. For these users, stability testing can go on for days, and they use a wide variety of high-pressure, high-compute workloads to expose every edge case possible. Even if an edge case might never be hit during the working life of the processor for their given workload, if the system does not remain stable, it isn't stable.


Stability testing can require seeing this a lot

For users that see stability as a sliding scale, the system can be 'stable enough'. The stability tests that these users perform are not as substantial or as long as those of the binary users; however, these users often know the lengths to which they will push the system: there is no need to be 144-hour power-virus stable for a system used to play Counter Strike. For our overclocking testing, because we have deadlines, we have to fall into this category (and the binary stability users aren't too keen on it). Sliding scale users might experience a stability issue every now and again - depending on the threshold, that could be once a week, once a month, or once a year.

(ed: I'm in the sliding scale camp, for what it is worth. I find it amusing to note that DRAM bit-errors, unaffected by overclocking, can happen on the scale of one per GB per four years (or less). That's about a worst-case scenario, but it translates to about one bit-error every three months in a system with 16GB. That is something that can't be controlled by stability testing. Hopefully it occurs in DRAM that isn't being used.)
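
The arithmetic behind that aside is straightforward, taking the quoted error rate at face value:

# One bit-error per GB per four years, scaled to a 16 GB system (rate quoted in the note above)
errors_per_gb_per_year = 1 / 4
installed_gb = 16
errors_per_year = errors_per_gb_per_year * installed_gb   # 4 expected errors per year
print(12 / errors_per_year)                                # ~3 months between expected bit-errors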

There are multiple programs and utilities designed for stressing components - some are designed specifically for the CPU, some for memory, and others test multiple components or graphics. Understanding what the stress test is doing is as important as the stress test itself, as it helps to debug a system that is not stable. It is often recommended, for the sake of debugging, to overclock one component at a time and stress test it individually. The deep end looks tempting, layering on the overclocks at stage one, but a user could overclock the CPU, the memory, and the graphics all at once, only to encounter issues and not know which of the components is causing the instability.

For the purposes of overclocking on AMD’s Ryzen 2000 series APUs, the Ryzen 3 2200G ($99) and Ryzen 5 2400G ($169), I chose two main tools: Prime95 and Furmark.

Prime95 focuses on CPU frequency and/or memory stability testing. The stress test computes multiple variations of FFTs (Fast Fourier Transforms) and ensures that the CPU's floating point capabilities are pushed as hard as possible. Other options in Prime95 focus on the memory too, as the torture test can be configured to allocate more memory while still computing FFTs.
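
Prime95 knows what the FFT results should be and flags any mismatch as a hardware error. The snippet below is a minimal sketch of that principle - repeatedly computing an FFT on fixed data and comparing it against a reference result - and is an illustration only, not how Prime95 itself is implemented:

import numpy as np

# Repeat an FFT on fixed input and compare against a reference result.
# On stable hardware every run is identical; a mismatch suggests a compute error.
rng = np.random.default_rng(42)
data = rng.random(1 << 20)                 # fixed 1M-sample input
reference = np.fft.fft(data)

for run in range(100):
    if not np.array_equal(np.fft.fft(data), reference):
        print(f"Mismatch on run {run}: possible unstable overclock")
        break
else:
    print("All runs matched the reference result")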

Another popular tool is MemTest86, a more intensive memory stability checker. MemTest86 is also used to detect failures in bad memory modules, which is a more extreme use case. 

To test for stability on the Vega cores of the APUs, we use Furmark, a commonly used graphics card stability testing utility. It is an OpenGL benchmark that uses fur rendering, which puts immense pressure on the graphics card, or in this case, the integrated GPU.

During stress testing, aside from monitoring the CPU core frequency and integrated graphics, another important element is monitoring temperatures. The two main methods of temperature detection are physical and software. Professional overclockers often use the physical method, with specialist tools designed to cope with sub-zero temperatures that the thermistors inside the CPU or GPU are not capable of reading properly.

Using software is the method most people use, especially for daily systems, although it requires confidence that the values are reported accurately. Having a range of software available for monitoring GPU, CPU, and even memory temperatures is important: instability issues from thermals can occur just as often as those from voltage. In the case of overheating, the system can simply turn off. Some users also have IR thermal tools, or thermal cameras, to check how the power delivery is performing, and whether the heatsinks on the power delivery are sufficient and not causing thermal throttling. 

There are a couple of applications that do a good job of monitoring temperatures, such as CoreTemp, but the most comprehensive freeware option we tend to use is HWMonitor, created by CPUID, the same company behind CPU-Z. HWMonitor gives real-time temperature information from all the sensors on board, as well as voltages and fan speeds.
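
HWMonitor is a Windows GUI tool, but the same sort of logging can be scripted if you prefer a record of temperatures over the course of a stress run. Below is a minimal sketch using Python's psutil library (Linux only, and the 'k10temp' sensor name is an assumption - what your board exposes will vary):

import time
import psutil  # psutil.sensors_temperatures() is only implemented on Linux

# Print package/core temperatures once a second while a stress test runs elsewhere
while True:
    for chip, readings in psutil.sensors_temperatures().items():
        for r in readings:                       # on Ryzen this is typically the 'k10temp' driver
            print(f"{chip} {r.label or 'sensor'}: {r.current:.1f} C")
    time.sleep(1)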

Most processors have a TJMax, or thermal limit, before the system switches off. For our Ryzen 3 2200G APU, this happened at around 95ºC, reported within HWMonitor as the package temperature. From the stock temperatures, this potentially gives a lot of headroom with good cooling.

Another pair of useful monitoring applications are CPUID's CPU-Z software, and TechPowerUp's GPU-Z.

CPU-Z gives information on the processor and offers real-time readings of things such as CPU frequency, base clock, memory speeds, and timings. GPU-Z displays information pertaining to the graphics side, which in the case of the 2200G means the 8 Vega compute units, the GPU core frequency, the memory frequency, and the BIOS information; GPU load and fan speeds can also be displayed, along with information on feature support such as DirectX 12.

Test Bed and Setup

For our testing, we have our usual test setup and peripherals. The CPUs in play number four: two samples from the launch and two from retail, with each pair split between one 2400G and one 2200G. When analysing the BIOSes of each motherboard company, we used one motherboard from each; however, the overclocking results at the end were done on the MSI B350I Pro AC motherboard. This is not a particularly overclocking-focused motherboard, but it is more indicative of the price range that an APU buyer will be interested in.

Test Setup - Ryzen 3 2200G and Ryzen 5 2400G

Processors      AMD Ryzen 3 2200G ($99)
                Four Cores, Four Threads
                3.50 GHz Base, 3.70 GHz Boost
                Vega 8 Integrated Graphics (8 CUs = 512 SPs) at 1100 MHz
                Tested: Launch Sample and Retail Sample

                AMD Ryzen 5 2400G ($169)
                Four Cores, Eight Threads
                3.60 GHz Base, 3.90 GHz Boost
                Vega 11 Integrated Graphics (11 CUs = 704 SPs) at 1250 MHz
                Tested: Launch Sample and Retail Sample

Motherboards    MSI B350I Pro AC
                GIGABYTE AX370-Gaming 5
                ASUS Prime X370-Pro
                ASRock X370-Gaming ITX/ac

Cooling         Thermaltake Floe Riing RGB 360
                AMD Wraith Stealth
                (automatic fan profile determined from the CPU fan header)

Thermal Paste   Cooler Master MasterGel Nano

Power Supply    Thermaltake Toughpower Grand 1200 W Gold

Memory          G.Skill Ripjaws V, 2x8 GB, 1.35 V
                DDR4-3600 17-18-18 (XMP), DDR4-2933 for stock

Hard Drive      Crucial MX300 1 TB

Case            Open Test Bed

Many thanks to our usual partners for assisting with the hardware for this article: AMD for the processors, the motherboard vendors for their samples, G.Skill for the memory, and Crucial for the storage.

What's In This Article

Over the next few pages, we will go through the BIOSes of each of the four main motherboard manufacturers, identifying where the overclocking options are. We will also cover the Ryzen Master tool, which is likely to be the first port of call for users new to overclocking. We also have the results from overclocking all four of our APUs: our two launch-day samples as well as the two retail samples. The two different CPUs were benchmarked at their peak results, comparing the out-of-the-box performance to our best stable overclocks, to see if overclocking these APUs actually gives notable results.

Testing and Core by Gavin Bonshor
Additional Commentary by Ian Cutress

63 Comments

  • Eidorian - Monday, April 16, 2018 - link

    Alarms were going off in my head when it says in the article that the E2160 was from 2005. I have July 26, 2006 seared into my memory as Core 2 Duo Day. I see the E2160 IHS does say '05.

    https://ark.intel.com/products/29739/Intel-Pentium...
  • Ian Cutress - Monday, April 16, 2018 - link

    IHS says 05, ARK says Q3 '06, CPU-World says May 2007.
    http://www.cpu-world.com/Releases/Desktop_CPU_rele...
  • jjj - Monday, April 16, 2018 - link

    Pretty sure retail was early June 2007 (certain about year) for the Allendale Pentiums.
  • jjj - Monday, April 16, 2018 - link

    Some folks found it in retail in late May 2007
    http://www.overclockers.com/forums/showthread.php/...
  • Eidorian - Monday, April 16, 2018 - link

    That looks good.

    https://web.archive.org/web/20070927002239/http://...
  • nathanddrews - Monday, April 16, 2018 - link

    That 2200G is quite the little spitfire, relatively speaking. Not sure I would spend extra money for a better cooler on a budget gaming build, though. I would put that money toward an SSD or maybe a FreeSync display and then overclock as best I could.
  • coolhardware - Monday, April 16, 2018 - link

    I'm going with a 2200G for my son's first PC build. I remember my Pentium II 233 build with the help of Anandtech WAY back in the day, complete with SCSI HDD. It was a sweet system that lasted a long time.

    Now, back to the AMD build, any mobo recommendations? I would like to keep it mini-itx if possible and I am leaning toward the GIGABYTE GA-AB350N:
    https://amzn.to/2HFQAoS (~$109) but am open to suggestions.

    Reliability is top concern and two digital video outs (HDMI, or DisplayPort, not analog DSUB). Starting with the integrated GPU but maybe down the road going discrete.

    Thanks in advance for advice! :-)
  • coolhardware - Monday, April 16, 2018 - link

    PS I remember Anand posting SO MANY motherboard reviews back when he was just a kid (and so was I). Back then I settled on a Tyan motherboard after his recommendation. :-)
  • RaduR - Tuesday, April 17, 2018 - link

    Back on the old VIA MVP3 platform. Yes, we were kids and Anand was actually working here!
  • gavbon - Tuesday, April 17, 2018 - link

    There will be many more to come, don't worry about that!
