Advanced Synthetic Tests

Our benchmark suite includes a variety of tests that are less about replicating any real-world IO patterns, and more about exposing the inner workings of a drive with narrowly-focused tests. Many of these tests will show exaggerated differences between drives, and for the most part that should not be taken as a sign that one drive will be drastically faster for real-world usage. These tests are about satisfying curiosity, and are not good measures of overall drive performance. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Whole-Drive Fill

[Charts: Whole-Drive Fill write speed, Pass 1 and Pass 2]

The SLC write cache on the 2TB Inland Performance Plus lasts for about 225GB on the first pass (about the same cache size as the 980 PRO, but a bit faster), and for about 55GB on the second pass when the drive is already full. Performance during each phase of filling the drive is quite consistent, with the only significant variability showing up after the drive is 80% full. Sequential write performance during the SLC cache phase is higher than on any other drive we've tested to date.
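
For readers curious how a drive-fill test like this can be scripted, the sketch below shows one way to do it with fio on Linux. The device path, block size, queue depth, and logging interval are illustrative assumptions, not the exact job parameters behind these charts, and running it destroys all data on the target drive.

```python
# Minimal sketch of a whole-drive fill test scripted around fio (assumed installed).
# DEVICE, block size, queue depth, and logging interval are illustrative choices,
# not the review's actual job settings.
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical test drive; everything on it is overwritten

def fill_pass(label: str) -> None:
    """Sequentially write the entire drive once, logging bandwidth each second."""
    subprocess.run([
        "fio",
        f"--name={label}",
        f"--filename={DEVICE}",
        "--rw=write",               # sequential writes across the whole device
        "--bs=128k",
        "--iodepth=32",
        "--ioengine=libaio",
        "--direct=1",               # bypass the page cache
        f"--write_bw_log={label}",  # per-interval bandwidth log for plotting
        "--log_avg_msec=1000",
    ], check=True)

# Pass 1 fills an empty drive; pass 2 overwrites the already-full drive,
# which is where the much smaller effective SLC cache shows up.
fill_pass("fill_pass1")
fill_pass("fill_pass2")
```

Plotting the resulting bandwidth logs makes the SLC-to-TLC transition visible as a sharp step in write speed partway through each pass.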

Sustained 128kB Sequential Write (Power Efficiency)
[Charts: Average Throughput for last 16 GB; Overall Average Throughput]

The post-cache performance is a bit slower than the fastest TLC drives, but overall average throughput is comparable to other top TLC drives. The Inland Performance Plus is still significantly slower than the MLC and Optane drives that didn't need a caching layer, but one or two more generational improvements in NAND performance may be enough to overcome that difference.

Working Set Size

As expected from a high-end drive with a full-sized DRAM buffer, the random read latency from the Inland Performance Plus is nearly constant regardless of the working set size. There's a slight drop in performance when random reads span the entire range of the drive, but it's smaller than the drop we see from drives that skimp on DRAM.
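
A rough way to reproduce a working-set-size sweep is to confine QD1 random reads to progressively larger slices of the drive and record the mean latency for each slice. The snippet below is only a sketch along those lines; the device path, runtime, and list of working-set sizes are assumptions rather than the suite's actual parameters.

```python
# Hypothetical working-set-size sweep using fio's JSON output. Device path,
# runtime, and working-set sizes are illustrative assumptions.
import json
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical test drive

def mean_read_latency_us(working_set: str) -> float:
    """Mean latency of QD1 4kB random reads confined to the first `working_set` bytes."""
    result = subprocess.run([
        "fio",
        "--name=wss",
        f"--filename={DEVICE}",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=1",            # QD1 so the result reflects latency, not parallelism
        "--ioengine=libaio",
        "--direct=1",
        f"--size={working_set}",  # limits random offsets to this much of the device
        "--time_based",
        "--runtime=60",
        "--output-format=json",
    ], check=True, capture_output=True, text=True)
    stats = json.loads(result.stdout)
    return stats["jobs"][0]["read"]["lat_ns"]["mean"] / 1000.0

for ws in ["1G", "16G", "64G", "256G", "1T", "2T"]:
    print(ws, round(mean_read_latency_us(ws), 1), "us")
```

A drive with a full-sized DRAM mapping table should produce a nearly flat line here, while DRAMless designs typically show latency climbing once the working set outgrows whatever mapping data they can cache.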

Performance vs Block Size

[Charts: Random Read, Random Write, Sequential Read, Sequential Write]

There are no big surprises from testing the Inland Performance Plus with varying block sizes. The Phison E18 controller has no problem handling block sizes smaller than 4kB. The random write results are a little rough, especially when testing the drive at 80% full, but it's hardly the only drive to have SLC cache troubles here. Like many other drives, the sequential read performance doesn't scale smoothly with the larger block sizes, and the drive really needs a larger queue depth or a very large block size to deliver great sequential read performance.
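
The block-size sweep can be approximated the same way: run the same access pattern at each block size from 512 bytes up through 1MB and compare throughput. As before, the sketch below uses an assumed device path and job parameters, not the review's actual job files.

```python
# Hypothetical block-size sweep for random reads, using fio's JSON output.
# Device path, queue depth, runtime, and block-size list are assumptions.
import json
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical test drive

def randread_mbps(block_size: str, queue_depth: int = 1) -> float:
    """Random-read throughput in MB/s at the given block size and queue depth."""
    result = subprocess.run([
        "fio",
        "--name=bs_sweep",
        f"--filename={DEVICE}",
        "--rw=randread",
        f"--bs={block_size}",
        f"--iodepth={queue_depth}",
        "--ioengine=libaio",
        "--direct=1",
        "--time_based",
        "--runtime=30",
        "--output-format=json",
    ], check=True, capture_output=True, text=True)
    stats = json.loads(result.stdout)
    return stats["jobs"][0]["read"]["bw"] * 1024 / 1e6  # fio reports bw in KiB/s

for bs in ["512", "1k", "2k", "4k", "8k", "16k", "32k", "64k", "128k", "256k", "512k", "1m"]:
    print(bs, round(randread_mbps(bs), 1), "MB/s")
```

Swapping --rw=randread for randwrite, read, or write covers the other three charts, and raising the queue depth shows how much parallelism the sequential read test needs before the larger block sizes pay off.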


118 Comments


  • Billy Tallis - Thursday, May 13, 2021

    Single-core performance can help with a lot of synthetic storage benchmarks, by making for faster context switches and system calls. But if you care about such marginal improvements, I suspect we would find that dropping Windows and using Linux instead will have a far greater impact on storage performance and OS overhead.

    I don't recall any of the PCIe 4.0 SSD controller vendors complaining about AMD's PCIe implementation being a bottleneck.
  • mode_13h - Thursday, May 13, 2021

    @thestryker is right that Intel claimed faster PCIe 4 SSD performance than the competition, in one of their Rocket Lake slides. I think it was like 20%, but now I can't find the slide.

    I was so struck by it that I clearly remember it, and was wondering if they were talking about a PCIe 4.0 drive connected to Ryzen via its chipset link. Because that's the only way it made sense to me.
  • GeoffreyA - Friday, May 14, 2021

    "connected to Ryzen via its chipset link"

    That's a possibility.
  • Spunjji - Friday, May 14, 2021

    Ryan Shrout released the information in February, and it was 11%. The claim was based on performance in PCMark 10's "quick" storage benchmark. Apparently the drives being tested were connected to a riser card in a secondary PCIe slot, which was an odd decision, as X570 supports connecting the SSD directly to the CPU via the M.2 slots.

    It looks like they found a benchmark that favoured their setup specifically and went with it.
  • Slash3 - Friday, May 14, 2021

    Rocket Lake itself also has a dedicated CPU-connected NVMe M.2 slot. The whole setup was just absurd.
  • carcakes - Thursday, May 13, 2021

    Experience the Best of Both Worlds: 8x M.2 Ports @ x16 PCIe 4.0 Speed!

    1x HighPoint SSD7540 PCIe Gen4 x16 8-Port M.2 NVMe RAID Controller + 8x ASRock Legacy M.2 Graphics Card.
  • mode_13h - Thursday, May 13, 2021

    That's more expensive, chews up PCIe lanes, and can only hurt read latency. Plus, having faster SSDs to put in a RAID makes such configurations even faster!
  • Dug - Thursday, May 13, 2021

    So what you are really saying is, buy the WD SN850 instead of this.
  • Oxford Guy - Friday, May 14, 2021

    Looks like the ADATA is the price-performance winner for budget buyers.
  • Alexvrb - Thursday, May 13, 2021

    When the 176L-equipped models with tuned firmware roll around, they just might take the crown.

    Then again, until hardware-accelerated DirectStorage titles start coming out, I don't think there's much benefit for me. Even then, only for titles that have some extremely large assets that need to be streamed in and don't fit in RAM... DS is far more beneficial for consoles since they need to save money wherever possible - mainly RAM.
