Advanced Synthetic Tests

Our benchmark suite includes a variety of tests that are less about replicating real-world IO patterns and more about exposing the inner workings of a drive through narrowly focused tests. Many of these tests will show exaggerated differences between drives, and for the most part that should not be taken as a sign that one drive will be drastically faster in real-world usage. These tests are about satisfying curiosity and are not good measures of overall drive performance. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Whole-Drive Fill

[Charts: whole-drive fill sequential write throughput, Pass 1 and Pass 2]

The SLC write cache on the 2TB Inland Performance Plus lasts for about 225GB on the first pass (about the same cache size as the 980 PRO, but a bit faster), and for about 55GB on the second pass, when the drive is already full. Performance during each phase of filling the drive is quite consistent, with the only significant variability showing up after the drive is 80% full. Sequential write performance during the SLC cache phase is higher than on any other drive we've tested to date.
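For the curious, the short Python sketch below shows one way a cache-size estimate like this could be pulled out of a fill trace. It is not our actual test harness: the CSV log format, file name, and 50% drop threshold are all assumptions made for illustration.

```python
# A minimal sketch (not the article's actual test harness) of estimating SLC
# cache size from a whole-drive-fill throughput log. It assumes a CSV of
# "gigabytes_written,throughput_mbps" samples captured during a sequential
# fill; the file name and the 50% drop threshold are illustrative choices.
import csv

def estimate_slc_cache_gb(log_path, drop_ratio=0.5):
    samples = []
    with open(log_path, newline="") as f:
        for gb_written, mbps in csv.reader(f):
            samples.append((float(gb_written), float(mbps)))

    # Use the first few samples to establish the in-cache write speed.
    baseline = sum(mbps for _, mbps in samples[:10]) / len(samples[:10])

    # Treat the cache as exhausted at the first sample where throughput
    # falls well below that baseline (here, below half of it).
    for gb_written, mbps in samples:
        if mbps < baseline * drop_ratio:
            return gb_written
    return samples[-1][0]  # throughput never dropped; cache covered the fill

print(f"Estimated SLC cache: {estimate_slc_cache_gb('fill_pass1.csv'):.0f} GB")
```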

Sustained 128kB Sequential Write (Power Efficiency)
[Charts: Average Throughput for last 16 GB; Overall Average Throughput]

Post-cache performance is a bit slower than that of the fastest TLC drives, but the overall average throughput is comparable to other top TLC models. The Inland Performance Plus is still significantly slower than the MLC and Optane drives that didn't need a caching layer, but one or two more generational improvements in NAND performance may be enough to overcome that difference.
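To make the relationship between the two metrics concrete, here is a hypothetical helper that derives both from a single write trace. The sample format (elapsed seconds, gigabytes written) and the simple 1024 MB-per-GB conversion are assumptions for the sake of the sketch.

```python
# Illustrative only: derives both summary metrics from one write trace.
# "samples" is assumed to be a list of (seconds_elapsed, gigabytes_written)
# pairs recorded while sequentially filling the drive.
def summarize_sequential_write(samples, tail_gb=16.0):
    total_time, total_gb = samples[-1]
    overall_mbps = total_gb * 1024 / total_time

    # Average over only the final `tail_gb` gigabytes written, which is
    # dominated by post-cache (steady-state) behaviour.
    tail_start = next(t for t, gb in samples if gb >= total_gb - tail_gb)
    last_16gb_mbps = tail_gb * 1024 / (total_time - tail_start)
    return overall_mbps, last_16gb_mbps
```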

Working Set Size

As expected from a high-end drive with a full-sized DRAM buffer, the random read latency of the Inland Performance Plus is nearly constant regardless of the working set size. There's a slight drop in performance when random reads cover the entire range of the drive, but it's smaller than the drop we see from drives that skimp on DRAM.
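The test behind this chart simply confines random reads to progressively larger slices of the drive and watches the mean latency. A simplified sketch of the idea, with a placeholder device path and without the direct-IO plumbing a real run would need, might look like this:

```python
# A simplified sketch of the working-set-size test: issue 4 kB random reads
# confined to progressively larger slices of the device and report the mean
# latency for each slice. The device path and sizes are placeholders, and a
# real run would bypass the page cache (O_DIRECT) and issue far more IOs.
import os, random, time

BLOCK = 4096
DEVICE = "/dev/nvme0n1"  # hypothetical device under test

def mean_read_latency(fd, working_set_bytes, ios=10000):
    blocks = working_set_bytes // BLOCK
    start = time.perf_counter()
    for _ in range(ios):
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
    return (time.perf_counter() - start) / ios

fd = os.open(DEVICE, os.O_RDONLY)
for ws_gb in (1, 4, 16, 64, 256, 1024):
    latency = mean_read_latency(fd, ws_gb * 1024**3)
    print(f"{ws_gb:5d} GB working set: {latency * 1e6:.1f} µs per read")
os.close(fd)
```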

Performance vs Block Size

[Charts: performance vs block size for Random Read, Random Write, Sequential Read, and Sequential Write]

There are no big surprises from testing the Inland Performance Plus with varying block sizes. The Phison E18 controller has no problem handling block sizes smaller than 4kB. The random write results are a little rough, especially when testing the drive at 80% full, but it's hardly the only drive to have SLC cache troubles here. As with many other drives, sequential read performance doesn't scale smoothly with larger block sizes, and the drive really needs a larger queue depth or a very large block size to deliver great sequential read performance.
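A block-size sweep of this sort is easy to approximate. The sketch below times random reads at several block sizes against a placeholder device; it omits the direct IO, deeper queue depths, and the other three IO patterns the real benchmark covers, so it only illustrates the shape of the test.

```python
# A rough sketch of a block-size sweep for random reads: transfer a fixed
# amount of data at each block size and report the resulting throughput.
# The device path, span, and transfer size are illustrative assumptions.
import os, random, time

DEVICE = "/dev/nvme0n1"        # hypothetical device under test
SPAN = 64 * 1024**3            # confine reads to the first 64 GiB
DATA_PER_STEP = 256 * 1024**2  # move 256 MiB at every block size

fd = os.open(DEVICE, os.O_RDONLY)
for bs in (512, 4096, 16384, 65536, 131072, 1048576):
    ios = DATA_PER_STEP // bs
    start = time.perf_counter()
    for _ in range(ios):
        os.pread(fd, bs, random.randrange(SPAN // bs) * bs)
    elapsed = time.perf_counter() - start
    print(f"{bs:>8} B blocks: {DATA_PER_STEP / elapsed / 1e6:.0f} MB/s")
os.close(fd)
```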

Comments

  • DominionSeraph - Thursday, May 13, 2021 - link

    Try an optimized OS like XP. There's really no difference.
  • philehidiot - Thursday, May 13, 2021 - link

    I do actually have Windows 95 installed as a VM, running off an SSD. If you want to really understand how bloated and sluggish Windows 10 is, try using Windows 95 and see how far they have regressed in pursuit of looking pretty.
  • GeoffreyA - Friday, May 14, 2021 - link

    Even XP feels faster, on an older computer, than 10. Vista is where the sluggishness crept in.
  • GeoffreyA - Friday, May 14, 2021 - link

    Also, software in general has become more sluggish, owing to excessive use of abstractions, frameworks, and modern languages.
  • jospoortvliet - Friday, May 14, 2021 - link

    Software has become vastly more complex as users demand more and more features and slick interfaces. Platforms also evolve faster, and more of them need to be supported. Developers have less time per feature, so more abstractions and higher-level languages are needed. You can't write code in a browser that is as efficient as good old assembly, since it has to run everywhere; and even if you could, you would lose to a competitor who wrote more features with fewer developers....

    So yeah, you are right but it is a trend that is hard to reverse.
  • GeoffreyA - Friday, May 14, 2021 - link

    Quite true, but one can't help feeling a pang of regret when looking at today's applications vs. those rare C/C++ Win32 ones that, as they say, just fly.
  • FunBunny2 - Saturday, May 15, 2021 - link

    "Quite true, but one can't help feeling a pang of regret when looking at today's applications vs. those rare C/C++ Win32 ones that, as they say, just fly."

    True fact. I used 1-2-3 pretty much from version 1, which was x86 assembler, as was DOS. Somewhere around 2.4 it was rewritten in C (C++ didn't yet exist). The first time I fired up 1-2-3 2.4 (on a 640K 8088), what had been instant screen updates were now slow as molasses uphill in winter; you could see individual elements change, one by one.

    It appears that the constant push and pull between node shrinks, more transistors, phatter CPUs, and more memory on the one hand, and software bloat on the other, doesn't balance out. I've always been sceptical of the ever-increasing number of 'tiers' in the memory hierarchy paired with load-store architectures. Perhaps persistent memory will give us a true Single Level Storage that's more performant than just virtual storage/memory. We'd have to work out a new version of transaction control, though.
  • GeoffreyA - Saturday, May 15, 2021 - link

    Well, soon they'll need some big changes, when the quantum limits set by Nature are hit. As for the software, yes, it tends to get slower as time goes by. Any gains in hardware are quickly reversed. I think there's been a view inculcated against C++, instigated by Java perhaps, that it's not safe, it's bad, and so one needs to use a better, more modern language; or if C++, do things in an excessive object-oriented way, away from the lighter C sort of style. As in all of life, even "good" programming principles can be taken too far. So moderation is best.
  • FunBunny2 - Saturday, May 15, 2021 - link

    "if C++, do things in an excessive object-oriented way, away from the lighter C sort of style."

    C has been described as the universal assembler, and that's pretty much true, especially if you limit the description to the bare language without the many libraries. A C program can be blazingly fast if the code treats the machine as a control program would. But that's also how the PC world was nearly extinguished in the late 80s and early 90s by viruses of all kinds. I'm among those who spent more time than I wanted editing with Norton Disk Doctor. Not an era I miss.
  • GeoffreyA - Sunday, May 16, 2021 - link

    Oh, yes, programs were doing their own thing till OSes began to clamp down. As the years went by, security got more attention too, as it should, and newer languages guaranteed different types of safety: an important point in an era where so much of our information is handled electronically. The same goes for easier portability and maintenance.
