Advanced Synthetic Tests

Our benchmark suite includes a number of narrowly focused tests that are less about replicating real-world IO patterns and more about exposing the inner workings of a drive. Many of these tests will show exaggerated differences between drives, and for the most part that should not be taken as a sign that one drive will be drastically faster for real-world usage. These tests are about satisfying curiosity, and are not good measures of overall drive performance. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Whole-Drive Fill

Pass 1
Pass 2

The SLC write cache on the 2TB Inland Performance Plus lasts for about 225GB on the first pass (about the same cache size as the 980 PRO, but a bit faster), and for about 55GB on the second pass, when the drive is already full. Performance during each phase of filling the drive is quite consistent; the only significant variability shows up after the drive is 80% full. Sequential write performance during the SLC cache phase is higher than on any other drive we've tested to date.
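The drive-fill behavior described above can be sketched in a few lines: write sequentially in 128kB chunks while timing each write, and the SLC-to-TLC transition shows up as a sharp drop in the throughput samples. This is a minimal illustration, not our actual test harness; the function name and parameters are invented, and a real run would target the raw block device at full capacity rather than a small file.

```python
import os
import time

def fill_and_log(path, total_bytes, chunk_bytes=128 * 1024):
    """Sequentially write `total_bytes` to `path` in `chunk_bytes` pieces,
    returning (bytes_written, MB_per_s) samples, one per chunk."""
    buf = os.urandom(chunk_bytes)  # incompressible data, as a fill test would use
    samples = []
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force the write to the device, not the page cache
            dt = time.perf_counter() - t0
            written += chunk_bytes
            samples.append((written, chunk_bytes / dt / 1e6))
    return samples
```

Plotting the samples against bytes written reproduces the pass-1/pass-2 curves above: a high plateau while the SLC cache absorbs writes, then a step down once the controller starts folding data into TLC.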

Sustained 128kB Sequential Write (Power Efficiency)
Average Throughput for Last 16 GB
Overall Average Throughput

The post-cache performance is a bit slower than the fastest TLC drives, but overall average throughput is comparable to other top TLC drives. The Inland Performance Plus is still significantly slower than the MLC and Optane drives that didn't need a caching layer, but one or two more generational improvements in NAND performance may be enough to overcome that difference.

Working Set Size

As expected from a high-end drive with a full-sized DRAM buffer, the random read latency from the Inland Performance Plus is nearly constant regardless of the working set size. There's a slight drop in performance when random reads are covering the entire range of the drive, but it's smaller than the drop we see from drives that skimp on DRAM.
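The idea behind the working set size test can be sketched as follows: confine random 4kB reads to progressively larger slices of the drive and watch how mean latency responds. This is a simplified, hypothetical sketch (the function and its parameters are invented); a real harness would use unbuffered (O_DIRECT) access so the OS page cache doesn't absorb the reads, and would span the raw device rather than a file.

```python
import random
import time

def random_read_latency(path, working_set_bytes, block=4096, reads=1000):
    """Issue `reads` random 4kB reads confined to the first
    `working_set_bytes` of `path`; return mean latency in microseconds."""
    blocks = working_set_bytes // block
    with open(path, "rb") as f:
        t0 = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(blocks) * block)
            f.read(block)
        elapsed = time.perf_counter() - t0
    return elapsed / reads * 1e6
```

On a DRAM-less or small-DRAM drive, latency climbs once the working set outgrows the portion of the flash translation layer's mapping table the controller can cache; a drive with a full-sized DRAM buffer stays nearly flat, as in the chart above.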

Performance vs Block Size

Random Read
Random Write
Sequential Read
Sequential Write

There are no big surprises from testing the Inland Performance Plus with varying block sizes. The Phison E18 controller has no problem handling block sizes smaller than 4kB. The random write results are a little rough, especially when testing the drive at 80% full, but it's hardly the only drive to have SLC cache troubles there. Like many other drives, its sequential read performance doesn't scale smoothly with larger block sizes, and the drive needs a high queue depth or a very large block size to deliver great sequential read performance.
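A block-size sweep like the one charted above boils down to re-reading the same data with different request sizes. The sketch below is a simplified, buffered-IO illustration (the function name is invented); the actual suite issues unbuffered IO and also varies queue depth, which a plain read loop can't express.

```python
import time

def sequential_read_throughput(path, block_size):
    """Read `path` start to finish in `block_size` chunks; return MB/s."""
    total = 0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.perf_counter() - t0) / 1e6
```

Sweeping power-of-two block sizes from 512 B up to 1 MB and plotting throughput per size reproduces the shape of the sequential read curve: small requests are limited by per-command overhead, and only large requests (or deep queues) let the drive stream at full speed.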

Comments

  • Samus - Sunday, May 16, 2021 - link

    Microsoft really has to get with the times and launch ReFS on the client end already. NTFS is a joke compared to even legacy file systems like EXT3 and hasn't been updated in 20 years (unless you consider the journaling update starting with Windows 8).
  • GeoffreyA - Monday, May 17, 2021 - link

    Well, NTFS might not have been updated much, but you know what they say, if it ain't broke, don't fix it. It was quite advanced for its time. Still is solid. Had journalling from the start, Unicode, high-precision time, etc. Compression came next. Then in NT 5, encryption, sparse files, quotas, and all that. Today, the main things it's lacking are copy-on-write, de-duplication, and checksums for data. Microsoft seems to have downplayed ReFS, owing to some technical issues.
  • MyRandomUsername - Tuesday, May 18, 2021 - link

    Have you tried compression on NTFS (particularly on small files)? I/O performance on a high-end NVMe drive plummets to first-gen SSD levels. Absolutely unusable.
  • GeoffreyA - Wednesday, May 19, 2021 - link

    Haven't got an NVMe drive, but I'll try some experiments and see how it goes. Could be that many small files stagger any SSD.
  • mode_13h - Tuesday, May 18, 2021 - link

    > copy-on-write, de-duplication

    A huge use case for that is snapshots. They're my favorite feature of BTRFS.
  • GeoffreyA - Wednesday, May 19, 2021 - link

    Glancing over it, Btrfs looks impressive.
  • mode_13h - Thursday, May 20, 2021 - link

    Copy-on-write can cause problems, in some cases. BTRFS lets you disable it on a per-file, per-directory, or per-subvolume basis.

    One feature of BTRFS I haven't touched is its built-in RAID functionality. I've always used it atop a hardware RAID controller or even a software RAID. And if you're using mechanical disks, software RAID is plenty fast, these days.
  • GeoffreyA - Thursday, May 20, 2021 - link

    Whenever there's sharing of this sort, there's always trouble round the corner.
  • mode_13h - Friday, May 21, 2021 - link

    > Whenever there's sharing of this sort, there's always trouble round the corner.

    Maybe. I think the issue is really around pretending you have a unique copy, when it's really not. In that sense, it's a little like caches -- usually a good optimization, but there's pretty much always some corner case where you hit a thrashing effect and they do more harm than good.
  • GeoffreyA - Sunday, May 23, 2021 - link

    "I think the issue is really around pretending you have a unique copy, when it's really not."

    You hit the nail there. A breaking down between concept ("I've got a unique copy") and implementation. And so the outside world, tying itself to the concept, runs into occasional trouble.
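The extent sharing discussed in this thread can be seen in miniature with reflink copies, which use the same copy-on-write mechanism as Btrfs snapshots: the clone appears instantly because no data is duplicated until one copy is modified. A rough sketch, driving GNU coreutils `cp` from Python (file paths are arbitrary); `--reflink=auto` silently falls back to an ordinary copy on filesystems without CoW support, so it runs anywhere:

```python
import os
import subprocess
import tempfile

# A reflink copy shares on-disk extents with its source (copy-on-write), so on
# Btrfs or XFS it completes near-instantly regardless of file size. With
# --reflink=auto, GNU cp falls back to a full copy on non-CoW filesystems.
src = os.path.join(tempfile.gettempdir(), "cow_demo.bin")
dst = os.path.join(tempfile.gettempdir(), "cow_demo_clone.bin")
with open(src, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of incompressible data
subprocess.run(["cp", "--reflink=auto", src, dst], check=True)
with open(src, "rb") as a, open(dst, "rb") as b:
    assert a.read() == b.read()  # clone is byte-identical to the source
os.remove(src)
os.remove(dst)
```

Writing to either file after the clone triggers the "unique copy" illusion mentioned above: the filesystem copies the affected extents on demand, which is exactly where the cache-thrashing-style corner cases can appear.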
