Conclusion

The enterprise SSD market has shifted significantly over the past few years. PCIe SSDs have expanded from an expensive niche to a broad range of mainstream products. It's no longer possible to carve the market up into just a few clear segments; the enterprise SSD market is a rich spectrum of options. We're further than ever from a one-size-fits-all approach to storage.

But at the same time, we're as close as we'll ever get to seeing the market dominated by one kind of memory. TLC NAND has pushed MLC NAND out of the market. QLC, 3D XPoint and Z-NAND are all still niche memories compared to the vast range that TLC currently covers. We tested enterprise SSDs from a variety of market segments: two tiers of SATA SSD and a range of NVMe drives, from a low-power 1TB M.2 up to power-hungry multi-TB U.2 and add-in card drives.

The latest Samsung enterprise SATA drives show that SATA is far from a dying legacy technology. The SATA drives often come out on top of our power efficiency ratings: with power draw that largely stays in the 2-3W range, they can compete in IOPS per Watt even when their raw performance is far lower than that of the NVMe drives. And the SATA drives aren't always far behind on performance: the smaller and slower NVMe drives don't have a huge advantage in steady-state write performance over SATA drives of the same capacity. Granted, most of these drives are intended for heavily read-oriented workloads, and it no longer makes sense to build a high-endurance, write-oriented SATA drive, because the interface would then be more of a bottleneck than the NAND flash itself.
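To make that efficiency comparison concrete, here is a minimal sketch of the IOPS-per-Watt arithmetic. The figures are illustrative round numbers, not measured results from this review:

```python
# Illustrative IOPS-per-Watt comparison (hypothetical round numbers,
# not measurements from this review).
drives = {
    # name: (4KB random read IOPS, average power in Watts)
    "Enterprise SATA SSD": (75_000, 2.2),
    "High-end NVMe SSD": (550_000, 17.0),
}

for name, (iops, watts) in drives.items():
    print(f"{name}: {iops:,} IOPS at {watts} W -> {iops / watts:,.0f} IOPS/W")

# Despite roughly 7x the raw throughput, the NVMe drive's much higher power
# draw leaves its IOPS/W in the same ballpark as the SATA drive's.
```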

Where the NVMe drives shine is in delivering read performance far beyond what a single SATA link can handle, and this carries over to relatively read-heavy mixed workloads. The downsides of these drives are higher cost and higher power consumption. Their power efficiency is only competitive with the SATA drives if the NVMe drives are pushed to deliver the most performance their controllers can handle. That usually means higher queue depths than needed to saturate a SATA drive, and it often means that a higher capacity drive is needed as well: the 1TB and 2TB NVMe drives often don't have enough flash memory to keep the controller busy. The big, power-hungry controllers used in high-end NVMe SSDs are most worthwhile when paired with several TB of flash. Samsung's 983 DCT uses the same lower-power NVMe controller as their consumer NVMe drives, and its sweet spot is clearly at lower capacities than the ideal for the Intel P4510 or Memblaze PBlaze5.
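One practical way to see whether an NVMe drive is being pushed hard enough to reach its efficiency sweet spot is to sweep queue depth and watch where throughput saturates. Below is a minimal sketch that drives fio from Python; it assumes fio with libaio support is installed, the device path is a placeholder, and the run requires root. Power draw would have to be logged separately with a meter to compute IOPS per Watt.

```python
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # placeholder; point at the drive under test

for iodepth in (1, 2, 4, 8, 16, 32, 64, 128):
    # 4KB random reads with O_DIRECT, fixed runtime, JSON output for parsing.
    result = subprocess.run(
        ["fio", "--name=qd_sweep", f"--filename={DEVICE}",
         "--rw=randread", "--bs=4k", "--direct=1",
         "--ioengine=libaio", f"--iodepth={iodepth}",
         "--runtime=30", "--time_based", "--output-format=json"],
        capture_output=True, text=True, check=True)
    iops = json.loads(result.stdout)["jobs"][0]["read"]["iops"]
    print(f"QD{iodepth:>3}: {iops:>10,.0f} IOPS")
```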

The choice between SATA, low-power NVMe and high-end NVMe depends on the workload, and each of those market segments has a viable use case in today's market. The SATA drives are by far the cheapest way to pack many TB of flash into a single server, and in aggregate they can deliver high performance and great performance per Watt. Their downside is in applications requiring high performance per TB: datasets that aren't very large, but are very hot. It takes hours to read or write the entire capacity of a 4TB SATA SSD, and a handful of 4TB SATA SSDs can easily be large enough for such a dataset while not offering enough aggregate performance. In those cases, splitting the same dataset across 1TB SATA SSDs won't provide as much of a performance boost as moving to multi-TB NVMe drives.
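The "hours" figure is straightforward arithmetic. A quick sketch with assumed sustained sequential throughput (illustrative numbers, not results from this review):

```python
# Rough time to stream the full capacity of a drive, assuming sustained
# sequential throughput (illustrative figures, not measured results).
capacity_tb = 4
capacity_mb = capacity_tb * 1_000_000  # decimal TB, as drive vendors count

for name, mb_per_s in (("SATA SSD, ~500 MB/s", 500),
                       ("High-end NVMe SSD, ~3000 MB/s", 3000)):
    hours = capacity_mb / mb_per_s / 3600
    print(f"{name}: {hours:.1f} hours to read or write {capacity_tb} TB once")

# Roughly 2.2 hours over SATA versus about 0.4 hours over a fast NVMe link.
```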

The most powerful NVMe SSDs like the Memblaze PBlaze5 have shown that modern 3D TLC NAND can outperform older MLC-based drives in almost every way. With a sufficiently high queue depth, the PBlaze5 can even approach the throughput of Intel's Optane SSDs for many workloads: it offers sequential write performance similar to the Intel Optane P4800X and better sequential read performance. The PBlaze5's random write speed is slower by a third, but for random reads it matches the Optane SSD, and with careful tuning it can provide substantially more random read throughput than a single Optane SSD. All of this is from a drive that's high-end even by enterprise standards, but whose flash is actually a generation behind the other flash-based SSDs in this review.

Overall, there's no clear winner from today's review, and no obvious sweet spot in the enterprise SSD market. Samsung still puts out a very solid product lineup, but they're not the only supplier of good 3D NAND anymore. Intel's 64-layer 3D TLC is just as fast and power efficient, though Intel's current use of it in the P4510 suggests that they're still a bit behind on the controller side of things: the Samsung 983 DCT's QoS is much better even if its throughput is a bit lower. And the Memblaze PBlaze5 shows that the brute-force power of the largest SSD controllers can overcome the disadvantage of being a generation behind on the flash memory; we look forward to testing their more recent models that upgrade to 64-layer 3D TLC.

We're still refining how we want to present future enterprise SSD reviews, so if you have comments on what you'd like to see, whether in terms of products or testing methodology, please leave a comment below.

Comments

  • ZeDestructor - Friday, January 4, 2019 - link

    Could you do the MemBlaze drives too? I'm really curious how those behave under consumer workloads.
  • mode_13h - Thursday, January 3, 2019 - link

    At 13 ms, the Peak 4k Random Read (Latency) chart is likely showing the overhead of a pair of context switches for 3 of those drives. I'd be surprised if that result were reproducible.
  • Billy Tallis - Thursday, January 3, 2019 - link

    Those tail latencies are the result of far more than just a pair of context switches. The problem with those three drives is that they need really high queue depths to reach full throughput. Since that test used many threads each issuing one IO at a time, tail latencies get much worse once the number of threads exceeds the number of (virtual) cores. The 64-thread latencies are reasonable, but the 99.9th and higher percentiles are many times worse for the 96+ thread iterations of the test. (The machine has 72 virtual cores.)

    The only way to max out those drives' throughput while avoiding the thrashing of too many threads is to rewrite an application to use fewer threads that issue IO requests in batches with asynchronous APIs. That's not always an easy change to make in the real world, and for benchmarking purposes it's an extra variable that I didn't really want to dig into for this review (especially given how it complicates measuring latency).

    I'm comfortable with some of the results being less than ideal as a reflection of how the CPU can sometimes bottleneck the fastest SSDs. Optimizing the benchmarks to reduce CPU usage doesn't necessarily make them more realistic.
  • CheapSushi - Friday, January 4, 2019 - link

    Hey Billy, this is a bit of a tangent, but do you think SSHDs will have any kind of resurgence? There hasn't been a refresh at all. The 2.5" SSHDs max out at about 2TB, I believe, with 8GB of MLC(?) NAND. Now that QLC is being pushed out and with fairly good SLC schemes, do you think SSHDs could still fill a gap in price + capacity + performance? Say, at least a modest bump to 6TB of platter with 128GB of QLC/SLC-turbo NAND? Or some kind of increase along those lines? I know most folks don't care about them anymore, but there's still something appealing to me about the combination.
  • leexgx - Friday, January 4, 2019 - link

    SSHDs tend to use MLC. The only interesting ones have been the Toshiba second-gen SSHDs, as they use some of the 8GB for write caching (based on some basic tests I have seen),
    whereas Seagate only caches commonly read locations.
  • leexgx - Friday, January 4, 2019 - link

    The page reloading is very annoying.

    I want to test the second-gen Toshiba, but finding the right part number is hard because they use cryptic part numbers.
  • CheapSushi - Friday, January 4, 2019 - link

    Ah, I was not aware of the ones from Toshiba, thanks for the heads up. Write caching seems the way to go for such a setup. Did the WD SSHDs do the same as Seagate's?
  • leexgx - Friday, January 11, 2019 - link

    I have obtained the Toshiba MQ01, MQ02 and their H200 SSHDs, all 500GB, to test whether write caching works (limiting testing to 500MB of writes at the start and seeing how it goes from there).
  • thiagotech - Friday, January 4, 2019 - link

    Can someone help me understand which scenarios are considered QD1 and higher? Does anyone have a guide for dummies on what queue depth is? Let's suppose I start Windows and there are 200 files of 4KB, is that QD1 or QD64? I ask because I was copying a folder with a large number of tiny files and my Samsung 960 Pro reached around 70MB/s of copy speed, which is a really bad number...
  • Greg100 - Saturday, January 5, 2019 - link

    thiagotech,

    About queue depth during Windows boot-up, check the last post: https://forums.anandtech.com/threads/qd-1-workload...

    About optimizing Samsung 960 Pro performance, check "The SSD Reviewers Guide to SSD Optimization 2018" on thessdreview.
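A footnote on thiagotech's queue depth question above: queue depth is simply the number of I/O requests kept in flight at once, and copying lots of tiny files one at a time is essentially a QD1 workload, so throughput is limited by per-operation latency rather than by the drive's bandwidth. A rough back-of-the-envelope sketch, using an assumed per-IO latency purely for illustration:

```python
# Back-of-the-envelope: throughput of 4KB operations at a given queue depth,
# assuming a fixed per-IO latency (illustrative numbers only).
io_size_kb = 4
latency_us = 80  # assumed average latency of a single 4KB operation

for queue_depth in (1, 4, 32):
    iops = queue_depth * 1_000_000 / latency_us  # ideal overlap, no other limits
    mb_per_s = iops * io_size_kb / 1024
    print(f"QD{queue_depth:>2}: ~{iops:,.0f} IOPS, ~{mb_per_s:,.0f} MB/s")

# At QD1, even a fast SSD tops out at a few tens of MB/s with 4KB transfers,
# which is why copying piles of tiny files looks slow next to sequential copies.
```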
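And following up on Billy Tallis's point about thread counts versus core counts: the sketch below is illustrative only (not the review's actual test harness). It reproduces the many-threads, one-IO-at-a-time pattern he describes and reports a 99.9th-percentile latency. The file path is a placeholder, and without O_DIRECT the reads may be served from the page cache, so treat the absolute numbers loosely; the point is that tail latency tends to blow up once the thread count exceeds os.cpu_count().

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder: point at a large test file or a block device you can read.
PATH = "/path/to/testfile"
IO_SIZE = 4096
SPAN = 1 << 30          # sample offsets from the first 1 GiB
IOS_PER_THREAD = 5_000

def qd1_worker(_):
    """Each thread keeps exactly one read in flight (queue depth 1)."""
    fd = os.open(PATH, os.O_RDONLY)
    latencies = []
    try:
        for _ in range(IOS_PER_THREAD):
            offset = random.randrange(0, SPAN // IO_SIZE) * IO_SIZE
            start = time.perf_counter()
            os.pread(fd, IO_SIZE, offset)
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return latencies

def p999(samples):
    """99.9th-percentile latency of the collected samples."""
    samples = sorted(samples)
    return samples[int(len(samples) * 0.999)]

if __name__ == "__main__":
    for threads in (8, 32, 64, 96, 128):
        with ThreadPoolExecutor(max_workers=threads) as pool:
            all_lat = [l for lats in pool.map(qd1_worker, range(threads))
                       for l in lats]
        print(f"{threads:>3} threads ({os.cpu_count()} CPUs): "
              f"p99.9 = {p999(all_lat) * 1e6:,.0f} us")
```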
