Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result we needed additional testing to demonstrate it. The reason we don't see consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying cleanup can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but long enough to get a good look at drive behavior once all spare area fills up.
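
If you're curious to replicate this yourself, a roughly equivalent workload can be scripted around fio. This is an illustrative sketch, not necessarily the tool or exact settings behind the graphs below; /dev/sdX is a placeholder, and running it will destroy all data on the target drive:

```python
# Illustrative sketch of the fill + random write workload using fio.
# Assumes fio is installed; /dev/sdX is a hypothetical placeholder.
# WARNING: this writes to the raw device and destroys everything on it.
import subprocess

DEV = "/dev/sdX"  # replace with your (expendable!) test drive

# Step 1: sequential fill so every user-accessible LBA holds data
subprocess.run([
    "fio", "--name=seqfill", f"--filename={DEV}",
    "--rw=write", "--bs=128k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1",
], check=True)

# Step 2: 4KB random writes across all LBAs at QD32 for ~2000 seconds.
# fio's buffers are incompressible by default; the iops log records one
# averaged data point per second.
subprocess.run([
    "fio", "--name=randwrite", f"--filename={DEV}",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1", "--norandommap", "--randrepeat=0",
    "--time_based", "--runtime=2000",
    "--write_iops_log=consistency", "--log_avg_msec=1000",
], check=True)
```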

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. Within each set of graphs the scale is kept constant for easy comparison. The first two sets use a log scale, while the last set uses a linear scale that tops out at 50K IOPS to better visualize the differences between drives.
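
Turning a per-second log into scatter plots like these is straightforward. Here's a minimal matplotlib sketch, assuming the log file written by the fio job above and its "time_msec, value, direction, block_size[, offset]" line format:

```python
# Sketch: turn fio's per-second IOPS log into a consistency scatter plot.
# Assumes the log written by the job above ("consistency_iops.1.log").
import matplotlib.pyplot as plt

times, iops = [], []
with open("consistency_iops.1.log") as f:
    for line in f:
        fields = line.split(",")
        times.append(int(fields[0]) / 1000.0)  # ms -> seconds
        iops.append(int(fields[1]))            # averaged IOPS for that second

plt.scatter(times, iops, s=4)
plt.yscale("log")            # log scale, as in the first two sets of graphs
plt.xlabel("Time (s)")
plt.ylabel("4KB random write IOPS (QD32)")
plt.title("IO consistency")
plt.savefig("consistency.png", dpi=150)
```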

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the vendor would have advertised had it shipped the SSD with that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
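
As a sketch of the arithmetic behind the spare area buttons, assuming the percentage refers to the slice of user capacity left unpartitioned (an assumption for illustration, not a statement of the exact methodology):

```python
# Sketch of the spare-area math: how big a partition to create so that the
# untouched remainder acts as extra spare area. The interpretation of the
# percentage (fraction of user capacity left unpartitioned) is an assumption.

def partition_size_gb(user_capacity_gb: float, extra_spare_pct: float) -> float:
    """Partition size that leaves extra_spare_pct of user capacity unused."""
    return user_capacity_gb * (1.0 - extra_spare_pct / 100.0)

# e.g. a 480GB drive with 25% of user capacity set aside as extra spare:
print(partition_size_gb(480, 25))  # -> 360.0 (360GB partition, 120GB spare)
```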

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for every subsequent write (write amplification goes up, performance goes down).
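
To make the shape of that curve concrete, here's a deliberately crude model of free-block depletion. Every constant in it is invented for illustration and doesn't correspond to any drive tested here:

```python
# Toy model of the peak-then-cliff behavior: writes run at full speed while
# free blocks from the spare area last, then throughput falls once every new
# write forces a read-modify-write. All constants are made up.

PEAK_IOPS = 40_000       # hypothetical fresh-drive 4KB random write rate
STEADY_WA = 4.0          # hypothetical steady-state write amplification
FREE_WRITES = 4_000_000  # hypothetical writes the spare pool can absorb

free = FREE_WRITES
for t in range(0, 2000, 100):        # sample every 100 simulated seconds
    if free > 0:                     # clean blocks available: full speed
        iops = PEAK_IOPS
        free -= iops * 100
    else:                            # pool exhausted: GC cost on every write
        iops = PEAK_IOPS / STEADY_WA
    print(f"t={t:4d}s  ~{iops:8,.0f} IOPS")
```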

The second set of graphs zooms in on the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation, but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: IO consistency over the full test, log scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Views: Default / 25% Spare Area]

Um, hello, awesome? The SanDisk Extreme II is the first Marvell based consumer SSD to actually prioritize performance consistency. The Extreme II does significantly better than pretty much every other drive here, with the exception of Corsair's Neutron. Note that increasing the amount of spare area on the drive actually reduces IO consistency, at least during the short duration of this test, as SanDisk's firmware aggressively attempts to improve the overall performance of the drive. Either way, this is the first SSD from a big OEM supplier that actually delivers consistent performance in the worst case scenario.

[Interactive graph: IO consistency at the start of steady state (t=1400s), log scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Views: Default / 25% Spare Area]

[Interactive graph: IO consistency at the start of steady state (t=1400s), linear scale to 50K IOPS. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Views: Default / 25% Spare Area]

