Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and demonstrating that required some additional testing. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while deferring cleanup can deliver higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but enough to give me a good look at drive behavior once all spare area has filled up.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Within each set of graphs the scale is kept constant for easy comparison between drives. The first two sets use a log scale, while the last set of graphs uses a linear scale that tops out at 50K IOPS for better visualization of differences between drives.
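If you want to post-process a per-second IOPS log into points for a scatter plot like the ones below, a minimal sketch looks like this. It assumes a fio-style IOPS log ("time in ms, IOPS, direction, block size" per line); that column layout is an assumption, and other tools will differ:

```python
# Sketch: turn a per-second IOPS log into (time_s, iops) points suitable
# for a scatter plot. Assumes fio-style write_iops_log lines; adjust the
# field positions for whatever tool actually produced your log.

def parse_iops_log(lines):
    """Return a list of (seconds, iops) tuples from fio-style log lines."""
    points = []
    for line in lines:
        fields = [f.strip() for f in line.split(",")]
        if len(fields) < 2:
            continue  # skip blank or malformed lines
        time_ms, iops = int(fields[0]), int(fields[1])
        points.append((time_ms / 1000.0, iops))
    return points

log = [
    "1000, 38000, 1, 4096",    # first second: drive still has free blocks
    "2000, 37500, 1, 4096",
    "1400000, 4200, 1, 4096",  # t=1400s: steady state, much lower IOPS
]
points = parse_iops_log(log)
print(points[0])  # (1.0, 38000)
```

From there the points can be fed straight into any plotting library, with the y-axis switched between log and linear scales as in the graphs below.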

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the SSD vendor would have advertised had it decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused, simulating a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's best to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, keep in mind that while this method of creating spare area works on the drives we've tested here, not all controllers are guaranteed to behave the same way.
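To put rough numbers on the spare area math, here's a quick back-of-the-envelope sketch. The figures are illustrative assumptions, not vendor specs: I'm assuming 256GiB of raw NAND behind a nominal 240GB drive, and a shrunk 192GB partition for the extra-spare-area case:

```python
# Back-of-the-envelope spare-area math for the partitioning trick above.
# Assumption: 256 GiB of physical flash behind a drive sold as "240 GB";
# shrinking the user partition leaves more of that flash as spare area.

RAW_NAND = 256 * 2**30  # assumed physical flash, in bytes

def spare_area_pct(user_bytes, raw_bytes=RAW_NAND):
    """Spare area as a percentage of the user-visible capacity."""
    return (raw_bytes - user_bytes) / user_bytes * 100

default = spare_area_pct(240 * 10**9)  # stock 240 GB partition
shrunk = spare_area_pct(192 * 10**9)   # partition shrunk to 192 GB

print(f"default: {default:.0f}% spare, shrunk: {shrunk:.0f}% spare")
```

The exact raw NAND capacity varies by drive, but the shape of the math is the same: every gigabyte you leave unpartitioned (and TRIMmed) is a gigabyte the controller can treat as extra spare area.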

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
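The mechanics behind that dropoff can be sketched with a toy flash translation layer model. This is purely illustrative, with made-up geometry, and is not how any particular controller's firmware works:

```python
import random

# Toy FTL model of the cliff described above. While free blocks remain,
# each 4KB host write costs exactly one flash program (write amplification
# = 1). Once spare area is exhausted, reclaiming a block first requires
# relocating its still-valid pages; those extra programs are the write
# amplification that drags steady-state IOPS down.

PAGES = 32               # pages per block (assumed geometry)
N_BLOCKS = 50            # physical blocks
USER_PAGES = 45 * PAGES  # user-visible LBAs (~10% spare area, assumed)

random.seed(42)
lba_block = [None] * USER_PAGES  # which block holds each LBA's current copy
valid = [0] * N_BLOCKS           # valid-page count per block
free = list(range(N_BLOCKS))
open_blk, open_used = free.pop(), 0
flash_writes = host_writes = 0

def alloc_page():
    """Find room for one page, compacting the emptiest block if needed."""
    global open_blk, open_used, flash_writes
    if open_used == PAGES:
        if free:
            open_blk, open_used = free.pop(), 0
        else:
            # Greedy GC: erase the block with the fewest valid pages,
            # rewriting those pages compacted at its front.
            victim = min(range(N_BLOCKS), key=lambda b: valid[b])
            flash_writes += valid[victim]  # relocation cost
            open_blk, open_used = victim, valid[victim]
    open_used += 1
    return open_blk

def write(lba):
    global flash_writes, host_writes
    old = lba_block[lba]
    if old is not None:
        valid[old] -= 1  # the old copy of this LBA goes stale
    blk = alloc_page()
    valid[blk] += 1
    lba_block[lba] = blk
    flash_writes += 1
    host_writes += 1

for lba in range(USER_PAGES):  # sequential fill pass: no GC needed
    write(lba)
fill_wa = flash_writes / host_writes  # exactly 1.0

flash_writes = host_writes = 0
for _ in range(4 * USER_PAGES):  # sustained 4KB random overwrites
    write(random.randrange(USER_PAGES))
steady_wa = flash_writes / host_writes  # well above 1.0

print(f"fill WA: {fill_wa:.2f}, steady-state WA: {steady_wa:.2f}")
```

Even this crude model reproduces the behavior in the graphs: write amplification stays at 1.0 until the spare blocks run out, then climbs as every new write forces relocations, which is exactly when IOPS fall off a cliff.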

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: IOPS vs. time, full 2000 second run, log scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Views: Default / 25% Spare Area]

Um, hello, awesome? The SanDisk Extreme II is the first Marvell based consumer SSD to actually prioritize performance consistency. The Extreme II does significantly better than pretty much every other drive here with the exception of Corsair's Neutron. Note that increasing the amount of spare area on the drive actually reduces IO consistency, at least during the short duration of this test, as SanDisk's firmware aggressively attempts to improve the overall performance of the drive. Either way this is the first SSD from a big OEM supplier that actually delivers consistent performance in the worst case scenario.

[Interactive graph: IOPS vs. time, steady state detail (t=1400s onward), log scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Views: Default / 25% Spare Area]

[Interactive graph: IOPS vs. time, steady state detail (t=1400s onward), linear scale. Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Views: Default / 25% Spare Area]

Comments

  • Quizzical - Monday, June 3, 2013 - link

    Good stuff, as usual. But at what point do SSD performance numbers cease to matter because they're all so fast that the difference doesn't matter?

    Back when there were awful JMicron SSDs that struggled along at 2 IOPS in some cases, the difference was extremely important. More recently, your performance consistency numbers offered a finer grained way to say that some SSDs were flawed.

But are we heading toward a future in which any test you can come up with shows all of the SSDs performing well? Does the difference between 10000 IOPS and 20000 really matter for any consumer use? How about the difference between 300 MB/s and 400 MB/s in sequential transfers? If so, do we declare victory and cease caring about SSD reviews?

    If so, then you could claim some part in creating that future, at least if you believe that vendors react to flaws that reviews point out, even if only because they want to avoid negative reviews of their own products.

Or maybe it will be like power supply reviews, where mostly only good ones get sent in for review, while bad ones just show up on Newegg hoping that some sucker will buy them, or occasionally get a review when some tech site buys one rather than getting a review sample from the manufacturer?
  • Tukano - Monday, June 3, 2013 - link

    I feel the same way. Almost need an order of magnitude improvement to notice anything different.

    My question now is, where are the bottlenecks?

    What causes my PC to boot in 30 seconds as opposed to 10?

I don't think I ever use the amount of throughput these SSDs offer
    My 2500K @ 4.5GHz doesn't seem to ever get stressed (I didn't notice a huge difference between stock vs OC)

    Is it now limited to the connections between devices? i.e. transferring from SSD to RAM to CPU and vice versa?
  • talldude2 - Monday, June 3, 2013 - link

    Storage is still the bottleneck for performance in most cases. Bandwidth between CPU and DDR3 1600 is 12.8GB/s. The fastest consumer SSDs are still ~25 times slower than that in a best case scenario. Also, you have to take into account all the different latencies associated with any given process (i.e. fetch this from the disk, fetch that from the RAM, do an operation on them, etc.). The reduced latency is really what makes the SSD so much faster than an HDD.

    As for the tests - I think that the new 2013 test looks good in that it will show you real world heavy usage data. At this point it looks like the differentiator really is worst case performance - i.e. the drive not getting bogged down under a heavy load.
  • whyso - Monday, June 3, 2013 - link

It's twice that if you have two RAM sticks.
  • Chapbass - Monday, June 3, 2013 - link

    I came in to post that same thing, talldude2. Remember why RAM is around in the first place: Storage is too slow. Even with SSDs, the latency is too high, and the performance isn't fast enough.

    Hell, I'm not a programmer, but perhaps more and more things could be coded differently if they knew for certain that 90-95% of customers have a high performance SSD. That changes a lot of the ways that things can be accessed, and perhaps frees up RAM for more important things. I don't know this for a fact, but if the possibility is there you never know.

Either way, back to my original point: until RAM becomes redundant, we're not fast enough, IMO.
  • FunBunny2 - Monday, June 3, 2013 - link

    -- Hell, I'm not a programmer, but perhaps more and more things could be coded differently if they knew for certain that 90-95% of customers have a high performance SSD.

It's called an organic normal form relational schema. Lots less bytes, lots more performance. But the coder types hate it because it requires so much less coding and so much more thinking (to build it, not use it).
  • crimson117 - Tuesday, June 4, 2013 - link

    > It's called an organic normal form relational schema

    I'm pretty sure you just made that up... or you read "Dr. Codd Was Right" :P
  • FunBunny2 - Tuesday, June 4, 2013 - link

When I was an undergraduate, freshman actually, whenever a professor (English, -ology, and such) would assign us to write a paper, we'd all cry out, "how long does it have to be????" One such professor replied, "organic length, as long as it has to be." Not very satisfying, but absolutely correct.

    When I was in grad school, a professor mentioned that he'd known one guy who's Ph.D. dissertation (economics, mathy variety) was one page long. An equation and its derivation. Not sure I believe that one, but it makes the point.
  • santiagoanders - Tuesday, June 4, 2013 - link

    I'm guessing you didn't get a graduate degree in English. "Whose" is possessive while "who's" is a contraction that means "who is."
  • FunBunny2 - Tuesday, June 4, 2013 - link

    Econometrics. But, whose counting?
