Last year Intel introduced its first truly new SSD controller since 2008. Oh how times have changed since then. Intel's original SSD controller design was dual purpose, designed for both consumer and enterprise workloads. Launching first as the brains behind a mainstream Intel SSD, that original controller did a wonderful job of kicking off the SSD revolution that followed. Growing pains and a couple of false starts kept a true successor to Ephraim (Intel's first controller) from ever really surfacing over the next few years.

Last year, Ephraim got a true successor and it came in the form of a very high-end enterprise drive: the Intel SSD DC S3700. Equipped with tons of 25nm HET-MLC NAND, the S3700 officially broke the enterprise addiction to SLC for high endurance drives while raising the bar in all aspects of performance. In addition to the usual considerations however, Intel had a new focus with the S3700: performance consistency.

Due to the nature of NAND flash, there's a lot of background management/cleanup that happens in order to ensure endurance as well as high performance. It's these background management tasks that can dramatically impact performance. I love the cleaning-your-room analogy because it accurately describes the tradeoffs SSD controller makers have to deal with. Clean early and often and you'll do well. Put off cleaning until later and you'll enjoy tons of free time early, but you'll quickly run into a problem. It's an oversimplification, but the latter is what most SSD controllers have done historically, and the former is what Intel always believed in. With the S3700, Intel took it to the next level and built the most consistently performing SSD I'd ever tested.
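
To make that tradeoff a little more concrete, here's a deliberately simplified toy model (not Intel's algorithm, and with made-up unit costs) of an eager cleanup policy versus a deferred one. Both end up doing a similar amount of total work; the difference is where that work lands:

```python
# A toy model of the "clean your room" tradeoff above: not Intel's
# algorithm, just invented unit costs to show the shape of the problem.
# Cleanup work (garbage collection) is charged to whichever host write
# triggers it.

def simulate(writes, gc_threshold, free_target=60, total_blocks=100):
    """Return per-write cost for a drive that starts cleanup when the free
    block pool drops to gc_threshold and refills it to free_target."""
    latencies = []
    free_blocks = total_blocks
    dirty = 0
    for _ in range(writes):
        cost = 1                 # baseline cost of the host write itself
        free_blocks -= 1         # each write consumes a free block
        dirty += 1
        if free_blocks <= gc_threshold:
            # Reclaim dirty blocks until the free pool is back at target.
            reclaimed = min(dirty, free_target - free_blocks)
            cost += reclaimed    # cleanup work lands on this write
            free_blocks += reclaimed
            dirty -= reclaimed
        latencies.append(cost)
    return latencies

eager = simulate(500, gc_threshold=50)  # clean early and often
lazy = simulate(500, gc_threshold=2)    # put cleanup off until forced
print("eager: total work", sum(eager), "worst single write", max(eager))
print("lazy:  total work", sum(lazy), "worst single write", max(lazy))
```

With these toy numbers the deferred policy's worst single write costs several times more than the eager policy's, which is exactly the kind of downward spike consistency-focused firmware tries to avoid.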

Performance consistency matters for a couple of reasons. The most obvious is the impact on user experience. Predictable latencies are what you want; otherwise your applications can encounter odd hiccups. In client drives, those hiccups appear as unexpected pauses during application usage. In the enterprise, the manifestation is similar, except the user encounters the issue somewhere over the internet rather than locally. The other issue with inconsistent performance really creeps up in massive RAID arrays. With many drives in a RAID array, overall performance is determined by the slowest performing drive. Inconsistent performance, particularly with large downward swings, can result in a substantial decrease in the performance of a large RAID array. The motivation to build a consistently performing SSD is high, but so is the level of difficulty in building such a drive.
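
To see why large arrays amplify the problem, here's a small sketch with invented latency numbers (not measurements from the S3700 or any other drive here). Each simulated drive is fast 99% of the time but occasionally stalls, and a striped request has to wait for its slowest member:

```python
# A hedged illustration of the RAID point above, with invented latency
# figures rather than measurements from any drive in this review: a
# striped request finishes only when its slowest member drive does.
import random

def drive_latency_us():
    # 99% of IOs complete quickly; 1% hit a background-cleanup stall.
    return 100 if random.random() < 0.99 else 5000

def array_latency_us(num_drives):
    # A full-stripe request has to wait for every member drive.
    return max(drive_latency_us() for _ in range(num_drives))

random.seed(0)
for n in (1, 8, 24):
    samples = [array_latency_us(n) for _ in range(10_000)]
    stall_rate = sum(s >= 5000 for s in samples) / len(samples)
    print(f"{n:2d} drives: {stall_rate:.1%} of requests wait on a 5 ms stall")
```

With these made-up odds, a single drive stalls on about 1% of requests, but a 24-drive stripe ends up waiting on a stall for roughly one request in five.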

Intel had the luxury of being able to start over with the S3700's controller architecture. It moved to a flat indirection table (mapping between LBAs and NAND pages), which incurred a DRAM size penalty but ultimately made it possible to deliver much better performance consistency. The S3700 did amazingly well in our enterprise tests, and its IO consistency curves were the flattest I'd ever seen. The only downside? Despite being much better priced than the Intel X25-E and SSD 710, the S3700 is still a very expensive drive. The move to a better architecture helped reduce the amount of spare area needed for good performance, which in turn reduced cost, but the S3700 still used Intel's most expensive, highest endurance MLC NAND available (25nm HET-MLC). With the largest versions capable of enduring nearly 15 petabytes of writes, the S3700 was really made for extremely write intensive workloads. The drive performs very well across the board, but if you don't have an extremely write intensive workload you'd be paying for much more than you needed.
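
For a sense of scale behind the flat indirection table's DRAM penalty and the endurance figure, here's a back-of-envelope sketch. The 4 KB page size, 4-byte map entry, and 10-drive-writes-per-day-for-five-years assumptions are common industry figures I'm assuming, not numbers pulled from Intel documentation:

```python
# Back-of-envelope math behind the flat map's DRAM penalty and the
# endurance figure. The 4 KB page, 4-byte entry, and 10 drive writes per
# day for 5 years are assumed values, not numbers from Intel documentation.

def flat_map_dram_bytes(capacity_bytes, page_bytes=4096, entry_bytes=4):
    """DRAM for a flat LBA -> NAND page indirection table: one entry per page."""
    return (capacity_bytes // page_bytes) * entry_bytes

def lifetime_writes_bytes(capacity_bytes, drive_writes_per_day=10, years=5):
    """Total host writes if the drive sustains N full-drive writes per day."""
    return capacity_bytes * drive_writes_per_day * 365 * years

cap = 800 * 10**9  # an 800GB drive
print(f"flat map DRAM: ~{flat_map_dram_bytes(cap) / 10**6:.0f} MB")
print(f"rated writes:  ~{lifetime_writes_bytes(cap) / 10**15:.1f} PB")
```

With these assumptions an 800GB drive needs roughly 780MB of DRAM just for the map, and its rated writes land around 14.6PB, in the same ballpark as the "nearly 15 petabytes" figure above.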

We always knew that Intel would build a standard MLC version of the S3700, and today we have that drive: the Intel SSD DC S3500.

Comments

  • ShieTar - Wednesday, June 12, 2013

    I think the metric is supposed to show that you need a dedicated drive per VM with mechanical HDDs, but that one of these SSDs can support and not slow down 12 VMs by itself. Having 12 VMs access the same physical HDD can drive access times into not-funny territory.
    The 20GB per VM can be enough if you have a specific kernel and very little software. Think about a "dedicated" web server. Granted, the comparison assumes a quite specific usage scenario, but knowing Intel they probably did go out and retrieve that scenario from an actual commercial user. So it is a valid comparison for somebody, if maybe not the most convincing one to a broad audience.
  • Death666Angel - Wednesday, June 12, 2013

    Read the conclusion page. That just refers to the fact that those 2 setups have the same random IO performance. Nothing more, nothing less.
  • FunBunny2 - Wednesday, June 12, 2013

    Well, there's that other vector to consider: if you're enamoured of sequential VSAM type applications, then you'd need all that HDD footprint. OTOH, if you're into 3NF RDBMS, you'd need substantially less. So, SSD reduces footprint and speeds up the access you do. Kind of a win-win.
  • jimhsu - Wednesday, June 12, 2013

    Firstly, the 500 SAS drives are almost certainly short-stroked (otherwise, how do you sustain 200 IOPS, even on 15K drives). That cuts capacity by 2x at least. Secondly, the large majority of web service/database/enterprise apps are IO-limited, not storage-limited, hence all that TB is basically worthless if you can't get data in and out fast enough. For certain applications though (I'm thinking image/video storage for one), obviously you'd use a HDD array. But their comparison metric is valid.
  • rs2 - Wednesday, June 12, 2013

    That doesn't mean it's not also confusing. The primary purpose of a "SW SAN Solution" is storage, not IOPS, so one SAN is not comparable to another SAN unless they both offer the same storage capacity.

    In the specific use-case of virtualization, IOPS are generally more important than storage space. But if what they want to compare across solutions is IOPS performance, then they shouldn't label either column a "SAN".

    So yes, on the one hand it's valid, but on the other it's definitely presented in a confusing way.
  • thomas-hrb - Wednesday, June 12, 2013

    It is a typical example of a vendor highlighting the statistics they want you to remember and ignoring the ones they hope are not important. That is the reason why technical people exist. Any fool can read and present excellent arguments for one side or the other. It is the understanding of these parameters, and what they actually mean in a real world usage scenario, that is the bread and butter of our industry. I don't know if this is typical for most modern SANs. I am using an IBM v7000 (a very popular SAN from IBM). But the v7000 comes with Auto Tiering, which moves "hot blocks" from normal HDD storage to SSD, so having an SSD with solid, consistent random IO performance is essential to how this type of SAN works.
  • Jaybus - Monday, June 17, 2013

    Well, but look at it another way. You can put 120 SSDs in 20U and have 200 GB per VM using half the rack space and a tenth the power, but with FAR higher performance, and for less cost.

    Also, the ongoing cost of power and rack space is more important. In the same 42U space you can have a 252 SSD SAN (201,600 GB) and still use less than a fifth the power and have far, far greater performance.
  • thomas-hrb - Wednesday, June 12, 2013

    They are comparing IOPS. There are a few use cases where having large amounts of storage is the main target (databases, mailbox datastores, etc.), but typically application servers are less than 20GB in size. Even web servers will typically be less than 10GB (nix based) in size. Ultimately any storage system will have a blend of both technologies in a tiered setup: traditional HDDs to cover the capacity, and somewhere between 5-7% of that capacity as high performance SSDs to cover the small subset of data blocks that are "hot" and require significantly more IOPS. This new SSD simply gives storage professionals an added level of flexibility in their designs.
  • androticus - Wednesday, June 12, 2013

    Why is "performance consistency" supposed to be so good... when the *lowest* performance number of the Seagate 600 is about the same as the *consistent* number for Intel? The *average* of the Seagate looks much higher? I could see this as an advantage if the competitor numbers also went way below Intel's consistent number, but not in this case.
  • Lepton87 - Wednesday, June 12, 2013

    Compared to the Seagate's random write performance, this is not unlike a graphics card that delivers an almost constant 60fps versus a card that delivers 60-500fps, so what's the big deal? Cap the performance at whatever level the Intel SSD delivers and you will have the same consistency, but what's the point? It only matters if the drives deliver comparable performance but one is a roller-coaster and the other is very consistent, which is not the case in this comparison. Allocate more spare area to the Seagate, even 25%, and it will mop the floor with this drive, and the price per GB will still be FAR lower. Very unimpressed with this drive, but because it's an Intel product we are talking about on Anandtech it's lauded and praised like there's no tomorrow.
