When it comes to server hardware failures, I've seen just about all of them in our own infrastructure. With the exception of CPUs, I've seen virtually every other component that could fail, fail in the past 16 years of running AnandTech: motherboards, power supplies, memory and, of course, hard drives.

By far the most frequent failures in our infrastructure were mechanical hard drives. Within the first year after the launch of Intel's X25-M in 2008, I had transitioned all of my testbeds to solid state drives. The combination of performance and reliability was what I needed. Most of my testbeds were CPU bound, so I didn't necessarily need a ton of IO performance - but having the headroom offered by a good SSD meant that I could get more consistent CPU performance results between runs. The reliability side was simple to understand - with a good SSD, I wouldn't have to worry about my drive dying unexpectedly. Living in fear of a testbed hard drive dying over the weekend before a big launch was a thing of the past.

When it came time to rearchitect the AnandTech server farm, the very same reasons for going the SSD route on all of our testbeds (and personal systems) applied just as well to the servers that run the site.

Our infrastructure is split up between front end application servers and back end database servers. With the exception of the boxes that serve our images, most of our front end app servers don't really stress IO all that much. The three 12-core virtualized servers at the front end would normally be fine with hard drives; however, we decided to go with mainstream SSDs instead to lower the risk of a random mechanical failure. We didn't need the endurance of an enterprise drive in these machines since they weren't being written to all that frequently, but we did need reliable drives. We settled on 160GB Intel X25-M G2s (quite old by today's standards), but partitioned the drives down to 120GB in order to ensure they'd have a very long lifespan.
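To put that partitioning decision in rough numbers, here's some back-of-the-envelope math. The drive capacities are real, but the write rate and P/E cycle count below are illustrative assumptions rather than figures from our servers, and write amplification is ignored:

```python
# Back-of-the-envelope look at a 160GB X25-M G2 partitioned down to 120GB.
# The daily write volume and P/E cycle count are illustrative assumptions,
# not measurements from the AnandTech front end, and write amplification
# is ignored entirely.

raw_capacity_gb = 160      # advertised capacity of the X25-M G2
partitioned_gb = 120       # capacity actually exposed to the OS

spare_gb = raw_capacity_gb - partitioned_gb
print(f"Extra spare area: {spare_gb} GB ({spare_gb / raw_capacity_gb:.0%} of raw capacity)")

# Crude lifespan estimate for a lightly written front end box.
assumed_pe_cycles = 5000       # rough figure for MLC NAND of that era (assumption)
assumed_writes_gb_per_day = 5  # light app-server write load (assumption)

total_nand_writes_gb = raw_capacity_gb * assumed_pe_cycles
lifespan_years = total_nand_writes_gb / assumed_writes_gb_per_day / 365
print(f"Rough endurance at that write rate: {lifespan_years:,.0f} years")
```

Even with considerably more pessimistic assumptions, the estimate stays well beyond the useful life of the hardware, which is why endurance simply wasn't a concern for these boxes.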

Where performance matters more is in our back end database servers. We run a combination of MS SQL and MySQL, and our DB workloads are particularly IO intensive. In the old environment we had around a dozen mechanical drives in various RAID configurations powering all of the databases that ran the site. To put performance in perspective, I grabbed our old Forum Database server and took a look at the external SAS RAID array we had created. Until last year, the Forums were powered by a combination of 6 x Seagate Barracuda ES.2s and 4 x Seagate Cheetah 10K.7s. 

For the new Forums DB we moved to 6 x 64GB Intel X25-Es. Again, old by modern standards, but a huge leap over what we had before. To put the performance gains in perspective, I ran some of our enterprise IO benchmarks on both the old and new arrays. In the old environment the DB workload was split across the Barracuda ES.2 array (6-drive RAID-10) and the Cheetah array (4-drive RAID-5); to keep things simple I just created a 4-drive RAID-0 out of the Cheetahs, which should give us a good indication of the old hardware's peak performance:

AnandTech Forums DB IO Performance Comparison - 2013 vs 2007

|  | MS SQL - Update Daily Stats | MS SQL - Weekly Stats Maintenance | Oracle Swingbench |
|---|---|---|---|
| Old Forums DB Array (4 x 10K RPM RAID-0) | 146.1 MB/s | 162.9 MB/s | 2.8 MB/s |
| New Forums DB Array (6 x X25-E RAID-10) | 394.4 MB/s | 450.5 MB/s | 55.8 MB/s |
| Performance Increase | 2.7x | 2.77x | 19.9x |

The two SQL tests are actually from our own environment, so the performance gains are quite applicable. The advantage here is only around 2.7x. In reality the gains can be even greater, but we don't have good traces of our live DB load - just some of our most IO intensive tasks on the DB servers. The final benchmark, however, does give us some indication of what a more random enterprise workload can enjoy in a move from a hard drive array to SSDs. Here the performance of our new array is nearly 20x that of the old HDD array.
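The multipliers in the table are simply the ratio of new-array to old-array throughput; a quick sanity check using the figures above:

```python
# Recompute the speedup figures from the table (throughput in MB/s).
old = {"MS SQL - Update Daily Stats": 146.1,
       "MS SQL - Weekly Stats Maintenance": 162.9,
       "Oracle Swingbench": 2.8}
new = {"MS SQL - Update Daily Stats": 394.4,
       "MS SQL - Weekly Stats Maintenance": 450.5,
       "Oracle Swingbench": 55.8}

for test in old:
    print(f"{test}: {new[test] / old[test]:.2f}x")

# MS SQL - Update Daily Stats: 2.70x
# MS SQL - Weekly Stats Maintenance: 2.77x
# Oracle Swingbench: 19.93x
```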

Note that there's another simplification that comes along with our move to SSDs: we rely completely on Intel's software RAID. There are no third party RAID controllers, no extra firmware/drivers to manage and validate, and there's no external chassis needed to get more spindles. We went from a 4U HP DL585 server with a 2U Promise Vtrak J310s chassis and 10 hard drives, down to a 2U server with 6 SSDs - and came out ahead in the performance department. Later this week I'll talk about power savings, which ended up being a much bigger deal.
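Our servers use Intel's software RAID under Windows, but the same controller-free approach exists on any modern OS. Purely as an illustration (this is not our actual configuration, and the device names are placeholders), assembling a comparable 6-drive RAID-10 on Linux with md would look something like this thin Python wrapper around mdadm:

```python
# Illustrative sketch only: building a 6-drive software RAID-10 with Linux md.
# The AnandTech boxes use Intel's software RAID on Windows; mdadm is shown here
# as a familiar stand-in, and the device names below are placeholders.
import subprocess

MEMBER_DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg"]

def create_raid10(md_device: str = "/dev/md0") -> None:
    """Create a RAID-10 array out of the member devices using mdadm."""
    cmd = [
        "mdadm", "--create", md_device,
        "--level=10",
        f"--raid-devices={len(MEMBER_DEVICES)}",
        *MEMBER_DEVICES,
    ]
    subprocess.run(cmd, check=True)  # raises if mdadm reports an error

if __name__ == "__main__":
    create_raid10()
```

The appeal is exactly what's described above: the array is managed entirely by the OS, so there's no controller firmware or driver stack to validate.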

This is just the tip of the iceberg. In our specific configuration we went from old hard drives to old SSDs. With even greater demands you could easily go to truly modern enterprise SSDs or even PCIe-based solutions. Using a combination of consumer and enterprise drives isn't a bad idea if you want to transition to an all-SSD architecture: deploying reliable consumer drives in place of lightly used hard drives cuts down the number of moving parts in your network, while moving to higher-performing, higher-endurance enterprise SSDs can deliver significant performance benefits as well.

Comments

  • Firedron - Wednesday, March 13, 2013

    You're not right - with a large number of drives and high CPU load, hardware RAID will always be faster. ASICs are always better than general purpose CPUs.
    And it really depends on your server usage scenario. If you're using software RAID, you're taking resources from the CPU - for RAID1/10 it may not be much, because you're not calculating parity. But with a large number of drives, especially SSDs, and RAID5/6 it's a lot of CPU cycles and bandwidth that could otherwise be used by the application. In that case it is really worth investing in a hardware RAID controller. And if you're buying a server from a big USA vendor with the vendor's RAID card, it will be under warranty - firmware issues are not your problem, since the vendor will test it hundreds of times with all configurations before releasing it publicly. That's why you pay a premium when buying enterprise-level hardware from a premium vendor.
  • drinking12many - Thursday, March 14, 2013

    I dunno, have you looked at something like an enterprise software RAID setup? Most SANs, such as NetApp and Oracle, don't really have RAID cards. They are just glorified FreeBSD or Solaris installs and depend heavily on the CPU to do the parity work. On dedicated SAN boxes, who cares how much CPU the filer is using as long as it's handling your workload. We just bought some Oracle ZFS appliances using 4 x 8-core CPUs - enough for parity, compression, checksums and maybe even some dedupe if we want. I would agree that if you're doing the storage on-box a RAID controller is probably the way to go, but on the SAN side it's usually better without, since that's the only workload they have to run.
  • mike55 - Tuesday, March 12, 2013

    Anand, what's the largest amount of writes you've seen on one of your SSDs? Have you ever seen one fail due to old age or start losing capacity due to sectors failing (I'm not sure how SSDs handle losing lots of NAND)?
  • andy318 - Tuesday, March 12, 2013

    In your new setup with software RAID, do you have write caching turned on? If so, is data in the cache protected from power loss?
  • ibb_1976 - Tuesday, March 12, 2013

    Anand, can you include the tests that were presented at USENIX this year (https://www.usenix.org/conference/fast13/understan...) in your SSD reviews? I think these guys found some important things regarding failures, but they don't provide info about the models and manufacturers of the tested SSDs. Thank you!
    PS. Sorry about my English, it's not so good.
  • ibb_1976 - Tuesday, March 12, 2013

    https://www.usenix.org/conference/fast13/understan...
    Sorry, this is the right link.
  • sideshow23bob - Tuesday, March 12, 2013

    Do you expect you'll have to replace any of your SSDs before the next hardware change (due to hitting the write cycle limit)? Feel free to explain more about your thoughts on that, or keep it vague if this is of strategic importance to you or the site. Thanks, and looking forward to reading many more articles with the nice new design.
  • bobbozzo - Wednesday, March 13, 2013

    ASP.net is normally run on Windows.
  • bobbozzo - Wednesday, March 13, 2013

    The above was in reply to another post. I'm not sure the threading/nesting is working right.
  • jmke - Wednesday, March 13, 2013

    >> "The reliability side was simple to understand - with a good SSD, I wouldn't have to worry about my drive dying unexpectedly. Living in fear of a testbed hard drive dying over the weekend before a big launch was a thing of the past. "

    How are SSDs failing unexpectedly any different from HDDs? In fact they are worse. If an HDD starts failing, you most likely get a chance to do some data recovery, or even restore it to working order if there are only a few failing sectors.
    With an SSD it's lights out - reboot, "device not found" kind of fun. I've had 2 Intel SSDs fail out of a batch of 30 in less than 3 months... so I would say that MTBF for SSDs is not much different from HDDs imho.
    What might be different is the environmental impact on failure causes: SSDs can take more physical abuse, so with testbed PCs they would be more error-resistant I guess :)
