Worth the Price Premium?

The real question, then, is whether LRDIMMs are worth the 60% higher cost per GB. Servers that host CPU-bottlenecked (HPC) applications are much better off with RDIMMs, as the budget is better spent on faster CPUs and more servers. The higher speeds that LRDIMMs offer in certain configurations may help some memory-intensive HPC applications, but the memory buffer of LRDIMMs might negate that clock speed advantage, as it introduces extra latency. We will investigate this further in the article, but it seems that most HPC applications are not the prime target for LRDIMMs.

Virtualized servers are the most obvious scenario where the high capacity of LRDIMMs can pay off. As the Xeon E5 v2 ("Ivy Bridge EP") increased the maximum core count from 8 to 12, many virtualized servers will run out of memory capacity before they can use all of those cores. It might be wiser to buy half as many servers with twice as much memory. A quick price comparison, worked out below, illustrates this:

  • An HP DL380 G8 with 24 x 16GB RDIMMs, two E5-2680 v2 CPUs, two SATA disks, and a 10 GbE NIC costs around $13,000
  • An HP DL380 G8 with 24 x 32GB LRDIMMs, two E5-2680 v2 CPUs, two SATA disks, and a 10 GbE NIC costs around $26,000
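A rough back-of-the-envelope comparison of those two configurations, as a minimal sketch: the prices are the list prices quoted above, the core count assumes the 10-core E5-2680 v2, and the figures are purely illustrative.

```python
# Back-of-the-envelope comparison of the two configurations above.
# Prices are the list prices from the bullets; core count assumes the
# 10-core E5-2680 v2. Illustrative only, not measured TCO data.

rdimm_server  = {"price": 13_000, "memory_gb": 24 * 16, "cores": 2 * 10}
lrdimm_server = {"price": 26_000, "memory_gb": 24 * 32, "cores": 2 * 10}

def totals(server, count):
    """Total price, memory, and cores for `count` identical servers."""
    return {k: v * count for k, v in server.items()}

two_rdimm  = totals(rdimm_server, 2)    # ~$26,000 for 768 GB and 40 cores
one_lrdimm = totals(lrdimm_server, 1)   # ~$26,000 for 768 GB and 20 cores

for name, cfg in (("2 x RDIMM server ", two_rdimm),
                  ("1 x LRDIMM server", one_lrdimm)):
    print(f"{name}: ${cfg['price']:,} for {cfg['memory_gb']} GB, "
          f"{cfg['cores']} cores -> ${cfg['price'] / cfg['memory_gb']:.0f}/GB")
```

At the same total capacity and roughly the same hardware outlay, the LRDIMM route simply trades CPU cores and network ports for a single chassis; whether that trade pays off depends on the per-server costs discussed below.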

At first sight, buying twice as many servers with half as much memory looks more attractive than buying half as many servers with twice the capacity: you get more processing power, more network bandwidth, and so on. But those advantages are not always significant in a virtualized environment.

Most software licenses will make you pay more as the server count goes up. The energy bill of two servers with half as much memory is always higher than that of one server with twice as much memory. And last but not least, if you double the number of servers, you also increase the time spent administering the cluster.
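To make that concrete, here is a minimal three-year TCO sketch under assumed per-server figures; the license, energy, and administration numbers are placeholders, not quotes, so plug in your own.

```python
# Rough three-year TCO sketch for the two configurations above.
# License, power, and admin figures are hypothetical placeholders.

YEARS = 3

def tco(server_price, server_count, memory_gb_per_server,
        license_per_server=2_500,     # per-host licensing per year (assumed)
        watts_base=150,               # chassis/CPU/fan overhead per server (assumed)
        watts_per_64gb=5,             # DRAM power per 64 GB (assumed)
        kwh_price=0.15,               # $/kWh (assumed)
        admin_hours_per_server=20,    # yearly admin time per server (assumed)
        admin_hourly_rate=75):
    hardware = server_price * server_count
    licenses = license_per_server * server_count * YEARS
    watts    = server_count * (watts_base
                               + watts_per_64gb * memory_gb_per_server / 64)
    energy   = watts / 1000 * 24 * 365 * YEARS * kwh_price
    admin    = admin_hours_per_server * admin_hourly_rate * server_count * YEARS
    return round(hardware + licenses + energy + admin)

print("2 x RDIMM servers:", tco(13_000, 2, 384))
print("1 x LRDIMM server:", tco(26_000, 1, 768))
```

With the hardware outlay identical, the single LRDIMM box pays only half of the license, energy, and administration costs in this sketch, which is exactly the effect described above; the break-even point obviously depends on your own numbers.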

So if your current CPU load is relatively low, chances are that an LRDIMM-equipped server makes sense: the TCO will be quite a bit lower. We tested this in our previous article and found that having more memory available can significantly reduce the response time of virtualized applications, even when running at high CPU load. Since that test little has changed, except that LRDIMMs have become a lot cheaper, which makes them considerably more attractive for virtualized clusters.

Besides virtualized clusters, there is another prime candidate: servers that host disk-limited workloads, where memory caching can alleviate the bottleneck. Processing power is largely irrelevant in that case, as the workload is dominated by memory and disk accesses. Our Content Delivery Network (CDN) server test is a real-world example of this and will quantify the impact of a larger memory capacity.
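To illustrate why extra DIMM capacity helps such a workload, here is a minimal sketch of an in-memory LRU cache sitting in front of disk reads. It is not the actual CDN software used in the test, and the capacity figure in the usage line is an arbitrary assumption.

```python
# Minimal sketch of RAM caching in front of a disk-bound workload:
# hot objects are served from an in-memory LRU cache, cold objects hit the disk.
from collections import OrderedDict

class FileCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.cache = OrderedDict()          # path -> file contents
        self.hits = self.misses = 0

    def get(self, path):
        if path in self.cache:
            self.hits += 1
            self.cache.move_to_end(path)    # mark as most recently used
            return self.cache[path]
        self.misses += 1
        with open(path, "rb") as f:         # slow path: the disk read we want to avoid
            data = f.read()
        self.cache[path] = data
        self.used += len(data)
        while self.used > self.capacity:    # evict least recently used objects
            _, evicted = self.cache.popitem(last=False)
            self.used -= len(evicted)
        return data

# Hypothetical sizing: dedicate ~512 GB of a 768 GB server to the content cache.
cache = FileCache(capacity_bytes=512 * 2**30)
```

The larger the cache, the more of the hot content set is served straight from RAM and the fewer requests ever touch the disks; that is the effect the CDN test is designed to quantify.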

Comments

  • JohanAnandtech - Friday, December 20, 2013 - link

    First of all, if your workload is read intensive, more RAM will almost always be much faster than any flash cache. Secondly, it greatly depends on your storage vendor whether adding more flash can be done at "dramatically lower cost". The tier-one vendors still charge an arm and a leg for flash cache, while the server vendors are working at much more competitive prices. I would say that in general it is cheaper and more efficient to optimize RAM caching rather than optimizing your storage (unless you are write limited).
  • blaktron - Friday, December 20, 2013 - link

    Not only are you correct, but significantly so. Enterprise flash storage at decent densities is more costly PER GIG than DDR3. Not only that, but you need the 'cadillac' model SANs to support more than 2 SSDs. Not to mention fabric management is a lot more resource intensive and more prone to error.

    Right now, the best bet (like always) to get performance is to stuff your servers with memory and distribute your workload, because it's poor network architecture that creates bottlenecks in any environment where you need to stuff more than 256GB of RAM into a single box.
  • hoboville - Friday, December 20, 2013 - link

    Another thing about HPC is that, as long as a processor has enough RAM to hold its dataset for the CPU/GPU before it needs more data, the quantity of RAM is enough. Saving on RAM can let you buy more nodes, which gives you more performance capacity.
  • markhahn - Saturday, January 4, 2014 - link

    headline should have been: if you're serving static content, your main goal is to maximize ram per node. not exactly a shocker eh? in the real world, at least the HPC corner of it, 1G/core is pretty common, and 32G/core is absurd. hence, udimms are actually a good choice sometimes.
  • mr map - Monday, January 20, 2014 - link

    Very interesting article, Johan!

    I would very much like to know what specific memory model (brand, model number) you are referring to regarding the 32GB 1866 LRDIMM option.
    I have searched to no avail.
    Johan? / Anyone?
    Thank you in advance!
    / Tomas
  • Gasaraki88 - Thursday, January 30, 2014 - link

    A great article as always.
