Worth the Price Premium?

The real question is thus whether LRDIMMs are worth the 60% higher cost per GB. Servers that host CPU-bottlenecked (HPC) applications are much better off with RDIMMs, as the budget is better spent on faster CPUs and additional servers. The higher speed that LRDIMMs offer in certain configurations may help some memory-intensive HPC applications, but the memory buffer of LRDIMMs introduces extra latency and might negate that clock speed advantage. We will investigate this further in the article, but most HPC applications do not appear to be the prime target for LRDIMMs.

Virtualized servers are the most obvious scenario where the high capacity of LRDIMMs can pay off. As the Xeon E5 V2 ("Ivy Bridge EP") increased the maximum core count from 8 to 12, many virtualized servers will run out of memory capacity before they can use all of those cores. It might be wiser to buy half as many servers with twice as much memory. A quick price comparison illustrates this:

  • An HP DL380 G8 with 24 x 16GB RDIMMs, two E5-2680v2, two SATA disks and a 10 GbE NIC costs around $13000
  • An HP DL380 G8 with 24 x 32GB LRDIMMs, two E5-2680v2, two SATA disks and a 10 GbE NIC costs around $26000

At first sight, buying twice as many servers with half as much memory is more attractive than buying half as many servers with twice the capacity: you get more total processing power, more network bandwidth, and so on. But those advantages are not always significant in a virtualized environment.

Most software licenses will make you pay more as the server count goes up. The energy bill of two servers with half as much memory is always higher than that of one server with twice as much memory. And last but not least, if you double the number of servers, you increase the time spent administering the server cluster.
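To make that concrete, here is a minimal back-of-the-envelope sketch using the server prices listed above. The per-server licensing, power, and administration figures are hypothetical placeholders rather than measured values, so only the direction of the result matters, not the exact totals.

```python
# Back-of-the-envelope TCO sketch: two 384GB RDIMM servers vs. one 768GB LRDIMM server.
# Hardware prices are the HP DL380 G8 figures quoted above; the licensing, power and
# administration numbers are hypothetical placeholders, not measured values.

YEARS = 3

def tco(servers, hw_price, license_per_server=2500, watts=400,
        kwh_price=0.12, admin_per_server_year=500):
    """Very rough total cost of ownership over YEARS for a group of identical servers."""
    hardware  = servers * hw_price
    licensing = servers * license_per_server * YEARS              # annual per-server licensing (placeholder)
    energy    = servers * watts * 24 * 365 * YEARS / 1000 * kwh_price
    admin     = servers * admin_per_server_year * YEARS
    return hardware + licensing + energy + admin

rdimm_option  = tco(servers=2, hw_price=13000)   # 2 x 384GB (24 x 16GB RDIMMs each)
lrdimm_option = tco(servers=1, hw_price=26000)   # 1 x 768GB (24 x 32GB LRDIMMs)

print(f"RDIMM option  (2 servers): ${rdimm_option:,.0f}")
print(f"LRDIMM option (1 server):  ${lrdimm_option:,.0f}")
```

Since the hardware outlay is roughly equal in the price comparison above, any plausible numbers for the recurring per-server costs tilt the total in favor of the single LRDIMM server.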

So if your current CPU load is relatively low, chances are that an LRDIMM-equipped server makes sense: the TCO will be quite a bit lower. We tested this in our previous article and found that having more memory available can reduce the response time of virtualized applications significantly, even when running at high CPU load. Since that test little has changed, except that LRDIMMs have become a lot cheaper, which makes them considerably more attractive for virtualized clusters.

Besides a virtualized cluster, there is another prime candidate: servers that host disk-limited workloads, where memory caching can alleviate the bottleneck. Processing power is largely irrelevant in that case, as the workload is dominated by memory and/or disk accesses. Our Content Delivery Network (CDN) server test is a real-world example of this and will quantify the impact of larger memory capacity.
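The intuition can be illustrated with a toy model; the latencies and working-set sizes below are illustrative assumptions, not figures from our test setup.

```python
# Toy model: effective access latency of a disk-limited workload as the share of the
# working set that fits in the RAM cache grows. The latencies are rough assumptions
# (DRAM on the order of 100 ns vs. a storage tier on the order of 100 microseconds).

RAM_LATENCY_NS  = 100          # in-memory hit
DISK_LATENCY_NS = 100_000      # miss that has to go to storage

def effective_latency(hit_ratio):
    """Average access latency in ns for a given cache hit ratio (0.0 - 1.0)."""
    return hit_ratio * RAM_LATENCY_NS + (1 - hit_ratio) * DISK_LATENCY_NS

for ram_gb, working_set_gb in [(384, 768), (768, 768)]:
    hit = min(1.0, ram_gb / working_set_gb)      # naive: hit ratio ~ cache size / working set
    print(f"{ram_gb} GB cache, {working_set_gb} GB working set: "
          f"~{effective_latency(hit) / 1000:.1f} us per access")
```

Once the working set fits entirely in RAM, the storage tier almost disappears from the average access time, which is why doubling memory capacity can matter far more than any CPU upgrade for this class of workload.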

27 Comments

  • slideruler - Thursday, December 19, 2013

    Am I the only one who's concerned with DDR4 in our future?

    Given that it's one-to-one we'll lose the ability to stuff our motherboards with cheap sticks to get to a "reasonable" (>=128GB) amount of RAM... :(
  • just4U - Thursday, December 19, 2013

    You really shouldn't need more than 640kb.... :D
  • just4U - Thursday, December 19, 2013

    Seriously though... DDR3 prices have been going up. As near as I can tell they're approximately 2.3X the cost of what they once were. Memory makers are doing the semi-happy dance these days and likely looking forward to the 5x pricing schemes of yesteryear.
  • MrSpadge - Friday, December 20, 2013

    They have to come up with something better than "1 DIMM per channel using the same amount of memory controllers" for servers.
  • theUsualBlah - Thursday, December 19, 2013

    The -Ofast flag for Open64 will relax ANSI and IEEE rules for calculations, whereas the GCC flags won't do that.

    Maybe that's the reason Open64 is faster.
  • JohanAnandtech - Friday, December 20, 2013

    Interesting comment. I ran with gcc, and with opencc at -O2, -O3 and -Ofast. If the gcc binary is 100%, I get 110% with opencc (-O2), 130% with -O3, and the same with -Ofast.
  • theUsualBlah - Friday, December 20, 2013

    Hmm, that's very interesting.

    I am guessing Open64 might be producing better code (at least) when it comes to memory operations. I gave up on Open64 a while back; maybe I should try it out again.

    Thanks!
  • GarethMojo - Friday, December 20, 2013

    The article is interesting, but alone it doesn't justify the expense for high-capacity LRDIMMs in a server. As server professionals, our goal is usually to maximise performance / cost for a specific role. In this example, I can't imagine that better performance (at a dramatically lower cost) would not be obtained by upgrading the storage pool instead. I'd love to see a comparison of increasing memory sizes vs adding more SSD caching, or combinations thereof.
  • JlHADJOE - Friday, December 20, 2013

    Depends on the size of your data set as well, I'd guess, and whether or not you can fit the entire thing in memory.

    If you can, and considering RAM is still orders of magnitude faster than SSDs I imagine memory still wins out in terms of overall performance. Too large to fit in a reasonable amount of RAM and yes, SSD caching would possibly be more cost effective.
  • MrSpadge - Friday, December 20, 2013

    One could argue that the storage optimization would be done for both memory configurations.
