Benchmark method

We used the HP DL380 Gen8, the best-selling server in the world. The good people at Micron kindly supplied enough modules to let us test with all 24 DIMM slots populated.

As we were told that the new Xeon E5-26xx v2 ("Ivy Bridge EP") has better support for LRDIMMs than the original Xeon E5-26xx ("Sandy Bridge EP"), we ran the Stream and latency tests with both the E5-2680 v2 and the E5-2690. For the real-world CDN test (see below) we used only the E5-2680 v2: that test is not CPU-limited at all (about 25% load on the E5-2680 v2), so testing with different CPUs makes little sense.

Benchmark Configuration: HP DL 380 Gen8 (2U Chassis)

Testing with the HP DL380 Gen8 was a very pleasant experience: changing CPUs and DIMMs is extremely easy and requires no screwdriver.

The benchmark configuration details can be found below.

CPU: Two Intel Xeon E5-2680 v2 (2.8GHz, 12 cores, 25MB L3, 115W)
     or two Intel Xeon E5-2690 (2.9GHz, 8 cores, 20MB L3, 135W)
RAM: 768GB (24 x 32GB) Micron MT72JSZS4G72LZ-1G9 DDR3-1866 LRDIMM
     or 384GB (24 x 16GB) Micron MT36JSF2G72PZ DDR3-1866 RDIMM
Internal Disks: One Intel SSD 710 200GB (MLC)
BIOS version: P70 09/41/2013
NIC: Dual-port Intel X540-T2 10Gbit
PSU: HP 750W CS Platinum Plus Hot Plug

The two ports of the Intel X540-T2 10Gbit NIC were connected to a Dell PowerConnect 8024F switch and bonded into a single 20Gbit link.
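The article does not say how the bond was configured on the server side. As a purely hypothetical sketch (the OS and the interface names eth2/eth3 are assumptions, not from the article), aggregating the two X540-T2 ports into a single 20Gbit link on a modern Linux host with iproute2 might look like this:

```shell
# Hypothetical sketch: create an 802.3ad (LACP) bond over the two
# 10Gbit ports; the matching switch ports must be configured as an
# LACP link aggregation group as well.
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth2 down; ip link set eth2 master bond0
ip link set eth3 down; ip link set eth3 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # address is only an example
```

Note that 802.3ad hashes each flow onto one physical port, so a single TCP stream still tops out at 10Gbit; the 20Gbit figure is aggregate throughput across many connections, which suits a CDN-style workload.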

Comments

  • slideruler - Thursday, December 19, 2013 - link

    Am I the only one who's concerned about DDR4 in our future?

    Given that it's one-to-one, we'll lose the ability to stuff our motherboards with cheap sticks to get to a "reasonable" (>=128GB) amount of RAM... :(
  • just4U - Thursday, December 19, 2013 - link

    You really shouldn't need more than 640kb.... :D
  • just4U - Thursday, December 19, 2013 - link

    Seriously though... DDR3 prices have been going up. As near as I can tell, they're approximately 2.3x what they once were. Memory makers are doing the semi-happy dance these days and likely looking forward to the 5x pricing schemes of yesteryear.
  • MrSpadge - Friday, December 20, 2013 - link

    They have to come up with something better than "1 DIMM per channel using the same amount of memory controllers" for servers.
  • theUsualBlah - Thursday, December 19, 2013 - link

    The -Ofast flag for Open64 relaxes ANSI and IEEE rules for calculations, whereas the GCC flags used here won't do that.

    Maybe that's the reason Open64 is faster.
  • JohanAnandtech - Friday, December 20, 2013 - link

    Interesting comment. I ran the benchmark compiled with gcc and with opencc at -O2, -O3, and -Ofast. If the gcc binary is 100%, I get 110% with opencc (-O2), 130% with -O3, and the same 130% with -Ofast.
  • theUsualBlah - Friday, December 20, 2013 - link

    Hmm, that's very interesting.

    I am guessing Open64 might be producing better code (at least) when it comes to memory operations. I gave up on Open64 a while back; maybe I should try it out again.

    thanks!
  • GarethMojo - Friday, December 20, 2013 - link

    The article is interesting, but on its own it doesn't justify the expense of high-capacity LRDIMMs in a server. As server professionals, our goal is usually to maximise performance per cost for a specific role. In this example, I suspect better performance at a dramatically lower cost could be obtained by upgrading the storage pool instead. I'd love to see a comparison of increasing memory sizes vs. adding more SSD caching, or combinations thereof.
  • JlHADJOE - Friday, December 20, 2013 - link

    Depends on the size of your data set as well, I'd guess, and whether or not you can fit the entire thing in memory.

    If you can, then considering RAM is still orders of magnitude faster than SSDs, I imagine memory wins out in terms of overall performance. If the data set is too large to fit in a reasonable amount of RAM, then yes, SSD caching would probably be more cost-effective.
  • MrSpadge - Friday, December 20, 2013 - link

    One could argue that the storage optimization would be done for both memory configurations.
