Benchmark Configuration & Methodology

This review mostly focuses on performance. We have included the Xeon E5-2697 v2 (12 cores at 2.7-3.5GHz) and the Xeon E5-2650L v2 (10 cores at 1.7-2.1GHz) to characterize the performance of the high-end and lower-midrange new Xeons. That way, you can get an idea of where the rest of the 12- and 10-core Xeon SKUs will land. We also have the previous generation E5-2690 and E5-2660 so we can see the improvements the new architecture brings. This also allows us to gauge how competitive the Opteron "Piledriver" 6300 is.

Intel's Xeon E5 server: R2208GZ4GSSPP (2U Chassis)

CPU: Two Intel Xeon E5-2697 v2 (2.7GHz, 12c, 30MB L3, 130W)
     or two Intel Xeon E5-2690 (2.9GHz, 8c, 20MB L3, 135W)
     or two Intel Xeon E5-2660 (2.2GHz, 8c, 20MB L3, 95W)
     or two Intel Xeon E5-2650L v2 (1.7GHz, 10c, 25MB L3, 70W)
RAM: 64GB (8x8GB) Samsung M393B1K70DH0-CK0 DDR3-1600
     or 128GB (8x16GB) Micron MT36JSF2G72PZ DDR3-1866
Internal Disks: 2x Intel SSD 710 200GB (MLC)
Motherboard: Intel Server Board S2600GZ "Grizzly Pass"
Chipset: Intel C600
BIOS version: SE5C600.86B (August 6, 2013)
PSU: Intel 750W DPS-750XB A (80 Plus Platinum)

The Xeon E5 CPUs have four memory channels per CPU and support up to DDR3-1866, so our dual-CPU configuration gets eight DIMMs, one per channel, for maximum bandwidth. The typical BIOS settings can be found below.
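As a quick sanity check, the theoretical peak bandwidth of that configuration is easy to work out. The short Python sketch below does the arithmetic; the 64-bit channel width and 1866 MT/s transfer rate are standard DDR3 figures, not numbers from our measurements.

    # Theoretical peak memory bandwidth of the dual Ivy Bridge EP setup.
    channels_per_cpu = 4        # quad-channel memory controller per socket
    cpus = 2
    transfers_per_sec = 1866e6  # DDR3-1866: 1866 MT/s
    bytes_per_transfer = 8      # 64-bit wide channel

    per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9
    total_gbs = per_channel_gbs * channels_per_cpu * cpus
    print(f"{per_channel_gbs:.1f} GB/s per channel, {total_gbs:.1f} GB/s total")
    # -> 14.9 GB/s per channel, 119.4 GB/s across both sockets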

Supermicro A+ Opteron server: 1022G-URG (1U Chassis)

CPU: Two AMD Opteron "Abu Dhabi" 6380 (2.5GHz)
     or two AMD Opteron "Abu Dhabi" 6376 (2.2GHz)
RAM: 64GB (8x8GB) Samsung M393B1K70DH0-CK0 DDR3-1600
Motherboard: Supermicro H8DGU-F
Internal Disks: 2x Intel SSD 710 200GB (MLC)
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2012)
PSU: Supermicro PWS-704P-1R 750W

The same is true for the latest AMD Opterons: eight DDR3-1600 DIMMs for maximum bandwidth. You can check out the BIOS settings of our Opteron server below.

C6 is enabled, TurboCore (CPB mode) is on.

Common Storage System

To minimize the differences between our tests, we use our common storage system to provide LUNs via iSCSI. The applications are placed on a RAID-50 LUN of ten Cheetah 15K.5 disks inside a Promise JBOD J300, connected to an Adaptec 5085 PCIe RAID controller. For the more demanding applications (Zimbra, phpBB), storage is provided by a RAID-0 of Micron P300 SSDs on a 6Gbps Adaptec 72405 PCIe RAID controller.
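For readers unfamiliar with RAID-50, the usable capacity works out as in the minimal sketch below. It assumes two five-disk RAID-5 spans striped together; the 300GB-per-drive figure is our assumption for the Cheetah 15K.5, as the drive capacity is not stated above.

    # Usable capacity of a RAID-50 LUN: RAID-5 spans striped together (RAID-0).
    disks = 10
    spans = 2                 # assumption: two spans of five disks each
    disk_gb = 300             # assumption: 300GB Cheetah 15K.5 drives
    usable_gb = (disks - spans) * disk_gb  # each span loses one disk to parity
    print(f"{usable_gb} GB usable out of {disks * disk_gb} GB raw")
    # -> 2400 GB usable out of 3000 GB raw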

Software Configuration

All vApus testing is done on VMware ESXi 5.1. All VMDKs use thick provisioning and are set to independent, persistent mode. The power policy is "Balanced Power" unless otherwise indicated. All other testing is done on Windows 2008 R2 SP1 Enterprise, where we use the "High Performance" power plan unless noted otherwise.
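To keep the Windows power plan reproducible across runs, it can be activated from the command line rather than the control panel. A minimal Python sketch is below; the GUID is the stock Windows identifier for the High Performance scheme, and the script itself is illustrative, not part of our test harness.

    import subprocess

    # Stock Windows GUID for the "High Performance" power scheme.
    HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

    # Activate the plan, then read back the active scheme to verify.
    subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE], check=True)
    active = subprocess.run(["powercfg", "/getactivescheme"],
                            capture_output=True, text=True, check=True)
    print(active.stdout.strip())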

Other Notes

Both servers are fed by a standard European 230V (16A max) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs. We use the Racktivity ES1008 Energy Switch PDU to measure power consumption. Using a PDU for accurate power measurements might seem odd, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, which it never is. The Racktivity PDU instead measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
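To illustrate why true RMS matters: a meter that assumes a sine wave simply scales the peak by 1/sqrt(2), which misreports the value whenever the waveform is distorted, as it is with switching PSUs. Below is a small illustrative Python sketch with a made-up clipped waveform; it is not Racktivity's firmware.

    import math

    def true_rms(samples):
        """RMS computed from the actual samples, no sine-wave assumption."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    # A 230V/50Hz sine (325V peak) clipped at 280V to mimic a distorted load,
    # sampled 20,000 times over one second like the Racktivity PDU.
    n = 20000
    wave = [max(-280.0, min(280.0, 325.0 * math.sin(2 * math.pi * 50 * t / n)))
            for t in range(n)]
    print(f"True RMS: {true_rms(wave):.1f} V")
    print(f"Sine-wave assumption (peak/sqrt(2)): {325.0 / math.sqrt(2):.1f} V")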

Comments

  • Kevin G - Tuesday, September 17, 2013 - link

    I'd be careful about using Java benchmarks on those SPARC chips for an overall comparison. The results on the SPARC side are often broken.

    For many years x86 has been ahead of SPARC. Only with the most recent chips has Oracle produced a truly performance-competitive design.

    The only other architecture that outruns Intel's best x86 chips is the POWER7/POWER7+. When the POWER8 ships, it is expected to be faster still.
  • Brutalizer - Thursday, September 19, 2013 - link

    @Kevin G
    "...The results on the SPARC side are often broken..."
    What do you mean by that? The Oracle benchmarks are official, and anyone can see how they were run. Regarding SPARC T5 performance, it is very fast, typically more than twice as fast as Xeon CPUs. Just look at the official, accepted benchmarks on the site I linked to.
  • Kevin G - Friday, September 20, 2013 - link

    @Brutalizer
    There is a SPEC subtest whose result on SPARC is radically higher than on other platforms, and the weight of this one test skews the overall score. It has been a few years since I read up on this, and SPARC as a platform has genuinely become performance-competitive again.
  • Phil_Oracle - Friday, February 21, 2014 - link

    Are you talking about libquantum?
    http://www.spec.org/cpu2006/Docs/462.libquantum.ht...

    I believe IBM is the worst culprit on this subtest, showing a significant difference between base and peak. More so than any other vendor.

    http://www.spec.org/cpu2006/results/res2012q4/cpu2...

    But today, I believe all vendors have figured out how to improvise (cheat) on this test, even on Xeon-based systems.

    http://www.spec.org/cpu2006/results/res2014q1/cpu2...

    That’s why I believe SPEC CPU2006 is outdated and needs replacing; I suggest looking at more realistic, recent (real-world) benchmarks like SPECjbb2013, SPECjEnterprise2010, or even TPC-H.
  • Phil_Oracle - Friday, February 21, 2014 - link

    x86 was clearly ahead of SPARC until about the SPARC T4 timeframe, when Oracle took over R&D on SPARC. The SPARC T4 let Oracle level the playing field, especially in areas like databases, where the SPARC T4 really shines, judging by the many world-record benchmarks that were released. When the SPARC T5 came out last year, it increased performance by 2.4x, clobbering practically every other CPU out there. Today, you'll be hard-pressed to find a real-world, fully audited benchmark where the SPARC T5 is not in a leadership position, whether Java-based like SPECjbb2013 or SPECjEnterprise2010, or database like TPC-C or TPC-H.
  • psyq321 - Tuesday, September 17, 2013 - link

    I know that the EX will be using the (scalable) memory buffer, which is probably the main reason for the separate pin-out. I guess they could still keep both memory controllers in and fuse off the appropriate one depending on whether it is an EX or EP SKU, if that still makes sense from a production perspective.
  • Kevin G - Tuesday, September 17, 2013 - link

    It wouldn't make much sense, as the EX lineup moves the DDR3 physical interface off to the buffer chip. A good chunk of die space is used for the 256-bit wide memory interface in the EP line. Going the serial route, the EX line essentially doubles the memory bandwidth while using the same number of IO pins (though at the cost of latency).

    The number of PCIe lanes and QPI links also changes between the EP and EX lineups. The EP has 40 PCIe lanes whereas the EX has 32. There are 4 QPI links in the EX lineup, making those chips ideal for 4- and 8-socket systems, whereas the EP line has 2 QPI links, good for dual-socket or a poor quad-socket configuration.
  • psyq321 - Wednesday, September 18, 2013 - link

    Hmm, this source: http://www.3dcenter.org/news/intel-stellt-ivy-brid...

    Claims that the HCC die is a 12-15 core design. They also have a die shot of a 15-core variant.
  • Kevin G - Thursday, September 19, 2013 - link

    I'll 1-up you: Intel technical reference manuals (PDF).

    http://www.intel.de/content/dam/www/public/us/en/d...
    http://www.intel.de/content/dam/www/public/us/en/d...

    It does appear to be 15 cores, judging from the mask in the CSR_DESIRED_CORES register.

    However, there is no indication that the die supports serial memory links to a buffer chip or the >3 QPI links that an EX chip would have.
  • psyq321 - Thursday, September 19, 2013 - link

    Well, I guess without Intel openly saying so or somebody laser-cutting the die, there is no way to know for certain whether the HCC EP and the EX share a die.

    However, there are lots of "hints" that the B-package Ivy Bridge EP hides more cores, like the ones you linked. If that is the case, it is really a shame that Intel did not enable all the cores in the EP line. There would still be plenty of room for differentiation between the EX and EP lines, since the EX contains RAS features without which the target enterprise customers would probably not even consider the EP, even if it had the same number of cores.

    Also, Ivy EX will have some really high TDP parts.
