Benchmark Configuration & Methodology

This review is mostly focused on performance. We have included the Xeon E5-2697 v2 (12 cores at 2.7-3.5GHz) and the Xeon E5-2650L v2 (10 cores at 1.7-2.1GHz) to characterize the performance of the high-end and lower-midrange new Xeons. That way, you can get an idea of where the rest of the 12- and 10-core Xeon SKUs will land. We also have the previous-generation E5-2690 and E5-2660, so we can see the improvements the new architecture brings. This also allows us to gauge how competitive the Opteron "Piledriver" 6300 series is.

Intel's Xeon E5 server: R2208GZ4GSSPP (2U chassis)

CPU: Two Intel Xeon processor E5-2697 v2 (2.7GHz, 12 cores, 30MB L3, 130W)
     Two Intel Xeon processor E5-2690 (2.9GHz, 8 cores, 20MB L3, 135W)
     Two Intel Xeon processor E5-2660 (2.2GHz, 8 cores, 20MB L3, 95W)
     Two Intel Xeon processor E5-2650L v2 (1.7GHz, 10 cores, 25MB L3, 70W)
RAM: 64GB (8 x 8GB) Samsung M393B1K70DH0-CK0 DDR3-1600
     or 128GB (8 x 16GB) Micron MT36JSF2G72PZ-BD DDR3-1866
Internal Disks: 2 x Intel SSD 710 200GB (MLC)
Motherboard: Intel Server Board S2600GZ "Grizzly Pass"
Chipset: Intel C600
BIOS version: SE5C600.86B (August 6, 2013)
PSU: Intel 750W DPS-750XB A (80 Plus Platinum)

The Xeon E5 CPUs have four memory channels per CPU and support up to DDR3-1866, and thus our dual CPU configuration gets eight DIMMs for maximum bandwidth. The typical BIOS settings can be found below.

Supermicro A+ Opteron server: 1022G-URG (1U chassis)

CPU: Two AMD Opteron "Abu Dhabi" 6380 at 2.5GHz
     Two AMD Opteron "Abu Dhabi" 6376 at 2.3GHz
RAM: 64GB (8 x 8GB) Samsung M393B1K70DH0-CK0 DDR3-1600
Motherboard: Supermicro H8DGU-F
Internal Disks: 2 x Intel SSD 710 200GB (MLC)
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2012)
PSU: Supermicro PWS-704P-1R 750W

The same is true for the latest AMD Opterons: eight DDR3-1600 DIMMs for maximum bandwidth. You can check out the BIOS settings of our Opteron server below.

C6 is enabled, TurboCore (CPB mode) is on.
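To put rough numbers on the "maximum bandwidth" point made for both platforms, here is a quick back-of-the-envelope sketch (our own illustration, not part of the test setup): each socket has four memory channels, and the theoretical peak is simply the transfer rate times 8 bytes per transfer, per channel, per socket. Sustained bandwidth is of course considerably lower.

```python
# Back-of-the-envelope theoretical peak memory bandwidth for the two platforms.
# Peak = transfer rate (MT/s) x 8 bytes per transfer x channels x sockets.
# These are ceilings only; measured/sustained bandwidth is always lower.

def peak_bandwidth_gbs(mt_per_s, channels_per_socket=4, sockets=2, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s for a multi-socket DDR3 system."""
    return mt_per_s * bus_bytes * channels_per_socket * sockets / 1000.0

platforms = {
    "2x Xeon E5 v2, DDR3-1866 (8 DIMMs)":   1866,
    "2x Xeon E5 v2, DDR3-1600 (8 DIMMs)":   1600,
    "2x Opteron 6300, DDR3-1600 (8 DIMMs)": 1600,
}

for name, speed in platforms.items():
    print(f"{name}: {peak_bandwidth_gbs(speed):.1f} GB/s peak")

# 2x Xeon E5 v2, DDR3-1866 (8 DIMMs):   119.4 GB/s peak
# 2x Xeon E5 v2, DDR3-1600 (8 DIMMs):   102.4 GB/s peak
# 2x Opteron 6300, DDR3-1600 (8 DIMMs): 102.4 GB/s peak
```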

Common Storage System

To keep the number of variables between our tests to a minimum, we use our common storage system to provide LUNs via iSCSI. The applications are placed on a RAID-50 LUN of ten Cheetah 15K.5 disks inside a Promise JBOD J300, connected to an Adaptec 5058 PCIe controller. For the more demanding applications (Zimbra, phpBB), storage is provided by a RAID-0 of Micron P300 SSDs on a 6Gbps Adaptec 72405 PCIe RAID controller.
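For a rough idea of what that RAID-50 LUN provides, the sketch below assumes two five-disk RAID-5 spans striped together and 300GB drives; neither the span layout nor the exact Cheetah 15K.5 capacity is stated above, so treat both as assumptions.

```python
# Rough usable-capacity sketch for the 10-disk RAID-50 LUN described above.
# RAID-50 stripes (RAID-0) across RAID-5 spans; each span loses one disk to
# parity. Both values below are assumptions for illustration only.

ASSUMED_DISK_GB = 300   # hypothetical; the Cheetah 15K.5 shipped in several sizes
DISKS = 10
SPANS = 2               # assumed: two 5-disk RAID-5 spans striped together

disks_per_span = DISKS // SPANS
usable_gb = (disks_per_span - 1) * SPANS * ASSUMED_DISK_GB

print(f"Usable capacity: {usable_gb} GB out of {DISKS * ASSUMED_DISK_GB} GB raw")
print(f"Fault tolerance: survives one failed disk per span ({SPANS} spans)")
# Usable capacity: 2400 GB out of 3000 GB raw
```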

Software Configuration

All vApus testing is done on VMware vSphere 5.1 (ESXi 5.1). All VMDKs use thick provisioning and are set to independent and persistent. The power policy is "Balanced Power" unless otherwise indicated. All other testing is done on Windows Server 2008 R2 Enterprise SP1, where we use the "High Performance" power plan unless noted otherwise.
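For anyone reproducing the Windows side of this setup, below is a minimal sketch (our illustration, not the lab's actual tooling) that checks and applies the "High Performance" plan through the built-in powercfg utility; SCHEME_MIN is the standard Windows alias for the High Performance scheme.

```python
# Minimal sketch: verify and apply the "High Performance" power plan on a
# Windows Server 2008 R2 test system using the built-in powercfg utility.
import subprocess

def active_power_plan() -> str:
    """Return the output of 'powercfg /getactivescheme' (plan name + GUID)."""
    return subprocess.check_output(
        ["powercfg", "/getactivescheme"], text=True
    ).strip()

def set_high_performance() -> None:
    """Switch the active plan to High Performance (alias SCHEME_MIN)."""
    subprocess.check_call(["powercfg", "/setactive", "SCHEME_MIN"])

if __name__ == "__main__":
    print("Before:", active_power_plan())
    set_high_performance()
    print("After: ", active_power_plan())
```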

Other Notes

Both servers are fed by a standard European 230V (16A max) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs. We use the Racktivity ES1008 Energy Switch PDU to measure power consumption. Using a PDU for accurate power measurements might seem pretty insane, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
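To illustrate why true RMS sampling matters, the sketch below computes real power sample by sample for a deliberately distorted current waveform; the waveform and numbers are synthetic and purely illustrative, not Racktivity firmware or measured data.

```python
# Why true RMS matters: with a distorted (non-sinusoidal) current waveform,
# real power has to be computed sample by sample as the mean of v*i. A meter
# that assumes a perfect sine wave gets both Irms and power wrong.
import math

SAMPLES = 20_000          # one 50 Hz mains cycle sampled 20,000 times
F_MAINS = 50.0            # European mains frequency (Hz)

period = 1.0 / F_MAINS
t = [i * period / SAMPLES for i in range(SAMPLES)]

# 230V RMS sine for voltage; a peaky current waveform as a crude stand-in for
# a switch-mode PSU load (synthetic, illustration only).
voltage = [230.0 * math.sqrt(2) * math.sin(2 * math.pi * F_MAINS * x) for x in t]
current = [3.0 * math.sin(2 * math.pi * F_MAINS * x) ** 5 for x in t]

def rms(samples):
    """True RMS: square, average, square root -- no sine-wave assumption."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

v_rms, i_rms = rms(voltage), rms(current)
real_power = sum(v * i for v, i in zip(voltage, current)) / SAMPLES  # mean(v*i)
apparent_power = v_rms * i_rms

print(f"Vrms = {v_rms:.1f} V, Irms = {i_rms:.2f} A")
print(f"Real power:     {real_power:.1f} W")
print(f"Apparent power: {apparent_power:.1f} VA")
print(f"Power factor:   {real_power / apparent_power:.2f}")
```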

Comments

  • Bytales - Tuesday, September 17, 2013 - link

    Please make some gaming-related tests. I'm planning on upgrading from 2x 2609 to 2x 2690 v2, now that I know for sure that the 10-core, 25MB-cache part is a complete die. I don't trust the design of the 12-core die very much; it's not how I would design the CPU. Besides, the 2690 v2 is 3GHz base and 3.6GHz boost, perfect for gaming.

    Would have liked to see how a 2690 v2 compares with a 2687W v2 in gaming-related tests, seeing as the latter has a 3.4GHz base and 4GHz boost but two fewer cores.

    Anyway, I'm not paying 3000+ euros for a disabled die (like the one in the 2687W v2), so the 10-core is my choice, but I would still have liked to see how higher frequency and a lower core count would impact gaming performance!
  • mking21 - Wednesday, September 18, 2013 - link

    I can tell you now that the 8-core is going to kick the 10-core's ass for gaming. The higher clock will win here. So since you are going to pay 3000 euros, you may as well get the best, even if it does have two cores disabled. But I agree that, for me, a more interesting comparison would have been 12 vs 10 vs 8, all v2s, all the fastest-clocked versions available...
  • mapesdhs - Wednesday, September 18, 2013 - link


    IMO for gaming you'd be better off with a used oc'd 2700K. I just bought one for 160 UKP,
    fitted with a used TRUE (cost 15), two new Coolermaster Blademaster fans, Q-fan active
    (ASUS M4E mbd, used, cost 130), runs at 5GHz no problem, silent running when idle. See:

    http://valid.canardpc.com/a64s8p

    The vast majority of games gain the most from a sensible middle ground between
    multiple cores and a high clock. Few will properly exploit more than 4 cores with HT.
    Using a multi-core Xeon for gaming is silly. You would see far greater gaming
    performance by getting a much cheaper 4/6-core and spending the saved cash on
    more powerful GPUs, like two 780s or Titans in SLI, or two 7970s in CF, etc. A 4-core Z68
    setup should be just fine, though if you do want oodles of PCIe lanes for high-end SLI/CF
    then I'd get X79 and a 3930K (I don't see the point of IB-E).

    Trust me, a 5GHz 2700K or a 4.7GHz 3930K, paired with two much better GPUs
    bought with the saved money, will be massively better for gaming than what you could
    afford after spending thousands on two 10- or 12-core CPUs with much lower clocks.
    Most 2600Ks will oc pretty nicely too.

    Bytales, what GPU(s) do you have in your system atm?

    Ian.

    PS. IB/HW are a waste of time; they don't oc as well as SB. I bought a 2500K for 125, and it only
    took 3 mins to get it running 4.7 stable on a used Gigabyte Z68 board (which cost a mere 35).
  • Bytales - Saturday, September 21, 2013 - link

    The reason I'm looking at Xeons is the motherboard I own, the Z9PE-D8 WS, which I bought because I need the PCI Express lanes two Xeons provide. No other motherboard could have gotten me what this one does, and I have looked everywhere. That's the reason I need these Xeons. I originally bought two 2609 CPUs and a CrossFire Tahiti LE setup (one card burned down due to bitcoin mining); their purpose was/is to make the PC usable until the new Xeons and the new Radeons become available. I know I won't be getting the best possible CPUs for gaming on this platform; I just want some decent performers. The 2609s I have now are 2.4GHz, no boost, no HT, and have done their job well so far. I'm expecting decent gaming performance out of a 3GHz chip with multiple cores. Sure, I could get the 2687W v2 for the same price, but I hate disabled things. Why the hell didn't they make a 10-core chip with 25MB cache, 3.5GHz base, 4GHz boost and a 150-160W TDP? I would have bought such a CPU. But as it is, I'll have to make do with two 2690s. Maybe, just maybe, if I see some gaming benchmarks between the two CPUs, I will consider the 2687W v2. Until then, my first choice is the 2690.
    Hopefully the people from AnandTech will test this aspect of the CPUs, gaming that is, because all they tested was server/enterprise stuff, which was to be expected after all.
    Gaming was not what these CPUs were built for. But I like having strong CPUs which will have my back if I decide to do some other stuff as well. I do a bunch of converting, compressing, AutoCAD, Photoshop, etc. That's why the more cores, the better.
  • Ktracho - Thursday, October 3, 2013 - link

    I would think you can get the PCIe lanes you want with a motherboard that has a PLX bridge chip, such as the ASUS P9X79-E WS, without needing to resort to a two-socket motherboard. As far as gaming, I think the E5-1620 v2 gives good bang for the money, and if you need more cores, the E5-1650 v2 does well, too. If you need a little better performance, you can get the E5-1680 v2, but at a price. Too bad Intel doesn't sell single-socket CPU versions with more than 6 cores, though.
  • MrSpadge - Tuesday, September 17, 2013 - link

    The Xeon E5-2660 v2 could in theory be what Ivy Bridge-E should have been for enthusiasts: something at least a bit more worth spending big $ on. The mainboard would have to let us enable multi-core turbo and OC the bus, though.
  • psyq321 - Tuesday, September 17, 2013 - link

    The situation with Ivy Bridge EP is exactly the same as with Sandy Bridge EP:

    - No BCLK "straps" (or ratios) for the Xeon line - only 100 MHz is allowed
    - No unlocked multipliers
    - BCLK overclocking works, but your mileage may vary; I can get up to 105 MHz with a dual Xeon 2697 v2 setup on a Z9PE-D8 WS

    So Ivy Bridge EP Xeons do not overclock particularly well - the best you can get out of the 2S parts (26xx v2) is an extra 100-150 MHz, depending on the max turbo multiplier of your SKU.
  • ezekiel68 - Wednesday, September 18, 2013 - link

    Johan, what do you mean by "...over four NUMA nodes" in the last sentence on the Compression And Decompression page?

    My understanding is that for both the Opteron and the Xeon, a NUMA node is a complete CPU package (with all its cores) and the associated RAM directly connected to that CPU's memory controllers. In the charts, all of the Opterons are listed as "2x Opteron XXXX". Are you considering each die within the Opteron MCM package to be a separate NUMA node -- or how else are you coming up with "four" above?
  • JohanAnandtech - Friday, September 20, 2013 - link

    AFAIK, the two dies in the package communicate via HyperTransport links, and it is quicker for one die to access its own memory than the memory attached to the second die.
  • ddkeenan - Wednesday, September 18, 2013 - link

    The data in this article is incomplete. The JVM tuning used targets throughput alone, basically ignoring GC pause times. The critical-jOPS metric is intended to measure throughput under response-time constraints, and the results posted here are most likely highly variable and definitely dreadfully low because of the poor tuning choices.

    Actual customers care more about response time/latency these days. Throughput is often solved by scaling horizontally; response time is not. Commercial benchmarking should reflect that by focusing on response time and SPECjbb2013 critical-jOPS, in order to push hardware and software vendors to compete on it.

    Finally, to Kevin G, I think it's also likely that SPARC T-series systems have been focusing on customer metrics more than competitive benchmarks, and now there's a benchmark that takes response time into consideration.
