Twenty-two months ago Intel launched its LGA-2011 platform and Sandy Bridge E, aimed at the high-end desktop enthusiast. The platform brought more cores, more PCIe lanes and more memory bandwidth to those users who needed more than what had become of Intel's performance desktop offerings. It was an acknowledgement of a high-end market that seemed to have lost importance over the past few years. On the surface, Sandy Bridge E was a very good gesture on Intel's part. Unfortunately, the fact that it's been nearly two years since we first met LGA-2011 without a single architecture update, despite seeing the arrival of both Ivy Bridge and Haswell, doesn't send a great message to the users willing to part with hard-earned money to buy into the platform.

Today we see that long-awaited update. LGA-2011 remains unchanged, but the processor you plug into the socket moves to 22nm. This is Ivy Bridge Extreme.

Ivy Bridge E: 1.86B Transistors, Up to 6 Cores & 15MB L3

There's a welcome amount of simplicity in the Extreme Edition lineup. There are only three parts to worry about:

With the exception of the quad-core 4820K, IVB-E launch pricing is identical to what we saw with Sandy Bridge E almost two years ago. The 4820K is slightly cheaper than the highest-end Haswell part, but it's still $25 more expensive than its SNB-E counterpart was at launch. The difference? The 4820K is a K-SKU, meaning it's fully unlocked, and thus comes with a small price premium.

All of the IVB-E parts ship fully unlocked and are generally capable of reaching the same turbo frequencies as their predecessors. The Core i7-4960X, and the i7-3970X before it, are the only Intel CPUs officially rated for frequencies of up to 4GHz (although we've long been able to surpass that via overclocking). Just as before, none of these parts ship with any sort of cooling (because profit); you'll need to buy a heatsink/fan or closed-loop water cooler separately. Intel does offer a new cooler for IVB-E, the TS13X:

While Sandy Bridge E was an 8-core die with two cores disabled, Ivy Bridge E shows up in a native 6-core version. There's no die harvesting going on here; all of the transistors on the chip are fully functional. The result is a significant reduction in die area, from the insanity that was SNB-E's 435mm² down to an almost desktop-like 257mm².

CPU Specification Comparison

| CPU | Manufacturing Process | Cores | GPU | Transistor Count (Schematic) | Die Size |
|---|---|---|---|---|---|
| Haswell GT3 4C | 22nm | 4 | GT3 | ? | 264mm² (est) |
| Haswell GT2 4C | 22nm | 4 | GT2 | 1.4B | 177mm² |
| Haswell ULT GT3 2C | 22nm | 2 | GT3 | 1.3B | 181mm² |
| Intel Ivy Bridge E 6C | 22nm | 6 | N/A | 1.86B | 257mm² |
| Intel Ivy Bridge 4C | 22nm | 4 | GT2 | 1.2B | 160mm² |
| Intel Sandy Bridge E 6C | 32nm | 6 | N/A | 2.27B | 435mm² |
| Intel Sandy Bridge 4C | 32nm | 4 | GT2 | 995M | 216mm² |
| Intel Lynnfield 4C | 45nm | 4 | N/A | 774M | 296mm² |
| AMD Trinity 4C | 32nm | 4 | 7660D | 1.303B | 246mm² |
| AMD Vishera 8C | 32nm | 8 | N/A | 1.2B | 315mm² |
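For a sense of how much the 22nm process buys Intel here, a quick back-of-the-envelope sketch using the figures from the table above; the transistor counts are schematic rather than layout numbers, so treat the densities as rough approximations:

```python
# Rough density math from the table above; transistor counts are Intel's
# schematic figures, so treat the densities as approximations.
dies = {
    "Sandy Bridge E 6C": (2.27, 435),  # (billions of transistors, mm^2)
    "Ivy Bridge E 6C":   (1.86, 257),
}

for name, (transistors_b, area_mm2) in dies.items():
    density = transistors_b * 1000 / area_mm2  # million transistors per mm^2
    print(f"{name}: ~{density:.1f}M transistors/mm^2")

shrink = 1 - dies["Ivy Bridge E 6C"][1] / dies["Sandy Bridge E 6C"][1]
print(f"SNB-E -> IVB-E die area reduction: ~{shrink:.0%}")  # ~41%
```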

Cache sizes remain unchanged. The highest end SKU features a full 15MB L3 cache, while the mid-range SKU comes with 12MB and the entry-level quad-core part only has 10MB. Intel adds official support for DDR3-1866 (1 DIMM per channel) with IVB-E, up from DDR3-1600 in SNB-E and Haswell.
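The official memory speed bump is easy to put into perspective with some spec-sheet arithmetic. A minimal sketch of theoretical peak bandwidth, assuming eight bytes per 64-bit channel per transfer; these are paper peaks, not measured figures:

```python
# Spec-sheet peak bandwidth: channels x transfer rate (MT/s) x 8 bytes per
# 64-bit channel. Paper numbers only; real-world throughput is lower.
def peak_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000  # GB/s (decimal)

print(f"IVB-E,   4ch DDR3-1866: {peak_bandwidth_gbs(4, 1866):.1f} GB/s")  # ~59.7
print(f"SNB-E,   4ch DDR3-1600: {peak_bandwidth_gbs(4, 1600):.1f} GB/s")  # 51.2
print(f"Haswell, 2ch DDR3-1600: {peak_bandwidth_gbs(2, 1600):.1f} GB/s")  # 25.6
```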

TDPs all top out at 130W, bringing back memories of the high-end desktop SKUs of yesterday. Obviously, these days much of what we consider to be high-end exists below 100W.

Of course, processor graphics is a no-show on IVB-E. As IVB-E retains the same socket as SNB-E, there are physically no pins set aside for things like video output. Surprisingly enough, early rumors indicate Haswell E will also ship without an integrated GPU.

The Extreme Cadence & Validated PCIe 3.0

Understanding why we’re talking about Ivy Bridge E now instead of Haswell E is pretty simple. The Extreme desktop parts come from the Xeon family. Sandy Bridge E was nothing more than a 6-core Sandy Bridge EP variant (Xeon E5), and Ivy Bridge E is the same. In the Xeon space, the big server customers require that Intel keep each socket around for at least two generations to increase the longevity of their platform investment. As a result we got two generations of Xeon CPUs (SNB-E/EP, and IVB-E/EP) that leverage LGA-2011. Because of when SNB-E was introduced, the LGA-2011 family ends up out of phase with the desktop/notebook architectures by around a year. So we get IVB-E in 2013 while desktop/notebook customers get Haswell. Next year when the PC clients move to 14nm Broadwell, the server (and extreme desktop) customers will get 22nm Haswell-E.

The only immediate solution to this problem would be for the server parts to skip a generation: either skip IVB-E and go to Haswell-E (not feasible, as that would violate the two-generation rule above), or skip Haswell-E and go directly to Broadwell-E next year. Intel tends to want to get the most use out of each one of its architectures, so I don't see a burning desire to skip one.

Server customers are more obsessed with core counts than modest increases in IPC, so I don’t see a lot of complaining there. On the desktop however, Ivy Bridge E poses a more interesting set of tradeoffs.

The big advantages that IVB-E brings to the table are a ridiculous number of PCIe lanes, a quad-channel memory interface and 2 more cores in its highest-end configuration.

While the standard desktop Sandy Bridge, Ivy Bridge and Haswell parts all feature 16 PCIe lanes from the CPU’s native PCIe controller, the Extreme parts (SNB-E/IVB-E) have more than twice that.

There are 40 total PCIe 3.0 lanes that branch off of Ivy Bridge E. Since IVB-E and SNB-E are socket compatible, that's the same number of lanes we got last time. The difference this time around is that IVB-E's PCIe controller has been fully validated with PCIe 3.0 devices. While Sandy Bridge E technically supported PCIe 3.0, the controller was finalized prior to PCIe 3.0 devices being on the market and thus wasn't validated with any of them. The most famous case is NVIDIA's Kepler cards, which by default run in PCIe 2.0 mode on SNB-E systems. Forcing PCIe 3.0 mode on SNB-E worked in many cases, while in others you'd see instability.

NVIDIA tells us that it plans to enable PCIe 3.0 on all IVB-E systems. Current drivers (including the 326.80 beta driver) treat IVB-E like SNB-E and force all Kepler cards to PCIe 2.0 mode, but NVIDIA has a new driver going through QA right now that will default to PCIe 3.0 when it detects IVB-E. SNB-E systems will continue to run in PCIe 2.0 mode.
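If you want to verify what your card actually negotiated, GPU-Z on Windows will show the current link state; below is a minimal sketch for doing the same from Linux, assuming a kernel that exposes the standard PCIe link attributes in sysfs:

```python
# Minimal sketch: report the negotiated PCIe link for every display-class
# device, e.g. to check whether a Kepler card trained at 2.5/5/8 GT/s
# (PCIe 1.x/2.0/3.0). Assumes Linux with the standard sysfs PCI attributes.
from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")

def read_attr(device: Path, name: str) -> str:
    try:
        return (device / name).read_text().strip()
    except OSError:
        return "n/a"

devices = sorted(PCI_ROOT.iterdir()) if PCI_ROOT.exists() else []
for dev in devices:
    if not read_attr(dev, "class").startswith("0x03"):  # 0x03xxxx = display controller
        continue
    print(f"{dev.name}: x{read_attr(dev, 'current_link_width')} "
          f"@ {read_attr(dev, 'current_link_speed')} "
          f"(max {read_attr(dev, 'max_link_speed')})")
```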

Intel’s X79: Here for One More Round

Unlike its mainstream counterpart, Ivy Bridge E does not come with a new chipset. That’s right, not only is IVB-E socket compatible with SNB-E, it ships with the very same chipset: X79.

As a refresher, Intel's X79 chipset has no native USB 3.0 support and features only two native 6Gbps SATA ports. Motherboard makers have worked around X79's limitations for years now by adding a plethora of 3rd party controllers. I personally prefer Intel's native solutions to those we find from 3rd parties, but with X79 you've got no choice.

The good news is that almost all existing X79 motherboards will see BIOS/EFI updates enabling Ivy Bridge E support. The key word there is almost.

When it exited the desktop motherboard market, Intel only promised to release new Haswell motherboards and to support them through the end of their warranty period. Intel never promised to release updated X79 motherboards for Ivy Bridge E, nor did it promise to update its existing X79 boards to support the new chips. In a very disappointing move, Intel confirmed to me that none of its own X79 boards will support Ivy Bridge E. I confirmed this myself by trying to boot a Core i7-4960X on my Intel DX79SI - the system wouldn't POST. While most existing X79 motherboards will receive BIOS updates enabling IVB-E support, anyone who bought an Intel-branded X79 motherboard is out of luck. Given that LGA-2011 owners are by definition some of the most profitable/influential/dedicated customers Intel has, I don't think I need to point out how damaging this is to customer relations. If it's any consolation, IVB-E doesn't actually offer much of a performance boost over SNB-E - so if you're stuck with an Intel X79 motherboard without IVB-E support, you're not missing out on too much.

The Testbed: ASUS’ New X79 Deluxe

As all of my previous X79 boards were made by Intel, I actually had no LGA-2011 motherboards that would work with IVB-E on hand. ASUS sent over the latest revision of its X79 Deluxe board with official IVB-E support:

The board worked relatively well, but it seems like there's still some work that needs to be done on the BIOS side. When loaded with 32GB of RAM I saw infrequent instability at stock voltages. It's my understanding that Intel didn't provide final BIOS code to the motherboard makers until a couple of weeks ago, so don't be too surprised if there are some early teething pains. For what it's worth, this makes Ivy Bridge E the second high-end desktop launch in a row that hasn't gone according to Intel's previously high standards.

Corsair supplied the AX1200i PSU and 4 x 8GB DDR3-1866 Vengeance Pro memory for the testbed.

For more comparisons be sure to check out our performance database: Bench.

Testbed Configurations

| Component | Configuration |
|---|---|
| Motherboard(s) | ASUS X79 Deluxe, ASUS P8Z77-V Deluxe, ASUS Crosshair V Formula, Intel DX58SO2 |
| Memory | Corsair Vengeance DDR3-1866 9-10-9-27 |
| SSD | Corsair Neutron GTX 240GB, OCZ Agility 3 240GB, OCZ Vertex 3 240GB |
| Video Card | NVIDIA GeForce GTX Titan x 2 (only 1 used for power tests) |
| PSU | Corsair AX1200i |
| OS | Windows 8 64-bit, Windows 7 64-bit, Windows Vista 32-bit (for older benchmarks) |

 

  • Rick83 - Tuesday, September 3, 2013 - link

    If you really want that number of cores, Ivy Bridge E5/E7 Xeons are going to deliver that, in the 150W power envelope. This is useful in the server market, but will only sell in homeopathic quantities in the desktop market. Still, you should be able to find them in retail around Christmas. Knock yourself out!

    Really IB-E is a free product for Intel, which is the only reason it made it to market at all. They need the 6-core dies for the medium density servers anyway, which is where they actually make sense over SB-Xeons, due to the smaller power envelope/higher efficiency. The investment to turn that core into a consumer product on an existing platform is almost zero, short of a small marketing budget, and possibly a tiny bit of (re-)validation.

    This was never a product designed for the enthusiast market, and is being shoe-horned into that position. Due to the smaller die Intel can probably make better margin over SB-E, which is the only reason to introduce this product in the sector anyway, and possibly to get some brand awareness going with the launch of a new flagship.

    From an economical point of view it makes no sense for Intel to have an actual enthusiast platform. Haswell refresh will be unlikely to bring more cores either (and without the extra I/O they would be a bit hobbled, I imagine), so possibly with Skylake there will be a 6-core upper mainstream solution. Still unlikely from an economical point of view, as Intel would probably prefer sticking to two dies, and going 6/4 may not be economical, whereas selling 6-core CPUs as quads (as they do with 48xx) doesn't work that well in the part of the market that generates reasonable volume.
  • f0d - Tuesday, September 3, 2013 - link

    the problem with xeon is that you cant overclock them so my 5ghz SBE would be close to as good as a 8/10 core xeon

    i dont really care about why intel are not releasing high core count cpu's i just know i want them at a decent price ($1k and under) and overclockable - these 6 core ones just dont make the cut anymore

    i just hate the direction cpu's are going with low power low core count highly integrated everything, 5 years ago i was dreaming of 8 core cpu's being standard about now but we still have 4 (6 with sbe) core cpu's as standard which blows and per core performance hasnt really changed much going from sandy bridge to haswell

    i dont care about power and heat just give me the performance i want to encode highest quality handbrake movies in less than 24 hours!!
  • ShieTar - Tuesday, September 3, 2013 - link

    "All" you want is Intel to invest a massive development effort in order to produce for the first time an overclocking CPU with a TDP of around 200W, with silicone for which their business customers would pay 2k$ to 3k$, and sell it to you and the other 500 people in your niche for less than 1k$?

    Intel already offer you a solution if you need more processing power than the enthusiast solution gives you: 2 socket workstation boards, 4 socket server boards, 60-core co-processor cards.
  • f0d - Tuesday, September 3, 2013 - link

    2 socket is inefficient for my workloads
    they could just release a xeon that is unlocked and let me do what i want with it - its not like the workstation/server guys would overclock so its not like intel would be losing any money
    no development needed
    2-3k? i can already buy 8 core SBE for 1k - why not let me oc that?
  • wallysb01 - Tuesday, September 3, 2013 - link

    Many would overclock when Intel is charging hundreds of dollars for just small GHz bumps. You won't see the academic or large corporation clusters doing it, but the small businesses with just a handful of workstations? They might.

    Look at the 2660 v2 2.2GHz at $1590 and the 2680 v2 2.8GHz at $1943. That's $353 for 600MHz. On a dual-processor system it's $700, then you have to pay the markups from those actually selling the computers (ie Dell/HP), which takes that $700 to $1000 or more. One small little tweak and you're saving yourself $1000, while not stressing the system all that much (assuming you don't go crazy and try to get 3.5GHz from that 2.2GHz base chip).
  • mapesdhs - Wednesday, September 4, 2013 - link

    The catch though is that the mbds used for these systems don't have BIOS setups which support oc'ing, and the people who use them aren't generally experienced in such things. I know someone at a larger movie company who said it'd be neat to be able to experiment with this, especially an unlocked XEON, but in reality the pressures of time, the scale of the budgets involved, the large number of systems used for renderfarms, the OS management required, etc., all these issues mean it's easier to just buy off the shelf units and scale as required (the renderfarm at the company I'm thinking of has more than 7000 cores total, mostly based on Dell blade servers) and management isn't that interested in doing anything different or innovative/risky. It's easy to think a smaller company might be more likely to try such a thing, but in reality for a smaller company it would be a much larger financial risk to do so. Bigger companies could afford to try it, but aren't geared up for such ideas.

    Btw, oc'ing a XEON is viable with single-socket mbds that happen to support them and have chipsets which don't rely on the CPU multiplier for oc'ing, eg. an X5570 on an Asrock X58 Extreme6 works ok (I have one); the chip advantage is a higher TDP and 50% faster QPI compared to a clock-comparable i7 950.

    Sadly, other companies often don't bother supporting XEONs anyway; Gigabyte does on some of its boards (X58A-UD3R is a good example) but ASUS tends not to.

    Some have posted about core efficiency and they're correct; I have a Dell T7500 with two X5570s, but my oc'd 3930K beats it for highly threaded tasks such as CB 11.5, and it's about 2X faster for single-threaded ops. The 3930K's faster RAM probably helps as well (64GB DDR3/2400, vs. only DDR3/1333 in the Dell which one can't change).

    Someone commented about Intel releasing an unlocked XEON. Of course they could, but they won't because they don't need to, and biz users wouldn't really care, it's not what they want, and note that power efficiency is very important for big server setups, something which oc'ing can of course utterly ruin. :D Someone said who cares about power guzzling when it comes to enthusiast builds, and that's true, but when it comes to XEONs the main target market does care, so again Intel has no incentive to bother releasing an unlocked XEON.

    I agree with the poster who said 40 PCIe lanes isn't ridiculous. We had such provision with X58, so if anything for a top-end platform only 40 lanes isn't that impressive IMO. Far worse is the continued limit of just 2 SATA3 ports; that really is a pain, because the 3rd party controllers are generally awful. The Asrock X79 Extreme11 solved this to some extent by offering onboard SAS, but they kinda crippled it by not having any cache RAM as part of the built-in SAS chip.

    Ian.
  • wallysb01 - Wednesday, September 4, 2013 - link

    "It's easy to think a smaller company might be more likely to try such a thing, but in reality for a smaller company it would be a much larger financial risk to do so. Bigger companies could afford to try it, but aren't geared up for such ideas.

    Btw, oc'ing a XEON is viable with single-socket mbds that happen to support them and have chipsets which don't rely on the CPU multiplier for oc'ing, eg. an X5570 on an Asrock X58 xtreme6 works ok (I have one); the chip advantage is a higher TDP and 50% faster QPI compared to a clock-comparable i7 950."

    These two statements work against each other. If OC'ing a SP xeon is relatively easy (if supported), there isn't much reason a DP xeon setup couldn't be OC'ed within reason without much effort.

    I'm not going to say this would be a common thing, but the small shops run by someone with a "tinkerer" mindset towards computing would certainly be interested in attempting to get that extra 10-20% performance, which Intel would charge another $1000 or more for, but get it for free.
  • psyq321 - Thursday, September 5, 2013 - link

    Z9PE-D8 WS has decent overclocking options (not like their consumer X79 boards, but not bad either).

    However, apart from a small BCLK bump, this is useless as SNB-EP and IVB-EP Xeons are locked.

    The best I can do with dual Xeon 2697 v2 is ~3150 MHz (I might be able to go a bit further but I did not bother) for all-core turbo.

    Even if Intel ignores the business reasons NOT to allow Xeon overclocking (to force high-performance-trading people to buy more expensive Xeons as they showed willingness to overclock and, so, potentially cannibalize market for more expensive EX parts) technically this would be a huge challenge.

    Why? Well, the 12-core Xeon 2697's power usage would literally explode if you allowed it to run at 4+ GHz with voltages normally seen in the overclocking world. I am sure the power draw of a single part would be more than 300W, so 600W for a dual-socket board.

    This is not unheard of (after all, high-end GPUs can draw comparable power) - however, this would mandate significantly higher specs for the motherboard components and put people in actual danger of fires by using inadequate components.

    Maybe when Intel moves to Haswell E/EP, when the voltage regulation becomes the CPU's business, they can find a way to allow overclocking of such huge CPUs after passing lots of checks. Otherwise, Intel runs a huge risk of being sued for causing fires.
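To put rough numbers on the power argument above, here is a minimal sketch using the usual dynamic power approximation (P ∝ C·V²·f), scaling from the 2697 v2's rated 130W; the stock voltage and the overclocked operating point are assumptions purely for illustration:

```python
# Back-of-the-envelope dynamic power: P ~ C * V^2 * f. The 130W / 2.7GHz
# baseline matches the 2697 v2's rated TDP and base clock; the ~1.0V stock
# and 4.2GHz / 1.35V overclocked points are assumptions for illustration.
def scaled_power(p_base_w, f_base_ghz, v_base, f_oc_ghz, v_oc):
    return p_base_w * (f_oc_ghz / f_base_ghz) * (v_oc / v_base) ** 2

per_socket = scaled_power(130, 2.7, 1.0, 4.2, 1.35)
print(f"~{per_socket:.0f} W per socket, ~{2 * per_socket:.0f} W for a dual-socket board")
```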
  • mapesdhs - Sunday, August 13, 2017 - link

    Four years later, who could have imagined we'd end up with Threadripper, and the mess Intel is now in? Funny old world. :D
  • stephenbrooks - Monday, September 23, 2013 - link

    --[i just hate the direction cpu's are going with low power low core count highly integrated everythiing, 5 years ago i was dreaming of 8 core cpu's being standard about now but we still have 4]--

    So I got an AMD FX8350, that's 8 cores and 4GHz before turbo. Quite a bit cheaper than Intel's too.

    OK, obviously AMD gets less operations per clock and the 8 cores only have 4 "real" FPUs between them but I wanted 8 cores to test scaling of computer programs on without breaking the bank.
