Also Launching Today: Z170 Motherboards, Dual Channel DDR4 Kits

The new Skylake processors are assigned a new socket – LGA1151. Intel’s policy since 2006 has been to maintain each socket for two generations, so the change was expected in the move from Broadwell to Skylake. This means that Skylake processors will not work in LGA1150 based motherboards, i.e. those with Intel’s 8-series and 9-series chipsets. For Skylake we get the 100-series chipsets with additional functionality. Launching today is the first member of the 100-series family, the overclocking-friendly Z170, with the other chipsets in the family to follow later in the year.

We have a large piece on the motherboards being released or talked about for Skylake, covering some 55+ products and the different variations within. The major motherboard manufacturers such as ASUS, GIGABYTE, ASRock, MSI, EVGA and a couple of others should all have a wide range ready to purchase on day one, although some models may be region specific.


The badly MSPaint’ed hybrid: MSI’s XPower Gaming Socket, GIGABYTE’s G1 Gaming IO panel, EVGA’s DRAM slots, ECS’s chipset, ASRock’s PCIe arrangement and ASUS’ Deluxe audio.

Here’s an amalgamation of some of the designs coming to end users, with almost all of them investing heavily in gaming brands with specific components to aid the user experience while gaming. Aesthetic designs are also going to be a focus of this generation, with some of the manufacturers moving into a different direction with their designs and trying some new color schemes. Some basic looking models will also be available.

Prices for Z170 motherboards will range from $80 all the way past $400, depending on feature set and size. A number of motherboards above $150 will feature a couple of USB 3.1 Gen 2 (10 Gbps) ports, although you will have to check whether they are Type-A or Type-C. That being said, most motherboards with USB 3.1 will offer both, but there are a select few that are C-only or A-only. Also over $150 we will see a lot of Intel’s new network controller, the I219-V, although the gaming lines might invest in Rivet Networks’ Killer solution instead.

Intel is launching the Alpine Ridge controller at this time as well, which is said to support USB 3.1 Gen 2, Thunderbolt 3, HDMI 2.0, DisplayPort, and DockPort. According to our sources it would seem that GIGABYTE currently has an exclusive on this technology, and it will be used for the USB 3.1 Gen 2 ports on most of their motherboard models. Other functionality from the Alpine Ridge controller (TB3, HDMI 2.0) will be on a case-by-case basis, depending on which of the controller’s two modes is used and whether extra components are added. We are told that Alpine Ridge costs about the same as the ASMedia ASM1142 controller, but will enable two USB 3.1 Gen 2 ports at 10 Gbps simultaneously, as it uses four PCIe lanes from the chipset.

We will go into the 100-series chipset in more detail on the next page, but it is worth mentioning briefly here that the link between the CPU and the chipset has been upgraded from DMI 2.0 (5 GT/s, 2 GB/s) to DMI 3.0 (8 GT/s, 3.93 GB/s), and that the chipset has a new set of high-speed I/O (HSIO) ports that allow 26 lanes to be used from it, although some lanes are limited (e.g. 20 PCIe 3.0 lanes maximum, split into five x4 controllers). Intel’s Rapid Storage Technology is upgraded as well to give up to three PCIe drives access to its features, as long as they are on the correct HSIO ports.
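As a back-of-the-envelope check, those bandwidth figures follow from the link’s transfer rate, lane count and line encoding. This sketch assumes DMI behaves like a x4 PCIe link, with 8b/10b encoding for DMI 2.0 and 128b/130b for DMI 3.0 (matching PCIe 2.0/3.0 signaling):

```python
# Effective one-way DMI bandwidth from transfer rate, lane count and
# line encoding. DMI is electrically similar to a x4 PCIe link:
# DMI 2.0 uses 8b/10b encoding, DMI 3.0 uses 128b/130b.

def dmi_bandwidth_gbs(gt_per_s, lanes, payload_bits, line_bits):
    """Effective bandwidth in GB/s (decimal gigabytes)."""
    bits_per_s = gt_per_s * 1e9 * lanes * (payload_bits / line_bits)
    return bits_per_s / 8 / 1e9

dmi2 = dmi_bandwidth_gbs(5, 4, 8, 10)     # DMI 2.0: 5 GT/s, 8b/10b
dmi3 = dmi_bandwidth_gbs(8, 4, 128, 130)  # DMI 3.0: 8 GT/s, 128b/130b
print(f"DMI 2.0: {dmi2:.2f} GB/s")  # 2.00 GB/s
print(f"DMI 3.0: {dmi3:.2f} GB/s")  # 3.94 GB/s
```

The DMI 3.0 result rounds to the 3.93 GB/s quoted above; the gain comes from both the higher transfer rate and the much lower encoding overhead of 128b/130b.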

DRAM: The March to DDR4

In the world of DRAM for personal computers, DDR3 is currently king. Having been the main standard since 2007, you would be hard pressed to find a mainstream or low end platform sold today that does not use DDR3. That changed in the enthusiast segment last year with the launch of Haswell-E, which introduced DDR4 at a high premium. For Haswell-E there was no crossover – you had no choice but to use DDR4 (unless you were a million-unit customer).

Because consumers and consumer OEMs are more price sensitive, DDR4 will see a slower transition. There is precedent here: the move from DDR2 to DDR3 saw a generation of processors that supported both standards, with the motherboard manufacturer deciding which to design for. In this transition, Skylake processors will support both DDR3L and DDR4 modules, with a few caveats.

Caveat number one is that initially, only DDR4 motherboards will be on the market. So if you upgrade now, DDR4 needs to be on the shopping list as well. We have had word of some DDR3L-only motherboards coming, as well as combo boards with DDR3L and DDR4 slots on board. Caveat one-point-five, you can use either DDR3L or DDR4, but not both at the same time.

Caveat number two, DDR3L is different to DDR3 as it operates at a lower voltage. This means that the memory controllers on Skylake most likely have a combined voltage domain, and regular DDR3 might not work (in fact early testing suggests not without reducing the voltage). Very few people currently own DDR3L DIMMs, so the likelihood of a user performing an upgrade while reusing their RAM might be slim.

Caveat number three: prices of DDR4 have dropped significantly since last year, and there is only a small premium over DDR3. The benefits of DDR4 include a lower operating voltage, a more stable design, and the ability to purchase 16GB modules with ease. That means that a Skylake platform will happily take 64GB of memory.

With that last point, we should point out that Skylake is a dual memory channel architecture, supporting two memory modules per channel. This gives a maximum of four DDR4 modules, and 4×16GB = 64GB maximum.

We have been told that Skylake’s memory controller, compared to previous generations, is absolutely golden at higher speed memory support. By default Skylake supports the JEDEC standard for DDR4, 2133 MT/s at a latency of 15-15-15, but the overclocking guides we have received suggest that all processors should be able to reach DDR4-3200 relatively comfortably, with a few processors in the right motherboards going for DDR4-4000. While this should bode well for integrated graphics users, those high end kits are typically very expensive.
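To put those speed grades in context, absolute (first-word) memory latency depends on both the data rate and the CAS latency: the DRAM clock runs at half the transfer rate, so one cycle takes 2000/(MT/s) nanoseconds. A quick sketch, using the JEDEC default above and CL16 as an assumed (typical) rating for a retail DDR4-3200 kit:

```python
# First-word CAS latency in nanoseconds. The DDR clock is half the
# transfer rate, so one clock cycle lasts 2000 / (MT/s) nanoseconds.
def cas_latency_ns(mt_per_s, cl):
    return cl * 2000 / mt_per_s

print(f"DDR4-2133 CL15: {cas_latency_ns(2133, 15):.2f} ns")  # JEDEC default
print(f"DDR4-3200 CL16: {cas_latency_ns(3200, 16):.2f} ns")  # assumed OC kit
```

Despite the higher CL number, the faster kit ends up with lower absolute latency (roughly 10 ns versus 14 ns) on top of its bandwidth advantage.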

We currently have dual channel kits in for testing from a number of DRAM companies, and plan a memory scaling article within the next few weeks to see exactly how performance scales on Skylake. In the meantime, as part of this review, we were able to source a closed beta variant of a combination DDR3L/DDR4 motherboard for Skylake and have included a test comparing the two.

477 Comments

  • SkOrPn - Tuesday, December 13, 2016 - link

    Well if you were paying attention to AMD news today, maybe you partially got your answer finally. Jim Keller yet again to the rescue. Ryzen up and take note... AMD is back...
  • CaedenV - Wednesday, August 5, 2015 - link

    Agreed, seems like the only way to get a real performance boost is to up the core count rather than waiting for dramatically more powerful single-core parts to hit the market.
  • kmmatney - Wednesday, August 5, 2015 - link

    If you have an overclocked Sandy Bridge, it seems like a lot of money to spend (new motherboard and memory) for a 30% gain in speed. I personally like to upgrade my GPU and CPU when I can get close to double the performance of the previous hardware. It's a nice improvement here, but nothing earth-shattering - especially considering you need a new motherboard and memory.
  • Midwayman - Wednesday, August 5, 2015 - link

    And right as dx12 is hitting as well. That sandy bridge may live a couple more generations if dx12 lives up to the hype.
  • freaqiedude - Wednesday, August 5, 2015 - link

    agreed I really don't see the point of spending money for a 30% speedbump in general, (as its not that much) when the benefit in games is barely a few percent, and my other workloads are fast enough as is.

    If Intel would release a mainstream hexa/octa core I would be all over that, as the things I do that are heavy are all SIMD and thus fully multithreaded, but I can't justify a new pc for 25% extra performance in some area's. with CPU performance becoming less and less relevant for games that atleast is no reason for me to upgrade...
  • Xenonite - Thursday, August 6, 2015 - link

    "If Intel would release a mainstream hexa/octa core I would be all over that, as the things I do that are heavy are all SIMD and thus fully multithreaded, but I can't justify a new pc for 25% extra performance in some area's."

    SIMD actually has absolutely nothing to do with multithreading. SIMD refers to instruction-level parallelism, and all that has to be done to make use of it, for a well-coded app, is to recompile with the appropriate compiler flag. If the apps you are interested in have indeed been SIMD optimised, then the new AVX and AVX2 instructions have the potential to DOUBLE your CPU performance. Even if your application has been carefully designed with multi-threading in mind (which very few developers can, let alone are willing to, do) the move from a quad core to a hexa core CPU will yield a best-case performance increase of less than 50%, which is less than half what AVX and AVX2 brings to the table (with AVX-512 having the potential to again provide double the performance of AVX/AVX2).

    Unfortunately it seems that almost all developers simply refuse to support the new AVX instructions, with most apps being compiled for >10 year old SSE or SSE2 processors.

    If someone actually tried, these new processors (actually Haswell and Broadwell too) could easily provide double the performance of Sandy Bridge on integer workloads. When compared to the 900-series Nehalem-based CPUs, the increase would be even greater and applicable to all workloads (integer and floating point).
  • boeush - Thursday, August 6, 2015 - link

    Right, and wrong. SIMD are vector based calculations. Most code and algorithms do not involve vector math (whether FP or integer). So compiling with or without appropriate switches will not make much of a difference for the vast majority of programs. That's not to say that certain specialized scenarios can't benefit - but even then you still run into a SIMD version of Amdahl's Law, with speedup being strictly limited to the fraction of the code (and overall CPU time spent) that is vectorizable in the first place. Ironically, some of the best vectorizable scenarios are also embarrassingly parallel and suitable to offloading on the GPU (e.g. via OpenCL, or via 3D graphics APIs and programmable shaders) - so with that option now widely available, technologically mature, and performant well beyond any CPU's capability, the practical utility of SSE/AVX is diminished even further. Then there is the fact that a compiler is not really intelligent enough to automatically rewrite your code for you to take good advantage of AVX; you'd actually have to code/build against hand-optimized AVX-centric libraries in the first place. And lastly, AVX 512 is available only on Xeons (Knights Landing Phi and Skylake) so no developer targeting the consumer base can take advantage of AVX 512.
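The vectorized version of Amdahl's Law described in the comment above is easy to sketch: if a fraction f of the runtime is vectorizable and the vector units speed up that fraction by a factor s, the overall speedup is 1 / ((1 - f) + f / s). The fractions below are illustrative assumptions, not measurements of any real workload:

```python
# Amdahl's Law applied to vectorization: only the vectorizable fraction
# of the runtime benefits from the SIMD speedup.
def amdahl(f, s):
    """Overall speedup when fraction f of runtime is accelerated by s."""
    return 1.0 / ((1.0 - f) + f / s)

# AVX2 processes 4x 64-bit (or 8x 32-bit) elements per instruction,
# so s = 4 is a rough ceiling for 64-bit integer/FP code.
for f in (0.25, 0.50, 0.90):
    print(f"{f:.0%} vectorizable, 4-wide SIMD: {amdahl(f, 4):.2f}x")
```

Even with 90% of the runtime vectorizable, a 4-wide SIMD unit yields only about a 3.1x overall speedup, which is the limit the comment is pointing at.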
  • Gonemad - Wednesday, August 5, 2015 - link

    I'm running an i7 920 and was asking myself the same thing, since I'm getting near 60-ish FPS on GTA 5 with everything on at 1080p (more like 1920 x 1200), running with a R9 280. It seems the CPU would be holding the GFX card back, but not on GTA 5.

    Warcraft - who could have guessed - is getting abysmal 30 FPS just standing still in the Garrison. However, system resources shows GFX card is being pushed, while the CPU barely needs to move.

    I was thinking perhaps the multicore incompatibility on Warcraft would be an issue, but then again the evidence I have shows otherwise. On the other hand, GTA 5, that was created in the multicore era, runs smoothly.

    Either I have an aberrant system, or some i7 920 era benchmarks could help me understand what exactly do I need to upgrade. Even specific Warcraft behaviour on benchmarks could help me, but I couldn't find any good decisive benchmarks on this Blizzard title... not recently.
  • Samus - Wednesday, August 5, 2015 - link

    The problem now with nehalem and the first gen i7 in general isn't the CPU, but the x58 chipset and its outdated PCI express bus and quickpath creating a bottleneck. The triple channel memory controller went mostly unsaturated because of the other chipset bottlenecks which is why it was dropped and (mostly) never reintroduced outside of enthusiast x99 quad channel interface.

    For certain applications the i7 920 is, amazingly, still competitive today, but gaming is not one of them. An SLI GTX 570 configuration saturates the bus, I found out first hand that is about the most you can get out of the platform.
  • D. Lister - Thursday, August 6, 2015 - link

    Well said. The i7 9xx series had a good run, but now, as an enthusiast/gamer in '15, you wouldn't want to go any lower than Sandy Bridge.
