Combining 10nm CPU + 14nm Chipset

As with Intel’s previous low-power (sub 28W) mobile designs, the company will be combining the CPU and the chipset onto a single package. This typically helps in enabling a smaller motherboard design for thin and light systems that might want to dedicate that area to additional battery, or simply save the footprint altogether. The flip side is that there are more pin-outs, and the board design has to be suitable for the increased signal traces, but this is nothing new. For Ice Lake-U, Intel has stated that the CPU, which is being built on their second-generation 10nm "10+" process, will be paired with a new generation ‘14nm class’ chipset.

As stated on the previous page, the CPU will offer up to four cores, eight threads, go up to 4.1 GHz, provide integrated TB3 (more on that later), and a vast improvement to the graphics capabilities (also more on that later).

One of the big updates to the design will be enabling support for both DDR4 and LPDDR4/LPDDR4X on a single memory controller. Up until this point, only Intel’s latest Atom designs had LPDDR4 enabled, so this is a welcome addition to the main high-performance product line. In this instance Intel will support LPDDR4/X with up to four 32-bit channels at 3733 MT/s for a maximum of 32 GB, or if a user needs more capacity, up to 64 GB of DDR4-3200 with two 64-bit channels. The LPDDR4/X implementation allows for lower idle power states, saving power and battery life when the memory is not in use, although it trades this against maximum capacity. It’s an interesting tradeoff for sure. The peak power consumption of both modes in an equally populated system, given the respective frequency difference, should be similar.
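For a sense of scale, a quick back-of-the-envelope calculation (our own sketch using the bus widths and transfer rates quoted above, not Intel-provided figures) shows why the two options end up close in peak bandwidth: both resolve to a 128-bit wide bus, leaving capacity and idle power as the real differentiators.

```python
# Peak theoretical memory bandwidth for Ice Lake-U's two memory options,
# derived from the bus widths and transfer rates quoted above.
# Real-world throughput will be lower.

def peak_bandwidth_gbs(transfer_rate_mts: int, channels: int, channel_width_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers per second multiplied by total bus width in bytes."""
    bus_width_bytes = channels * channel_width_bits // 8
    return transfer_rate_mts * bus_width_bytes / 1000  # MT/s * bytes = MB/s; /1000 = GB/s

lpddr4x = peak_bandwidth_gbs(3733, channels=4, channel_width_bits=32)  # 4 x 32-bit channels
ddr4    = peak_bandwidth_gbs(3200, channels=2, channel_width_bits=64)  # 2 x 64-bit channels

print(f"LPDDR4X-3733 (4 x 32-bit): {lpddr4x:.1f} GB/s, up to 32 GB")
print(f"DDR4-3200    (2 x 64-bit): {ddr4:.1f} GB/s, up to 64 GB")
# LPDDR4X-3733 (4 x 32-bit): 59.7 GB/s, up to 32 GB
# DDR4-3200    (2 x 64-bit): 51.2 GB/s, up to 64 GB
```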

One element that Intel hasn’t talked about is its ‘new converged chassis fabric’, which we believe to be a scalable power and environment control fabric that connects chips in order both to transfer data and to manage how and where power is applied within a system. The fact that Intel calls this ‘IP and Core scalable’ suggests that if vendors want to add additional supported chips to the system (say, a future Intel AI chip), that might be possible and could be controlled through this fabric. It will be interesting to see if Intel divulges any more of this information.

Not mentioned in this slide are the new performance enhancements, such as AVX-512 and other new instructions, which we’ll cover in detail a bit later.

The connection to the chipset is through a DMI 3.0 x4 link, as with current generations of Intel products.

The new ‘14nm class’ chipset is also an update from Intel’s 8th Gen Core mobile processors. It does not yet have a series designation, but given the updates, this is likely to be a member of the 400-series chipset line.

With the new chipset, Intel has updated the proprietary CNVi protocol on the die to support CNVi 2, which is found on the latest Wi-Fi 6 Intel-Only RF modules, such as AX201. This update to a second generation implementation is likely to be for throughput reasons, given the increased bandwidth that Wi-Fi 6 provides over Wi-Fi 5.
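To illustrate why the CNVi link needed a second generation, here is a rough comparison of peak PHY rates for typical 2x2 laptop Wi-Fi configurations. These figures come from the Wi-Fi specifications rather than Intel’s slides, and the 2x2/160 MHz configuration is the rated maximum for modules like the AX201.

```python
# Rough peak PHY-rate comparison for typical 2x2 laptop Wi-Fi modules.
# These are link-layer maximums, not real-world throughput.

peak_phy_mbps = {
    "Wi-Fi 5 (802.11ac), 2x2, 80 MHz, 256-QAM":    867,
    "Wi-Fi 6 (802.11ax), 2x2, 160 MHz, 1024-QAM": 2402,   # e.g. Intel AX201
}

baseline = peak_phy_mbps["Wi-Fi 5 (802.11ac), 2x2, 80 MHz, 256-QAM"]
for config, rate in peak_phy_mbps.items():
    print(f"{config}: {rate} Mbps ({rate / baseline:.1f}x)")

# The roughly 2.8x jump in peak PHY rate is the sort of headroom a
# second-generation CNVi implementation would need to carry.
```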

Intel also promotes better integrated power delivery on the chipset, meaning that the PCH can share the same power rail as the CPU, reducing the number of on-board components inside the mobile system. This should reduce cost, and hopefully the thermal density of the power delivery on the chip won’t have a negative effect. Also on the power side, Intel states that the audio DSP on the chipset has been improved for lower power operation.

On the IO side of the chipset, Ice Lake will support up to six ports of USB 3.1 (which is now USB 3.2 Gen 1 at 5 Gbps), or 10 USB 2.0 ports, or a mix. There are also 16 HSIO lanes which are broadly presented as PCIe 3.0, although given previous HSIO models it is likely that certain combinations of ports will be required for specific features. However this should afford at least two, if not three, potential PCIe 3.0 x4 NVMe drives in these systems. For SATA, there is access to three direct ports, but there is also an eMMC 5.1 link. Intel does not state if UFS is supported for super-small form factor systems.
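As a rough illustration of why multiple NVMe drives would contend with each other, here is a quick calculation (our own sketch using standard PCIe 3.0 signaling math, and treating the chipset uplink as x4-class per the description above): each x4 drive can burst to just under 4 GB/s, but everything hanging off the chipset ultimately shares a single x4-class uplink back to the CPU.

```python
# Peak one-direction PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b encoding.
# Real-world throughput will be lower due to protocol overhead.

PCIE3_GTS = 8.0          # giga-transfers per second, per lane
ENCODING  = 128 / 130    # 128b/130b line coding

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Peak PCIe 3.0 payload bandwidth in GB/s for a given lane count."""
    return PCIE3_GTS * ENCODING * lanes / 8   # Gbit/s -> GB/s

print(f"One NVMe drive (x4):       {pcie3_bandwidth_gbs(4):.2f} GB/s")
print(f"Chipset uplink (x4-class): {pcie3_bandwidth_gbs(4):.2f} GB/s")

# Two or three x4 drives can be wired up, but when they talk to the CPU or
# main memory at the same time they share that ~3.94 GB/s uplink.
```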

Ice Lake is not Intel’s first crack at 10nm

We documented the history of Intel’s 10nm process with our review of the first 10nm CPU to come from the company. Our Core i3-8121U analysis of the Cannon Lake design is one Intel would rather forget – the manufacturing process didn’t yield well, the frequency wasn’t there, the power consumption was too high, and the graphics were completely fused off. But hey, Intel still shipped it to customers by the end of 2017, and it eventually found its way into some Chinese educational notebooks and some NUCs, satisfying promises made to shareholders.

Fast forward almost 18 months, and we’re about to see Intel’s second take on 10nm with Ice Lake. Under Intel’s nomenclature, Ice Lake is technically being built with the ‘10+’ process, which is meant to be an improved version of the 10nm used in Cannon Lake. The fact that more parts of the CPU are now working is what earns it that plus. Intel pointed out that an extra metal layer in the manufacturing process (going from 12 to 13 layers) has helped with power delivery and frequency, while new transistor library designs and MIM caps have helped with voltages. Intel is also using a thin magnetic inductor array, and a recess in the motherboard design, to reduce package z-height.

As for Intel’s future, the company has stated at events that it is now developing future process nodes with these ‘BKM’ (best known methods) updates built into the lifecycle of the node, such as 10+ and 10++ as part of the family of 10nm processes, and then beyond to 7nm with 7, 7+, and 7++ over the next few years. It’s worth noting that while Intel has officially shipped a first-generation 10nm product, is promoting Ice Lake as a ‘10+’ second-generation 10nm product, and has specified a number of 10nm-class products up and down its product portfolio, we are still waiting on a formal presentation of how Intel’s 10nm process works, rather than bits and pieces from events like ISSCC and IEDM. What we have pieced together from those events is likely out of date for 10+, however we do know that for 10nm Intel has three different libraries based on performance, each with different transistor densities: from 67 million transistors/mm² for ultra-high performance, to 101 million transistors/mm² for high-density logic. You can read more about it here.
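As a trivial worked example of what those library densities mean (the 10 mm² block size below is an arbitrary illustration, not a real Intel die measurement), the same area of silicon holds roughly 50% more transistors in the high-density library:

```python
# Illustrative transistor counts for a hypothetical 10 mm^2 logic block under
# the two 10nm library densities quoted above. The block size is arbitrary.

libraries_mtr_per_mm2 = {
    "ultra-high performance": 67.0,    # million transistors per mm^2
    "high-density logic":     101.0,   # million transistors per mm^2
}

block_area_mm2 = 10.0  # hypothetical block, not a real die measurement

for name, density in libraries_mtr_per_mm2.items():
    transistors_millions = density * block_area_mm2
    print(f"{name}: ~{transistors_millions:.0f}M transistors in {block_area_mm2:.0f} mm^2")

# ultra-high performance: ~670M transistors in 10 mm^2
# high-density logic:     ~1010M transistors in 10 mm^2
```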

Comments

  • repoman27 - Tuesday, July 30, 2019 - link

    “Each CPU has 16 PCIe 3.0 lanes for external use, although there are actually 32 in the design but 16 of these are tied up with Thunderbolt support.”

    This isn’t quite right. The ICL-U/Y CPU dies do not expose any PCIe lanes externally. They connect to the ICL PCH-LP via OPI and the PCH-LP exposes up to 16 PCIe 3.0 lanes in up to 6 ports via HSIO lanes (which are shared with USB 3.1, SATA 6Gbps, and GbE functions). So basically no change over the 300 Series PCH.

    The integrated Thunderbolt 3 host controller may well have a 16-lane PCIe back end on-die, and I’m sure the CPU floorplan can accommodate 16 more lanes for PEG on the H and S dies, but that’s not what’s going on here.
  • voicequal - Friday, August 2, 2019 - link

    The SoC architecture shows a direct path for the Thunderbolt 3 PCIe lanes to the CPU, with only USB 2.0 going across OPI. Whatever PCIe lanes are available on the PCH are in addition to those available via TB3.

    https://images.anandtech.com/doci/14514/Blueprint%...
  • repoman27 - Tuesday, August 6, 2019 - link

    The Thunderbolt 3 controller is part of the CPU die. There are four PCIe 3.0 x4 root ports connected to the CPU fabric that feed the Thunderbolt protocol converters connected to the Thunderbolt crossbar switch (the Converged I/O Router block in that diagram). The CPU exposes up to three (for Y-Series) or four (for U-Series) Thunderbolt 3 ports. The only way you can leverage the PCIe lanes on the back-end of the integrated Thunderbolt 3 controller is via Thunderbolt.

    The PCH is a separate die on the same package as the CPU die. The two are connected via an OPI x8 link operating at 4 GT/s which is essentially the equivalent of a PCIe 3.0 x4 link. The PCH contains a sizable PCIe switch internally which connects to the back-ends of all of the included controllers and also provides up to 16 PCIe 3.0 lanes in up to 6 ports for connecting external devices. These 16 lanes are fed into a big mux which Intel refers to as a Flexible I/O Adapter (FIA) along with all the other high-speed signals supported by the PCH including USB 3.1, SATA 6Gbps, and GbE to create 16 HSIO lanes which are what is exposed by the SoC. So there are up to 16 PCIe lanes available from the Ice Lake SoC package, all of which are provided by the PCH die, but they come with the huge asterisk that they are exposed as HSIO lanes shared with all of the other high-speed signaling capabilities of the PCH and provisioned by a PCIe switch that effectively only has a PCIe 3.0 x4 connection to the CPU.

    This is not at all what Ian seemed to be describing, but it is the reality.

    And the USB 2.0 signals for the Thunderbolt 3 ports do indeed come from the PCH, but they do not cross the OPI, they're simply routed from the SoC package directly to the Thunderbolt port. The Thunderbolt 3 host controller integrated into the CPU includes a USB 3.1 xHCI/xDCI but does not include a USB 2.0 EHCI.
  • poohbear - Tuesday, July 30, 2019 - link

    I was looking at buying Dell's XPS 15.6" (7590 model), but with Project Athena laptops a few months away, I think I'll wait. Intel parts for solid reliability and unified drivers, and "4 hours of battery life with <30 min of charging": those two on their own make the wait worth it for me!
  • repoman27 - Tuesday, July 30, 2019 - link

    “The connection to the chipset is through a DMI 3.0 x4 link...”

    Should be OPI x8 for U/Y Series.

    “...Ice Lake will support up to six ports of USB 3.1 (which is now USB 3.2 Gen 1 at 5 Gbps)...”

    They’re USB 3.1 Gen 2 ports, so it’s six USB 3.2 Gen 2 x 1 (10 Gbit/s) ports.
  • Roel9876 - Tuesday, July 30, 2019 - link

    Well, for one, it is certainly not realistic to run single-threaded benchmarks on applications that support multi-threading. Realistically, most (all?) people will run the application multi-threaded?
  • HStewart - Tuesday, July 30, 2019 - link

    As a developer for many years, I've found multiple threads useful for handling utility work and such, but IO is typically an area that still has to be single-threaded. Unless the API has changed significantly, it is very difficult to multi-thread the actual screen drawing. And it's similar for disk IO as a resource.
  • Arnulf - Tuesday, July 30, 2019 - link

    "Our best guess is that these units assist Microsoft Cortana for low-powered wake-on voice inference algorithms ..."

    Our best guess is that these are designed for use by assorted three-letter agencies.
  • PeachNCream - Tuesday, July 30, 2019 - link

    Open mics are totally okay. There is absolutely no privacy risk to you at all and you should never give it a second thought.
  • ToTTenTranz - Tuesday, July 30, 2019 - link

    With 4x TB3 connections available, I wonder if the maker of an external GPU box could develop a multiplexer that combined two TB3 connections into a PCIe 3.0 x8 link.

    This would significantly alleviate some of the problems that eGPU owners are having due to relatively low CPU-GPU bandwidth.
