At next week’s annual VLSI Symposium, Intel will be presenting a pair of highly anticipated papers about their progress with their upcoming PowerVia chip fabrication technology – the company’s in-development implementation of backside power delivery networks. Together with Intel’s RibbonFET technology for gate-all-around transistors, PowerVia is slated to serve as half of Intel’s big one-two punch to the rest of the silicon lithography industry – two major chip technologies, introduced together, that Intel believes will vault them back into the fab leadership position. Combined, the two technologies are going to be the backbone of Intel’s “angstrom” era fab nodes, which go into high volume manufacturing next year, making Intel’s progress with the new technologies a subject of great importance both inside and outside of the company – and one which Intel wants to address.

When it comes to Intel’s chip manufacturing technology, the stakes for Intel’s R&D groups couldn’t be any higher than they are right now. The long-time leader of the fab world has faltered – repeatedly – and is now in the middle of a multi-year effort to course-correct, not only to win back the leadership position they’ve lost, but to break into the contract chip manufacturing business in a big way.

As a result, while VLSI research papers do not normally attract a ton of outside attention, this year in particular is a big exception. With RibbonFET and PowerVia set to go into production next year, Intel is reaching the point where they’re wrapping up R&D work on the first generation of those technologies. That means that, to their peers in the VLSI industry, Intel can now present their first findings around producing a complex logic test chip. And to Intel’s investors and other outsiders, Intel can show the first real evidence that their efforts to get back on track may very well be succeeding, giving the company the opening to leap ahead of the competition that it so greatly needs.

To that end, at next week’s symposium Intel is going to be disclosing a great deal of information around their implementation of backside power delivery network technology, which they call PowerVia. Central to these papers is Blue Sky Creek, an Intel “product-like” logic test chip that implements backside power delivery on the EUV-enabled Intel 4 process technology. With Blue Sky Creek, Intel intends to demonstrate that not only do they have PowerVia working with a CPU on time for high volume manufacturing next year, but that the performance and manufacturing benefits of backside power delivery are everything Intel has promised. Suffice it to say, Intel is preparing for this year’s VLSI conference to be a very big moment for the company.

Backgrounder: Backside Power Delivery

Backside power delivery networks (BS-PDNs) are a technology concept that has been quietly in development across the entire chip fab industry for the past several years. Similar to EUV in recent fab nodes, BS-PDN is seen as an essential technology for continuing to develop ever-finer process nodes, and as a result, it’s expected that all leading-edge chip fabs will move to the technology in the future.

With that said, they are not all moving there at the same time. Intel expects to be the first of the big three chip fabs to productize the technology, bringing it into chips at least two years ahead of its competitors. As a result, Intel is very much the pathfinder for the technology, which comes with no shortage of risks – but it also comes with significant rewards for getting the technology (and their timing) correct. Intel believes that this will be their new FinFET moment – a reference to the company’s introduction of FinFETs on 22nm in 2012, which cemented Intel’s lead in the market for several years. This is particularly important as Intel won’t have a lead over its competitors on gate-all-around FET (GAAFET) timing, so BS-PDN/PowerVia is going to be Intel’s trump card in the fab space for the next few years.

Meanwhile, to fully understand how BS-PDN works, the best place to start is to recap how traditional (frontside) power delivery works, so that we can contrast that to the many changes that come with building a BS-PDN wafer.

The manufacturing process of a modern, frontside power delivery chip starts with etching the transistor layer, M0. This is the smallest and most intricate layer on a die, and is where high-precision tools like EUV and multi-patterning are needed the most. In short, it’s the most expensive and complex layer of the chip, which has significant ramifications for both how chips are constructed and how they are tested.

Above that, additional metal layers are added at increasing sizes to account for all the wiring needed between the transistors and the different parts of the processor (cache, buffers, accelerators), as well as to provide routing for the power that comes in from even further up the stack. Intel likens this to building a pizza – a crude analogy, but an effective one.

A modern high-performance processor typically has anywhere from 10 to 20 metal layers in its design. With the Intel 4 process in particular, there are 16 layers for logic, ranging from a pitch of 30 nm up to 280 nm. And then there are another two “giant metal” layers on top of that, used just for power routing and placing external connectors.
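To make that stack a bit more concrete, here is a minimal sketch of the idea in Python. Only the 30 nm and 280 nm endpoint pitches, the 16 logic layers, and the two giant metal layers come from Intel’s disclosures; the interpolated intermediate pitches are our own illustrative placeholders, not Intel’s actual figures:

    LOGIC_LAYERS = 16
    M0_PITCH_NM = 30.0
    TOP_PITCH_NM = 280.0

    def illustrative_pitch(layer: int) -> float:
        """Geometrically interpolate a pitch between M0 (30 nm) and the top
        logic layer (280 nm). Purely illustrative placeholder values."""
        return M0_PITCH_NM * (TOP_PITCH_NM / M0_PITCH_NM) ** (layer / (LOGIC_LAYERS - 1))

    # 16 logic layers, from M0 (finest) up through M15 (coarsest)...
    for layer in range(LOGIC_LAYERS):
        print(f"M{layer}: ~{illustrative_pitch(layer):.0f} nm pitch")
    # ...plus two "giant metal" layers just for power and external connectors.
    for giant in ("GM0", "GM1"):
        print(f"{giant}: giant metal (power routing / external connectors)")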

Once a die has been fully fabbed and polished, the die is then flipped over (making this a flip chip) so that the chip can talk to the outside world. This flip places all the connectors (power and data) at the now-bottom of the chip, while the transistors end up at the top of the chip. Once flip chip manufacturing was mastered, this made chip debugging and cooling especially convenient, as it allowed debugging tools easy access to the all-important transistor layer. Meanwhile those hot little gates also became especially close to the chip’s cooler, allowing for the relatively easy transfer of heat out and away from a chip.

The drawback with frontside power delivery, however, is that it means both power and signal wires are on the same side of the chip. Both sets of wires have to take a significant trip down through 15+ layers to reach the transistors, all the while competing for valuable space and creating interference with each other. For the power wires in particular this is a real nuisance, as the resistance along these wires causes the voltage of the incoming power to sag – what’s known as the IR drop (or droop) effect.
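To put a rough number on IR droop – using entirely hypothetical values for illustration, not figures from Intel’s papers – Ohm’s law gives the voltage lost along a power route:

    # Back-of-the-envelope IR drop through a frontside power stack.
    # All values here are hypothetical, purely to illustrate the effect.
    rail_current_a = 10.0          # current drawn through a power route (A)
    route_resistance_ohm = 0.005   # cumulative wire resistance (5 mOhm)
    v_drop = rail_current_a * route_resistance_ohm
    print(f"IR drop: {v_drop * 1000:.0f} mV")            # -> 50 mV
    supply_v = 1.0                 # nominal rail voltage (V)
    print(f"Loss on a {supply_v:.1f} V rail: {v_drop / supply_v:.0%}")  # -> 5%

A sag of that size either has to be compensated for with a higher input voltage (and thus higher power), or paid for in reduced frequency headroom.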

For most of the history of chip fabbing, this hasn’t been a huge issue. But like so many other aspects of chip building, the problem has become more and more pronounced as chip feature sizes have shrunk. There is no obvious hard limit here on frontside power delivery, but given how much harder it is to shrink chips with every successive generation, the problem has become too big (or rather, too expensive) to work around.

And that brings us to backside power delivery. If having both signals and power coming in on the same side of the chip is causing problems, why not separate the two? This, in a nutshell, is exactly what backside power delivery looks to resolve, by moving all of the power connections to the other side of the wafer.

For Intel’s PowerVia implementation of this concept, Intel quite literally flips a wafer upside down, and polishes away almost all of the remaining silicon until they reach the bottom of the transistor layer. At that point, Intel then builds the metal layers for power delivery on the opposite side of the chip, similar to how they would have previously built them on the front side of the chip. The net result is that Intel ends up with what’s essentially a double-sided chip, with power delivery on one side and signaling on the other.

On paper (and in Intel’s paper), there are ultimately several benefits to moving to BS-PDN. First and foremost, this has a meaningful impact on simplifying the construction of a chip. We’ll get to Intel’s specific claims and findings in a bit, but of particular note here is that it allowed Intel to relax the density of their M0 metal layer. Rather than requiring a 30 nm pitch on Intel 4, Intel’s test node for Intel 4 + PowerVia allowed for a 36 nm pitch. That directly simplifies the most complex and expensive processing step of the entire chip, rolling it back to something closer in size to the Intel 7 process.
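The arithmetic behind that relaxation is simple, but the implication is not small – it is a 20% looser pitch on the single most difficult layer of the chip:

    # M0 pitch relaxation, Intel 4 vs. the Intel 4 + PowerVia test node.
    intel4_m0_nm = 30.0
    powervia_m0_nm = 36.0
    relaxation = powervia_m0_nm / intel4_m0_nm
    print(f"M0 pitch relaxation: {relaxation:.2f}x ({relaxation - 1:.0%} looser)")  # -> 1.20x (20% looser)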

BS-PDN is also primed to deliver some modest performance improvements to chips. Shortening the route for power delivery to the transistors with something more direct helps to counteract the IR droop effect, resulting in better power delivery to the transistor layer. And getting all of those power delivery wires out of the signal layers improves the performance of the signal layers as well, eliminating the power interference and giving chip designers more room to optimize their designs.

Otherwise, the trade-off for all of this is primarily in losing the previously mentioned perks of building a frontside chip. The transistor layer is now roughly in the middle of the chip, rather than at the end. This means that traditional debugging tools can’t directly poke the transistor layer of a completed chip to test it, and there are now 15 or so layers of signal wires between the transistor layer and the cooling surface. These aren’t insurmountable challenges, as Intel’s paper takes care to lay out, but they were issues that Intel had to work around in their design.
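One way to reason about the cooling half of that trade-off is as thermal resistances in series: every layer of material between the transistors and the cooler adds its own term to the junction temperature rise. A minimal sketch, with entirely hypothetical values:

    # Junction temperature rise as a sum of series thermal resistances.
    # All values below are hypothetical, purely to illustrate the trade-off.
    power_w = 100.0  # chip power dissipation (W)
    r_theta_k_per_w = {
        "signal stack (transistors -> frontside)": 0.02,
        "carrier wafer": 0.03,
        "heatspreader + cooler": 0.15,
    }
    delta_t = power_w * sum(r_theta_k_per_w.values())
    print(f"Junction rise over ambient: {delta_t:.0f} K")  # -> 20 K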

Manufacturability is the other set of trade-offs involved in moving to BS-PDN. Building power layers on the backside of a die is something that’s never been done before, which increases the chance of something going wrong. So not only does power delivery need to work, but it needs to work without significantly reducing chip yields or otherwise making chips less reliable. But if all of these things pan out, then the extra effort of building power layers on the backside of a wafer is more than cancelled out by the time and cost savings of not having to route power in through the front side.

Intel’s Added Wrinkles: Carrier Wafers & Nano TSVs

Because backside power delivery networks are going to become an industry-standard feature in due time, we’ve tried not to focus too much on Intel’s specific implementation of BS-PDN/PowerVia. But now that we’ve covered the fundamentals of BS-PDN, there are a couple of Intel-specific implementation details worth noting.

First and foremost, Intel is using a carrier wafer as part of their construction process in order to provide chip rigidity. Bonded to the front side of a PowerVia wafer after the fabrication of that side is completed, the carrier wafer is a dummy wafer to help support the chip while Intel is blasting away at the other side. Because the two-sided chipmaking process polishes away so much of the remaining silicon wafer, there isn’t much structural silicon to hold the whole thing together. As Intel lightly jokes, despite this being silicon lithography, there’s only a tiny amount of silicon left on the wafer by the end.

That carrier wafer, in turn, remains as part of the chip for the rest of its life. Once chip fabbing is completed, Intel can polish the bonded carrier wafer down to the necessary thickness. Of note, because the carrier wafer is on the signal side of the chip, this means it presents another layer of material between the transistors and the cooler. Intel’s techniques to improve heat transfer take this into account, but for PC enthusiasts accustomed to transistors at the top of their chip, this is going to be a significant change.

Intel’s other notable detail with their implementation of BS-PDNs is the use of TSVs (through-silicon vias) for power routing. In PowerVia, there are nano-scale TSVs (aptly named Nano TSVs) in the transistor layer of the chip. This is in contrast to buried power rails, which is what industry pioneer IMEC has been looking into with their BS-PDNs.

The long and short of it is that while buried power rails still require going up and over the transistor layer to deliver power, the use of TSVs allows power to be delivered more directly to the transistor layer. For Intel this is a technological advantage that they’re keen to exploit, as it avoids having to design and build in the routing required for buried power rails.
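Putting all of the above together, the double-sided flow can be summarized as an ordered list of steps. This is our own simplified paraphrase of the process as described; the step names are informal shorthand, not Intel’s terminology:

    # A simplified, conceptual PowerVia-style process flow, paraphrasing the
    # description above. Step names are informal shorthand, not Intel's terms.
    POWERVIA_FLOW = [
        "Fabricate transistors and the frontside signal metal layers (M0 and up)",
        "Bond a dummy carrier wafer to the finished frontside for rigidity",
        "Flip the wafer over and polish away nearly all of the bulk silicon",
        "Expose the bottom of the transistor layer and its nano TSVs",
        "Build the backside power delivery metal layers",
        "Polish the carrier wafer down to its final thickness",
    ]
    for number, step in enumerate(POWERVIA_FLOW, start=1):
        print(f"{number}. {step}")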

Comments

  • DaRagingLunatic - Monday, June 5, 2023 - link

    Great article. Very informative, and I’m highly impressed. These are the types of articles that keep me coming back to AnandTech. Thanks very much. I hope backside power delivery goes well for Intel, and from this article it appears that it will; it’s also nice to get that 6% frequency boost just from a change in power delivery. With the Intel 4 M0 pitch being 30nm, backside power delivery let them relax it to 36nm. Really makes me wonder just how wide they need to keep these pitches as the die shrinks continue, just how many transistors per mm2 we are going to run up to, and where the shrinking will stop.
  • name99 - Monday, June 5, 2023 - link

    As usual from an Intel presentation, you get an accurate (kinda...) story about what INTEL is doing, and a completely distorted story about how this relates to the rest of the world...

    As far as the rest of the world goes
    https://www.imec-int.com/en/articles/imec-demonstr...
    Points being
    - imec (and through them, essentially TSMC and SS) already demonstrated this tech in 2021. This includes Intel's precious nanoTSVs that they're trying hard to pretend are tech exclusive to them. What's known of TSMC is that their roadmap starts with BSPDN but NOT nano-TSVs; those come a year later. But that reflects TSMC's usual caution, one step at a time, rather than that they don't know about nTSVs or can't manufacture them.

    - the big win with BSPDN is not the power/voltage droop improvements. Those are nice, but only matter if you're already driving your chip at crazy power levels. No, the big win is relieving routing congestion in M1. It is this routing congestion that has throttled the down-scaling of SRAM in recent processes; and the expectation is that by moving to BSPDN (even the initial, simpler, non-nTSV version chosen by TSMC for gen1) SRAM scaling can resume. BUT for that to be of value, you have to be willing to keep the rest of your metal network in place, as low-pitch as before...

    In other words, the issue is not "Intel smart, TSMC stupid"; it is priorities.
    Intel's priority (for better or worse) is to be able to crank up the *effective* power and frequency of their chips even more. For them, right now, BSPDN matters insofar as it allows slightly lower power to be translated into slightly higher clocks.
    TSMC's priorities are (a) process implementation reliability [hence two stages, BSPDN first, nTSVs second a year later] and (b) denser SRAM (which they get some of, even at the first stage of BSPDN without nTSVs).
    IMHO the longer-term imec/TSMC strategy will also move clocks down to the backside. This will have little effect on power, but it will relieve routing even more, allowing for even denser SRAM. I have no idea, but I would not be surprised if at least part of the slow cautious TSMC strategy is to put together a process that naturally extends to moving the clocking layers to the backside, whereas the faster Intel strategy has been to ignore this and hope that, somehow, it can be retrofitted in the future...
  • DannyH246 - Tuesday, June 6, 2023 - link

    Excellent post! Quite honestly this kind of balanced comparison of the various tech out there is what we would want in the main article. It’s a shame www.IntelTech.com hasn’t done this for a while now.
  • drajitshnew - Wednesday, June 7, 2023 - link

    @name99 impressive post.
    I had been confused for some time about the limitation on SRAM scaling, as traditional 6T SRAM consists only of transistors, which are still scaling. Even DRAM, with its capacitors, is now scaling in production down to 12 nm.
  • Quantum Mechanix - Thursday, June 8, 2023 - link

    Love your explanation of what's hindering SRAM scaling; not that easy to find out there – bravo!
  • my_wing - Sunday, June 11, 2023 - link

    At first it seemed like what you said about IMEC made a bit of sense. But the longer I read other websites and other material (of course there's a lot I still don't understand), the more I conclude the following.

    In July 2021, Intel had already said a lot about PowerVia. If that means your IMEC paper is in the same class, then TSMC is not behind? Sorry, I don't get it.

    Intel gave us numbers, and it also gave us numbers for Intel 4 without PowerVia, so now we can even draw some comparisons between TSMC N3, TSMC N5, and Intel 4.

    The contacted gate pitch is 45nm vs 51nm vs 50nm (lower is better).
    We know there is only a single-digit density improvement from TSMC N5 to N3E, we know that Intel 4 is at least better than TSMC N5, and we know that Intel 4 with PowerVia brings a double-digit density gain over plain Intel 4, plus a 6% free frequency boost from less noise and a more direct connection.

    i.e. Intel 4 with PowerVia beats both TSMC N3E and TSMC N5.

    After seeing this, I am very worried about TSMC, because I don't know whether TSMC N2 vs N3E will bring more than a single-digit density boost (per another AnandTech article). If you compare TSMC N2 against Intel 4 with PowerVia, you get much the same density profile, which is very bad because we will not see TSMC N2 for another 2 years (2025). I haven't seen any risk production of TSMC N2 today, but Intel 4 with PowerVia is at the risk production stage.

    On top of that, you mentioned small steps, i.e. TSMC's first backside power delivery might not even use nTSVs, which would make Intel's PowerVia the equivalent of TSMC's 2nd-generation backside power delivery.

    From my point of view, TSMC has all the resources (even more than Intel), yet they did not de-risk and produce an N3 with backside power delivery. They have so many N3 node names already (N3B/E/P/X) that it's clear they could have done an N3 backside power delivery node; that is very bad for TSMC.

    If Intel pulls this off – even if they just leave out GAA and rename this Intel 4 + PowerVia as Intel 20A – then they have already fully caught up with TSMC. This is the fact.

    What TSMC is doing sounds like a company in trouble:
    1) Quickly talking to their customers (probably with some price cuts) and selling what they can – from what NVIDIA says, they will still use TSMC.
    2) Throwing out buzzwords and telling people they have started procurement and processes for something that is only remotely true. (TSMC says they have started risk production of N2, but the roadmap shows H2 2025, about 2 years away, and for the semiconductor industry that is a long time even for procurement purposes.)
    3) Having the CEO/CFO tell analysts that the recent profit dip is temporary and the long term looks good (financial tricks, just like Bob Swan @ Intel), rather than the CTO saying "we have this tech and that tech, so we are good."
    4) Not admitting the danger and diving straight into the technology, i.e. doing an N3 with backside power delivery rather than focusing only on GAA; this is a mistake. The Intel slides show that Intel 4 with PowerVia is cheap to manufacture (2nd slide).
    5) Your competitor is showing you hard numbers (contacted poly pitch is a very hard number, as is 95% density – people can scan your chip once it is available (Meteor Lake is 2H 2023), so there is little room to cheat), while what you give is ">1.15X" over TSMC N3E (another AnandTech article) – so what does ">" mean: about? around? best case?

    In other words, it is an issue of Intel being smart and TSMC being stupid. This is a trillion-dollar company, and this is what happens after Chang left. I am very worried, just like when Paul Otellini and Pat left Intel (I really don't think Larrabee was a failure).
  • Zoolook - Monday, June 12, 2023 - link

    You seem to have missed that Intel is not introducing PowerVia until 20A, and that's half a year before TSMC N2. Looking back at the two companies' execution on process improvements over the last 10 years, Intel will likely still be behind – maybe by a bit less, but personally I doubt it.
  • my_wing - Wednesday, June 14, 2023 - link

    That is the pre-Chang TSMC. Post-Chang TSMC missed with N3B by at least 1 year, no one except Apple is going to use N3B, and the amount of EUV exposure in N3B is way too high, so they reduced it in N3E (technically without any SRAM scaling vs N5 – so is N3E just a name?). Looking at the post-Chang record, I can also assume that TSMC N2P is only going to come out in 202X, with delays on the cards. A CEO is a long-term planning guy; it also wasn't on day 1 after Paul Otellini left that Intel showed it was sinking. It took 3 years for us to know – a CEO's influence often shows after 3+ years – and guess what, Chang retired in 2018 and the N3B problems came in 2022 (3-4 years-ish).

    Intel is already ahead (Intel 4 already matches what TSMC N3E high performance is (3 fins); on the density side Intel did not show us, or doesn't have, a high-density library (2 fins)), not behind. You can be a cheerleader for TSMC; I was a TSMC shareholder and I sold at 117 USD (ADR) and switched to ASML. With these Intel slides, I am starting to switch from ASML to Applied Materials. I put real money behind the things I talk about; I'm not a fanboy.

    You can doubt it, but the numbers are 45nm vs 51nm vs 50nm. As far as I can see (because all the articles, from different sources, say TSMC N3E will not bring a density improvement over N5) – maybe this is not clear enough – it means that in the high-performance library, TSMC N3E (next year) = Intel 4.

    Intel is already 1 year ahead, as TSMC N3B is not a production-ready node. Apple is considering going to N3E for the iPhone 16. TSMC N3B, with its high number of EUV layers + minimal density improvement, is just not worth the cost. Meanwhile Intel 4, with Applied Materials' help, seems to have cracked the code, and their improved Intel 3 comes next year – that makes a second generation versus the first real N3-class node from TSMC.
  • tipoo - Monday, June 5, 2023 - link

    Intel will see a monster turnaround through 2025-26. Only fear priced in right now.
  • evilpaul666 - Monday, June 5, 2023 - link

    They'll buy back more of their shares to pump the price a bit. Should have plenty of cash since they're getting that CHIPS Act money to pay for manufacturing upgrades.
