Coming to the end of the first quarter of 2023, Intel’s Data Center and AI group is finding itself at an interesting inflection point – for reasons both good and bad. After repeated delays, Intel is finally shipping their Sapphire Rapids CPUs in high volumes this quarter as part of the 4th Generation Xeon Scalable lineup, all the while its successors are coming up very quickly. On the other hand, the GPU side of the business has hit a rough spot, with the unexpected cancelation of Rialto Bridge – what would have been Intel’s next Data Center GPU Max product. It hasn’t all been good news in the past few months for Intel’s beleaguered data center group, but it’s not all bad news, either.

It’s been just over a year since Intel last delivered a wholesale update on its DCAI product roadmaps, which were last refreshed at their 2022 investors meeting. So, given the sheer importance of the high margin group, as well as everything that has been going on in the past year – and will be going on over the next year – Intel is holding an investor webinar today to update investors (and the public at large) on the state of its DCAI product lineups. The event is being treated as a chance to recap what Intel has accomplished over recent months, as well as to lay out an updated roadmap for the DCAI group covering the next couple of years.

The high-level message Intel is looking to project is that the company is finally turning a corner in their critical data center business segment after some notable stumbles in 2021/2022. In the CPU space, despite the repeated Sapphire Rapids delays, Intel’s successive CPU projects remain on track, including their first all E-core Xeon Scalable processor. Meanwhile Intel’s FPGA and dedicated AI silicon (Gaudi) are similarly coming along, with new products hitting the market this year while others are taping-in.

Sapphire Rapids: 4th Generation Xeon Scalable Shipping in Volume

Following what can only be described as a prolonged development process for Intel’s next generation Xeon Scalable processors, Sapphire Rapids finally began shipping in volume over the past few months. The Q1’23 (ed: or is that Q5’22?) launch of the product has come later than Intel would have ever liked, but the company is finally able to put the development process behind them and enjoy the fruits of shipping the massive chips in high volumes.

At this point Intel isn’t quoting precise shipment numbers – back at launch, the company said it expected to reach a million units in record time – but the company is doubling down on its claim that it will be able to produce the large, complex chips in high enough volumes to meet customer demand. The chips are built on the Intel 7 process – the final iteration of what started as Intel’s 10nm line – so Intel is benefitting from a well-tuned process. At the same time, however, the 4th Generation Xeon Scalable lineup includes Intel’s first chiplet-based Xeon design, so it is still not the easiest launch.

Besides meeting customer demand, Intel’s main point is that all of their major customers are adopting the long-awaited chips. This is largely unsurprising given that Intel still holds the majority of the data center CPU market, but given the investor audience for today’s announcements, it’s also unsurprising to see Intel explicitly calling attention to this. Besides a generational improvement in CPU core architecture, Sapphire Rapids also delivers everything from DDR5 to PCIe 5/CXL support, so there is no shortage of interest in replacing older Ice Lake and Cascade Lake (3rd & 2nd Gen Xeon Scalable) hardware with something newer and more efficient.

Intel, of course, is looking to fend off arch-rival AMD from taking even more market share in this space with their EPYC processors, which are now on to their 4th generation (9004 series) Genoa parts. There are a few demos slated to be run this morning showcasing performance comparisons; Intel is keen to show investors that they’re shipping the superior silicon, especially as AMD has the advantage in terms of core counts. So expect Intel to focus on things like their AI accelerator blocks, as well as comparisons that pit an equal number of Sapphire Rapids (Golden Cove) and Genoa (Zen 4) CPU cores against each other.

Emerald Rapids: On Track for Q4’23, Will Be 5th Generation Xeon Scalable

Diving into the future of Intel’s product roadmap, the first disclosure from today’s event is an update on the status of Emerald Rapids, the architectural successor to Sapphire Rapids. Intel’s previous roadmap had chips based on the architecture slated to arrive in 2023, a launch cycle that has been increasingly called into question given Sapphire Rapids’ delay to 2023. But sure enough, Intel still expects to deliver the next generation of Xeon processors later this year, in Q4.

According to Intel, Emerald Rapids chips are already sampling to customers. At the same time, volume validation is already underway as well. As Emerald Rapids is a relatively straightforward successor to Sapphire Rapids, Intel is looking to avoid the long validation period that Sapphire Rapids required, which will be critical for making up for lost time and getting the next Xeon parts out by the end of this year.

Given that this is an investor meeting, Intel isn’t offering much in the way of technical specifications for the next-generation chips. But the company is confirming that Emerald Rapids will operate in the same power envelope as Sapphire Rapids – improving on the platform’s overall performance-per-watt efficiency. Indeed, the fact that Emerald Rapids will use the same LGA 4677 platform as Sapphire Rapids is being treated as a major selling point by Intel, who will be fully leveraging the drop-in compatibility that this affords. Customers will be able to swap out Sapphire for Emerald in their existing designs, allowing for easy upgrades of already-deployed systems, or in the case of OEMs, quickly bringing Emerald Rapids systems to market.

Intel has previously disclosed that Emerald Rapids will be built on the Intel 7 process. This means that the bulk of any performance/efficiency gains will have to come from architectural improvements. That said, Intel is also touting “increased core density”, so it sounds like Emerald will also offer higher core counts than Sapphire, which topped out at 60.

As part of the webinar, Intel also showed off a delidded Emerald Rapids chip. Based on the sheer amount of silicon on the package and the multi-tile configuration (each tile is easily over 700mm²), we believe this is likely the highest-end XCC configuration. At two tiles, this is a significant design change from Sapphire Rapids, which used four smaller tiles for its XCC configuration – going to show that even though Sapphire and Emerald are socket-compatible and share the same platform, Intel isn’t restraining itself from making changes under the hood (or in this case, under the IHS).

Finally, following in the footsteps of the product naming scheme they’ve used for the last several years now, Intel is officially naming Emerald Rapids as the 5th Generation Xeon Scalable family. So expect to see the official name used in place of the code name for the bulk of Intel’s announcements and disclosures going forward.

Granite Rapids: Already Sampling, to Ship In 2024 With MCR DIMM Support

Following Emerald Rapids, in 2024 Intel will be shipping Granite Rapids. This will be Intel’s next-generation P-core based product. Like Emerald, Granite has been previously disclosed by Intel, so today’s announcement is an update on their progress there.

According to Intel, Granite Rapids remains on track for its previously announced 2024 launch. The part is expected to launch “closely following” Sierra Forest, Intel’s first E-core Xeon Scalable processor, which is due in H1’24. Despite being at least a year out, Granite Rapids is already to the point where the first stepping is up and running, and it’s already sampling to some Intel customers.

As noted in previous disclosures, Granite Rapids is a tile-based architecture, with separate compute and I/O tiles – an evolution from Sapphire Rapids, which even in its tiled form is essentially a complete SoC in each tile. Granite Rapids’ compute tiles are being built on the Intel 3 process, Intel’s second-generation EUV node – a change from the chip’s earliest incarnation, which was slated for Intel 4. Meanwhile we still don’t have significant official information on the I/O tiles.

Along with upgrades to its CPU architecture, Intel is also disclosing for the first time that Granite Rapids will also come with a notable new memory feature: MCR DIMM support. First revealed by SK hynix late last year, Multiplexer Combined Ranks (MCR) DIMMs essentially gang up two sets/ranks of memory chips in order to double the effective bandwidth to and from the DIMM. With MCR, Intel and SK hynix are aiming to get data rates equivalent to DDR5-8800 (or higher) speeds, which would be a significant boon to memory bandwidth and throughput, as that's often in short supply with today's many-core chips.

As part of today’s presentation, Intel is showing off an early Granite Rapids system using MCR DIMMs to achieve 1.5 TB/second of memory bandwidth on a dual socket system. Based on Intel’s presentation, we believe this to be a 12-channel memory configuration with each MCR DIMM running at the equivalent of DDR5-8800 speeds.
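Some quick back-of-the-envelope math on that figure (our own arithmetic, not Intel’s – the 64-bit channel width and 12-channel configuration are assumptions on our part):

```python
# Rough peak-bandwidth estimate for the demoed dual-socket Granite Rapids
# system. Assumptions (ours, not Intel's): 12 memory channels per socket,
# 64-bit (8-byte) channels, MCR DIMMs at a DDR5-8800-equivalent data rate.
transfers_per_sec = 8800e6        # DDR5-8800 -> 8800 MT/s per channel
bytes_per_transfer = 8            # 64-bit channel width
channels_per_socket = 12
sockets = 2

per_channel = transfers_per_sec * bytes_per_transfer      # 70.4 GB/s
per_socket = per_channel * channels_per_socket            # 844.8 GB/s
system_peak = per_socket * sockets / 1e12                 # ~1.69 TB/s theoretical

demonstrated = 1.5                                        # TB/s, per Intel's demo
print(f"peak = {system_peak:.2f} TB/s; demo = {demonstrated / system_peak:.0%} of peak")
```

If those assumptions hold, the demonstrated 1.5 TB/second works out to just under 90% of theoretical peak – a plausible figure for a streaming bandwidth benchmark.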

As an aside, it’s worth noting that as the farthest-out P-core Xeon in Intel’s roadmap, there’s a notable lack of mention of High Bandwidth Memory (HBM) parts. HBM on Sapphire Rapids was used as the basis of Intel’s offerings for the HPC market, and while that wasn’t quite a one-off product, it’s close. Future HPC-focused CPUs were being developed as part of the Falcon Shores project, which was upended with the change to Intel’s GPU schedule. So at this time, there is not a new HBM-equipped Xeon on Intel’s schedule – or at least, not one they want to talk about today.

Sierra Forest: The First E-Core Xeon and Intel 3 Lead Product, Shipping H1’24

Shifting gears, we have Intel’s forthcoming lineup of E-core Xeons. These are chips that will be using density-optimized “efficiency” cores, which were introduced by Intel in late 2021 and have yet to make it to a server product.

Sierra Forest is another previous Intel disclosure that the company is updating investors on, and it is perhaps the most important of them. The use of E-cores in a Xeon processor will significantly boost the number of CPU cores Intel can offer in a single CPU socket, which the company believes will be extremely important for the market going forward. Not only will the E-core design improve overall compute efficiency per socket (for massively threaded workloads, at least), but it will afford cloud service providers the ability to consolidate even more virtual machine instances onto a single physical system.

Like Granite Rapids, Sierra Forest is already up and running at Intel. The company completed the power-on process earlier in the quarter, getting a full operating system up and running within 18 hours. And even though it’s the first E-core Xeon, it’s already stable enough that Intel has it sampling to at least one customer.

As previously disclosed, despite the E-Core/P-Core split, Sierra Forest and Granite Rapids will be sharing a platform. In fact, they’re sharing a whole lot more, as Sierra will also use the same I/O tiles as Granite. This allows Intel to develop a single set of I/O tiles and then essentially swap in E-core or P-core tiles as needed, making for Sierra Forest or Granite Rapids.

And for the first time, we have confirmation of how many E-cores Sierra will offer. The Xeon will ship with up to 144 E-cores, over twice as many cores as found on today’s P-core based Sapphire Rapids processors. There are no further architectural disclosures on the E-cores themselves – it was previously confirmed that it’s a post-Gracemont architecture – so more details are to come on that front. Gracemont placed its E-cores in four-core clusters (“quads”); if that organization holds for the CPU architecture used in Sierra Forest, we’d be looking at 36 E-core clusters across the entire chip.
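The cluster arithmetic is simple enough to sketch (with the caveat that the four-core grouping is our assumption carried over from Gracemont, not an Intel disclosure):

```python
# Hypothetical Sierra Forest cluster layout, assuming the post-Gracemont
# E-core keeps Gracemont's four-core cluster ("quad") organization.
total_e_cores = 144              # disclosed maximum for Sierra Forest
cores_per_cluster = 4            # Gracemont convention; an assumption here
clusters = total_e_cores // cores_per_cluster
sapphire_rapids_max_cores = 60   # today's top P-core Sapphire Rapids part

print(clusters)                                     # -> 36 clusters
print(total_e_cores / sapphire_rapids_max_cores)    # -> 2.4x the core count
```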

With Sierra Forest up and running, this also means that Intel has wafers to show off. As part of her portion of the presentation, Lisa Spelman, Intel's CVP and GM of the Xeon product lineup, held up a finished Sierra Forest compute tile wafer to underscore Intel's progress in manufacturing their first E-core Xeon CPU.

Speaking of manufacturing, Intel has also confirmed that Sierra Forest is now the lead product for the Intel 3 node across the entire company. This means Intel is looking to make a massive leap in a very short period of time with respect to its Xeon product lineup, moving from Intel 7 on Emerald Rapids in Q4’23 to their second-generation EUV process no later than Q2’24. Sierra does get the benefit of products based on Intel 4 (the company’s first-generation EUV process) coming first, but this still makes Sierra’s progress very important, as Intel 3 is the first “full service” EUV process for Intel, offering support for Intel’s complete range of cell libraries.

Of all of the Xeon processor architectures outlined today, Sierra is arguably the most important for Intel. Intel’s competitors in the Arm space have been offering high density core designs based on the Neoverse architecture family for a few years now, and arch-rival AMD is going the same direction this year with the planned launch of its Zen 4c architecture and associated EPYC “Bergamo” processors. Intel expects an important subset of their customers to focus on maximizing the number of CPU cores over growing their overall socket counts – thus making data center CPU revenue more closely track core counts than socket counts – so Intel needs to meet those demands while fending off any competitors wanting to do the same.

Clearwater Forest: Second-Gen E-core Xeon In 2025 on Intel 18A Process

Finally, in an all-new disclosure for Intel, we have our first details on the part that will succeed Sierra Forest as Intel’s second-generation E-core Xeon processor. Codenamed Clearwater Forest, the follow-up E-core part is scheduled to be delivered in 2025, placing it no more than 18 months after Sierra Forest.

Similar to how Sierra is Intel’s first Intel 3 part, Clearwater Forest is slated to be the first Xeon produced on Intel’s 18A process – their second-generation RibbonFET process, which last year was moved up in Intel’s schedule and will be going into production in the second half of 2024.

At two years out, Intel isn’t disclosing anything else about the chip. But today’s announcement serves to confirm to investors that Intel is committed to the E-core lineup for the long haul, as well as to underscore how, on the back of the 18A process, this is the point where Intel expects to re-attain process leadership. Meanwhile, Intel has also confirmed that there won’t be any Xeons made on their early 20A process, so Clearwater Forest will be Intel’s first RibbonFET-based Xeon, period.

Finally, it’s worth noting that with the latest extension to Intel’s CPU roadmap, P-core and E-core Xeons are remaining distinct product lines. Intel has previously commented that their customers either want one core or the other on a CPU – but not both at the same time – and Clearwater Forest maintains this distinction.

Xeon Scalable Generations
Date       AnandTech  Codename           Abbr.  Max Cores  Node       Socket
Q3 2017    1st        Skylake            SKL    28         14nm       LGA 3647
Q2 2019    2nd        Cascade Lake       CLX    28         14nm       LGA 3647
Q2 2020    3rd        Cooper Lake        CPL    28         14nm       LGA 4189
Q2 2021    3rd        Ice Lake           ICL    40         10nm       LGA 4189
Q5 2022    4th        Sapphire Rapids    SPR    60 P       Intel 7    LGA 4677
Q4 2023    5th        Emerald Rapids     EMR    >60 P      Intel 7    LGA 4677
H1 2024    6th?       Sierra Forest      SRF    144 E      Intel 3    ?
2024                  Granite Rapids     GNR    ? P        Intel 3    ?
2025       7th?       Clearwater Forest  CWF    ? E        Intel 18A  ?
?                     Next-Gen P         ?      ? P        ?          ?

AI Accelerators & FPGAs: Capturing Market Share At All Ends

While the bulk of today’s presentation from Intel is focused on their CPU roadmap, the company is also briefly touching on the roadmaps for their FPGA and dedicated AI accelerator products.

First and foremost, Intel is expecting to qualify (PRQ) 15 new FPGAs across the Stratix, eASIC, and Agilex product lines this year. There are no further technical details on these, but the products, and their successors, are in the works.

Meanwhile, for Intel’s dedicated AI acceleration ASICs, the company’s Habana Labs division has recently taped in their next-generation Gaudi3 deep learning accelerator. Gaudi3 is a process shrink of Gaudi2, which was first released back in the spring of 2022, moving from TSMC’s 7nm process to a 5nm process. Intel isn’t attaching a delivery date to the chip for its investor crowd, but more details will be coming later this year.

All told, Intel is projecting the market for AI accelerators to be at least a $40 billion market opportunity by 2027. And the company intends to tackle the market from all sides. That means CPUs for AI workloads that are still best served by CPUs (general compute), GPUs and dedicated accelerators for tasks that are best served by highly parallel processors (accelerated compute), and then FPGAs bridging the middle as specialist hardware.

It’s interesting to see that, despite the fact that GPUs and other highly parallel accelerators deliver the best performance on large AI models, Intel doesn’t see the total addressable market for AI silicon being dominated by GPUs. Rather, they expect the 2027 market to be a 60/40 split in favor of CPUs, which given Intel’s much stronger position in CPUs than GPUs, would certainly be to their advantage. Certainly, CPUs aren’t going anywhere even for AI workloads (if nothing else, something needs to prepare the data for those GPUs), but it will be interesting to see if Intel’s TAM predictions hold true in 4 years, especially given the eye-watering prices that GPU vendors have been able to charge in recent years.

Comments Locked


  • DannyH246 - Wednesday, March 29, 2023 - link

    Yawn...another roadmap, another marketing article from
    This time things won't be delayed. We promise.
  • sonofgodfrey - Wednesday, March 29, 2023 - link

    Some typos: Granite Forest ? (AMD is petrified of that core :) ) and Clearwater Falls ?
  • Ryan Smith - Wednesday, March 29, 2023 - link

    Thanks. It's a lot of codenames to keep straight, and a lot of these are similar to local (Oregon) locales...
  • JKflipflop98 - Tuesday, April 4, 2023 - link

    That's because they are Oregon locales. We name them after natural features near where they were designed. There's lots of "bridges" in Israel.
  • mode_13h - Thursday, March 30, 2023 - link

    > Emerald Rapids will be built on the Intel 7 process. This means that the bulk of any
    > performance/efficiency gains will have to come from architectural improvements.

    It's probably the same Intel 7+ process that Raptor Lake uses. Do we know how much of Raptor Lake's improvements were simply due to the node refinement?
  • Bruzzone - Thursday, March 30, 2023 - link

    I have data that contributes to the inquiry "what is the 'performance gain' from architectural improvements" and I will answer that question comparing Alder and Raptor desktop 900KS, 900K, 900KF for the G fall out notation and 600K sent to down bin.

    This is more a yield than performance analysis however K in relation KS on KS frequency bump for Alder + 300 MHz and for Raptor + 200 MHz in relation to spec is telling of the process for performance improvement.

    900K = an index of 1 and 900 KS, 900KF and 600K are all compared to 900K at index of 1 (compared against itself) and that index (other SKU comparison against K) is on channel supply data as a proxy for yield.

    Data sample is weekly and begins the week KS is introduced for Alder and Raptor respectively within the overall envelope of same base process subject SKU comparison looking for a sign of process improvement.

    The index shows the difference in supply volume subject ease of manufacturability.

    13900K = 1
    12900K = 1
    13900KS = 0.101 shows 197% improvement over Alder KS to achieve + 200 MHz
    12900 KS = 0.034
    13900KF = 0.069
    12900KF = 0.065 approximately the same
    13600K = 0.117
    12600K = 0.109 shows 7.7% more fall out

    We can also look at the SKU split for the sample beginning the week KS is introduced through last week;

    13900KS = 12.89%
    13900K = 76%
    13900KF = 3.21%
    13600K = 7.89%

    12900KS = 2.45%
    12900K = 83.36%
    12900KF = 5.63%
    12600K = 8.65%

    For power efficiency we can look at the SKU power split and in this assessment over the full run from day 1 of the introductions;

    Alder shows dynamic power within bottom of mid range

    8P+8E, 150 to 241W = 1.07%
    8+8, 125 to 241W = 38.26%
    8+4, 125 to 190W = 24.63%
    6+4, 125 to 150W = 4.83%
    8+8, 65 to 202W = 4.45%
    8+4, 65 to 180W = 4.10%
    6C, 65 to 117W = 17.19%
    4C, 58 to 89W = 0.78%
    2C, 46 to 55W = 0.66%
    8/6/4/2 at base 35W = 4.03%

    Raptor shows dynamic frequency

    8P+16E, 150 to 253W = 7.1% and configuration on 8+16
    8+8, 125 to 253W = 69.77% shows top bin frequency improvement
    6+8, 125 to 181W = 7.46% new configuration
    8+8, 65 to 219W = 8.39% shows an improvement over Alder 8+8
    6+8, 65 to 154W = 3.65% new configuration
    6+4, 65 to 148W = 0.84% down from Alder but meager volume
    4C 58 to 89W = 2.48% and bottom bin quad improvement
    8/6/4 at base 35W = 0.31%

    You decide on power efficiency there’s sufficient data here to extend the analysis.

    Between the two generation samples Alder ramp volume is 88.6% and Raptor run down volume is 11.3% so one can say Raptor for a lower volume improves on the top SKU frequency or Intel mine operations are working overtime in the sort room.

    Mike Bruzzone, Camp Marketing
  • Bruzzone - Friday, March 31, 2023 - link

    For comparison, Xeon Ice; by cores, power distribution, base and max frequency.

    Core Grade Split

    40 = 4.12%
    38 = 3.10%
    36 = 7.4%
    32 = 22.94%
    28 = 11.87%
    26 = 1.74%
    24 = 8.1%
    20 = 3.05%
    18 = 3.08%
    16 = 11.94%
    12 = 10.61%
    10 = 9.57%
    8 = 11.5%

    Power Distribution

    300W = 4.5%
    270W = 7.63%
    265W = 1.19%
    250W = 4.85%
    240W = 0.99%
    235W = 1.79%
    230W = 4.03%
    225W = 1.19%
    220W = 1.19%
    205W = 26.11%
    195W = 0.98%
    185W = 9.27%
    165W = 3.97%
    150W = 4.93%
    140W = 4.74%
    135W = 5.18%
    120W = 8.82%
    105W = 9.41%

    Base Frequency GHz

    3.6 = 1.09%
    3.5 = 0.42%
    3.4 = 0.35%
    3.2 = 4.63%
    3.1 = 2.53%
    3.0 = 5.41%
    2.9 = 4.39%
    2.8 = 14.88%
    2.7 = 0.23%
    2.6 = 6.56%
    2.5 = 0.44%
    2.4 = 11.13%
    2.3 = 13.44%
    2.2 = 9.08%
    2.1 = 11.12%
    1.0 = 14.31%

    Max Boost GHz

    4.0 = 1.86%
    3.9 = 0.45%
    3.7 = 1.37%
    3.6 = 21.84%
    3.5 = 24.47%
    3.4 = 31.65%
    3.3 = 9.11%
    3.1 = 9.26%

    Mike Bruzzone, Camp Marketing
  • mode_13h - Friday, March 31, 2023 - link

    Thanks for the info, Mike!

    Hey, I have a question that seems right up your alley. What do you make of the i5-12600 (non-K) pricing? It launched at $223 and now sells for $250 (Walmart) to $258 (Newegg), or more. It's the largest and fastest incarnation of the P-core only die (6+0). The i5-12600K is based on the big 8+8 die and frequently sells for less than its non-K namesake. Does this smell right, to you?
  • Bruzzone - Friday, March 31, 2023 - link

    mode_13, new on ebay around $180 to $200 any 12600_ pick your flavor. On 12600 hexa only, 65 to 117W v 125 to 150W for K_ so how about a lower priced board for 12600. For $250 to $258 at Newegg and Walmart? I just checked Newegg now $207 so maybe Walmart at $250 still has not updated their page with a competitive price. The high volume price is $111 ($223/2) however on a volume Alder i9/i7 purchase so many 12600 were likely thrown into the sales package for nothing everything below 600K is generally thrown in as sales close. The check is Alder full line Average Weighed Price $1K at $417 / 2 for a high volume procurement = $208 so at this full line product SKU buy in (i9 = 42.6%, i7 = 28.8%, i5 = 24.1%, i3 = 3.8%, Pentium = 0.27%, Celeron = 0.44%) procurement would say no to 12600 as underwater at $233 end sale and there would be a negotiation around the SKU procurement price which is likely n/c. For a tray this isn't the case it's the traditional buy 10 and get 1 free < 10%. I seriously doubt Walmart buys by the tray. I consider Newegg and Walmart price similar any OEM these days on competitive desktop market. Specific OEMs it's different encompassing mobile and Xeon SKUs negotiated within and around the desktop buy in. mb
  • mode_13h - Saturday, April 1, 2023 - link

    Thanks for the reply. The $207 price on Newegg is a 3rd party seller that I don't 100% trust, and not sure if Intel will offer warranty claims from. If I'm going to take those risks, I'd rather save more money and go the ebay route. Speaking of the ebay route, I have a search query that excludes any ES (Engineering Sample) chips, which I don't trust.

    The reason I like the i5-12600 is that it typically has the fastest single-thread performance among the 65 W-rated models, probably due to the smaller ring bus. It's also the second cheapest model that supports ECC memory, which obviously limits me to more expensive W680 boards.
