It’s no secret that Intel’s enterprise processor platform has been stretched in recent generations. Compared to the competition, Intel has been chasing its multi-die strategy while relying on a manufacturing process that hasn’t been the best in the market. That being said, Intel is quoting more shipments of its latest Xeon products in December than AMD shipped in all of 2021, and the company is launching the next-generation Sapphire Rapids Xeon Scalable platform later in 2022. The roadmap beyond Sapphire Rapids has been kept somewhat under wraps, with minor leaks here and there, but today Intel is lifting the lid on it.

State of Play Today

Currently in the market is Intel’s Ice Lake 3rd Generation Xeon Scalable platform, built on Intel’s 10nm process node with up to 40 Sunny Cove cores. The die is large, around 660 mm2, and in our benchmarks we saw a sizeable generational uplift in performance compared to the 2nd Generation Xeon offering. The response to Ice Lake Xeon has been mixed, given the competition in the market, but Intel has forged ahead by leveraging a more complete platform coupled with FPGAs, memory, storage, networking, and its unique accelerator offerings. Datacenter revenues, depending on the quarter you look at, are either up or down based on how customers are digesting their current processor inventories (as stated by CEO Pat Gelsinger).

That being said, Intel has put a large amount of effort into discussing its 4th Generation Xeon Scalable platform, Sapphire Rapids. For example, we already know that it will be using >1600 mm2 of silicon for the highest core count solutions, with four tiles connected with Intel’s embedded bridge technology. The chip will have eight 64-bit memory channels of DDR5, support for PCIe 5.0, as well as most of the CXL 1.1 specification. New Advanced Matrix Extensions (AMX) also come into play, along with Data Streaming Accelerators (DSA) and QuickAssist Technology (QAT), all built on the latest P-core design currently present in the Alder Lake desktop platform, albeit optimized for datacenter use (which typically means AVX-512 support and bigger caches). We already know that versions of Sapphire Rapids will be available with HBM memory, and the first customer for those chips will be the Aurora supercomputer at Argonne National Labs, coupled with the new Ponte Vecchio high-performance compute accelerator.

The launch of Sapphire Rapids is significantly later than originally envisioned several years ago, but we expect to see the hardware widely available during 2022, built on Intel 7 process node technology.

Next Generation Xeon Scalable

Looking beyond Sapphire Rapids, Intel is finally putting materials into the public to showcase what is coming up on the roadmap. After Sapphire Rapids, we will have a platform-compatible Emerald Rapids Xeon Scalable product, also built on Intel 7, in 2023. Given the naming conventions, Emerald Rapids is likely to be branded 5th Generation.

Emerald Rapids (EMR), as with some other platform updates, is expected to capture the low-hanging fruit from the Sapphire Rapids design to improve performance, as well as benefit from manufacturing refinements. Platform compatibility means Emerald Rapids will have the same support when it comes to PCIe lanes, CPU-to-CPU connectivity, DRAM, CXL, and other IO features. We’re likely to see updated accelerators too. Exactly what the silicon will look like, however, is still an unknown. As we’re still early in Intel’s tiled product portfolio, there’s a good chance it will be similar to Sapphire Rapids, but it could equally be something new, such as what Intel has planned for the generation after.

After Emerald Rapids is where Intel’s roadmap takes on a new highway. We’re going to see a diversification in Intel’s strategy on a number of levels.

Starting at the top is Granite Rapids (GNR), built entirely of Intel’s performance cores, on an Intel 3 process node for launch in 2024. Previously Granite Rapids had appeared on roadmaps as an Intel 4 product; however, Intel has stated to us that the progression of the technology, as well as the timeline of where it will come into play, makes it better to put Granite Rapids on the Intel 3 node. Intel 3 is meant to be Intel’s second-generation EUV node after Intel 4, and we expect the design rules to be very similar between the two, so we suspect it’s not that much of a jump from one to the other.

Granite Rapids will be a tiled architecture, just as before, but it will also feature a bifurcated strategy in its tiles: it will have separate IO tiles and separate core tiles, rather than a unified design like Sapphire Rapids. Intel hasn’t disclosed how they will be connected, but the idea here is that the IO tile(s) can contain all the memory channels, PCIe lanes, and other functionality while the core tiles can be focused purely on performance. Yes, it sounds like what Intel’s competition is doing today, but ultimately it’s the right thing to do.

Granite Rapids will share a platform with Intel’s new product line, which starts with Sierra Forest (SRF), also on Intel 3. This new product line will be built from datacenter-optimized E-cores, which we’re familiar with from Intel’s current Alder Lake consumer portfolio. The E-cores in Sierra Forest will be a later generation than the Gracemont E-cores we have today, but the idea here is to provide a product that focuses on core density rather than outright core performance. This allows the cores to run at lower voltages and scale out in parallel, assuming the memory bandwidth and interconnect can keep up.

Sierra Forest will be using the same IO die as Granite Rapids. The two will share a platform – we assume in this instance this means they will be socket compatible – so we expect to see the same DDR and PCIe configurations for both. If Intel’s numbering scheme continues, GNR and SRF will be Xeon Scalable 6th Generation products. Intel stated to us in our briefing that the product portfolio currently offered by Ice Lake Xeon products will be covered and extended by a mix of GNR and SRF Xeons based on customer requirements. Both GNR and SRF are expected to have full global availability when launched.

The density-focused, E-core-based Sierra Forest will inevitably be compared to AMD’s equivalent, the Zen 4c-based Bergamo; AMD might have a Zen 5 equivalent by the time SRF comes to market.

I asked Intel whether the move to GNR+SRF on one unified platform means the generation after will be a unique platform, or whether it will retain the two-generation retention that customers like. I was told that it would be ideal to maintain platform compatibility across the generations, although as these are planned out, it depends on timing and where new technologies need to be integrated. The earliest industry estimates (beyond CPU) for PCIe 6.0 are in the 2026 timeframe, and DDR6 is more like 2029, so unless there are more memory channels to add it’s likely we’re going to see parity between 6th and 7th Gen Xeon.

My other question to Intel was about Hybrid CPU designs – if Intel was now going to make P-core tiles and E-core tiles, what’s stopping a combined product with both? Intel stated that their customers prefer uni-core designs in this market as the needs from customer to customer differ. If one customer prefers an 80/20 split on P-cores to E-cores, there’s another customer that prefers a 20/80 split. Having a wide array of products for each different ratio doesn’t make sense, and customers already investigating this are finding out that the software works better with a homogeneous arrangement, instead split at the system level, rather than the socket level. So we’re not likely to see hybrid Xeons any time soon. (Ian: Which is a good thing.)

I did ask about the unified IO die - giving the P-core-only and E-core-only Xeons the same number of memory channels and I/O lanes might not be optimal for either scenario. Intel didn’t really have a good answer here, aside from the fact that building both into the same platform helps customers amortize non-recurring engineering (NRE) costs across both CPUs, regardless of which one they use. I didn’t ask at the time, but we could see the door open to more Xeon-D-like scenarios with different IO configurations for smaller deployments - but we’re talking products that are 2-3+ years away at this point.

Xeon Scalable Generations

Date     Gen   Codename          Abbr.  Max Cores  Node     Socket
Q3 2017  1st   Skylake           SKL    28         14nm     LGA 3647
Q2 2019  2nd   Cascade Lake      CLX    28         14nm     LGA 3647
Q2 2020  3rd   Cooper Lake       CPL    28         14nm     LGA 4189
Q2 2021  3rd   Ice Lake          ICL    40         10nm     LGA 4189
2022     4th   Sapphire Rapids   SPR    *          Intel 7  LGA 4677
2023     5th   Emerald Rapids    EMR    ?          Intel 7  **
2024     6th   Granite Rapids    GNR    ?          Intel 3  ?
2024     6th   Sierra Forest     SRF    ?          Intel 3  ?
>2024    7th   Next-Gen P        ?      ?          ?        ?
>2024    7th   Next-Gen E        ?      ?          ?        ?

* Estimate is 56 cores
** Estimate is LGA 4677

For both Granite Rapids and Sierra Forest, Intel is already working with key ‘definition customers’ for microarchitecture and platform development, testing, and deployment. More details to come, especially as we move through Sapphire and Emerald Rapids during this year and next.

Comments Locked

  • whatthe123 - Saturday, February 19, 2022 - link

    that can't be right. AMD's own reports show most of their growth was from ASP increases, not volume. three million milan chips in one quarter shatters their past records multiple times over.
  • Mike Bruzzone - Saturday, February 19, 2022 - link

whatthe123, thanks for the inquiry,

    Camp Marketing has AMD commercial shipments for the year higher than Mercury Research on channel data on 10-Q/K financial reconciliation.

My commercial estimate includes Epyc and Threadripper. Epyc quarterly volume is not regular but sporadic, on what this analyst believes are opportunistic production windows in relation to wafer starts and AMD full-line production category volume - a wafer-starts tradeoff. TSMC appears agile when it comes to production / tooling changes.

    2021 = range 8,320,645 to 9,620,695 units dependent q2 volume roll over into q3;

    Q1 = 1,099,950
    Q2 = 4,331,103 which is Rome run end production into inventory
    Q3 = 1,189,776 which could be roll over from Q2
    Q4 = 2,999,867

    30% are Threadripper

    2020 = range 4,168,967 to 4,560,973 units dependent q3 volume roll over into q4;

    Q1 = 449,332
    Q2 = 846,604
    Q3 = 2,404,558 where some of this volume may roll over into Q4
    Q4 = 1,567,108

    15.2% are Threadripper

    2019 = 5,714,393 of which 76.1% is Threadripper
    Naples run end enters q2 2019
    2018 = 6,795,562 of which 83.8% is Threadripper

    For channel share AMD Milan commercial in relation Intel Ice Lake commercial, I have AMD at 28.45% for channel market share over the prior two quarters [q3-q4] and production volume share, prior two quarters, on AMD and Intel financials on channel price data at 17.09%.

    Epyc $1K ASP 2021 on channel supply data;

    Q1 = Milan only @ $2915.97
    Q2 = Milan only @ $3155.48
    Q3 = Milan only @ $3605.95
    Q4 = Milan only @ $3932.50

    Epyc ASP is typically driven by a skew to top core bin sales / demand fortifying that product space

    TR $1K ASP 2021 on channel data;
    Q1 39x0 only = $2115.35
Q2 39x0 only = $2074.77
    Q3 39x0 only = $2303.44
    Q4 39x0 only = $2367.36

There are two ways to calculate OEM price: 1) $1K ASP / 3 is a traditional metric sharing the product value so there are no sales arguments; 1/3 to the foundry, 1/3 to AMD, 1/3 to the OEM representing NRE and margin potential. This is a highest-volume procurement method and typically requires a full product-line purchase of grade SKUs mirroring what's coming out of finished-goods production. SKUs the OEM does not want are brokered off, reducing their overall purchase cost. 2) $1K / 3 x 1.55 is a typical AMD direct-customer markup, but it can go up to x2 on smaller volumes and specific core-grade sales. Both of these are standard methods of pricing if you're in the business of compute or an OEM. Derivatives of OEM and SI procurement can include Epyc + desktop and mobile bundles, all negotiated into a quarterly procurement agreement.
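    The two pricing methods described above reduce to simple arithmetic. A minimal sketch, using the Q4 2021 Milan $1K ASP quoted earlier; the function names are illustrative, not an industry convention:

    ```python
    # Hypothetical sketch of the two OEM pricing methods described above.
    # Helper names are my own; the 1.55x and 2.0x markups are the figures
    # quoted in the comment, not official AMD numbers.

    def oem_price_thirds(asp_1k: float) -> float:
        """Method 1: split the $1K ASP into thirds (foundry / AMD / OEM)."""
        return asp_1k / 3

    def oem_price_markup(asp_1k: float, markup: float = 1.55) -> float:
        """Method 2: one third of the $1K ASP times a direct-customer
        markup (1.55 typical, up to 2.0 for smaller volumes)."""
        return asp_1k / 3 * markup

    asp = 3932.50  # Q4 2021 Milan $1K ASP from the list above
    print(round(oem_price_thirds(asp), 2))  # 1310.83 - one-third share
    print(round(oem_price_markup(asp), 2))  # 2031.79 - with 1.55x markup
    ```

    Note that method 2 with a 2.0x markup is exactly double the one-third share, which is the ceiling the comment describes for small-volume, specific-core-grade sales.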

    AMD 2021 all up produced 119,108,089 units and holds 29.06% overall x86 market share.

    Complete report is here;

    Mike Bruzzone, Camp Marketing
  • Hifihedgehog - Monday, February 21, 2022 - link

    Stop spamming us with your Seeking Alpha armchair critiques of the market.

  • Qasar - Monday, February 21, 2022 - link

Hifihedgehog, the best part of the seeking alpha page: NO links to where he gets this " information " from. for all we know, it's either made up by him, or it's how HE views the data. either way, useless posts from him is what it looks like.
  • Mike Bruzzone - Monday, February 21, 2022 - link


My data is my own, in primary research for the Federal Trade Commission. That primary research is mainly ebay WW channel supply data, queried at high frequency for fidelity, which AMD, Intel, and Nvidia also maintain through in-house personnel, and where I duplicate that in-house function, which I am well aware of as a former Cyrix, ARM, NexGen, AMD, Samsung, Intel, IDT Centaur employee or consultant. I've been in my FTC role since May 1998; it is an academic studies role for which I receive no compensation. However, I am contracted by the USDOJ to recover the Intel Inside price fix, for which I receive a percent of the federal procurement 'overcharge' recovery. I also represent 27 States AG and 82 class actions as relator, expert / advocate, or witness.

Channel supply data is relied on in my academic studies role for the Federal Trade Commission and the United States Department of Justice, retained by the Congress of the United States on federal attorney enlistment; FTC v Intel Dockets 9288 and 9341, Intel production microeconomist, general systems assessment, and currently Docket 9341 consent order monitoring, which includes AMD, Intel, Nvidia, and VIA, and might as well include ARM Holdings on the competitive wrangling.

    The data is public for transparency otherwise under Docket 9341 discovery requirement only AMD, Intel, Nvidia and Via would see the data. I found that ineffective for regulation and remedial activity and it's my decision charged in the task by FTC and Congress at 15 USC 5.

So the base data is ebay; the industry relies on it for industrial management decision making, where ebay data replaced the Intel supply cipher in 2016, on the signal cipher SEC violation of 'looking ahead in time up to eight quarters to project Intel revenue and margin', and where ebay is simply real-time data, although projectable. Following ebay data precisely is an outstanding industry management tool for executive decision making.

Specific to management decision making, ebay data confirms component volumes by product category, down to the grade SKU, per quarter for Intel and AMD, competitively speaking; for managing and even determining complement board-house production volume, for channel inventory management, and the financial industry relies on it for assessment.

The second primary activity is preparing the ebay channel data (supply, volume, $1K price) for production economic assessment: cost, price, margin, primarily auditing for price-less-than-cost sales. 10-Q/K filings are relied on for financial assessment, comparing channel data for determining CPU volume discounts. Finally, for estimating by-product-category volumes per quarter, relying on the channel data as a check. The data is also good for determining fabrication yield, and TDP and frequency splits - all sorts of component-related production assessments.

The third primary research activity is systems analysis; the fourth is legal assessment for monitoring AMD, Intel, Nvidia, and Via compliance, although Via does not really count other than as one component of Docket 9341. The fifth moves to assessment responsibility in technocracy, regulation, and remedial activities associated with Docket 9341: responsible for Intel's discontinuation of the supply signal cipher, discontinuation of Intel Inside and multiple limiting archetypes associated with Intel Inside, securing the Intel Inside processor and processor-in-computer buyer price-fix recovery I expect to be completed this year, and monitoring Intel's reconfiguration from producing for supply (which holds channels financially and has a high cost) to producing for actual demand; it's all about Intel and industry cost optimization, essentially removing monopoly restraints, and there are channel cartel issues also being addressed and remedied.

    Mike Bruzzone, Camp Marketing
  • Qasar - Monday, February 21, 2022 - link

blah blah blah blah without sources linked in the blah blah blah you post, it's almost meaningless, as no one can see it for themselves and compare what it says vs what you interpret the data as being. in the end, it's personal opinion.
  • Mike Bruzzone - Monday, February 21, 2022 - link

Qasar, I said AMD, Intel, and Nvidia (and I will add Mercury Research) all rely on ebay data as the industry management tool for tracking supply, production, and economics, on an Intel model generally known as Total Cost / Total Revenue, and 10-Q/K financial assessment is just that, and we validate each other's work. There is no one I'm aware of who has challenged AMD, Intel, Nvidia, Mercury, or JPR, although JPR base data varies from my own but is still complementary. So do your research. mb
  • Qasar - Tuesday, February 22, 2022 - link

sure thing there mike brahzone, sure thing. again, with no links to the data you are looking at, someone could be looking at different data and come to a different conclusion. but whatever, maybe you don't post sources because you can't.
  • Mike Bruzzone - Monday, February 21, 2022 - link

Hifihedgehog, my observations are a collaborative form of group contribution that also offers data for thesis development / refinement and decision making. Mostly for industrial management, but also engineering decision-making frameworks.

Definition of SPAM: sending the same message indiscriminately to large numbers of recipients, or irrelevant or inappropriate messages sent on the internet to a large number of recipients.

    My contributions are collaborative and unique in every occurrence and are meant to spark insight and add value. Please consider your reversal, sorry, but think about it.

    Mike Bruzzone, Camp Marketing
  • mode_13h - Tuesday, February 22, 2022 - link

    > Stop spamming us with your Seeking Alpha armchair critiques of the market.

    It's easy enough to ignore, if you don't care to read it.

    I don't mind getting some market insights, because that's not something I generally pay much attention to. However, the business end of things can shed much light into the behavior of these companies - what products they introduce and when.
