Intel has revealed the first details of its upcoming next-generation Xeon processors, codenamed Sierra Forest and Granite Rapids, at the Hot Chips 2023 conference. The new chips will feature a tile-based design with separate compute and I/O chiplets, as well as new core architectures and memory technologies. Intel claims that the new Xeon chips will offer significant performance, efficiency, and scalability improvements over the current generation.
Sierra Forest: The First Efficiency-Core Xeon Processor
Sierra Forest is the first Xeon processor to use Intel’s Efficiency cores (E-cores), which are smaller and simpler cores that can deliver high throughput and density for cloud and hyperscale workloads. Sierra Forest will have up to 144 E-cores per socket, making it the most core-dense Xeon processor ever. Sierra Forest will also be the first Xeon processor to use the Intel 3 process node, which is expected to provide better power efficiency and transistor density than the Intel 7 node.
Sierra Forest will support up to 136 lanes of PCIe 5.0 or CXL 2.0, which are the latest standards for high-speed interconnects between processors, memory, and accelerators. Sierra Forest will also support up to 12 channels of DDR5-6400 memory per socket, as well as a new DIMM type called MCR (Multiplexer Combined Ranks), which can provide up to 40% more memory bandwidth than standard DDR5 DIMMs.
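To put those memory figures in perspective, a quick back-of-the-envelope calculation (a sketch using the article's numbers; sustained bandwidth in practice is lower than these theoretical peaks, and the 40% MCR uplift is Intel's claim, not a measured value):

```python
# Theoretical peak memory bandwidth per socket from the article's figures.
MT_PER_S = 6400e6        # DDR5-6400: 6400 million transfers per second
BYTES_PER_TRANSFER = 8   # 64 bits of data per channel per transfer
CHANNELS = 12            # channels per socket

ddr5_peak_gbs = MT_PER_S * BYTES_PER_TRANSFER * CHANNELS / 1e9
mcr_peak_gbs = ddr5_peak_gbs * 1.4  # article's claimed ~40% MCR uplift

print(f"DDR5-6400 x 12 channels: {ddr5_peak_gbs:.1f} GB/s peak")
print(f"With ~40% MCR uplift:    {mcr_peak_gbs:.1f} GB/s peak")
```

That works out to roughly 614 GB/s of peak DDR5 bandwidth per socket, and around 860 GB/s with the claimed MCR uplift.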
Sierra Forest is on track for launch in the first half of 2024, and Intel has already shipped the first samples to customers. Intel also demonstrated a working prototype of Sierra Forest at the Hot Chips event, showing all 144 cores running a workload.
Granite Rapids: The Next Performance-Core Xeon Processor
Granite Rapids is the successor to Emerald Rapids, which is the fifth-generation Xeon processor that uses Intel’s Performance cores (P-cores). P-cores are the traditional Xeon cores that offer high per-core performance and AI capabilities for demanding workloads. Granite Rapids will have up to 48 P-cores per socket, based on a new microarchitecture called Redwood Cove.
Granite Rapids will also use the tile-based design with separate compute and I/O chiplets: its compute tiles are built on the Intel 3 process node, while the I/O tiles use Intel 7. Granite Rapids will support up to 136 lanes of PCIe 5.0 or CXL 2.0, as well as up to 12 channels of DDR5-6400 or MCR memory per socket.
Granite Rapids will offer up to 3x the performance in mixed AI workloads compared to the current generation, thanks to enhancements to AMX (Advanced Matrix Extensions), Intel's matrix multiplication instructions, which gain support for additional data types to accelerate deep learning operations. Granite Rapids will also offer up to 2.8x more memory bandwidth than the current generation, thanks to its 12 memory channels and support for high-speed MCR DIMMs.
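The operation AMX accelerates is tiled matrix multiply-accumulate: the hardware works on small fixed-size tiles of the input matrices held in dedicated registers. The pure-Python sketch below illustrates the tiling pattern only; real AMX operates on int8/bf16 tiles via CPU instructions, not Python loops, and the `tile` size here is arbitrary:

```python
# Illustrative sketch of the tiled multiply-accumulate pattern that
# matrix engines like AMX implement in hardware.
def matmul_tiled(a, b, tile=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                # One tile's worth of multiply-accumulate work; in AMX
                # this inner block is a single hardware instruction.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for p in range(p0, min(p0 + tile, k)):
                            c[i][j] += a[i][p] * b[p][j]
    return c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul_tiled(a, b))  # [[19, 22], [43, 50]]
```

Because deep learning inference and training are dominated by exactly this kind of matrix arithmetic, accelerating the inner tile step in hardware is what drives the claimed AI speedups.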
Granite Rapids is expected to launch in 2024, after Emerald Rapids, which is scheduled for release in late 2023. Emerald Rapids will be compatible with the same platform as Sapphire Rapids, which is the fourth-generation Xeon processor that is currently ramping up in production.
Intel’s Dual-Track Roadmap for Data Center Innovation
Intel’s new Xeon roadmap shows that the company is pursuing a dual-track strategy for its data center products, offering both P-core and E-core processors for different customer segments and needs. Intel says that this approach will enable it to deliver more innovation and flexibility for its data center customers, as well as to compete more effectively with its rivals such as AMD and Nvidia.
Intel also says that it has resolved the issues that caused delays in its previous generations of Xeon processors, and that it is confident in its process technology roadmap. Intel plans to introduce its next process node, called Intel 18A, in 2025, and use it for its next-generation data center products, codenamed Clearwater Forest and Diamond Rapids.
Intel’s data center business is one of its most important segments, historically accounting for about a third of its revenue and a substantial share of its operating income. However, Intel has faced increasing competition from AMD, which has gained market share with its EPYC processors that offer more cores and better performance per watt than Intel’s Xeon processors. AMD also has a roadmap that includes new generations of EPYC processors based on advanced process nodes from TSMC.
Intel hopes that its new Xeon roadmap will help it regain its leadership position in the data center market and expand into new areas such as AI and edge computing. Intel says that it expects its data center business to grow at a compound annual growth rate (CAGR) of 10% from 2021 to 2025, driven by the increasing demand for cloud, enterprise, and network infrastructure.