
An Introduction to Die-to-Die Interconnects

Key Takeaways

  • Die-to-die interfaces provide seamless data transfer between silicon dies within a single package, offering enhanced power efficiency and bandwidth compared to traditional chip-to-chip interfaces.

  • The structure of die-to-die interconnects involves a PHY and controller block, enabling the connection between the interconnect fabric of two dies, supporting advanced packaging technologies.

  • Die-to-die interfaces find applications in scaling SoCs, splitting large SoCs, aggregation of functions, and disaggregation for improved performance, cost reduction, and process node optimization.

Silicon dies on a wafer: die-to-die interconnects have a variety of useful applications

A die-to-die interconnect is a functional block in an integrated circuit designed to establish a data interface between two silicon dies within a single package. Because the channels between dies in a package are extremely short, die-to-die interfaces deliver far better power efficiency and bandwidth density than conventional chip-to-chip interfaces.

These characteristics make die-to-die interfaces a key enabler of higher performance and more efficient data transfer within integrated systems. Read on as we discuss what die-to-die interconnects are, how they are structured, and how they work.

Die-to-Die Interconnect Introduction Summary


Die-to-Die Interconnect Structure

  • Comprises a PHY and controller block for seamless connection between interconnect fabric of two silicon dies.
  • Supports advanced packaging technologies like 2D, 2.5D, and 3D configurations.

Die-to-Die Interconnect Inner Workings

  • Logically divided into physical layer, link layer, and transaction layer.
  • Incorporates error detection and correction, such as FEC and CRC with retry, to ensure link reliability.

Die-to-Die Interface Applications

  • Used in compute-intensive sectors like HPC, networking, hyperscale data centers, and AI.
  • Enables scaling of SoC designs, splitting large SoCs, aggregation, and disaggregation.

Die-to-Die Interconnect Structure

The die-to-die interconnect structure comprises two essential components, the PHY and the controller block, which together establish a connection between the internal interconnect fabrics of two silicon dies.

In essence, die-to-die interconnect IP is the linchpin of multi-die approaches, enabling efficient and highly adaptable integrated systems.

The die-to-die PHY can employ various efficient architectures, such as high-speed SerDes or high-density parallel architectures, meticulously optimized to accommodate advanced packaging technologies like 2D, 2.5D, and 3D configurations.

Furthermore, the die-to-die interface is a pivotal catalyst for the ongoing industry shift from monolithic system-on-chip (SoC) designs toward multi-die SoC assemblies within a single package. This approach addresses the escalating costs and diminishing yields of small process nodes while offering greater product modularity and flexibility.

Die-to-Die Interconnect Implementation Example

One common implementation of the die-to-die interconnect structure uses chiplets: a large ASIC is partitioned into smaller components, each dedicated to specific functionality such as memory, I/Os, or analog functions. The resulting ASIC appears as a simplified central die surrounded by complementary blocks, all interconnected using die-to-die interfaces.

Another use of die-to-die interconnects is the modular separation of an SoC, with the SerDes intellectual property (IP) relocated onto a separate die. Die-to-die interfaces enable effective communication between the SoC and the SerDes chiplets.

Die-to-Die Interconnect Inner Workings

The inner workings of a die-to-die interconnect involve the establishment of a reliable data link between two silicon dies, akin to other chip-to-chip interfaces. The interface is logically divided into three layers: the physical, link, and transaction layers. The primary function of the interconnect is to create and sustain the link during chip operation while presenting a standardized parallel interface to the application, facilitating seamless connectivity with the internal interconnect fabric. Various error detection and correction mechanisms, such as forward error correction (FEC) and cyclic redundancy code (CRC) with retry capabilities, are incorporated to ensure link reliability.
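To make the link-layer mechanics concrete, the sketch below models CRC-protected data units with a retry decision in Python. The framing, field widths, and the `make_flit`/`check_flit` helpers are hypothetical illustrations, not taken from any actual die-to-die standard; real interfaces implement this logic in hardware.

```python
import zlib

def make_flit(seq: int, payload: bytes) -> bytes:
    """Frame a payload with a sequence number and a CRC-32 trailer."""
    body = seq.to_bytes(2, "big") + payload
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

def check_flit(flit: bytes):
    """Return (seq, payload) if the CRC matches, else None.

    A None result models the receiver rejecting the flit; the
    transmitter would then replay it from its retry buffer.
    """
    body, crc = flit[:-4], flit[-4:]
    if zlib.crc32(body).to_bytes(4, "big") != crc:
        return None
    return int.from_bytes(body[:2], "big"), body[2:]

# A clean flit passes; a corrupted flit fails and triggers a retry.
flit = make_flit(7, b"die-to-die data")
assert check_flit(flit) == (7, b"die-to-die data")
corrupted = flit[:-1] + bytes([flit[-1] ^ 0xFF])
assert check_flit(corrupted) is None
```

The same division of labor appears in the layered model above: CRC checking and retry belong to the link layer, while the physical layer only moves bits between the dies.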

The physical layer architecture of a die-to-die interface can be categorized as either SerDes-based or parallel-based.

  • A SerDes-based architecture involves parallel-to-serial (or serial-to-parallel) data conversion, impedance-matching circuitry, and clock data recovery (or clock forwarding). It supports high per-lane bandwidths, up to 112 Gbps, using signaling techniques like NRZ or PAM-4. The primary objective of a SerDes architecture is to minimize the number of I/O interconnects in simpler 2D-type packaging configurations such as organic substrates.
  • A parallel-based architecture employs multiple low-speed, straightforward transceivers in parallel, each comprising a driver and a receiver, utilizing forwarding clock techniques to simplify the overall architecture. It supports DDR-type signaling and effectively reduces power consumption in denser 2.5D-type packaging setups like silicon interposers.
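The lane-count trade-off between the two architectures can be sketched with simple arithmetic. The 112 Gbps SerDes rate comes from the text; the 1 Tbps target and the 4 Gbps DDR-type pin rate are assumed values for illustration:

```python
import math

def lanes_needed(target_gbps: float, per_lane_gbps: float) -> int:
    """Number of lanes required to reach an aggregate bandwidth target."""
    return math.ceil(target_gbps / per_lane_gbps)

TARGET_GBPS = 1000  # assumed ~1 Tbps aggregate design target

# SerDes-based: a few fast lanes (112 Gbps per lane, per the text)
serdes_lanes = lanes_needed(TARGET_GBPS, 112)    # 9 lanes
# Parallel-based: many slow pins (4 Gbps/pin is an assumed DDR-type rate)
parallel_pins = lanes_needed(TARGET_GBPS, 4)     # 250 pins
```

This is why SerDes suits pin-limited organic substrates, while the parallel approach relies on the much denser wiring of a silicon interposer to route its many low-speed signals.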

Among the available options for ultra-short-reach die-to-die IP implementations, SerDes interconnects are widely favored, especially in applications requiring multi-terabit throughput, such as high-speed Ethernet switches and integrated photonics systems. Achieving terabit-level performance requires eight lanes, each equipped with 100 Gigabit-plus SerDes technology, delivering efficiency of roughly a picojoule per bit.
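Those figures can be checked directly: eight lanes at 112 Gbps give just under 1 Tbps, and at roughly one picojoule per bit the link dissipates under a watt.

```python
lanes = 8
gbps_per_lane = 112     # 100 Gigabit-plus SerDes, per the text
pj_per_bit = 1.0        # roughly one picojoule per bit, per the text

throughput_gbps = lanes * gbps_per_lane                    # 896 Gbps, ~1 Tbps
link_power_w = throughput_gbps * 1e9 * pj_per_bit * 1e-12  # bits/s * J/bit ≈ 0.9 W
```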

Die-to-Die Interface Applications

Die-to-die interfaces find significant applications in various domains, particularly in compute-intensive and workload-heavy sectors such as high-performance computing (HPC), networking, hyperscale data centers, and artificial intelligence (AI). These interfaces are instrumental in achieving enhanced performance, product modularity, and process node optimization, thereby extending the reach of Moore's law.

One major use case for die-to-die interfaces is scaling system-on-chip (SoC) designs. Connecting multiple dies through die-to-die links increases compute power and allows multiple SKUs for server and AI accelerators to be built from the same dies. This approach provides tightly coupled performance across the dies, enabling efficient scaling of computational capabilities.

Another important application lies in splitting large SoCs into multiple dies. As compute and network switch dies approach the limits of reticle size, dividing them into several smaller dies becomes a feasible solution. This division not only improves technical feasibility but also enhances yield, reduces costs, and extends the viability of Moore's law.
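The yield benefit of splitting a large die can be illustrated with the classic Poisson die-yield model, Y = exp(-D * A). The defect density and die areas below are assumed values chosen only to show the effect:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.5          # assumed defects per cm^2 on an advanced node
big_die = 8.0    # assumed cm^2, near the reticle limit

y_monolithic = poisson_yield(D, big_die)        # ~1.8% of large dies are good
y_chiplet = poisson_yield(D, big_die / 4)       # ~37% of quarter-size dies are good
```

Because each chiplet can be tested before assembly (known-good-die testing), a single defect now scraps only a quarter-size die rather than the whole reticle-limit die, which is where the cost and yield advantage comes from.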

Die-to-die interfaces also play a crucial role in aggregation scenarios, where disparate functions implemented in different dies are combined to leverage the optimal process node for each function. This approach enables power reduction, improved form factors, and enhanced performance in applications such as Field-Programmable Gate Arrays (FPGAs), automotive systems, and 5G base stations.

Finally, die-to-die interfaces also support the concept of disaggregation, which involves separating the central chip from the I/O chip. This separation allows for easy migration of the central chip to advanced processes while keeping the I/O chips in more conservative nodes. This strategy lowers the risk and cost of product evolution, enables the reuse of I/O chips, and improves time-to-market in applications like servers, FPGAs, and network switches.

Overall, die-to-die interfaces serve as versatile solutions in various application domains, enabling scalability, modularity, and optimization while meeting the demands of compute-intensive and data-intensive systems.

Ready to harness the power of die-to-die interconnects and optimize your integrated circuits? With Allegro X Advanced Package Designer, you can take full advantage of this technology to achieve seamless data transfer, enhanced performance, and improved efficiency. Experience the future of advanced packaging and unlock the potential of your designs. Get started with Allegro X Advanced Package Designer today and stay ahead in the world of integrated circuits.

Leading electronics providers rely on Cadence products to optimize power, space, and energy needs for a wide variety of market applications. To learn more about our innovative solutions, talk to our team of experts.