How Chiplets Provide a Path Forward for Custom Semiconductor Design
- SoC Design, ASIC Design
- Jul 9, 2019 7:33:00 AM
The Impending HSoC Revolution
This series of blogs looks at the future of the semiconductor industry, the current issues with cost, and the resulting stagnation of innovation. The number of new custom ASICs is shrinking, and FPGAs do not fully fill the void when it comes to high performance, low power, and RF/analog capability. Heterogeneous Systems on Chip (HSoCs), composed of individual chiplets fabricated in their optimal processes and connected on a silicon substrate using 2.5D integration, are the future, and the technology and business frameworks are evolving to make this paradigm a reality.
The previous blogs discussed the upwardly spiraling costs of ASIC design, examined potential solutions, and suggested that heterogeneous design using silicon interposers and chiplets is the industry's way forward. This blog explains chiplets in more detail.
The idea behind chiplets is simple. Rather than continuing to incorporate all functions into a complex monolithic die, as the industry has done for decades, the chiplet approach disaggregates those functions into separate pieces of silicon called chiplets. Unlike most off-the-shelf chips, chiplets may not be functional as stand-alone units and may need to be connected to other chiplets to provide value. The key technology that differentiates chiplets from existing chips is the silicon interposer that connects them. Multi-chip modules (MCMs), consisting of several chips wire-bonded or flip-chipped to an organic substrate, have been around for many years and offer advantages such as small form factor, low cost, feature-to-process optimization, and the ability of a manufacturer to control a subsystem design. Silicon interposers take this a step further: they shrink the package area, greatly increase the number of die-to-die interconnects, and at the same time greatly reduce the signaling power.
Silicon Interposer
The key enabling technology that has made chiplets viable is the silicon interposer and 2.5D integration. In this approach, chiplets are assembled next to each other and mounted on a piece of silicon, which may be fabricated in the same material as the chiplets themselves. Using silicon rather than an organic printed circuit board means the chiplets can be placed far closer together and the number of interconnects can be far larger, enabling system partitions not possible with traditional approaches. Furthermore, the ultra-small features of a silicon interposer greatly reduce interconnect resistance and parasitic capacitance, leading to a substantial reduction in I/O power dissipation compared to a discrete solution. The result is a modularized approach that can come close to SoC levels of power dissipation and size. Figure 1 shows a simplified block diagram of a hypothetical SoC. Most modern SoCs are constructed block by block, with the functional blocks interconnected by a flavor of the Advanced Microcontroller Bus Architecture (AMBA), such as AXI. This is a wide parallel bus allowing single-cycle data transfers and multiple masters with bus arbitration.
Figure 1 Typical SoC Block Diagram
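The interconnect-power claim above can be made concrete with a back-of-envelope dynamic-switching-power calculation. A minimal sketch, assuming illustrative round numbers for trace capacitance, supply voltage, and toggle rate (these are not measured figures for any particular process or package):

```python
# Dynamic switching power of one signal line: P = alpha * C * V^2 * f,
# where alpha is the activity factor, C the line capacitance, V the swing,
# and f the clock rate. All parameter values below are assumed for
# illustration only.

def switching_power(c_farads, v_volts, f_hz, activity=0.5):
    """Dynamic power (watts) of one line toggling with the given activity factor."""
    return activity * c_farads * v_volts**2 * f_hz

# Assumed parameters: a long PCB trace with a large swing versus a short,
# low-capacitance microbump link on a silicon interposer.
pcb_trace = switching_power(c_farads=10e-12, v_volts=1.8, f_hz=1e9)
interposer_trace = switching_power(c_farads=0.2e-12, v_volts=0.9, f_hz=1e9)

print(f"PCB trace:        {pcb_trace * 1e3:.2f} mW per line")
print(f"Interposer trace: {interposer_trace * 1e3:.3f} mW per line")
print(f"Reduction:        {pcb_trace / interposer_trace:.0f}x")
```

Because power scales with both capacitance and the square of the voltage swing, even modest reductions in each compound into a large per-line saving, which is what makes thousands of die-to-die wires affordable.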
Typically, when interconnecting multiple chips on an organic printed circuit board (PCB), standard, often serialized, interfaces are used. These allow large distances to be covered but add overhead and impose logical partitions on the design. From the chiplet design perspective, the key is to enable the same architectures within a heterogeneous design as would be present in a monolithic IC. This means having a mechanism to extend the AXI bus across chiplet boundaries.
While there is currently no standard interface for connecting chiplets, there are a number of contenders; a comparison of the emerging chiplet interfaces and their pros and cons is left to a future blog. Nonetheless, one leading candidate to become the standard is Intel's Advanced Interface Bus (AIB). AIB is a massively parallel, high-speed bus that takes advantage of the short transmission distances and extremely dense interconnect routing afforded by a silicon interposer. Since AIB is a parallel bus and largely a physical-layer interface (PHY), it is easily connected to an AXI bus, providing low latency and straightforward logical operation. Figure 2 shows a potential disaggregation of an SoC into three chiplets connected by AIB. As can be seen, the basic architecture of the HSoC is the same as it would be in a monolithic SoC.
Figure 2 SoC Partitioned into Chiplets
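The trade-off between a wide parallel die-to-die bus and a narrow serialized board-level link can be sketched with simple arithmetic. The lane counts and per-lane rates below are assumed round numbers for illustration, not published AIB specifications:

```python
# Rough aggregate-bandwidth comparison: many slow wires across an
# interposer versus a few fast SerDes lanes across a PCB. All figures
# are illustrative assumptions.

def aggregate_gbps(lanes, per_lane_gbps):
    """Total raw bandwidth across all lanes, in Gb/s."""
    return lanes * per_lane_gbps

# A silicon interposer can route thousands of wires, each at a modest rate;
# a board-level link uses a handful of power-hungry high-speed SerDes lanes.
parallel_bus = aggregate_gbps(lanes=1000, per_lane_gbps=2)
serial_link = aggregate_gbps(lanes=8, per_lane_gbps=25)

print(f"Parallel interposer bus: {parallel_bus} Gb/s")
print(f"Serialized PCB link:     {serial_link} Gb/s")
```

The parallel approach also avoids the serialization/deserialization latency and protocol overhead, which is why it pairs naturally with a latency-sensitive on-chip bus like AXI.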
Matching Function to Process
It no longer makes sense to integrate analog, RF, and dense memory functions in advanced CMOS, because these functions do not scale in size and therefore consume a disproportionate share of very expensive silicon area. The smallest-geometry CMOS processes are also not well suited to these functions. In fact, most modern memory is already composed of memory chiplets stacked on top of a memory-controller chiplet; for density, logistical, and manufacturing reasons, these stacks are typically constructed using 3D technology. The same is true for legacy interfaces and functions: they were designed to work at lower clock frequencies and/or higher supply rails, and implementing them in a cutting-edge process is a waste. It should also be noted that leakage increases as process geometry shrinks, leading to standby-power inefficiency when a circuit is implemented in a process node well beyond what its function requires.
Over the past 20 years, great strides in analog design have enabled high-quality analog circuitry in CMOS. Switching techniques, in which accuracy in time (exploiting the stability of the oscillator) replaces accuracy in device matching, as well as delta-sigma techniques, which relax filtering-accuracy requirements, have allowed designers to keep analog integrated. However, analog does not scale in area the way digital does, making it increasingly expensive to integrate. Furthermore, the dropping breakdown and rail voltages of advanced processes are beginning to prohibit some analog designs. By disaggregating the analog sections, they can be designed in a process that is not only optimal for their performance but also cheaper. Such an approach also significantly de-risks a complex SoC.
Yield and Re-use
Another major issue driving the disaggregation of large SoCs is yield. As die get bigger, defect-density-driven yield drops, and this drop can have a substantial effect on product cost. With a chiplet approach, each die can be fabricated in the optimal process for its function and kept to a minimum size, so the costs associated with process and yield can be managed effectively. Additionally, chiplets drive IP re-use in the most fundamental way: they reuse tested production silicon. In this model, verification, validation, and qualification times are substantially reduced, and time-to-market (TTM) and final quality are greatly improved. The use of pre-developed silicon and fundamentally simpler sub-blocks leads to a more predictable, less risky development process. For a family of chips, the heterogeneous approach allows costs to be amortized across multiple designs, significantly reducing total development cost. Finally, IP licensing costs can be reduced: rather than paying a new licensing fee for a given piece of IP with every new SoC development, reusing a chiplet amounts to paying a royalty each time the chiplet is used. This pay-as-you-go approach eases the burden on low-volume customers. Chiplets thus represent a step forward from the IP re-use model employed today: rather than licensing IP or a hard macro, designers obtain IP that has been reduced to practice and tested, with knowable performance.
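The yield argument can be illustrated with a simple Poisson defect-yield model, Y = exp(-A * D0), where A is die area and D0 is defect density. The defect density and die areas below are assumed round numbers, not foundry data, and assembly and test costs are ignored:

```python
import math

# Poisson yield model: fraction of good die falls exponentially with area.
# D0 and the die areas are illustrative assumptions only.
D0 = 0.2  # defects per cm^2 (assumed)

def poisson_yield(area_cm2):
    """Expected fraction of defect-free die at the given area."""
    return math.exp(-area_cm2 * D0)

def silicon_per_good_die(area_cm2):
    """Wafer area consumed per good die; bad die are discarded at wafer sort."""
    return area_cm2 / poisson_yield(area_cm2)

# One 600 mm^2 monolithic SoC versus three known-good 200 mm^2 chiplets.
monolithic = silicon_per_good_die(6.0)
chiplets = 3 * silicon_per_good_die(2.0)

print(f"Monolithic: {monolithic:.1f} cm^2 of wafer per good SoC")
print(f"Chiplets:   {chiplets:.1f} cm^2 of wafer per good set")
```

The advantage comes from screening: a defect on a small chiplet scraps only that chiplet, while a defect anywhere on a large monolithic die scraps the entire die, so the silicon consumed per good system is substantially lower in the partitioned case.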
Figure 3. Chiplet vs. External Chip Comparison
Upgradeability
One last advantage of heterogeneous design is the ability to upgrade a device and, to a significant extent, future-proof a product. The ability to create multiple feature-differentiated products from a collection of chiplets is also compelling. For example, by swapping out a memory chiplet, new applications can be tackled or different price points attained. As cybersecurity protocols advance, a security chiplet can be swapped out to address new threats without redesigning the whole SoC. Likewise, as interface standards evolve, a new interface chiplet can be swapped in. From a practical standpoint, this also holds feature creep in check, since the whole design is not reopened, only the targeted part that needs to change. From a military point of view, a chiplet in an HSoC can be swapped without requalifying the whole board, which can lead to significant savings and dramatic benefits in fighting obsolescence.
What’s Next
In this blog, I've talked about how chiplets and heterogeneous design can solve many of the current issues facing SoC designers, namely NRE, risk, time-to-market, and upgradeability/maintainability. In future blogs I will discuss the state of the chiplet ecosystem, the various initiatives to kickstart the HSoC revolution, and some of the challenges facing heterogeneous design.