In the past few decades, the global semiconductor industry's growth has been driven mainly by demand for cutting-edge electronic devices such as desktops, laptops, and wireless communication products, as well as by the rise of cloud-based computing. Growth will continue, with new application drivers emerging in the high-performance computing market segment.
Below are five trending innovations that will shape the future of semiconductor technology.
CMOS transistor density scaling still follows Moore's Law and is expected to continue for the next eight to ten years. It will be enabled mainly by advances in EUV patterning and by novel device architectures that allow logic standard cell scaling.
Extreme ultraviolet (EUV) lithography was introduced at the 7nm technology node to pattern some of the most critical chip structures in a single exposure step. Beyond the 5nm technology node (i.e., when critical back-end-of-line (BEOL) metal pitches drop below 28-30nm), multi-patterning EUV lithography becomes inevitable, adding significantly to the wafer cost.
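The 28-30nm pitch threshold quoted above can be reproduced from the Rayleigh resolution criterion. The sketch below is a minimal illustration, assuming a standard 0.33-NA EUV scanner at a 13.5nm wavelength and a practical k1 factor of roughly 0.35-0.4; the exact limit depends on the tool and process.

```python
# Minimal sketch: single-exposure pitch limit from the Rayleigh criterion.
# Assumptions: 0.33-NA EUV optics, 13.5 nm wavelength, k1 of ~0.35-0.4.

WAVELENGTH_NM = 13.5   # EUV exposure wavelength
NA = 0.33              # numerical aperture of current EUV scanners

def min_pitch_nm(k1: float) -> float:
    """Smallest printable pitch = 2 * k1 * wavelength / NA."""
    return 2 * k1 * WAVELENGTH_NM / NA

for k1 in (0.35, 0.40):
    print(f"k1 = {k1:.2f}: minimum single-exposure pitch ~ {min_pitch_nm(k1):.1f} nm")

# Prints roughly 28.6 nm and 32.7 nm, which is why metal pitches below
# ~28-30 nm push the process toward multi-patterning EUV.
```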
With the innovations mentioned above, transistor density follows the path mapped out by Gordon Moore. But node-to-node performance improvements at fixed power, referred to as Dennard scaling, have slowed down due to the inability to keep scaling the supply voltage.
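This slowdown can be illustrated with the classic dynamic-power relation P ≈ α·C·V²·f. The numbers below are normalized and purely illustrative, not measured data; they simply show that once the supply voltage stops shrinking, a node-to-node capacitance reduction alone no longer keeps power in check at higher frequency.

```python
# Illustrative sketch of why stalled voltage scaling ends Dennard scaling.
# Dynamic power per transistor: P ~ alpha * C * V^2 * f.

def dynamic_power(alpha: float, c: float, v: float, f: float) -> float:
    return alpha * c * v**2 * f

alpha, c, v, f = 0.1, 1.0, 1.0, 1.0   # normalized baseline node

# Classic Dennard scaling: C and V shrink ~0.7x per node while frequency
# rises ~1.4x, so power per transistor drops along with its area.
p_dennard = dynamic_power(alpha, 0.7 * c, 0.7 * v, f / 0.7)

# Today: capacitance still scales, but V barely moves.
p_no_v_scaling = dynamic_power(alpha, 0.7 * c, v, f / 0.7)

print(f"baseline power:          {dynamic_power(alpha, c, v, f):.2f}")
print(f"ideal Dennard scaling:   {p_dennard:.2f}")   # ~0.05: power density holds
print(f"without voltage scaling: {p_no_v_scaling:.2f}")  # ~0.10: power density rises
```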
Researchers worldwide are looking for ways to compensate for this slowdown and further improve chip performance. Buried power rails, for instance, are expected to offer a performance boost at the system level thanks to improved power distribution.
2D materials such as tungsten disulfide (WS2) in the channel promise performance improvements because they enable more aggressive gate length scaling than Si or SiGe. A promising 2D-based device architecture stacks multiple sheets, each surrounded by a gate stack and contacted from the side. Simulations suggest that such devices can outperform nanosheets at the scaled dimensions targeted for the 1nm node and beyond.
The total memory IC market forecast suggests that 2020 will be a flat year for memory relative to 2019, an evolution partly attributable to the COVID-19 slowdown. Beyond 2021, this market is expected to start growing again. The emerging non-volatile memory market is expected to grow at a compound annual growth rate above 50%, driven mainly by demand for embedded magnetic random-access memory (MRAM) and standalone phase-change memory (PCM).
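To put that growth rate in perspective, the short calculation below shows what a sustained 50% compound annual growth rate implies over five years; the starting market size is a normalized placeholder, not a figure from this article.

```python
# Compound-growth arithmetic for a >50% CAGR, starting from a
# normalized market size of 1.0 (placeholder, not a real market figure).

CAGR = 0.50          # assumed lower bound of the quoted ">50%" growth rate
market = 1.0

for year in range(1, 6):
    market *= 1 + CAGR
    print(f"year {year}: {market:.2f}x the starting market size")

# After five years, a 50% CAGR already multiplies the market by ~7.6x.
```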
NAND storage will continue to scale incrementally, without disruptive architectural changes in the next few years. Today's most advanced NAND products feature 128 storage layers. 3D scaling will continue with additional layers, potentially enabled by wafer-to-wafer bonding.
For DRAM, cell scaling is slowing down, and EUV lithography may be needed to improve patterning. Samsung recently announced EUV-based DRAM production for its 10nm-class (1a) node.
In the industry, we see more examples of systems being built through heterogeneous integration, leveraging 2.5D or 3D connectivity. These options help address the memory wall, add functionality to form-factor-constrained systems, or improve the yield of large chip systems. With logic PPAC (performance-power-area-cost) scaling slowing down, smart functional partitioning of the SoC (system on chip) provides another knob for scaling.
A typical example is the high-bandwidth memory (HBM) stack, consisting of stacked dynamic random-access memory (DRAM) chips that connect directly through a short interposer link to a processor chip (GPU or CPU). More recent examples include die-on-die stacking in Intel's Lakefield CPU and chiplets on an interposer in AMD's 7nm Epyc CPU. In the future, we expect to see many more of these heterogeneous SoCs as an attractive way to improve system performance.
Heterogeneous integration is enabled by 3D integration technologies such as die-to-die or die-to-Si-interposer stacking using Sn micro-bumps, or die-to-silicon stacking using hybrid Cu bonding. State-of-the-art Sn micro-bump pitches in production have saturated at about 30µm.
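The pitch number matters because it sets how many die-to-die connections fit in a given footprint. The sketch below is a simple geometric estimate assuming a square bump grid; 30µm reflects the micro-bump pitch quoted above, while the finer pitches are representative values often cited for hybrid Cu bonding rather than figures from this article.

```python
# Rough estimate of vertical interconnect density versus bonding pitch,
# assuming a regular square grid of connections (simplified geometry).

def connections_per_mm2(pitch_um: float) -> float:
    """Connections per square millimetre for a square grid at the given pitch."""
    return (1000.0 / pitch_um) ** 2

# 30 um: Sn micro-bump pitch quoted in the text.
# 10 um and 1 um: representative hybrid Cu bonding pitches (assumed values).
for pitch in (30.0, 10.0, 1.0):
    print(f"{pitch:>5.1f} um pitch -> {connections_per_mm2(pitch):>12,.0f} connections/mm^2")
```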
With expected growth of more than 100% over the next five years, edge AI is one of the chip industry's biggest trends. As opposed to cloud-based AI, inference functions are embedded locally on Internet of Things (IoT) endpoints that reside at the edge of the network, such as cell phones and smart speakers. The IoT devices communicate wirelessly with an edge server located relatively close by. This server decides which data is sent on to the cloud server (typically data needed for less time-sensitive tasks, such as re-training) and which data gets processed on the edge server itself.
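As a purely hypothetical illustration of that split, the sketch below routes incoming tasks either to local edge processing or to the cloud based on latency sensitivity; the function, data classes, and routing rule are invented for illustration and do not come from the article.

```python
# Hypothetical sketch of the edge/cloud split described above.
# All names and the routing rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_sensitive: bool   # e.g. a voice command vs. model re-training data

def route(task: Task) -> str:
    """Process time-critical work on the edge server; defer the rest to the cloud."""
    return "edge server" if task.latency_sensitive else "cloud server"

tasks = [
    Task("wake-word inference", latency_sensitive=True),
    Task("re-training data upload", latency_sensitive=False),
]
for t in tasks:
    print(f"{t.name} -> {route(t)}")
```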
Compared to cloud-based AI, where data needs to move back and forth between the endpoints and the cloud server, edge AI addresses privacy concerns more effectively. It also offers faster response times and reduces cloud server workloads.
Today, commercially available edge AI chips, the chips inside the edge servers, offer efficiencies in the order of 1 to 100 tera operations per second per watt (TOPS/W), using fast GPUs or ASICs for computation.
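To make the TOPS/W figure concrete, the short calculation below estimates the power budget needed to sustain a given inference workload at different efficiencies; the 10 TOPS workload is an assumed example, not a benchmark from the article.

```python
# Illustrative arithmetic: power needed to sustain an inference workload
# at different chip efficiencies. The 10 TOPS workload is an assumption.

WORKLOAD_TOPS = 10.0   # tera-operations per second required (assumed example)

for efficiency_tops_per_w in (1.0, 10.0, 100.0):
    power_w = WORKLOAD_TOPS / efficiency_tops_per_w
    print(f"{efficiency_tops_per_w:>6.1f} TOPS/W -> {power_w:>6.2f} W "
          f"for a {WORKLOAD_TOPS:.0f} TOPS workload")
```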