Why is Moore's Law dying? How did Moore's Law die?
While the demise of Moore's Law is a growing concern, key players continue to innovate every year.

On March 24, 2023, Gordon Moore, the originator of Moore's Law and a co-founder of Intel Corporation, passed away. Born in San Francisco in 1929, Moore proposed "Moore's Law" in 1965, co-founded Intel in 1968, and was awarded the National Medal of Technology by President George H. W. Bush in 1990. He was both a scientist and a billionaire.

The core of Moore's Law is that the number of transistors that can be placed on an integrated circuit roughly doubles every 18 to 24 months. In other words, processor performance roughly doubles every two years while the price halves. The law's importance lies in its profound impact on the development of modern computers and electronics: it gave the semiconductor industry a stable guiding principle, driving rapid technological progress and cost reduction. The result has been continuous upgrades to computers, mobile devices, digital cameras, and audio and video equipment, which keep getting more powerful while remaining affordable.

In recent years, a view has circulated in the tech world that "Moore's Law has failed." While chip manufacturers have used various methods to keep pace with it, they cannot avoid the fact that the doubling effect has begun to slow down; there are physical limits to how far transistors can keep shrinking. In 2021, then-Intel CEO Pat Gelsinger publicly stated that "Moore's Law is still valid," predicting that it would hold for the next decade and that the pace of updates might even accelerate.

Here's everything you need to know about Moore's Law: what it means for processors, why people say it's dying, and how companies are looking for solutions.

A Descriptive Law of How the Chip Industry Has Operated for Decades

According to Moore's Law, if the industry can produce a processor with 1 million transistors one year, then two years later it should be able to produce a chip with 2 million transistors. This is largely tied to how chips are manufactured on process nodes: each new process should be denser than the last, which is why the industry was able to meet Moore's Law's predictions for decades.

So why keep increasing density instead of simply making larger chips every year? Because a single chip can only be so big. The largest chip ever manufactured in volume is about 800 mm², small enough to fit easily in the palm of your hand, so higher density is the only way to pack more transistors into a chip.

For most of this history, foundries were able to introduce a new process node every one or two years, keeping Moore's Law alive. New nodes also improved frequency (sometimes simply called performance) and energy efficiency, so moving to the latest process was generally desirable unless a company was making something more basic that didn't need it.

How Moore's Law Died

The industry came to expect new nodes every year or two, but in the 21st century a worrying sign emerged: the end of Dennard scaling. Dennard scaling held that as transistors shrank, their power density stayed roughly constant, which is what allowed ever-smaller transistors to run at ever-higher clock speeds. Around the 65-nanometer mark in the mid-2000s, this stopped holding true; at such tiny sizes, transistors began behaving in ways the old scaling rules did not account for. But the end of Dennard scaling pales in comparison to the crisis almost every foundry in the world faced around 32 nanometers in the early 2010s.
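As a brief aside before the 32-nanometer story, here is a rough sketch of what Dennard scaling promised. These are the classic, idealized constant-field scaling relations; the scale factor and the printed figures are illustrative only, not measurements of any real process, and the function name is made up for this example.

```python
# Idealized Dennard scaling: shrink a transistor's linear dimensions by a
# factor k, and voltage and current shrink with it. The payoff is that clock
# frequency can rise by roughly k while power density (watts per unit of chip
# area) stays about constant. Illustrative arithmetic only.

def dennard_scaling(k: float) -> dict[str, float]:
    """Relative change of key quantities when dimensions shrink by 1/k."""
    return {
        "dimensions":       1 / k,       # gate length, width, oxide thickness
        "voltage":          1 / k,
        "frequency":        k,           # shorter delays allow higher clocks
        "power_per_device": 1 / k**2,    # voltage and current both scale down
        "area_per_device":  1 / k**2,
        "power_density":    1.0,         # power and area shrink together
    }

# A full node shrink traditionally meant k of about 1.4, so area roughly halves.
for name, value in dennard_scaling(1.4).items():
    print(f"{name:>16}: x{value:.2f}")
```

Once voltage could no longer be lowered in step with dimensions, the power figures in this sketch stopped improving, which is broadly why clock speeds stalled in the mid-2000s.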
Shrinking transistors below 32 nanometers proved extremely difficult, and for years Intel was the only company to successfully transition to the 22-nanometer node, the next major step after 32 nanometers. Intel's competitors didn't catch up until the mid-2010s, and by then the industry had changed dramatically.

The number of companies able to manufacture industry-leading nodes had been declining for years, but it seemed to stabilize in the late 2000s and early 2010s. Then, as companies realized just how hard it was to get past 32-nanometer technology, many conceded defeat. Fourteen leading foundries reached the 45-nanometer node, but only six made it to 16 nanometers. Today, only three remain at the leading edge: Intel, Samsung, and TSMC, and many predict that Samsung or Intel will eventually drop out as well.

Even the companies still capable of developing new nodes cannot match the generational gains of older ones. Making chips denser is becoming increasingly difficult; TSMC's 3nm node, for example, failed to shrink its SRAM cache cells, a disastrous outcome. And while density gains have shrunk with each generation, production costs have risen, causing the cost per transistor to stagnate since 32nm and making it harder to sell processors at lower prices (a short worked example appears at the end of this article). Performance and efficiency improvements are also not as significant as before.

All of this combined means Moore's Law is dead. It isn't just about failing to double the number of transistors every two years; the problem is rising prices, performance bottlenecks, and the inability to keep improving efficiency as easily as before. It is a survival issue for the entire chip industry.

Even as Moore's Law Dies, How Can Companies Meet Its Expectations?

While the demise of Moore's Law is undoubtedly a growing concern, key players innovate every year, and many are looking for ways to sidestep the manufacturing problems that have plagued the industry. Moore's Law is literally about transistors, but its spirit stays alive as long as each generation delivers the traditional performance improvements, and the industry has tools at its disposal that didn't even exist a decade ago.

AMD and Intel's chiplet technology (Intel calls its version "tiles") has proven able to meet Moore's Law's expectations not only for performance but also for transistor counts. A chiplet is essentially a small die that is combined with other chiplets to form a complete processor: while a single die can only be so big, you can in theory keep adding more dies to a single processor. AMD adopted chiplets in 2019, doubling the number of cores it offers in desktops and servers.

Nvidia, on the other hand, proudly declares Moore's Law dead and is betting everything on artificial intelligence. Performance can easily double or more by accelerating workloads with Tensor Cores, so Nvidia hasn't touched chiplets at all. However, AI is very much a software-intensive solution: DLSS, Nvidia's AI-driven upscaling technology, has to be implemented in games through collaboration between developers and Nvidia, and it is not without its flaws.

Besides these two approaches, the only other option is to improve the processor's architecture itself and extract higher performance from the same number of transistors.
Historically, this path has been difficult: while new processor generations have brought architectural improvements, the resulting performance gains have typically been only single-digit percentages. Even so, chip designers may need to lean more heavily on architectural upgrades from now on, because the slowdown in Moore's Law is not just a passing phase.
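As promised above, here is a rough worked example of the cost-per-transistor problem. All of the numbers are invented for illustration: the point is only that once the cost of manufacturing a square millimetre of silicon rises about as fast as transistor density improves, the cost of each individual transistor stops falling, no matter how dense the new node is.

```python
# Hypothetical numbers only: why the cost of an individual transistor can
# stagnate even though density keeps improving. Cost per transistor is
# (cost to make one mm^2 of silicon) / (transistors per mm^2).

def cost_per_transistor(cost_per_mm2: float, transistors_per_mm2: float) -> float:
    return cost_per_mm2 / transistors_per_mm2

# (node, relative cost per mm^2, relative density) -- invented values
nodes = [
    ("older node",  1.0, 1.0),
    ("mid node",    1.3, 2.0),   # density doubles, cost per mm^2 up only 30%
    ("newer node",  2.1, 3.2),   # cost now rising about as fast as density...
    ("newest node", 3.4, 5.2),   # ...so per-transistor savings dry up
]

baseline = cost_per_transistor(nodes[0][1], nodes[0][2])
for name, cost_mm2, density in nodes:
    relative = cost_per_transistor(cost_mm2, density) / baseline
    print(f"{name:>12}: cost per transistor = {relative:.2f}x the older node")
```

In this made-up progression, the first shrink still makes transistors dramatically cheaper; in the later rows the manufacturing cost grows in step with density, so the per-transistor cost flatlines, which is the stagnation described in the section on how Moore's Law died.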