What Is Moore’s Law?
In 1965, Gordon Moore observed that the number of transistors in a dense integrated circuit was doubling roughly every year, a pace he later revised to every two years, thereby steadily increasing processing power. In 1968, Moore went on to co-found Intel with Robert Noyce, and his observation became the driving force behind Intel’s success with the semiconductor chip. The fact that Moore’s Law has survived for over 50 years as a guide for innovation surprised Moore himself; in a 2015 interview, he described several potential obstacles to further miniaturization: the speed of light, the atomic nature of materials and growing costs.
Nevertheless, technologists have internalized Moore’s Law and grown accustomed to the idea that computing power doubles every couple of years, just as Moore observed over 50 years ago, and until recently, that held true. Now, however, Moore’s Law is becoming obsolete. Why? What is its greatest limitation? And what alternatives do we have?
Moore’s Law Definition
Moore’s Law refers to the observation that the number of transistors in a dense integrated circuit doubles about every two years.
Moore’s Law and the Microprocessor
First, a little background: a CPU (central processing unit) carries out a computer’s basic arithmetic and logic operations. A microprocessor incorporates the functions of a CPU on a single integrated circuit, which is itself built from transistors. Nowadays, a CPU is a microprocessor: a single chip containing billions of transistors. The Xbox One’s processor, for instance, has about 5 billion.
The first Intel microprocessor, the Intel 4004, had 2,300 transistors, each about 10 microns in size. As of 2019, the average mass-market chip is built on a 14-nanometer (nm) process, with many 10 nm chips having entered the market in 2018. Intel has managed to pack over 100 million transistors into each square millimeter. The smallest experimental transistors approach 1 nm. It doesn’t get much smaller than that.
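To get a feel for what this doubling means in practice, here is a minimal Python sketch that projects transistor counts forward from the Intel 4004, assuming an idealized strict two-year doubling; the 1971 starting year and the clean doubling rate are simplifying assumptions for illustration, not exact industry figures.

```python
# Idealized Moore's Law projection: transistor counts doubling every two years,
# starting from the Intel 4004 (assumed here to launch in 1971 with 2,300 transistors).
START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Transistor count predicted for a given year under a strict two-year doubling."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2019):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Run forward to 2019, this idealized curve lands in the tens of billions of transistors, which is roughly the right order of magnitude for today’s largest chips.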
Threats to Moore’s Law and Limits to Innovation
Atomic Scale and Skyrocketing Costs
The speed of light is finite and constant, and it places a natural limit on the number of computations a single transistor can perform; after all, information can’t travel faster than light. In today’s chips, bits are represented by electrons traveling through transistors, so the speed of computation is limited by how fast electrons move through matter. Wires and transistors are characterized by capacitance C (their capacity to store charge) and resistance R (how strongly they resist the flow of current). As features shrink, R goes up while C goes down, and it becomes harder to perform computations reliably.
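Two quick back-of-envelope numbers make these limits concrete: how long light takes to cross a chip, and the RC time constant τ = RC, which sets how quickly a wire can charge or discharge. The Python sketch below uses made-up but plausible example values purely for illustration; it is not a model of any real process node.

```python
# Back-of-envelope limits: light crossing a die, and the RC time constant of a wire.
SPEED_OF_LIGHT = 3.0e8   # meters per second
CHIP_WIDTH = 0.01        # meters (a 1 cm die)

light_crossing_time = CHIP_WIDTH / SPEED_OF_LIGHT
print(f"Light crosses the die in ~{light_crossing_time * 1e12:.1f} ps")

# RC time constant with illustrative example values (not real process data):
# a wire with 1 kilo-ohm resistance and 1 femtofarad capacitance.
resistance_ohms = 1e3
capacitance_farads = 1e-15
tau = resistance_ohms * capacitance_farads
print(f"RC time constant: ~{tau * 1e12:.1f} ps")
```

For comparison, a single cycle of a 3 GHz clock lasts about 333 picoseconds, so delays on the order of tens of picoseconds already consume a meaningful fraction of every cycle.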
As we continue to miniaturize chips, we’ll eventually bump into Heisenberg’s uncertainty principle, which limits how precisely a particle’s position and momentum can be known at the quantum level, and with it the reliability of ever-smaller transistors. James R. Powell calculated that, due to the uncertainty principle alone, Moore’s Law will be obsolete by 2036.
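As a generic back-of-envelope illustration (not Powell’s actual derivation), confining an electron to a region roughly the size of a 1 nm transistor already forces a large spread in its momentum:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta p \;\ge\; \frac{\hbar}{2\,\Delta x}
  \;=\; \frac{1.05\times 10^{-34}\ \mathrm{J\,s}}{2 \times 10^{-9}\ \mathrm{m}}
  \;\approx\; 5\times 10^{-26}\ \mathrm{kg\,m/s}.
```

For an electron (mass about 9.1 × 10⁻³¹ kg), that corresponds to a velocity uncertainty on the order of tens of kilometers per second, so at these scales a transistor can no longer treat its charge carriers as neatly localized.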
In fact, there may already be enough reason to ask, ‘Is Moore’s Law dead?’ Robert Colwell, former director of the Microsystems Technology Office at the Defense Advanced Research Projects Agency, points to 2020 and 7 nm as the last process technology node: “In reality, I expect the industry to do whatever heavy lifting is needed to push to 5 nm, even if 5 nm doesn’t offer much advantage over 7 (nm), and that moves the earliest end to 2022. I think the end comes right around those nodes.”
Another factor threatening the future of Moore’s Law is the growing cost of energy, cooling and manufacturing. Developing a new CPU or GPU (graphics processing unit) is expensive: a new 10 nm chip costs around $170 million to develop, a 7 nm chip almost $300 million and a 5 nm chip over $500 million. And those numbers only grow for specialized chips. For example, Nvidia spent over $2 billion on research and development to produce a GPU designed to accelerate AI.
The Future of Moore’s Law and Computing
Quantum Computing
Taking all these factors into consideration, it’s necessary to look for alternative ways of computing outside of the electrons and silicon transistors that Moore’s Law depends on.
One alternative, which continues to gain momentum, is quantum computing. Quantum computers are based on qubits (quantum bits) and exploit quantum effects such as superposition and entanglement, which lets them sidestep the miniaturization problems of classical computing. It’s still too early to predict when they will be widely adopted, but there are already interesting examples of their use in gaming. The most pressing issue for quantum computing is scaling machines from dozens of qubits to thousands, and eventually millions, of qubits.
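To make “superposition” and “entanglement” a little more concrete, here is a minimal state-vector sketch in Python with NumPy; it illustrates the underlying math rather than how a real quantum computer is programmed. A Hadamard gate puts one qubit into an equal superposition, and a CNOT then entangles it with a second qubit to form a Bell state.

```python
import numpy as np

# Single-qubit basis state and gate.
zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

# Two-qubit CNOT gate (control = first qubit, target = second qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, put the first qubit into superposition, then entangle the pair.
state = np.kron(H @ zero, zero)   # (|00> + |10>) / sqrt(2)
state = CNOT @ state              # Bell state: (|00> + |11>) / sqrt(2)

# Only |00> and |11> carry probability: measuring one qubit fixes the other.
for basis, amplitude in zip(["00", "01", "10", "11"], state):
    print(f"|{basis}>: probability {abs(amplitude) ** 2:.2f}")
```

The output shows a 50/50 split between |00> and |11> and nothing else, which is the signature of an entangled pair: the two qubits no longer have independent values.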
Specialized Architecture
Another approach is specialized architecture tuned to particular algorithms. This field is growing quickly thanks to heavy demand from machine learning. GPUs have been used for AI training for over a decade, and in recent years Google introduced TPUs (tensor processing units) to accelerate AI workloads. There are now more than 50 companies manufacturing AI chips, including Graphcore, Habana and Horizon Robotics, as well as most leading tech companies.
FPGA
FPGAs (field-programmable gate arrays) are, in practice, pieces of hardware that can be programmed after they are manufactured. The first commercial FPGA was introduced by Xilinx in 1985 (and fabricated by Seiko Epson), though reprogrammable hardware of various kinds can be traced back to the 1960s. FPGAs have come into fashion recently, especially through their use in data centers by both Intel and Microsoft; Microsoft has also used FPGAs to accelerate Bing search. A related concept is the ASIC (application-specific integrated circuit), which has lately been extremely popular for cryptocurrency mining.
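A rough way to picture how hardware can be “programmed after manufacturing” is the lookup table (LUT), the basic building block of an FPGA: a tiny truth-table memory whose contents decide which logic function the same physical block computes. The Python sketch below is only a conceptual model, not real FPGA tooling.

```python
# Conceptual model of an FPGA lookup table (LUT): a truth table loaded after
# "manufacturing" decides which logic function the same physical block computes.
class LUT2:
    """A 2-input lookup table configured by a 4-entry truth table."""

    def __init__(self, truth_table: list[int]):
        assert len(truth_table) == 4
        self.truth_table = truth_table

    def __call__(self, a: int, b: int) -> int:
        # The two input bits select which stored bit to output.
        return self.truth_table[(a << 1) | b]

# "Reprogramming" the same block first as an AND gate, then as an XOR gate.
and_gate = LUT2([0, 0, 0, 1])
xor_gate = LUT2([0, 1, 1, 0])
print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([xor_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

A real FPGA wires thousands of such configurable blocks together, which is why the same silicon can be repurposed for entirely different workloads after it leaves the factory.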
Spintronics, Optical Computing, and More
Yet another alternative to classical computing and Moore’s Law is to replace silicon or electrons with something else. Using the spin of electrons instead of their charge gives rise to spintronics: electronics based on spin. Spintronics is still largely in the research phase, with no mass-market devices. Scientists are also researching optical computing, which uses light to perform computations, but many obstacles remain before an industrial optical computer can be built.
Finally, we’re seeing an increasing number of experiments with non-silicon materials. Compound semiconductors combine two or more elements from the periodic table, such as gallium and nitrogen. Research labs are also testing transistors made from silicon-germanium or graphene. Last but not least, some researchers are exploring biological computing, which would use cells or DNA as integrated circuits, but this is still a work in progress.
To move beyond Moore’s Law we need to go beyond the limits of classical computing with electrons and silicon and enter the era of non-silicon computers. The good news is there are plenty of options, from quantum computing, to miracle materials like graphene, to optical computing and specialized chips. Whatever the path forward, the future of computing is definitely exciting! Rest in peace, Moore’s Law.