The evolution of computers and other gadgets over the last few decades has been exponential: a new model comes out just as you buy the last one, and companies release updated, more luxurious and advanced versions almost every year, sometimes twice a year. For that we can thank Gordon Moore and his 1965 prediction that the number of components on an integrated circuit would double every year, reaching an astonishing 65,000 by 1975.
When the prediction proved correct in 1975, he revised what has become known as Moore’s Law to a doubling of transistors on a chip every two years. While the law is more of a prophecy than any actual law or principle, it has held true for decades. But it seems that’s about to change.
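To see how quickly a doubling every two years compounds, here is a rough sketch in Python. It assumes a clean, uninterrupted doubling starting from the Intel 4004's roughly 2,300 transistors in 1971; real chips did not track the curve this neatly.

```python
# Rough illustration of Moore's Law compounding (assumes an idealized
# doubling every two years from the Intel 4004's 2,300 transistors).
def transistors(year, base_year=1971, base_count=2300):
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

for y in (1971, 1981, 1991, 2001, 2021):
    print(y, f"{transistors(y):,}")
```

Under this idealized assumption, 2,300 transistors in 1971 becomes tens of billions by the 2020s, which is roughly the order of magnitude of today's largest chips.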
A transistor controls, directs and amplifies an electrical signal using three leads: a source, a drain and a gate. When voltage is applied to the gate, an incoming current at the source is allowed to pass through to the drain. Take the voltage away from the gate lead and the current can no longer pass through.
This switching gives us a way to compute logical values, 1 and 0 in computer terms, based on whether or not a voltage is applied to the gate. Connect the drain lead of one transistor to the source lead or the gate lead of another transistor and suddenly it is possible to build wonderfully complex logic systems.
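The idea can be captured in a toy model. This is a sketch of the logic, not real electronics: a "transistor" here is just a function that passes its source signal only while the gate is on, and chaining such functions yields familiar logic gates.

```python
# Toy model: a transistor passes the source signal to the drain
# only while voltage is applied to the gate (illustrative only).
def transistor(source, gate):
    return source if gate else 0

# Two transistors in series behave like AND: current flows only
# when both gates are on.
def and_gate(a, b):
    return transistor(transistor(1, a), b)

def not_gate(a):          # idealized inverter (pull-up omitted)
    return 0 if a else 1

def nand_gate(a, b):      # NAND alone is enough to build any logic
    return not_gate(and_gate(a, b))
```

Because NAND is functionally complete, wiring enough of these together can, in principle, implement any digital circuit.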
Moore’s argument was that with the new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” Economically, it was a beautiful bargain: the more transistors you added, the cheaper each one got. He also realised that there was plenty of room for engineering advances that would increase the number of transistors you could affordably and reliably put on a chip.
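The economics are simple to illustrate with invented numbers: if fabricating a chip costs roughly the same however many components it carries, the cost per component falls in proportion as the count rises. The figures below are hypothetical, chosen only to show the shape of the bargain.

```python
# Toy numbers, invented for illustration: a fixed chip cost spread
# over more and more components makes each component cheaper.
def cost_per_component(chip_cost, components):
    return chip_cost / components

for n in (10, 100, 1000):
    print(n, cost_per_component(100.0, n))
```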
Much like neurons in the human brain, transistors are responsible for the effective functioning of almost every device, from watches to laptops and even medical equipment. If you can fit more transistors on a chip, you increase the chip’s abilities and efficiency.
Soon these cheaper, more advanced and powerful chips became what economists like to call a general-purpose technology: one so important that it spawns innovations and advancements across multiple industries. A few years ago, the information technology made possible by integrated circuits was credited with a third of US productivity growth since 1974. Almost every recent technology, from laptops to smartphones and GPS, is a direct reflection of Moore’s Law. It has also fueled recent breakthroughs in artificial intelligence and genetic research by giving machine-learning techniques the ability to sift through large amounts of data in record time to find answers.
Every year since 2001, MIT Technology Review has released a list of the 10 most important breakthrough technologies of the year. This list is almost exclusively filled with technology that is only possible because of the computation advances described by Moore’s Law.
Moore wrote that cramming more components onto integrated circuits would lead to “such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.”
The general idea is that if we continued adding more transistors to chips, great discoveries would follow. In the decades since, tech companies, governments, and academic and medical researchers have poured more and more money into this idea, which kept branching into new directions and breakthroughs with uncanny accuracy.
The End Of Moore’s Law
The question scientists are asking now, however, is: what’s the next step? What happens when Moore’s Law comes to an end? By all indications, that may already have happened. Many scientists argue that we are no longer at the rate of technological growth Moore predicted and that things are gradually declining. While the chip industry has kept Moore’s Law alive so far, Intel’s delay in releasing its next transistor technology, pushed from 2016 to 2017, was a downer. That, and the fact that Intel lengthened the time frame for releasing future generations of chip technology, has people asking whether the end of transistor-driven advancement is here and what should be done next.
Even if Intel and other chip producers can squeeze out a few more generations of advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over.
The reasons transistor scaling is slowing, and more transistors can’t simply be packed onto chips, come down to the following factors:
- Electrical leakage: As transistors get smaller they become more energy efficient. But the smaller they get, the harder it is for the channel carrying the electrical current to contain it.
- Heat: The difficulty in containing the current causes the transistor to produce excess heat, which affects not just that transistor but many around it.
- Leakage: The heat worsens leakage in the transistors, and billions of transistors leaking at the same time can threaten the integrity of the entire chip. To reduce this leakage and prevent overheating, the processor reduces the voltage it draws or throttles the number of transistors it uses, which limits the chip’s processing power.
- Economics: Doubling the transistors on each chip increases the heat generated. The largest purchasers of advanced processing chips are big businesses running large server rooms, and the cost of cooling those rooms can be considerable.
As businesses try hard to extend the life and performance of their equipment in order to save money, chipmakers responsible for fulfilling Moore’s Law bring in less revenue to devote to research and development.
In a recent paper, Neil Thompson and Charles Leiserson document ample room for improving computational performance through better software, algorithms and specialized chip architecture. One such idea is slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing efficient code, and they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, in today’s chips.
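A tiny example makes the point. The two functions below answer the same question, but the first rescans the list for every element while the second makes a single pass with a set; on large inputs the difference dwarfs any single generation of hardware improvement. This is a generic illustration of the algorithmic gains Thompson and Leiserson describe, not an example from their paper.

```python
# Same result, very different cost: a sketch of how algorithmic
# choices recover performance that hardware no longer hands out.
def has_duplicate_slow(items):
    # O(n^2): compare every pair with a nested scan
    return any(x == y for i, x in enumerate(items)
               for y in items[i + 1:])

def has_duplicate_fast(items):
    # O(n): one pass, remembering what we've seen in a set
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

For a million items, the slow version performs on the order of half a trillion comparisons; the fast one performs a million set lookups.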
This dramatic increase in circuit complexity was due to the steadily shrinking size of transistors over the decades. Initially measured in millimetres in the 1940s, the dimensions of a typical transistor in the early 2010s were more commonly expressed in tens of nanometres, a reduction factor of over 100,000. Transistor features measuring less than a micron (a micrometre, or one-millionth of a metre) were attained during the 1980s, when dynamic random-access memory (DRAM) chips began offering megabyte storage capacities.
By the start of the 21st century, these features approached 0.1 micron across, which allowed the production of gigabyte memory chips and microprocessors operating at gigahertz frequencies. Moore’s Law continued into the second decade of the 21st century with the introduction of three-dimensional transistors tens of nanometres in size.
Paolo Gargini, chair of the road-mapping organization, said: “By the early 2020s, even with super-aggressive efforts, we'll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” That time is already here, and the chips are down (pun intended).
Alternatives To Moore’s Law
Future technology will have to be more efficient, powerful and effective, but to ensure this, we will need something far more powerful than the current chips in use. There are a few candidate silicon-chip replacements we should look at.
Quantum Computing
Tech companies and startups, big and small, are in a race to deliver the first quantum computers, which will use the power of quantum physics to deliver unimaginable processing power through qubits, far more capable than silicon transistors. Before all this happens, though, physicists have to prove that quantum machines are better and more effective at completing tasks than regular computer chips.
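What makes a qubit different from a transistor's on/off state can be sketched in a few lines. This is a minimal classical simulation, not real quantum hardware: a qubit is represented as two amplitudes, and measurement probabilities are the squared magnitudes of those amplitudes.

```python
import math

# Minimal sketch (not a real quantum computer): a qubit as a pair of
# amplitudes over the |0> and |1> basis states.
def hadamard(state):
    # The Hadamard gate puts a definite state into equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)               # the definite |0> state
superposed = hadamard(zero)     # equal mix of |0> and |1>
probs = tuple(abs(x) ** 2 for x in superposed)   # 50/50 on measurement
```

Unlike a transistor, which is either on or off, the superposed qubit holds both outcomes at once until measured, and applying the Hadamard gate a second time returns it deterministically to |0>.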
Nanomagnetic Logic
Nanomagnetic logic transmits and computes data using bistable magnetization states, lithographically patterned into a circuit’s cellular architecture. It works much the way regular silicon transistors work, but instead of switching a transistor on and off to create binary code, the direction of magnetization is switched instead. The binary information can be processed using dipole-dipole interactions (the coupling between the north and south poles of neighbouring magnets).
Nanomagnetic logic is also kind to the environment: because it does not rely on electrical current, leakage is no longer an issue and power consumption is very low.
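A common way these dipole interactions are harnessed is a majority vote: a central magnet settles into whichever direction the net stray field of its three input magnets favors. The sketch below models that idea only, with +1/−1 standing for the two magnetization states; it is an abstraction, not device physics.

```python
# Sketch of nanomagnet majority logic: the output magnet aligns with
# the majority of its three inputs (+1 / -1 magnetization states).
def majority(a, b, c):
    return 1 if (a + b + c) > 0 else -1

# Fixing one input turns the majority gate into AND or OR of the
# other two inputs.
def nml_and(a, b):
    return majority(a, b, -1)

def nml_or(a, b):
    return majority(a, b, 1)
```

With an inverter alongside it, the majority gate is enough to build arbitrary logic, which is why it is the workhorse primitive in most nanomagnetic-logic proposals.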
Graphene and Carbon Nanotubes
While it may take years before it is commercially available, graphene is extremely strong, conducts electricity and heat, is just one atom thick with a hexagonal lattice structure, and is available in abundance.
Graphene, unlike silicon, lacks a natural bandgap, so a graphene transistor cannot easily be switched off, and without an off state it cannot generate binary code on its own. Graphene and carbon nanotubes are also still very new: whilst silicon-based computer chips have been developed for decades, graphene was discovered only 14 years ago. If graphene is to replace silicon-based chips in the future, a lot of research still needs to be done.
In spite of all this, graphene is still the most promising replacement for silicon-based chips. Gadgets like foldable laptops, super-fast transistors, virtually unbreakable phones and much more can, in theory, be achieved with graphene.