
More than Moore, greener Moore

Nicolas Gaudilliere
June 18, 2020

We continue our ‘Words of the Day After’ series with an emphasis on Moore’s law.

Like solid geometry, explaining exponential functions is one of those bittersweet school memories reminding us that our brain loses agility with age. The physicist Albert Bartlett described this deadlock:

“The greatest shortcoming of the human race is our inability to understand the exponential function.”

The shape of an exponential function has no equivalent in the visible world. Let’s say it has the form of a skateboard ramp, or of the two rising curves of the Eiffel Tower. A long slide towards infinity. We lose our natural bearings. None of the applications of exponential functions in physics can be seen with the naked eye.

The characteristic of an exponential trajectory is that it defies our predictions. It quickly catches up with us, even when we run. In short, it outflanks us. That is what Moore’s law is about, and it has been driving the evolution of digital technologies since the mid-1960s.

Moore’s law, metronome of the day before

The semi-conductor sector has followed a step-by-step path of exponential progress since Gordon Moore, then an engineer at Fairchild Semiconductor and later a co-founder of Intel, first described it in 1965. With hindsight, the formulation of this law has settled into a simple formula: on average, thanks to miniaturisation, the number of transistors that can be placed on a microprocessor doubles every 18 months.

Moore’s law is not a law. It does not stem from any natural or irremediable physical force, and there is nothing magical about it. It is the manifestation of a particularly rich and long-lasting technological deposit: silicon (sand) and its ability to control electric current. These properties were first put to good use in transistors (invented in 1947), then in integrated circuits grouping several transistors together (1958), then in more sophisticated chips, microprocessors (1971). In increments, for 70 years, the semi-conductor industry has always managed to put more transistors on its chips, to shrink them, or to improve their layout.

Moore’s law became part of our lives. It changed our consumption habits through the rapid obsolescence of our radios, our TVs, our computers, our telephones, etc. It took nearly 50 years to equip most households with a telephone. Internet and mobile telephones were adopted by the majority of households within less than 15 years. Since 2007, the deployment of smartphones has taken half the time.

Moore’s law makes us dizzy. Can you tell me, for example, given its exponential trajectory (a doubling every 18 months), by what factor the number of transistors on a microprocessor multiplies over a decade? … … More! … … … Sixty-four at the very least: six doublings already multiply it by 64, and a full ten years brings it closer to a hundredfold.
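For readers who like to check the arithmetic, here is a minimal back-of-the-envelope sketch (in Python, purely illustrative) of how an 18-month doubling period compounds:

```python
# Back-of-the-envelope check of the compounding above.
# Assumption: a steady doubling every 18 months, the popular formulation
# of Moore's law cited earlier. Figures are illustrative only.

DOUBLING_PERIOD_MONTHS = 18

def growth_factor(years: float) -> float:
    """Multiplication factor after `years` of steady 18-month doublings."""
    doublings = years * 12 / DOUBLING_PERIOD_MONTHS
    return 2 ** doublings

for years in (3, 6, 9, 10):
    print(f"{years:>2} years -> x{growth_factor(years):,.0f}")

# Approximate output:
#  3 years -> x4
#  6 years -> x16
#  9 years -> x64
# 10 years -> x102
```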

Moore’s law has outrun us. In 1996, the American federal authorities built a supercomputer, ASCI Red, to have the fastest machine in the world, at a cost of 55 million dollars. Today, my son’s game console has more computing power than that super-machine.

Not only has Moore’s law outrun us, it has also swept away industrial empires: the bankruptcy of Kodak in 2012, the difficulties of the music industry and of BlackBerry.

Moore’s law has even worn out our units of measure. The system of metric prefixes had to be extended in 1991, and the prefixes created then are already being exhausted by the growth of Internet Protocol traffic[1]. A new revision will soon be needed.

With Moore’s law, digital technology has become a lifeblood common to all technologies, and Moore’s law is the speed of propagation of this lifeblood. This accelerated growth has strong similarities with previous revolutions of general-purpose technologies. Like electricity or the railways in the past, digital is devouring the world. It is transforming everything it touches to such an extent that we could extend the famous phrase uttered by Marc Andreessen in 2011 to “Digital is eating the world”.

Moore’s law has found new drivers of growth. Microprocessors can switch a current on and off. This gives them the ability to transmit a binary signal: 0 or 1, the characteristic signal of digital information. This signal has gradually been extended to all forms of information. Encoding it in the form of bytes, which we call digitisation, has made it possible to widen the field considerably. Images, sounds, words, locations in space, languages, etc.: all this information has found a common yardstick. As Intel says:

Moore’s law has become the “metronome of the modern world”.
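As a small, purely illustrative aside, the sketch below (Python) shows what this encoding in the form of bytes looks like in practice, turning a word and a location in space into the 0s and 1s a chip actually manipulates:

```python
# Purely illustrative: what "digitisation" looks like at the level of bits.
# The word and the coordinates below are arbitrary examples.
import struct

word = "Moore"
location = (48.8584, 2.2945)  # latitude and longitude of the Eiffel Tower

# Text becomes bytes through a character encoding (here UTF-8)...
text_bytes = word.encode("utf-8")
print(" ".join(f"{b:08b}" for b in text_bytes))  # the underlying 0s and 1s

# ...and numbers become bytes through a binary representation (here IEEE 754).
coord_bytes = struct.pack(">dd", *location)
print(coord_bytes.hex())  # 16 bytes encoding the two coordinates
```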

From More than Moore to greener Moore

We are convinced that a central question will give structure to the day after. As the accelerator of progress is being pressed like never before, whose foot is on the pedal?

One thing is sure: the semi-conductor industry will continue to be a strategic part of it. But not only thanks to silicon technologies. For this industry, Moore’s “law” is passing; its deposit will soon be depleted. The companies in this industry (Intel, Samsung, Texas Instruments, Toshiba, etc.) have continuously sought to postpone this deadline, which they call “The Wall”. In an unprecedented move, they even grouped together in the early 1990s to form a federation, the International Technology Roadmap for Semiconductors (ITRS), tasked with tracking each one’s progress. This federation defines strategic orientations and guides research. For example, it accompanied the birth of 3D transistors. This strategy has paid off[2].

The physical limit of silicon chips is now close. A size of two to three nanometres per transistor is considered the impassable horizon for the industry, and it should be reached around 2022. Long before then, the industry will face the rising cost of the photolithography that further miniaturisation requires. After 50 years of good and loyal service, Gordon Moore’s law is therefore getting ready to make way for its replacement. Moore himself recognised it at a conference nine years ago, admitting that the explanatory power of his law would be eroded around… 2017. We are therefore already beyond that point.

In 2017, precisely, Intel decided to postpone the transition to 10 nm transistors. This decision undoubtedly spelled the end of Moore’s law. In March, in a white paper, the semi-conductor consortium (ITRS) recorded that Moore’s law was dead. The title was bold: “More than Moore”.

What will “More than Moore” be? In the strict field of semi-conductors, the possibilities today are numerous and exploratory.

They depend massively on the progress of Artificial Intelligence. The network effect specific to these technologies is a recent extension of Moore’s law into the data economy. When a data platform increases its ability to reproduce the world’s information, its explanatory power grows faster than the simple increase in the connected data would suggest. The Israeli company Waze, for example, developed a road navigation and information application based on GPS data. It operates in a very competitive market, and it had the great idea of using maps built by its own users. Navigation happens in real time, and the traffic situation is taken into account: users can report an accident, roadworks, a hazard, congestion or a speed camera at any time. This type of application is demanding in terms of usage. It is of little interest if 10% of drivers use it, but it becomes extremely powerful once a majority of the population uses it. The issue is similar today with contact-tracing applications, such as StopCovid, developed in France. The size of the network alone allows for exponential growth.
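As a toy illustration of that network effect (a Metcalfe-style approximation, not a claim about Waze’s or StopCovid’s actual economics), compare how the number of possible connections grows with the number of users:

```python
# Toy illustration of a network effect (a Metcalfe-style approximation):
# the usefulness of a data platform is often modelled as growing with the
# number of possible links between users, n * (n - 1) / 2, not with n itself.

def possible_links(users: int) -> int:
    """Number of distinct pairs of users that could exchange information."""
    return users * (users - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {possible_links(users):>12,} possible links")

# Ten times more users yields roughly a hundred times more possible links:
# the platform's usefulness grows much faster than its user count.
```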

Strategies are also being put in place to redesign chips for the AI era by optimising architectures. Nvidia is particularly known for its research on processors specialised for AI. The company’s new Ampere A100 GPU architecture, announced in May 2020, is dedicated to machine learning and high-performance computing (HPC) markets. It is the largest chip ever made in a 7 nm semi-conductor process. This new-generation chip will also make it possible to decentralise computing power (edge computing).

Other solutions are being explored, such as neuromorphic processors (which mimic the way the brain works). Last year, Intel announced it had produced a computer with 8 million neurons. Another possible avenue is the use of carbon nanotubes to overcome the limits of silicon; a recent technological breakthrough makes it possible to produce such transistors with the standard equipment of a factory dedicated to conventional chips.

We believe that the ecological dimension must be at the heart of the choices to come. An ecological revolution has become necessary. Making an electronic chip requires a considerable quantity of raw materials. The “ecological backpack” of a chip, i.e. the quantity of raw materials required to make it, is 20 kg for a chip weighing 0.09 g: more than 200,000 times its own weight. Mobile telephones are replaced on average every 18 months; PCs every three years. In the United States alone, 3.2 million tonnes of electronic waste are generated each year. A study recently published in the journal Nature shows that ecological solutions exist, such as cellulose nanofibres, “wooden chips”.

However, no convincing alternative to silicon seems to be emerging. But alternatives to the digital signal are possible, and quantum computers are central to these strategies. Since last year, there has been a clear acceleration of progress in the field (some would say “exponential”): AWS with “Braket”, its Quantum as a Service offer; Google and its 54-qubit Sycamore quantum processor; Microsoft and its Azure Quantum service; or IBM, which is selling the first quantum computer “for everyone”, the Q System One. These projects challenge not only our ability to imagine what is going on but also, let’s admit it, our ability to understand it. In October last year, Google announced it had achieved “quantum supremacy” which, to put it simply, is the moment when quantum power makes it possible to perform operations that are out of reach of a conventional computer (at least within a reasonable time), and for which, in theory, the number of qubits required must exceed 50. Even if the use case targeted in Google’s experiment has drawn scepticism about its general usefulness, this display of power leads us to imagine how the possibilities of transmitting information could be multiplied within quantum machine learning processes.
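To give a rough sense of why that 50-qubit mark is so often quoted, the sketch below (Python, with an assumed 16 bytes per complex amplitude, purely for orders of magnitude) estimates the memory a classical computer would need just to store the state of n qubits:

```python
# Rough orders of magnitude behind the 50-qubit threshold: simulating n qubits
# on a classical machine means storing 2**n complex amplitudes.
# Assumption: 16 bytes per amplitude (one double-precision complex number).

BYTES_PER_AMPLITUDE = 16

def human(n_bytes: float) -> str:
    """Format a byte count with binary units."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if n_bytes < 1024:
            return f"{n_bytes:,.0f} {unit}"
        n_bytes /= 1024
    return f"{n_bytes:,.0f} EiB"

for qubits in (20, 30, 40, 53):
    memory = 2 ** qubits * BYTES_PER_AMPLITUDE
    print(f"{qubits} qubits -> {human(memory)} just to hold the state vector")

# 20 qubits fit comfortably in a laptop's memory; 53 qubits (Sycamore's
# working qubit count) would already require on the order of 100 petabytes.
```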

We are at a turning point. And it was precisely at this turning point that the US President decided, on 15 May, to cut the global semi-conductor supply chains. This decision may appear trivial, but it is not. It subjects all semi-conductor sales to the Chinese company Huawei to federal authorisation, with extraterritorial effect: to access the American market, or to incorporate American technologies, companies can no longer trade with the Asian leader.

This decision could have the paradoxical effect of marginalising American companies at a time when they need, more than ever, to stay at the technological frontier. Above all, it will prevent Moore’s law from continuing its trajectory.

So fundamental to the transformation of our everyday lives, yet barely present in our discussions, the debate on the future of semi-conductors is the sign of a period that has lost its grip on the future. By dint of running after a technological progress that has never stopped gaining speed, a form of technical autonomy has taken hold. The day after means no longer letting it advance on its own, regardless of its economic, social and environmental stakes, but taking back control of it. A bit like the way we are taking back control of our children’s schoolwork.


Sources

[1] The prefixes indicating quantities stop at yotta, i.e. 10 to the power of 24; they were extended to that point at the 19th General Conference on Weights and Measures in 1991. We are now in the age of “zettabytes”: Cisco estimates that global IP traffic reached 1.3 zettabytes in 2016.
[2] See in particular, for the 2000s: Michael Kanellos, “Moore’s Law to roll for another decade”, CNET.


Authors

Nicolas Gaudilliere

Chief Technology Officer, Capgemini Invent
A CentraleSupelec engineer, Nicolas started his career in the 2000s. He initially worked as a cybersecurity consultant, before helping to set up Cloud service platforms for telecom operators and major integrators. In 2015, he joined Capgemini Invent as CTO to focus on the organizational and human transformations required for adopting numerous technological innovations such as IoT, cloud, AI, blockchain, 5G, and quantum. Today, Nicolas oversees the Telco, Media, and tech sector, supporting customers in optimizing their business strategy to seize new growth opportunities, streamlining their industrial models, and expanding their technological innovation policy, all while helping them achieve their sustainable development goals.

Etienne Grass

Managing Director at Capgemini Invent France
Etienne serves as the Managing Director of Capgemini Invent for France. Having joined the Capgemini Group in 2017, he dedicated his initial four years to the Public Services sector in France and later expanded his scope globally within Capgemini Invent. Prior to his current role, he played a pivotal role in leading the BLEU sovereign cloud project for the entire Group.