It’s possible to invent something without understanding why it works. Early humans lit fires without knowing anything about thermodynamics. The pioneers of flight copied birds; they didn’t start from theoretical aerodynamics. AI is similar – we can build systems that display intelligence, but we’re a long way from understanding what intelligence is.
Why does this matter? Because early iterations of inventions are often inefficient. The first plane wings worked more by accident than by design. Now that we know why a wing works on a theoretical level, we can build much more efficient and effective ones. The same is probably true of AI.
We know the AIs we build are inefficient because there are seven billion intelligent biological machines walking around that use far less power with far more impressive results. The human brain operates on just 15 watts. You would need a multi-megawatt power station to run enough silicon processors to emulate the complexity of a brain. Interns are a lot cheaper and already have 80 billion neurons installed.
Stealing fire from the gods
There is widespread discomfort about the ecological impact of training and running AIs. AI is becoming a common tool in enterprise, which may mean greater IT energy consumption overall. AI also promises to be a powerful tool in pursuit of the UN’s Sustainable Development Goals, but it’s counterproductive if the tools we use to achieve the SDGs create a spike in energy consumption at the same time.
My colleague Gunnar Menzel focuses on this point in a recent article about the spectacular growth in AI power consumption for training. He’s absolutely right, and something needs to be done about it, but there is more to the story.
The cutting-edge AIs that grab news headlines, for instance GPT-3 or Watson, are definitely power guzzlers in the training phase. These are the class of systems that do new and mind-boggling things, like beating grandmasters at chess or mistaking a butterfly for a washing machine. But there are only a handful of them, and the advances they make become available to everyone, often as open source.
These high-profile AI projects are akin to Prometheus stealing fire from the gods – a dramatic and costly adventure but once it’s done, everyone has access to fire without getting their livers eaten.
The AI systems built for most business and public sector applications are far less complex than titans like GPT-3. You don’t need a cutting-edge genius AI to run accounting processes or supply chain analytics. So is there a problem?
All AIs need to be trained. This means storing and repeatedly processing large amounts of data to refine the model’s accuracy, which consumes electrical power. Looking at data is the AI equivalent of pushups and 5K runs – it’s how they get fit to do their job.
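To make that cost concrete, here is a minimal sketch of why training is energy-hungry: the model revisits the whole dataset many times over, and every pass costs arithmetic. The tiny model, the dataset, and the energy-per-operation figure below are all illustrative assumptions, not measurements.

```python
# Minimal sketch: training = many passes over the data, each costing compute.
# The model, the data, and JOULES_PER_OP are illustrative assumptions.

def train(xs, ys, epochs=100, lr=0.01):
    """Fit y = w*x by gradient descent, counting arithmetic operations."""
    w, ops = 0.0, 0
    for _ in range(epochs):               # every epoch revisits the full dataset
        grad = 0.0
        for x, y in zip(xs, ys):
            grad += 2 * (w * x - y) * x   # ~4 arithmetic ops per example
            ops += 4
        w -= lr * grad / len(xs)
        ops += 2
    return w, ops

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]                 # true relationship: y = 2x
w, ops = train(xs, ys)

JOULES_PER_OP = 1e-9                      # assumed hardware efficiency
print(f"learned w = {w:.3f} after {ops} operations "
      f"(~{ops * JOULES_PER_OP:.1e} J)")
```

Scale the loop up from four data points to billions of tokens, and from one weight to hundreds of billions, and the electricity bill of a GPT-3-class training run follows directly.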
Ordinary business AIs may not be on the scale of AI mega projects, but there are a lot more of them, and their number is growing week by week. Individually they don’t have the scary energy budgets of a GPT-3, but collectively they add up to a sustainability problem.
The clear business benefits of AI mean it is almost certain to become a big part of digital operations. For an enterprise that wants to be both competitive and ecologically responsible, the question is not whether AI should be used, but how it can be used sustainably.
If AI is used purely as a way of increasing efficiency for the sake of greater productivity, it is likely to conflict with sustainability priorities. The challenge is to balance the energy spent on the productivity benefits of an AI system with gains that also make the organization more sustainable.
It’s a complex balancing act.
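One way to frame the balancing act is as a break-even calculation: how much operational energy must a model save before it repays its own training and running costs? Every figure below is invented purely for the sake of the arithmetic.

```python
# Back-of-envelope break-even sketch. Every figure is an invented
# assumption for illustration, not a benchmark.

TRAINING_KWH = 500.0         # assumed one-off cost of training the model
INFERENCE_KWH_PER_DAY = 2.0  # assumed daily cost of running it
SAVINGS_KWH_PER_DAY = 12.0   # assumed energy the model saves, e.g. via
                             # better routing or scheduling

net_daily_gain = SAVINGS_KWH_PER_DAY - INFERENCE_KWH_PER_DAY
breakeven_days = TRAINING_KWH / net_daily_gain

print(f"net gain: {net_daily_gain} kWh/day; "
      f"training repaid after {breakeven_days:.0f} days")
```

If the savings line is zero – pure productivity, no sustainability gain – the model never breaks even in energy terms, which is precisely the conflict described above.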
This is the puzzle of conflicting considerations that I and others at Capgemini have been working through. And we’ve come up with some concrete answers.
Build ethical AI that includes sustainability as one of its key ethical considerations. We discuss this in depth in our recent AI and the Ethical Conundrum report. This includes the ambition to use AI in a way that improves an organization’s sustainability, not just its productivity.
Build “green” AI. This is a narrower category than ecologically responsible AI because it refers only to minimizing the carbon footprint of digital systems. It includes designing low-carbon algorithms – those with the lowest possible energy requirements – as well as emphasizing the energy efficiency of the physical infrastructure AIs operate on. It also extends to accurately measuring and reporting energy use.
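The measurement-and-reporting side can be sketched very simply: time a workload and multiply by an assumed average power draw. The wattage below is a placeholder assumption – a real implementation would read hardware counters, a power meter, or a vendor API instead.

```python
import time
from contextlib import contextmanager

@contextmanager
def energy_report(avg_watts):
    """Estimate energy as (assumed average power) x (wall-clock time).

    avg_watts is a placeholder: real metering would use hardware
    counters or a power meter rather than an assumed figure.
    """
    report = {}
    start = time.perf_counter()
    try:
        yield report
    finally:
        elapsed = time.perf_counter() - start
        report["kwh"] = avg_watts * elapsed / 3_600_000  # W*s -> kWh
        print(f"{elapsed:.2f}s at ~{avg_watts}W = {report['kwh']:.6f} kWh")

with energy_report(avg_watts=300) as rep:      # assumed GPU-class power draw
    sum(i * i for i in range(1_000_000))       # stand-in for a training step
```

Even a crude estimate like this, logged per training run, gives an organization a baseline to report against and improve on.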
Champion AI for good. So far in this article I’ve concentrated on ways of using AI to improve sustainability within commercial enterprises, but there are many ways AI can directly support sustainability initiatives in the public sector. Machine vision and sensing, and prediction and decision-making, are two core AI capabilities with huge potential to help achieve the UN’s SDGs. For example, computer vision is widely used to extract important insights about the state of the planet from vast satellite imagery databases.
We believe it is essential for IT organizations to talk about and push innovation in these areas.
AI, like fire and flying machines, is here to stay. The question is no longer whether we should use it, but how. Will we burn down forests and take frivolous mini-breaks in Amsterdam, or will we increase the sum of human happiness with grilled sausages and evacuation flights from war zones? Today, perhaps more than at any time in history, we have a clear-sighted choice. Get in touch and let’s talk about how your organization can be part of that decision by achieving a sustainable AI implementation.