Professor Luciano Floridi, University of Oxford
Professor Luciano Floridi is the Oxford Internet Institute’s (OII) Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also the Director of the Digital Ethics Lab. Outside Oxford, he is Faculty Fellow of the Alan Turing Institute and Chair of its Data Ethics Group; and Adjunct Professor of the Department of Economics, American University, Washington D.C.
His research concerns primarily Information and Computer Ethics (aka Digital Ethics), the Philosophy of Information, and the Philosophy of Technology. Other research interests include Epistemology, Philosophy of Logic, and Philosophy of Science.
Capgemini’s Digital Transformation Institute spoke to Professor Floridi to understand his views on ethics in AI and the issues that organizations need to bear in mind when designing AI solutions.
Pace of change is the bigger challenge
AI is most often equated with job losses. What is your take on this debate?
I think there is a distraction and a real issue here.
The distraction is the fear that jobs will simply disappear completely and that this will be a major disaster. Why do I think it is a distraction? Because it’s based on two fallacies.
The first fallacy is that there is only a certain fixed amount of work to be done. This is not true. There is as much work as you want to do, depending on resources, time, who does what, skills, and so on. The example I have in mind is how much work you can do to clean your house. It’s bottomless. It’s just that at some point you draw a line and decide that it’s clean enough. So, there is no fixed amount of work as such.
The second fallacy is that AI will simply replace work, when it will also make work surface that was not economically viable to do in the past. For example, if I buy a new robot to cut the grass, the gardener will finally have time for the roses. Earlier, having time for the roses was not viable because it was too expensive. Having the robot cut the grass makes the job of taking care of the roses economically viable.
If you consider these two issues – fixed amount and viability – then you know that AI impact is actually much more complicated. The oversimplification is the distraction.
So, what is the real problem then?
The real problem is not the replacement in itself. It’s the pace at which the replacement is happening now and will continue to happen. Technologies in the past replaced jobs far more slowly than anything we are seeing today. Today, from literally one year to the next, some jobs will become totally obsolete. If that is the case, how you re-train and re-skill the workforce – and how you develop social support systems to mitigate the impact – are the real issues. Future generations will have new jobs. There is no doubt about this. Computers will require human beings to handle them. Look at Audi, for example. They have a one-to-one ratio – every robot introduced requires a human being. So, more robots may actually require more human workers. I am not worried about future generations, but more about us – the generation that is undergoing the shift. This will be traumatic, but more because of the pace than the nature of the phenomenon. Society needs to help those who will feel the brunt of the drastic changes.
What kind of timeline do you believe we are looking at when it comes to large-scale impact on jobs?
We will see the impact in the next 10 to 20 years, but exactly when depends on many variables that are highly unpredictable. For example, will we see an AI backlash – an opposition to AI similar to what we saw against genetically modified crops? The potential social reaction and legal impact are still very unclear, and this could affect the impact of AI quite dramatically. Will we see regulation, for example, on when automation is allowed in certain contexts? Think of all the current legislation we have when it comes to security in public transport. For example, the law may still require drivers to be on board buses and taxis, as it does for airplanes. We might see similar legal frameworks with AI and automation.
But I would say that, within 20 years, the world will have profoundly changed.
You mentioned re-skilling. Is it up to companies to re-skill their staff?
This is a very important question. If we talk about skills that you acquire at an early stage of your career, this is very much a joint effort between companies and broader society, including the educational system. But when it comes to re-skilling someone in their fifties, for example, it seems to me that this is more on the societal side and less on companies. It is up to our society to help with the transition.
Companies should think of AI as a reservoir of smart solutions
In terms of organizational impact, will AI truly change the structure of large companies?
In the next three to five years – at the point when AI gets into the pockets of ordinary citizens – I suppose companies will start to experience AI on tap. The analogy here would be with cloud computing or electricity. You don’t produce your own electricity, you just take advantage of it. Likewise, you might just take advantage of smart solutions that can be deployed to solve specific problems. Companies that start thinking in terms of AI as a reservoir of smart solutions are going to be better placed than others to take full advantage of the new digital transformations. Even the structure of organizations will be affected by where they can deploy these AI solutions.
Which industries are best placed to benefit from AI?
The medical sector is in for major changes due to AI. I also believe AI will impact the security and safety sectors significantly. These include anything that needs supervision 24/7, such as monitoring or prediction of possible faults in airplane engines, or early signaling of potential threats. Basically, anything to do with management, safety, prediction and optimization becomes more efficient with AI.
Adapt AI to humans, not the other way around
What are some of the ethical considerations that organizations need to consider as they implement AI systems?
I fully subscribe to the usual discourse around privacy and protection of personal data with AI. At the same time, I believe that is not complete.
My first concern is that once we have real, everyday AI, we need to make sure that the design of smart environments does not result in us always being the ones who adapt to AI rather than vice versa. We are currently deploying smart agents that are rather rigid in what they do. Think of a world where there are lots of Level 3 or even Level 4 driverless cars. This will mean we will see humans adapting to artificial agents. While this is probably inevitable, I would say it is an ethical imperative to make sure that the malleable, adaptable and intelligent human agent in the partnership is not the one that adapts all the time, compared to the stubborn, hard-working and rigid AI agent.
Secondly, we are surrounded by agents that are gently nudging us in certain ways – to take our holidays abroad or read the next Harry Potter book. This constant gentle pushing and pulling is definitely affecting us. These constant reminders and suggestions are shaping us. How do we make sure that we are aware of that? In other words, that we know what we are doing and that this gentle nudging is reduced? We need to protect our autonomy.
What role should governments play in developing policies for these checks and balances?
I think the checks and balances are essential. I would like to see a normative framework with maybe an ombudsman or organizational self-regulation for AI. We need an authority where we can go and say ‘there was a mistake’, or ‘it wasn’t me’, or ‘can we check this, because this isn’t quite right?’. Prevention and redress of problems caused by AI must go hand in hand.
What AI application do you think will transform people’s lives?
If you look at good consumer technologies, they fall into two camps.
First, there are things that we do, but don’t want to do. For instance, imagine if AI could help us with a robotic arm that puts the dishes into the dishwasher, takes them out again, and returns them to the cabinet. That is a useful category of application. So, anything that takes over tasks we are forced to do today, so that we no longer have to do them, would be a great success.
Second, there is a category of things that we wish we could do, but cannot – or didn’t even know we wanted. Once a technology solves the problem, it dawns on us. A smartphone does not take away something we do not want to do; it gives us something more to look forward to. In both cases, a good way of assessing an AI application is not whether people initially buy it, but whether they buy it again when it breaks down. So, the AI that will be successful is not the one I want, it is the one that I want again – because of what it frees me from doing, or what it enables me to do.