
The Virtuous Circle of Trusted AI: Turning Ethical and Transparent AI Into a Competitive Advantage

Luciano Floridi, University of Oxford

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford and Director of the Digital Ethics Lab at the Oxford Internet Institute. Outside Oxford, he is a Faculty Fellow of the Alan Turing Institute (the UK's national institute for data science) and Chair of its Data Ethics Group.

The Capgemini Research Institute spoke with Professor Floridi to understand more about the philosophy underpinning ethical and transparent AI.


THE KEY ISSUES FACING ORGANIZATIONS IN ETHICAL AND TRANSPARENT AI

What is the magnitude of the challenge when it comes to AI and ethics in large businesses?

My whole career has been spent saying this is big. It's big because we are finally seeing this profound information transformation reach maturity. With the invention of the alphabet, we could record information; with the invention of printing, we could not only record but also disseminate that information. Today, with computers, we can automate both the recording and the dissemination of information.

We will be feeling the effects of what we are doing now for centuries to come, in the same way we are still feeling the effects of the Gutenberg revolution. I am not sure that organizations fully realize yet the enormity of this challenge.

Some companies are setting up ethics boards. Is this one way in which organizations can tackle this challenge?

It’s one of many ways in which the situation can be improved. Companies need to understand the problem and then design policies to deal with what is happening. For example, the external advisory board that Google set up to monitor for unethical AI use was a step in the right direction. Of course, it is not the only step that needs to be taken; we need to make sure all possible approaches are explored to find the right one. If the top 500 companies in the world were to create an ethics advisory council tomorrow, I would be happy. This would bring more awareness, more engagement, and more visibility to the issue. The value of visibility is often underestimated. It’s a step towards accountability.

One major risk is that companies become tired or skeptical of any ethical approach to technological development, especially in AI, and start retreating behind the wall of pure legal compliance. That is a future I do not want to see.

How do you build greater awareness of the need for ethical and transparent AI among companies of all sizes?

I think there are two critical strategies. First, leading by example is crucial. Smaller or less engaged companies need to see large companies taking responsibility for ethical AI; these smaller companies will want to be on the right side of the divide.

Second, clarifying that “good business means good society and good society means good business” is so important. A company needs to understand that doing the right thing is a win-win situation. It’s good for business and it’s good for society. If you look at the ecosystem within which a large company is operating, in the long run, the healthier that ecosystem, the better the company will perform. That ecosystem requires financial and social investment. This approach needs a long-term vision that is over and above the quarterly return. A company must ask itself, do I want to be here for the next decade? For the next century?

TRUST AND COMPETITIVE ADVANTAGE

As organizations implement AI systems, how do you think they can gain the trust of consumers and their employees?

I think trust is something that is very difficult to gain and very easy to lose. One classic way of gaining trust is through transparency, accountability, and empowerment. Transparency so that people can see what you are doing; accountability because you take responsibility for what you are doing; and empowerment because you put people in charge. You say to them, “you have the power to punish me, or you have the power to tell me that something was not right.”

Transparency is perfectly reasonable, achievable, and should be expected. If you do not understand exactly what is going on in your system, you must go back to the drawing board. If an engineer were to say that they could not explain what a system is doing at an early stage of development, that product likely should not be released. Imagine a nuclear physicist who builds a nuclear system, is not quite sure how it is going to behave, and still puts it on the market. That would be insane.

Can an organization that focuses on being ethical in its AI systems gain competitive advantage in the long run?

Absolutely! There is intangible value in brand reputation, credibility, and trustworthiness that will drive this advantage. Competition is necessary in this scenario: if there is no competition, there is less accountability and less need to be transparent.

How can academia play a role in ensuring organizations implement ethical AI?

Academia can add value and help a lot if engaged properly. To my mind, that means allowing academia to conduct independent, not-for-profit research for the advancement of our understanding and knowledge. A focus on scholarly and/or scientific understanding is part of the solution; we need this ingredient in the whole strategy.

I like the idea that around the same table you have experts from academia, experts from research and development in the industrial world, policymakers, experts from NGOs, and representatives from startups and civil society. Academia has a duty to provide advice and insight to support technological and business development that improves society.

ETHICAL AI REGULATION AND STANDARDS

Are organizations prepared for eventual regulation in ethical AI?

I think organizations are preparing for it and expecting it. Most large organizations today, across the United States and Europe, are talking about a “duty of care” in the context of AI (i.e., the duty to take care to refrain from causing another person injury or loss). We also hear a lot about the need for clear normative frameworks in areas such as driverless cars, drones, facial recognition, and algorithmic decision-making in public-facing services such as banking or retail. I shall be surprised if, when we have this conversation again in two years’ time, legislation has not already been seriously discussed or put in place.

More conversations about what the best legislation would be should start now. I am happy with the first step we’ve taken at the European Union level, with the High-Level Expert Group on Artificial Intelligence (disclosure: I am a member). I think this will help the development not only of the technology, but also of normative rules and legal frameworks.

Do you think organizations can self-regulate, or is legislation necessary?

Putting this question as an either/or is common, but I reject that framing. We need both self-regulation and legislation, as they are two complementary tools. To win a tennis match, you need to play according to the rules (that is the law), but you also need to develop your skills through discipline and training and have a winning strategy (that is ethics and self-regulation). For example, there is no legislation today that forces a company to publish open-source software for its AI solutions. While I think this would be a good idea, it would need to be done carefully, because it could also be misused. I like the idea of a company making its own software available as a matter of default, rather than saying, “the law doesn’t require it, therefore we’re not going to do it.”