
The “why” in building trust in AI

Capgemini
September 8, 2020

It’s not just researchers who are worried about the trustworthiness of AI. AI software suppliers also want AI to be trusted more. The Capgemini Research Institute sees the lack of trust as a major reason why the adoption of AI is hindered. The company states: “At Capgemini, we believe that ethical values need to be integrated in the applications and processes we design, build, and use. Only then can we make AI systems that people can truly trust.”

Explainable AI

One of the major reasons why AI is mistrusted is the “black box” character of machine learning. Machine learning is currently the most successful application of AI, but its learning algorithms conceal the system’s inner workings. It is impossible to pinpoint precisely where and when a decision has been taken. And when we do not know how an AI algorithm functions, how can we trust its outcomes? Hence the development of interpretable or explainable AI.

This branch of AI research tries to find methods to open the black box. These methods try to reconstruct how the AI works in specific applications and situations; based on that knowledge, we can assess whether the AI is doing what it is expected to do, or whether it has taken a wrong turn. When we want to base our decisions on machine intelligence, we need to make sure that machines operate with fairness. This can be checked when the AI offers transparency. Transparency makes it possible for us to evaluate how the AI works. AI should also be explainable, so we can investigate why it took certain decisions.
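To make this concrete, here is a minimal sketch of one widely used explanation technique, permutation importance: shuffle one input feature at a time and measure how much the model’s score drops. The dataset, model, and parameter choices below are illustrative assumptions (scikit-learn on a public dataset), not a description of any particular vendor tooling or of Capgemini’s framework.

```python
# A minimal, illustrative sketch of one explainability technique: permutation
# importance. Dataset, model, and parameters are assumptions for this example,
# not a description of any specific product or framework.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A public dataset and an opaque "black box" model, used purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# features whose shuffling hurts the most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])[:5]
for name, importance in top:
    print(f"{name}: mean accuracy drop {importance:.3f}")
```

A ranking like this does not fully open the black box, but it gives auditors and domain experts a first handle for checking whether the model bases its decisions on sensible features.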

AI also has to respect privacy: it should protect the privacy of its users and stakeholders and offer them safety and security. And AI should incorporate robustness: it must be resistant to risk, unpredictability, and volatility in real-world settings. “Designers and users of AI systems should remain aware that these technologies may have transformative and long-term effects on individuals and society,” says The Alan Turing Institute. Many more factors increase the trustworthiness of AI applications, but for the moment these five give a clear indication of what is needed to gain trust in AI.

Capgemini puts great emphasis on trust. Accountability, transparency, fairness, etc. are seen as ways of gaining trust in AI. When businesses don’t trust (the outcomes of) AI, they will not buy it. For this reason, Capgemini is developing the Trusted AI Framework – an ethical AI lifecycle with checkpoints.

Capgemini’s Trusted AI Framework (© Capgemini)

Is it enough?

Are fairness and transparency on these topics enough to gain trust in an AI application? I am afraid not, because most of these factors describe the what. What does the AI do? What doesn’t it do? What should it do? They describe, for instance, that the system behaves ethically in the sense that everyone involved is treated equally, that experts can audit its behavior, and that it doesn’t have “a hidden agenda.” The AI must act ethically.

However, this is not enough. Beyond the what question, we should also ask how we are going to use AI, and why it is needed for that purpose. I am quite sure that it is possible to have an AI that behaves openly and fairly, but is used for an application that is not ethical in the first place.

Currently, there is a discussion in many countries about using apps to trace the spread of the coronavirus. The purpose is to determine who an infected person contacted, so we can stop the spread of the virus. In the Netherlands, the first attempt to create such an app failed because privacy, safety, and security were not safeguarded. A new attempt is currently under way.

But the essential topic we have to discuss first is the question: will such an app truly help solve the problem? Will it be effective? How can the technology fulfill its purpose? How will it be the right solution for the problem? Are the side effects acceptable? These are design questions about the how. How should we deal with the problem? How can AI be the best solution?

Not all apps built to flatten the curve will be AI-based. Nevertheless, you should always ask the same questions when creating an AI app. How would AI be the best solution to tackle the problem at hand? And are other means, such as proven statistical techniques, better suited?

There are many guidelines around on how to build ethical AI. For instance, the “Ethics Guidelines for Trustworthy AI” from the European Union put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. They propose human agency and oversight for checking whether the AI solution fulfills its purpose now and in the future, because AI systems learn and will change their behavior over time.

Last but not least, the why question. AI is used for a certain purpose; it is a means to an end. It cannot function on its own: it works within an environment – the context – where it serves a purpose. We should ask ourselves why we need this (AI-based) application in the first place. Do we need this system here and now? What is the legitimate reason for applying AI to this business case? Or even better, what is the legitimate reason for the business case itself?

With COVID-19, “flattening the curve” is a very good reason to develop any solution that helps reach that goal. I am afraid that in many businesses the reason is not that obvious. There are many good reasons, but also a lot of bad reasons, to start with AI.

Trust lies in the “why”

Simon Sinek talks about trust in his TED talk. Sepa4corporates summarizes: “Simon refers to the break between the what and the why as the ‘split.’ What the company is doing will often continue to grow, but the ‘why element’ becomes blurry and therein lay the problems.” The same applies to technologies like artificial intelligence. Sinek states that technology cannot build human relations and trust. I say that we can only trust technology when we have a clear picture of the goals of the application of AI.

The why, how, and what of Simon Sinek’s TED talk, captured in a model (source: CC BY-SA 4.0, Nick Nijhuis via Wikimedia)

Before building a COVID-19 trace app, we should first discuss why such an app is helpful or even necessary. We should determine if an app is indeed the best way to go forward. When that question is answered satisfactorily, we can discuss how we are going to do that and what we are going to trace. I am afraid the why is only questioned when the consequences of the how and what are unacceptable.

Explainability, fairness, openness, and all those other factors of trust in AI can only be fully addressed when we know the “why” of the system. For example, when we want to use facial recognition in public spaces for crime prevention, we first have to make sure that it is needed at all. If this technology is indeed the best way to tackle the problem, ensure that it is effective and that it safeguards other interests. Be open about these objectives, design choices, and consequences for all involved. This is the only way to gain trust in any technology, let alone artificial intelligence.

To find out more about how we can help you, visit Capgemini’s Trusted AI page. You can also reach out to the author, Reinoud Kaasschieter, AI and ECM Solution Architect and Consultant at Capgemini.