A designer’s view on AI ethics (part 1 of 3)

Reinoud Kaasschieter
2019-08-12

Lanny Cohen of Capgemini calls upon us to embed ethics into all AI systems: “artificial intelligence (AI) needs to be applied with an ethical and responsible approach – one that is transparent to users and customers, embeds privacy, ensures fairness, and builds trust. AI implementations should be transparent and unbiased, and open to disclosure and explanation.”

But how do we do that? There are many discussions, articles, and blog posts on this topic online, but most of them are, by nature, very abstract. It is far from an easy subject. There are no simple rules or methods available to assess the ethics of AI. This three-part blog post strives to provide guidance for making such assessments. By using existing ethical frameworks for product design and for conducting business, we can make our lives easier.

A fast trolley ride through ethics

Let's begin the discussion on ethics and AI with two common philosophical perspectives:

  • Virtue ethics. Virtue ethics measures actions against some given set of virtues, with the goal being to be a virtuous person. In short: are the actions that are built into the AI motivated by virtue?
  • Consequentialism. The results matter, not the actions themselves. Whatever has the best outcome is the best action. In short: what will the outcome of the actions of the AI be?

First, a few words about virtue ethics. The main question is: does the AI enhance our moral and societal values, such as honesty, equality, and care (for the environment, for example)? I don’t want to elaborate on the virtues of virtue ethics here, but this type of ethics is mainly chosen because consequentialism is less effective for innovative technologies such as AI.

But frankly, most ethical discussions around AI are of a consequentialist nature. How do the consequences of the use of AI affect individuals, society, and the environment? Do the positive effects outweigh the negative? And how do I weigh the consequences of using AI? This is not an easy discussion. Most of us are familiar with the trolley problem, which is often used as an analogy for self-driving cars and the decisions their AI-based steering systems could face.

Lesson by Eleanor Nelsen

Imagine you’re watching a runaway trolley barreling down the tracks, straight towards five workers. You happen to be standing next to a switch that will divert the trolley onto a second track. Here’s the problem: that track has a worker on it, too — but just one. What do you do? Do you sacrifice one person to save five? (Source: TED-Ed)

Although I’m not entirely in favor of consequentialism as the main method of assessing the effects of the use of AI, it is certainly the mainstream way of thinking about AI in the Anglo-Saxon world.

The question is, how do we determine the consequences of using AI? We need to know what they are before we can weigh them. AI is mostly regarded as a black box. We can put things, such as pictures or sales figures, into the system and get some kind of output, for example descriptions of pictures or insights into which markets to target.

But in order to determine whether the input is processed according to our ethical values, we need to examine the results the AI gives us. In the end, it is only by studying the outcomes in depth that we can ascertain whether the system is working properly.

For example: Amazon’s recruitment system was biased against women. This only came to light through analysis of the recommendations the AI-based system made; the system itself didn’t reveal its reasoning on its own.
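To make “studying the outcomes in depth” a bit more concrete, here is a minimal sketch of such an outcome audit in Python. It is purely illustrative: the records, field names, and the 80% rule-of-thumb threshold are my own assumptions, not details of Amazon’s system or of any particular product.

```python
# Minimal sketch of a black-box outcome audit (illustrative only).
# Assumption: we can log the system's decisions together with a protected
# attribute for each candidate; nothing here reflects any real product.
from collections import defaultdict

def selection_rates(records, group_key="gender", decision_key="recommended"):
    """Compute the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[decision_key] else 0
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate (the '80% rule' of thumb)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical output of an AI-based recruitment system:
decisions = [
    {"gender": "female", "recommended": False},
    {"gender": "female", "recommended": True},
    {"gender": "male", "recommended": True},
    {"gender": "male", "recommended": True},
]

rates = selection_rates(decisions)
print(rates)                    # {'female': 0.5, 'male': 1.0}
print(disparate_impact(rates))  # 0.5 -- well below the 0.8 rule of thumb
```

The point is not these few lines of code, but that such a check is only possible when the outcomes are collected and compared against the relevant attributes in the first place.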

It is important to recognise that, alongside the huge benefits that artificial intelligence offers, there are potential ethical issues associated with some uses. (Sir Mark Walport, UK Government Chief Scientific Adviser)

In order to avoid haphazard detection of defects, such as bias, we need to add functions to AI systems that allow us to gain knowledge of how the AI thinks and reasons. These functions have to be built into the AI system purposefully. I call these functions the attributes of an AI system: the preconditions for creating an ethical AI system.
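As a sketch of what one of these built-in functions could look like, the snippet below probes a black-box model from the outside: it shuffles one input feature at a time and measures how much the model’s accuracy drops (a simple permutation-importance check). The model and data are toy stand-ins I made up for illustration; a real system would need a far richer set of such instruments.

```python
# Sketch of a model-agnostic probe: which inputs drive a black-box decision?
# Assumptions: the model is only callable (not inspectable) and we hold some
# labelled data to test it against.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Estimate the accuracy drop per feature when that feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = []
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, col])  # break the link between this feature and the output
            drops.append(baseline - np.mean(predict(X_shuffled) == y))
        importances.append(float(np.mean(drops)))
    return importances

def black_box_predict(X):
    """Toy stand-in for a model we can only call: decides on the first feature."""
    return (X[:, 0] > 0.5).astype(int)

X = np.random.default_rng(1).random((200, 3))
y = black_box_predict(X)
print(permutation_importance(black_box_predict, X, y))  # feature 0 should dominate
```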

Attributes we need for ethical AI

Many publications on ethics and AI focus on the attributes AI should have to be ethical. These attributes are, in fact, the features of any AI-based product or service. They allow us to check if the AI is behaving correctly and ethically. There are countless checklists out there, so allow me to present my (incomplete) version, which is based on one of Tin Geber’s lists:

  • We need understandable AI
  • We need explainable AI
  • We need meaningful oversight
  • We need accountability for AI
  • We need defined ownership of AI

(For a more complete list of attributes, please read the blog post by Alan Winfield.)

These attributes should be present in any AI implementation, but this is complicated, since some AI techniques don’t allow for gaining that insight. For instance, deep learning algorithms are not explainable at a deep level by design. We can explain where and why we use deep learning in a certain application, but not how the model reaches a specific decision.

When the attributes above are present, we can start assessing whether the AI is behaving ethically, unethically, or somewhere in between. We can start answering questions such as: Is my AI inclusive and non-discriminatory? Does my AI reach fair decisions? Can I explain to my customers how decisions are reached and what data is used to reach them? I’m aware that these are not easy questions to answer, but in order to build or apply AI routines that respect human rights and ethical values, they must be tackled.

Most companies’ AI algorithms and implementations will not be developed in-house. The AI routines and services are bought as a ready-to-run application or service, a black box for the buyer of the AI – you put data in and you (hope to) get meaningful insights back.

It’s like driving a car. Most of us don’t know how our car works in detail. That’s not a problem when we’re driving under normal circumstances. But when the car starts to falter, it turns out that we’re clueless about the cause and even more clueless about how to fix it. In the meantime, accidents can happen, and the chance of them happening only increases when we aren’t aware of the defect in the first place.

I expect the same will happen with many AI implementations in real life. AI is bought or downloaded from a software supplier, and an organization just uses it or integrates it within an existing software system. We expect the AI to behave correctly under normal circumstances, and we expect that the AI will tell us when it’s broken, without us having to know the internal workings of the system in detail. Besides, software suppliers aren’t keen on revealing the intellectual property they’ve invested in their AI solutions.

So, how can we determine the ethicality of an AI solution without knowing all the details of the AI we’re using? As I mentioned earlier, sometimes the nature of AI algorithms doesn’t allow for these kinds of analyses, so how can we use AI in an ethical manner, even if we bought the AI functionality off the shelf?

AI isn’t on its own

What I find odd about the current discussions on ethics and AI is that AI is treated as a standalone phenomenon – as though AI can perform tasks without an environment. But we all know that AI can only function with input from the outside world. In most cases this is data – lots of data.

Presently, AI can only thrive in a data-rich environment. AI is, as such, not equipped to interact with its environment. As Kathryn Hume, VP of integrate.ai, puts it: “So today, these algorithms – I like to consider them like idiot savants. So they can be super intelligent on one very, very narrow task.” We can only use AI effectively when we apply it within a broader system. Or, to put it differently, AI is embedded within a broader software application. That can be a standalone application, like an app you download onto your mobile, or a step or task integrated into a business process, like reporting within an ERP package.

Artificial intelligence isn’t Frankenstein’s Monster knocking at your front door. AI will enter your house in a Trojan Horse through the back door. (The author)

That is one of the reasons we, as ordinary consumers, aren’t really aware of the presence of AI in the products and services we use. AI is embedded in consumer products such as Spotify and Facebook or, for business users, in applications such as Salesforce and SAP. Within those larger products, AI performs some specific functions alongside a lot of other, non-intelligent functions of the application.

When we want to analyze the behavior of AI, we shouldn’t consider the AI apart from its context and its use within those applications. We should review the working of the entire application, including the AI-based functions. We should take into account that AI enables specific functionality in those applications. And we should consider that AI can only add value within the context of that application.

This all sounds very theoretical. And I must admit that most discussions about AI and ethics are very theoretical. But to bring things down to earth: being aware that AI only functions within a product or service makes it possible to be more practical about ethics.

In the next part of this blog series, I’ll describe how we can determine the ethicality of AI when it is used within products and services. Because AI cannot be used on its own, it has to be incorporated into a product, for example an app. By assessing the ethics of the product using design frameworks, we implicitly also assess the AI used in that product.

For more information on this topic, connect with Reinoud Kaasschieter.