In the previous blog post, I described how we currently see the ethics of artificial intelligence. We saw that it can be quite difficult to assess the ethicality of AI. But by reviewing AI within the context of use, for example a product, we can make our lives easier.
Because AI is used within products and services, we can draw on the existing frameworks for design and business ethics. These frameworks have been around for a long time and make it possible to assess the ethicality of products and services.
AI hidden in a product
As an industrial designer, I’ve always been interested in how to make products that are not only useful and valuable on their own, but also have value for people, society, and the environment. It doesn’t matter whether such a product is physical (a pen or mobile device) or virtual (a software app). As long as these products are sold and distributed to a general audience – to us as consumers – the ethics of product design apply. The essence of designing these kinds of products is that the manufacturer doesn’t know its consumers personally. It’s the task of the designer to fill that gap and design products that are useful for their target audience.
First of all, designers have to understand their responsibility towards the environment and society. What is ethical? (Prof. Michael Hardt, University of Lapland, Finland)
Ethical frameworks for product design put the responsibility for designing good products, products that are ethical, at the designer’s desk. As Dennis Hambeukers, Strategic Design Consultant @zuiderlicht, states: “Ethics is now part of the job for a designer.”
When we assess the ethics of a product, we don’t focus on the components alone. We focus on the product as a whole. The behavior of a product isn’t just the sum of the components, it is the sum of the interactions of the components with each other and the outside world, in most cases the human operating the product.
As with any pyramid-shaped structure, the layers in the Ethical Hierarchy of Needs rest on the layer below them. If any layer is broken, the layers resting on top of it will collapse. If a design does not support human rights, it is unethical. If it supports human rights but does not respect human effort by being functional, convenient and reliable (and usable!), then it is unethical. If it respects human effort but does not respect human experience by making a better life for the people using it, then it is still unethical. (source: Smashing Magazine)
To establish the ethicality of a product containing AI, we should not focus on the AI alone, but on the product as a whole, as it presents itself to the user. “Doesn’t this make the evaluation more complex?” you might ask. Not really, because it places the AI in the context of the product. Of course, we should establish how and when the AI affects the behavior of the product, but that puts the AI in perspective. If the product contains the effects of the AI in an ethical manner, we’re fine.
If an AI gives you a biased product recommendation, you can easily dismiss it, as long as there are alternative, easy-to-use ways to select the right product for you. But if a biased recommendation system only lets you select from the list the AI compiles for you, without showing alternatives – suggesting the list is exhaustive – that’s a problem.
Once we have determined the scenarios in which the AI becomes unethical, we can mitigate that behavior. Firstly, by improving the AI itself; but risks will remain. Secondly, by reducing the consequences of the unethical behavior. We can filter out unwanted outcomes of the AI, though that can be quite cumbersome. We can also downplay the consequences by allowing the AI to be overridden, using it only to augment the user. The user can then override the decision made by the AI, somewhat like ignoring the directions of your satnav. And in the end, we’ll have to assess the product as a whole, based on the ethical framework we’ve chosen to use.
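The “augment, don’t decide” mitigation can be made concrete in a few lines of code. This is only an illustrative sketch – the function and item names are made up, and the “AI” is a trivial stand-in – but it shows the pattern: the AI proposes, the full list of alternatives stays visible, and an explicit user choice always wins.

```python
def ai_recommend(items):
    """Stand-in for an AI ranking model; here it simply picks the first item."""
    return items[0]

def recommend_with_override(items, user_choice=None):
    """Return the AI's suggestion, but let an explicit user choice override it,
    and always expose the complete list of alternatives to the user."""
    suggestion = ai_recommend(items)
    final = user_choice if user_choice is not None else suggestion
    return {"suggestion": suggestion, "final": final, "alternatives": items}

# The user ignores the AI's suggestion, like ignoring a satnav's directions:
result = recommend_with_override(["product A", "product B", "product C"],
                                 user_choice="product C")
```

The key design choice is that the AI’s output is never the only path to a decision: the alternatives remain selectable, so a biased suggestion can simply be dismissed.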
Design for Values
There are several frameworks to establish the ethicality of a product, for example the “Design for Values” program at Delft University of Technology. The authors of this quite elaborate framework state: “[…] technological developments in the 21st century, whether necessary to meet our challenges or made possible through new breakthroughs, only become acceptable when they are designed for the moral and social values people hold.”
Almost all design ethics frameworks focus on values and virtues. The product as a whole, and its effects on its environment – humans, society, and nature – should be taken into consideration. And it really doesn’t matter whether the product contains AI or not. That being said, we should be aware that the behavior of products changes over time: through wear and tear, and, with AI, through the continuous learning of the algorithms.
The “Design for Values” program distinguishes 11 values that are important in design – all of which also apply to artificial intelligence – among them:
- Accountability and transparency
- Democracy and justice
- Human wellbeing
Capgemini puts great emphasis on trust. Accountability, transparency, inclusiveness, etc. are seen as ways of gaining trust in AI. When businesses don’t trust (the outcomes of) AI, they won’t buy it.
“Trust is the foundation of every transaction in life.” (Tamara McCleary, CEO at Thulium.co)
For our analysis, we shouldn’t think in a hierarchy of values. You cannot say beforehand that one value prevails over another. The method asks you to appraise your design against all values, weigh them, and deal with any conflicts. Based on this analysis, you can derive norms, and from those norms the design requirements for your AI design.
It’s beyond the scope of this blog to describe the method in detail. But I want to emphasize that enhancing or violating a value can have very different effects. For example, suppose your recruitment system uses AI to select candidates and this system is biased. The bias affects your organization foremost because it violates your value of inclusiveness, and it will lead to bad publicity. But when the system enhances the value of inclusiveness, it can draw better candidates to your organization, because these candidates want to work for your inclusive company.
Design frameworks put humans at the center. Products should help humans. This is called human-centered design. Trine Falbe says: “Human-centered design is a framework as well as a mindset. At its core, working ‘human-centered’ means involving the people you serve early and continuously in the process, i.e. using research to establish the needs of these people, understanding what problems they have, and how your product can help solve these problems.” By putting humans at the center, we can assess the ethics of a product better. It’s not only about not harming the user, it’s more about delivering what the users expect from the product.
When you realize that AI is only a means to achieve certain product features, you should assess the ethicality of these features first. And then determine whether using AI – with all its hiccups – will deliver the feature in a way that puts the interests of humans first.
How to do that? To quote Capgemini’s design agency, Fahrenheit 212: “Appoint more ‘corporate philosophers’ and help train employees and students alike on design ethics”. You should cultivate a culture in which ethics is present. “The ethical culture in an organization can be thought of as a slice of the overall organizational culture,” as Ethical Systems states. In the next episode of this blog series, we’ll also see how this culture helps in dealing with ethical issues surrounding AI.
In the next episode of this blog, I will discuss how we can assess the ethics of AI used within business processes. When we enhance our processes with AI, how can we establish if the result is ethical?
For more information on this connect with Reinoud Kaasschieter.