AI and ethics – seven steps to take

Lee Beardmore
13 Jan 2021

Seven easy-to-implement steps that should form part of the ethical development, deployment, and management of your AI systems.

In the first article in this series, I outlined the importance of ethics in artificial intelligence (AI), and I also gave a few highlights from research recently conducted by the Capgemini Research Institute, showing customer attitudes and business responses to AI.

In the second article, I considered the practical preparations that businesses need to make for the morally justifiable implementation of AI.

In this, the final article, I am going to highlight the seven steps that should form part of the ethical development, deployment, and management of your AI systems.

Step #1 – define purpose and assess potential impact

Organizations need to satisfy themselves that the core aim of the AI system is to benefit people or improve their lives, and that it is not driven solely by economic goals such as increasing profits. This core aim needs to be made transparent not just to internal audiences such as teams in development, sales, marketing, and compliance, but also to external stakeholders such as partners, contractors, and relevant regulatory and government bodies.

Alongside an assessment of potential benefits, organizations should also consider potential risks before any implementation. Such risks might include possible threats to people’s fundamental rights.

Step #2 – address sustainability considerations

Successful AI implementations can optimize business operations. This needn’t just mean improved margins and better productivity: it can also advance an organization’s broader goals, such as equality, inclusion, and reduced environmental impact. If such improvements are possible, they shouldn’t be mere by-products: they should be actively sought out and factored in as development goals.

What’s more, AI has its own carbon footprint, whether it’s on-premise or in the cloud. This, too, needs to be a development consideration.

Step #3 – embed diversity and inclusion

The broader the mix of people engaged in the AI system development lifecycle, the better. Organizations should aim to build teams from a variety of racial, gender, and other demographic backgrounds. Diversity of discipline should also be a factor, bringing together people of different viewpoints and educational backgrounds.

Also, tools now exist to evaluate fairness and to identify and correct bias in AI systems and machine learning models. Organizations can and should use such tools to correct bias in datasets by focusing on the training data. What’s more, they should ensure that AI testing covers all appropriate demographics, so as to avoid any group or groups of people being inadvertently disadvantaged by the outcomes of an AI application.
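
One common fairness check such tools perform is demographic parity: comparing the rate of positive outcomes across groups. The sketch below shows the idea in miniature; the data, the hypothetical loan-approval scenario, and the tolerance threshold are all illustrative, not taken from any specific toolkit.

```python
# Minimal demographic-parity check: compare positive-outcome rates
# across groups and flag a large gap for review.

def positive_rate(outcomes):
    """Fraction of outcomes that are positive (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Toy predictions from a hypothetical loan-approval model, split by group.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

gap = demographic_parity_gap(predictions)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a standard value
    print("Potential bias detected - review training data and features.")
```

Production toolkits offer many more metrics than this single gap, but the principle is the same: measure outcomes per demographic group, then investigate and correct the training data when the groups diverge.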

Step #4 – enhance transparency

Tools also exist to analyze the processes being used by AI systems and to explain not just simple outcomes, but entire models. Some approaches go further still, and provide a benchmarked evaluation of an AI model under various conditions.

Adopting these tools and approaches can help organizations to be clear to users, regulators, and the general public about the origins of their models, their use, and any limitations those models may have.
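
At their simplest, many explanation techniques work by perturbation: nudge each input feature and measure how much the model’s output moves. The sketch below illustrates that idea with a stand-in scoring function; real explainability tools are far more sophisticated, and the feature names and weights here are invented for illustration only.

```python
# Perturbation-based sensitivity: how much does the output change
# when each input feature is nudged by a small delta?

def score(features):
    """Stand-in model: a weighted sum of named features."""
    weights = {"income": 0.5, "age": 0.1, "tenure": 0.4}
    return sum(weights[name] * value for name, value in features.items())

def sensitivity(features, delta=1.0):
    """Change in model output when each feature is perturbed by delta."""
    base = score(features)
    result = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        result[name] = score(perturbed) - base
    return result

applicant = {"income": 3.0, "age": 4.0, "tenure": 2.0}
for name, impact in sorted(sensitivity(applicant).items(),
                           key=lambda kv: -abs(kv[1])):
    print(f"{name}: {impact:+.2f}")
```

Ranking features by their impact in this way gives users and regulators a first-order answer to “why did the model decide this?”, which is the transparency the step calls for.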

Step #5 – humanize the AI experience

Where possible, it’s a good idea to keep real people involved in AI processes. For example, tag-teaming a human agent with a virtual assistant on customer service calls can help to stop ethical issues arising in the first place. No organization should want its customers to feel that they have lost agency, or that their basic rights have been compromised.
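
The tag-team pattern is often implemented as a confidence threshold: the virtual assistant handles a query only when it is sufficiently sure, otherwise a person takes over. A minimal sketch, in which the threshold and the `classify()` function are illustrative assumptions:

```python
# Human-in-the-loop routing: low-confidence queries escalate to a person.

HANDOFF_THRESHOLD = 0.8  # below this confidence, a human agent takes over

def classify(query):
    """Stand-in intent classifier returning (intent, confidence)."""
    known = {"reset password": ("account_help", 0.95),
             "close my account": ("cancellation", 0.55)}
    return known.get(query, ("unknown", 0.0))

def route(query):
    intent, confidence = classify(query)
    if confidence >= HANDOFF_THRESHOLD:
        return f"bot handles '{intent}'"
    return "escalated to human agent"

print(route("reset password"))    # high confidence: bot responds
print(route("close my account"))  # uncertain or sensitive: human takes over
```

Keeping the escalation path explicit in the design means customers are never trapped in an automated loop, which directly addresses the loss-of-agency concern above.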

Step #6 – ensure technological robustness

Many of the resilience issues that relate to AI are true also for technology in general. For instance, AI systems should be resilient to attacks or mishaps, and wherever possible should be backed by fallback plans in case of failure. Data should be accurate; results should be reproducible; and regular testing and monitoring can ensure that AI models are behaving as expected, before go-live and after.
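
Reproducibility can be enforced with a simple regression check before go-live: with a fixed random seed, scoring the same reference inputs twice should give identical results. The “model” below is a stand-in used only to show the shape of such a check.

```python
# Pre-go-live reproducibility check: same seed, same inputs, same outputs.
import random

def train_and_predict(seed, inputs):
    """Stand-in for training a model and scoring reference inputs."""
    rng = random.Random(seed)
    weight = rng.uniform(0.0, 1.0)
    return [round(weight * x, 6) for x in inputs]

reference_inputs = [1.0, 2.0, 3.0]
run1 = train_and_predict(seed=42, inputs=reference_inputs)
run2 = train_and_predict(seed=42, inputs=reference_inputs)

assert run1 == run2, "model outputs are not reproducible"
print("Reproducibility check passed:", run1)
```

The same pattern, run on a schedule against live systems, doubles as the post-go-live monitoring the text recommends.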

However, there are other areas of technological robustness that are specific to AI. Dataset integrity is a case in point. It’s a good idea for each dataset to be accompanied by a datasheet that documents key variables such as composition, collection process, and recommended uses. This will help AI developers to work more effectively with AI algorithm users such as sales and marketing teams, and help those users understand the impact of their decisions.
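
Such a datasheet can be kept machine-readable alongside the data itself. A minimal sketch, using the fields the text mentions; the field names and example values are illustrative, not a formal standard:

```python
# A machine-readable datasheet that travels with a training dataset.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    composition: str          # what the records represent, any known gaps
    collection_process: str   # how and when the data was gathered
    recommended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

sheet = Datasheet(
    name="customer_interactions_2020",
    composition="1.2M anonymized support tickets, EU customers only",
    collection_process="exported from CRM, Jan-Dec 2020, consented users",
    recommended_uses=["intent classification", "topic trend analysis"],
    known_limitations=["no non-EU customers", "English-language only"],
)
print(sheet.name, "-", len(sheet.recommended_uses), "recommended uses")
```

Recording known limitations up front is what lets a marketing team see, before building on a model, that its training data may not cover their audience.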

Step #7 – empower customers with privacy controls

Giving customers control over their personal data isn’t merely a courtesy, or even a sign of good corporate citizenship. In some parts of the world – notably, in the EU – it’s a legal requirement. The General Data Protection Regulation (GDPR) obliges businesses to honor a range of customer requests, including: showing how, when, and for what purpose personal data is being used; opting the customer out of an AI-based system in favor of human intervention; and allowing users to change the weight of individual data attributes so as to influence AI output – for example, to align recommendations with actual rather than AI-derived personal preference.
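
That last control, attribute weighting, can be as simple as letting a user override the default weights a recommender applies to their data. A minimal sketch, in which the attribute names, default weights, and scoring rule are all illustrative assumptions:

```python
# User-adjustable attribute weights influencing a recommendation score.

DEFAULT_WEIGHTS = {"purchase_history": 0.6, "browsing_data": 0.3,
                   "inferred_interests": 0.1}

def recommend_score(item_signals, user_weights=None):
    """Weighted score; users may override (or zero out) any attribute."""
    weights = dict(DEFAULT_WEIGHTS)
    if user_weights:
        weights.update(user_weights)
    return sum(weights[k] * item_signals.get(k, 0.0) for k in weights)

signals = {"purchase_history": 0.2, "browsing_data": 0.9,
           "inferred_interests": 0.8}

print(recommend_score(signals))  # score using AI-derived default weights
print(recommend_score(signals, {"inferred_interests": 0.0,   # user opts out of
                                "purchase_history": 0.9}))   # inferred interests
```

Setting an attribute’s weight to zero is effectively an opt-out for that piece of data, giving customers the agency the regulation requires.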

If such obligations must be met for EU residents, multinational organizations may well conclude that fairness and consistency argue for extending the same provisions to customers elsewhere.

The benefits of being frictionless

All of these steps are easier to implement, and are more likely to succeed, when the organization can act as a cohesive whole – when it can seamlessly and intelligently connect its processes and people as required.

At Capgemini, we call this the Frictionless Enterprise. It’s an approach that dynamically adapts to changing circumstances, and it’s therefore ideally suited to addressing AI systems, and the ethical considerations that flow from them. It enables organizations to monitor and manage not just the technology and the datasets, but the diversity of the teams developing them. It also helps businesses to respond to the concerns of their customers, of regulatory bodies, and of other external stakeholders, and to demonstrate a commitment to human fairness, to sustainability, and to transparency.

For more on how organizations can build ethically robust AI systems and gain trust, read the full paper entitled “AI and the Ethical Conundrum.”

To learn more about the frictionless enterprise model, and the role it can play in helping your organization develop and maintain an ethical approach to AI, contact: lee.beardmore@capgemini.com