Why addressing ethical questions in AI will benefit organizations

Organizations must adopt ethics in AI to win the public’s trust and loyalty

Artificial intelligence may radically change the world we live in, but it is the ethics behind it that will determine what that world looks like. Consumers seem to sense this, and they increasingly demand ethical behavior from the AI systems of the organizations they interact with. But are organizations prepared to answer the call?

Ethical AI is the cornerstone upon which customer trust and loyalty are built

For its new report, Why addressing ethical questions in AI will benefit organizations, the Capgemini Research Institute surveyed 1,580 executives in 510 organizations and over 4,400 consumers internationally to find out how consumers view the ethics and transparency of their AI-enabled interactions, and what organizations are doing to allay their concerns. We found that:

  • Ethics drive consumer trust and satisfaction. In fact, organizations that are seen as using AI ethically enjoy a 44-point NPS® advantage compared to those seen as not using AI ethically.
  • Among consumers surveyed, 62% said they would place higher trust in a company whose AI interactions they perceived as ethical; 61% said they would share positive experiences with friends and family.
  • Executives in nine out of ten organizations believe that ethical issues have resulted from the use of AI systems over the last 2-3 years, citing examples such as the collection of personal patient data without consent in healthcare and over-reliance on machine-led decisions without disclosure in banking and insurance. Additionally, almost half of consumers surveyed (47%) believe they have encountered at least two uses of AI that resulted in ethical issues in the last 2-3 years. At the same time, over three-quarters of consumers expect new regulations on the use of AI.
  • Organizations are starting to realize the importance of ethical AI: 51% of executives consider it important to ensure that AI systems are ethical and transparent.

How to address ethical questions in AI?

Given this context, how can organizations build AI systems ethically? The findings suggest that organizations focusing on ethics in AI must take a targeted approach to making their systems fit for purpose. Capgemini recommends a three-pronged approach to building a strategy for ethics in AI that embraces all key stakeholders:

  1. For CXOs, business leaders and those with a remit for trust and ethics: Establish a strong foundation with a strategy and code of conduct for ethical AI; develop policies that define acceptable practices for the workforce and AI applications; create ethics governance structures and ensure accountability for AI systems; and build diverse teams to ensure sensitivity towards the full spectrum of ethical issues.
  2. For customer- and employee-facing teams, such as HR, marketing, communications and customer service: Ensure ethical usage of AI applications; educate and inform users to build trust in AI systems; empower users with more control and the ability to seek recourse; and proactively communicate on AI issues internally and externally to build trust.
  3. For AI, data and IT leaders and their teams: Make AI systems transparent and understandable to gain users' trust; practice good data management and mitigate potential biases in data; and use technology tools to build ethics in AI (see the sketch after this list).
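
As a minimal illustration of the third recommendation (transparent, understandable AI systems), the sketch below trains a simple model on synthetic data and prints its coefficients so reviewers can see which features drive its decisions. The feature names and data are assumptions made up for this example and do not come from the report.

```python
# Illustrative only: one simple way to make a model's behaviour more inspectable.
# The feature names and synthetic data are assumptions for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "tenure_months", "num_prior_claims"]  # hypothetical
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

# Report each feature's coefficient so reviewers can see what drives decisions.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
```

A linear model is used here only because its coefficients are directly readable; more complex models would need dedicated explanation tooling.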

Clearly, AI will recast the relationship between consumers and organizations, but this relationship will only be as strong as the ethics behind it.

Sound Bites

Luciano Floridi, professor of Philosophy and Ethics of Information and director of the Digital Ethics Lab, Oxford Internet Institute, University of Oxford

Trust is something very difficult to gain and very easy to lose. But a classic way of gaining trust, with AI interactions in particular, can be summarized in three words: transparency, accountability, and empowerment. That means transparency so that people can see what you are doing; accountability because you take responsibility for what you are doing, and empowerment because you put people in charge to tell you if something you did was not right or not good.

Patrick Hall, senior director of Product, H2O.ai

If a data science team is working on a machine learning project that will be affecting humans, I think that they have both ethical and commercial responsibility to do basic disparate impact analysis.
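
The "basic disparate impact analysis" Hall refers to can be as simple as comparing selection rates across groups. The sketch below is a minimal illustration using the common four-fifths rule of thumb; the group names, outcomes and threshold are assumptions for the example, not figures or tools from the report.

```python
# Illustrative only: a basic disparate impact check in the spirit of the quote.
# Group names, outcomes, and the four-fifths threshold are assumptions for the example.

def selection_rate(outcomes):
    """Share of positive (e.g. approved/hired) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

reference_group = [1, 1, 1, 0, 1, 1, 0, 1]   # hypothetical outcomes
protected_group = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = selection_rate(protected_group) / selection_rate(reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
if ratio < 0.8:
    print("Potential disparate impact - review the model and data.")
```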

Marija Slavkovik, associate professor at the University of Bergen

Today, we don't really have a way of evaluating the ethical impact of an AI product or service.

About the Capgemini Research Institute

The Capgemini Research Institute is Capgemini's #1-ranked in-house think tank on all things digital.
