AI and ethics – why is it important?

When customers feel comfortable being served by an organisation’s AI systems, they are more likely to take greater advantage of the speed, simplicity, and personalisation on offer.

In an article earlier this year, I briefly explored various factors in the implementation of an AI-driven business model. One of those factors was the need to be mindful of ethical considerations. It’s such an important area that I thought it worth writing a further short series of articles on this topic in its own right – especially now, when the recent pandemic has increased the need for artificial intelligence (AI) to cope with rising levels of online interaction.

Defining terms

First, perhaps, we should say what we mean by ethics as applied to AI. In its guidelines for Trustworthy AI, the European Commission High-Level Expert Group on AI identified seven guiding principles:

  • Human agency and oversight – AI systems should support human autonomy and decision-making
  • Technical robustness and safety – AI systems need to be dependable, and to provide a fallback plan in case something goes wrong
  • Privacy and data governance – AI systems should protect data, and should provide adequate mechanisms to maintain its quality and integrity
  • Transparency – the data, the system, and the models upon which AI is based should be clear and explicable
  • Diversity, non-discrimination, and fairness – an abiding undertaking to enable diversity and inclusion, and to avoid unfair bias
  • Societal and environmental wellbeing – AI systems should be designed to benefit everyone, now and in future. They need to be sustainable and environmentally friendly
  • Accountability – this principle underpins all the others. It’s a continuing commitment throughout the lifecycle of AI systems to ensure responsibility for them and for their outcomes.

Patchy progress

To what extent are these principles being met? The Capgemini Research Institute recently conducted a survey of over 800 organisations and 2,900 consumers, exploring the approaches businesses adopt in their implementation of AI, and the effects these developments have on customer relationships.

The good news is that customers increasingly trust their interactions with AI systems. Almost half of them (49%) said this was the case – a big rise from 30% in 2018. That said, they expect those systems to be able to explain any results to them clearly, and they expect organisations to hold themselves accountable if AI algorithms go wrong.

It’s also good to note that more organisations are aware of ethical biases and transparency issues; that they recognise the importance of putting an ethical charter in place; and that they are making progress on the “explainability” of their AI algorithms.

The less good news is that progress on the accountability issue is patchy. For instance, just over half of organisations (53%) have a senior person in place with responsibility for the ethics of AI systems. Businesses are also struggling to make their AI algorithms transparent or auditable. In addition, silos seem to exist between AI developers and the rest of the organisation when it comes to understanding ethical parameters.

A business imperative – and a business benefit

Getting AI ethics right isn’t just a moral responsibility: it’s a business imperative. Research shows that in the last two to three years, almost 60% of organisations have attracted legal scrutiny and 22% have faced a customer backlash because of decisions made by their AI systems.

What’s more, there are growing customer concerns about particular ethical matters. For example, in 2019, three-quarters of survey respondents (76%) believed organisations were being fully transparent about how personal data was being used – but by 2020, that figure had dropped to 62%.

It’s clear, therefore, that organisations need to pay serious attention to these issues. However, the business imperative isn’t just about addressing and avoiding potential negatives – it’s also about turning the ethical application of AI to positive business advantage. When customers feel comfortable being served by an organisation’s AI systems, they are more likely to take greater advantage of the speed, simplicity, and personalisation on offer. It’s good for customer loyalty – and it’s good for brand value, too.

In the next two articles in this series, I’ll be exploring how organisations can move to ethically robust AI systems – and the contribution Capgemini’s Frictionless Enterprise concept can make.

For more on how organisations can build ethically robust AI systems and gain trust, read the full paper entitled: “AI and the Ethical Conundrum.”

Lee Beardmore has spent over two decades advising clients on the best strategies for technology adoption. More recently, he has been leading the push in AI and intelligent automation for Capgemini’s Business Services. Lee is a computer scientist by education, a technologist at heart, and has a wealth of cross-industry experience.
