Building trust in AI systems

How do we ensure AI systems are always deployed for the benefit of the individual, society, and the environment?

For an organisation to build consumer trust in its AI systems, ethical standards and practices are crucial. Capgemini’s Ethical AI Guild in the UK is a new initiative dedicated to helping clients understand the ethical issues surrounding AI and apply best practice when building and using AI systems.

What is AI ethics?

The UK Office for Artificial Intelligence defines Artificial Intelligence (AI) as ‘the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence’ (source). Many businesses have adopted AI systems which are, in essence, intelligent agents: autonomous entities acting on their behalf. As with any entity that acts on behalf of a business, such as an employee, the entity is expected to uphold the company’s values and the law. A key difference, however, is that an AI system can’t be held accountable, because it’s not a legal entity. The law has yet to catch up with the technology, so how do we ensure AI systems do no harm now?

Enter AI ethics. The Alan Turing Institute (ATI) defines AI ethics as ‘a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies’ (source). AI ethics was born out of an effort to prevent AI systems from causing harm to individuals and society, harms which the ATI classifies into six categories:

  • Bias & Discrimination
  • Denial of Individual Autonomy, Recourse, and Rights
  • Non-transparent, Unexplainable, or Unjustifiable Outcomes
  • Invasions of Privacy
  • Isolation and Disintegration of Social Connection
  • Unreliable, Unsafe, or Poor-Quality Outcomes

Issues in any one of these categories will not only seriously undermine customer trust in an organisation; they can also have a profound impact on customers’ lives.

A framework for ethical best practice

The Capgemini Research Institute (CRI) recently released its ‘AI and the Ethical Conundrum’ report, which looks at how organisations are addressing ethical issues and how they can build customer trust in AI systems. The report surveyed over 800 organisations and 2,900 consumers to examine the risks AI poses to customer relationships and the extent to which ethical principles have been operationalised. Consumers are more trusting of AI in general than they were last year, but they expect more transparency from organisations about its use, and they expect organisations to take responsibility when things go wrong. The report also shows that progress on ethical practices within organisations has stalled: only 53% have a leader responsible for AI ethics, and nearly 60% of respondent organisations have attracted legal scrutiny over their AI systems in the last three years.

In response, the CRI proposed a framework to facilitate the development of ethically robust AI systems. It draws on the “Ethics Guidelines for Trustworthy AI” produced by the European Commission’s High-Level Expert Group on AI, Capgemini’s core values, and our experience delivering AI systems across the globe.

Figure 1: A framework to build and use ethically robust AI systems (Source: Capgemini Research Institute Analysis)

The framework (see Figure 1) has five components: four pillars and a foundation. These can be summarised as follows:

  • A foundation of ownership, governance and diversity

The CRI suggests five steps to achieve this:

  1. Assign a leader to be accountable for ethical AI, and for the actions of AI systems, within the organisation.
  2. Create a comprehensive ethical charter detailing the acceptable AI working practices.
  3. Set up a governance body to implement measures of accountability.
  4. Conduct regular ethics audits on AI systems to ensure all issues are captured.
  5. Build diverse leadership, governance and development teams.

  • Strengthen trust in AI

The first pillar focuses on customer trust in how a business uses AI. It advocates two approaches:

  1. Improve the explainability, transparency, fairness, and auditability of all AI systems (a minimal fairness check is sketched after this list).
  2. Have human involvement and oversight at all stages of an AI system’s lifecycle.
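To make the first approach concrete, here is a minimal sketch of one fairness check, the demographic parity difference, in Python. The data, the loan-approval setting, and the 0.1 alert threshold are all illustrative assumptions; a real audit would apply several fairness definitions chosen for the domain.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: array of binary model predictions (0/1)
    group:  array of binary protected-attribute labels (0/1)
    """
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions from a loan-approval model
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative threshold, not a regulatory standard
    print("Potential bias: investigate before deployment.")
```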

  • Protect people’s privacy

The general public are increasingly aware of their data footprint and its commercialisation by AI. Alongside data protection and privacy regulation (e.g. GDPR), empowering customers by giving them more control over AI interactions is key to maintaining and strengthening customer relationships.
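What “more control” can look like in code: the hypothetical sketch below gates personalisation on explicit, revocable customer preferences. The preference names and the fallback behaviour are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    """Per-customer, revocable choices about AI interactions (hypothetical schema)."""
    allow_personalisation: bool = False  # opt-in by default, not opt-out
    allow_profiling: bool = False

def recommend(items, prefs: ConsentPreferences, history):
    # Only personalise when the customer has explicitly opted in;
    # otherwise fall back to a neutral, non-profiled ordering.
    if prefs.allow_personalisation and prefs.allow_profiling:
        return sorted(items, key=lambda item: -history.count(item))
    return items  # non-personalised default experience

prefs = ConsentPreferences(allow_personalisation=True, allow_profiling=True)
print(recommend(["news", "sport", "tech"], prefs, history=["tech", "tech", "sport"]))
```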

  • Ensure robustness

Robustness is how secure, accurate, and consistent an AI system is, and it underpins the other pillars. Auditing requires the ability to faithfully reproduce outcomes. Accuracy is needed for clarity and transparency in AI-driven decisions. Security must be upheld during both the development and deployment of an AI system to protect customer data and to maintain accuracy and consistency.
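Reproducibility in particular is easy to state and easy to lose. As a minimal sketch, assuming a scikit-learn style workflow, recording and reusing a single seed for every stochastic step lets an auditor replay training and obtain identical outcomes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42  # recorded alongside the model so audits can replay training exactly

rng = np.random.default_rng(SEED)
X = rng.normal(size=(200, 5))            # stand-in for real feature data
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for real labels

# Every stochastic step takes an explicit seed: the data split...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=SEED
)
# ...and the model itself.
model = RandomForestClassifier(n_estimators=50, random_state=SEED)
model.fit(X_train, y_train)

# Re-running this script yields identical predictions, so an auditor
# can faithfully reproduce any past outcome.
print(model.predict(X_test[:5]))
```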

  • Use AI for good

Many applications of AI focus on improving commercial returns. In the social media industry, for example, AI is used to optimise engagement, which increases advertising revenue. This serves the platform rather than the customer and can be actively detrimental. The final pillar urges that AI be proactively deployed for the benefit of the individual, society, and the environment.

Next Steps

The framework gives organisations guidelines for building and deploying trusted, ethical AI, but this is far from the complete story. The following artefacts will be key to enabling the framework:

  • An assessment of governance structures and practices, and of where in an organisation responsibility for AI, its actions, and its ethics is best placed.
  • A software toolkit for testing the explainability, transparency, fairness, and auditability of AI systems.
  • AI interaction design patterns to provide consistent customer empowerment.
  • An architectural ‘playbook’ defining technical best practices for robust AI systems.
  • New metrics to measure the impact of AI systems other than revenue.

To help clients utilise the framework, Capgemini UK’s Ethical AI Guild is a centre of excellence for AI ethics within Capgemini, providing guidance on ethical issues and practices. Made up of experienced AI practitioners, the guild aims to accelerate our clients’ journeys towards trustworthy AI for the benefit of all.

If you would like to know more about the Ethical AI Guild, or if you want to know how we can help you apply AI ethics, please email me at jake.luscombe@capgemini.com.

Author


Jake Luscombe

Jake is a senior data scientist in the Insights & Data practice, with over five years’ experience building analytical and machine learning solutions for the public sector. He is always on the lookout for what will become the future of analytics.
