Trusted AI

Why and how ethical values are embedded into the AI lifecycle

Now that the significance of AI is widely recognized, organizations are looking for ways to drive AI adoption. AI ethics and AI trust are critical to ensuring that solutions are robust and that outcomes are explainable, unbiased, and auditable.

Nearly nine out of 10 organizations worldwide have encountered ethical issues resulting from the use of AI.

They are asking questions such as: What does a truly robust governance structure look like? What is a clear AI strategy? Do these policies differ between countries?

At Capgemini, we believe that ethical values need to be integrated in the applications and processes we design, build, and use. Only then can we make AI systems that people can truly trust.

What is ‘Ethical’ AI?

According to the European Commission, AI ethics is a field calling for guidelines that enable trustworthy AI: projects must comply with fundamental rights, principles, and related core values, regardless of their specific nature.

Transparent AI:  As AI increasingly influences our lives, firms must provide contextual information about how AI systems operate so that people understand how decisions are made and can identify errors more easily. What are the challenges facing us?
  • Explainable AI: As more complex use cases are built, AI must be explained in language people can understand. Careful thought and design are needed to make the decisions produced by AI tools explainable. Explainability of AI is vital for customer reassurance and is increasingly required by regulators.
  • Robust AI: As AI systems are already being given significant autonomous decision-making power in high-stakes situations, they must be resistant to risk, unpredictability, and volatility in real-world settings. One faulty application or misplaced goal could cause the system to take action with catastrophic consequences.
  • Fair AI: AI systems learn what they know from training data; when these datasets inaccurately mirror society or reflect unfairness or institutional prejudice, those biases can be replicated in the resulting AI systems. We need to ensure that AI systems make recommendations that do not discriminate based on race, gender, religion, or similar factors, so that they are representative and achieve fair outcomes for all.
  • Private AI: AI systems must comply with the privacy laws that regulate data collection, use, and storage, and must ensure that personal information is handled in accordance with privacy standards and protected from theft.
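
As a concrete illustration of the fairness dimension, the sketch below computes a demographic parity gap, the difference in positive-decision rates between groups, for a set of hypothetical loan decisions. The data and metric choice here are invented for illustration; a real audit would use the model's actual outputs and fairness criteria selected for the use case.

```python
def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rate between the best- and
    worst-treated group (0.0 means perfect demographic parity)."""
    totals = {}
    for decision, group in zip(decisions, groups):
        n, positives = totals.get(group, (0, 0))
        totals[group] = (n + 1, positives + decision)
    rates = {g: positives / n for g, (n, positives) in totals.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = loan approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # 0.75 vs 0.25 -> 0.5
```

A gap this large would prompt a closer look at the training data and the features the model relies on.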

What are the solutions?

It is important to note that there is no one-size-fits-all solution when it comes to addressing ethics and trust in AI – these issues are challenging and not likely to remain static. Each use case must be evaluated independently, but there are guidelines and frameworks that can start things off in the right direction.

Trusted AI Framework

Organizations must take a pointed approach to making systems ethically fit for purpose. Capgemini recommends implementing our Trusted AI framework (model below), where trust and ethics are addressed using dedicated setups, technological tools, and frameworks.

Client Stories

1. A bias challenge

A bank wanted to check whether its algorithm, which predicts credit risk, was unfairly discriminating against people due to their gender, race, or socioeconomic background when granting loans. They wanted to make sure the model was not turning away potential clients who were perfectly solvent.

Our solution

We developed an asset called the Sustainable Artificial Intelligence assistant (SAIA). It recognizes, analyzes, and corrects bias through pre-processing, model-building, and post-processing methods, and can be applied to assess bias in different AI models.
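
SAIA itself is a proprietary asset, but one widely used pre-processing correction of the kind described is reweighing (Kamiran and Calders), which assigns each training sample a weight so that group membership and the outcome label become statistically independent. The sketch below is a minimal, self-contained illustration of that idea, not SAIA's implementation.

```python
from collections import Counter

def reweigh(groups, labels):
    """Kamiran-Calders reweighing: returns one weight per sample,
    w = P(group) * P(label) / P(group, label), so that a model trained
    on the weighted data sees group and label as independent."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Perfectly balanced data gets uniform weights of 1.0; in skewed data,
# weights above 1 boost (group, label) pairs that are under-represented
# relative to statistical independence
weights = reweigh(["A", "A", "A", "B"], [1, 1, 0, 1])
```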

The results

An AI model that accurately assesses credit risk without reproducing these biases, and a solution that is ethical, compliant by design, and now in use at the bank.

2. An Explainability Challenge

One of our clients recycles titanium, but before recycling it they check the titanium's chemical composition in the laboratory. The chemical composition determines how it can be mixed with other recycled titanium alloys. Our client wanted us to determine alloy usability based on its chemical composition; a high level of explainability was required here.

Our solution

Based on historical data and simulations, Capgemini proposed a model to determine if an alloy is usable based on its chemical composition, and an optimization algorithm to find the best combinations of alloys. Using SHAP, a technique based on Shapley values from cooperative game theory, which measure how much each player in a collaborative game contributes to its outcome, we provided a visual, easy-to-understand list of factors that explained the estimated usability of an alloy in terms of specific chemical elements.
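
To illustrate the Shapley-value idea behind SHAP: for a handful of features, the exact values can be computed by averaging each feature's marginal contribution over all subsets of the other features. The "model" below is a toy additive usability score with invented element contributions, not real metallurgy; in practice SHAP approximates this computation efficiently for real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's marginal contribution to
    value_fn, averaged over all subsets with the standard weights.
    Feasible only for small n; SHAP approximates this at scale."""
    n = len(features)
    phi = {}
    for f in features:
        others = [x for x in features if x != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = value_fn(set(subset) | {f})
                without_f = value_fn(set(subset))
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

# Toy additive "usability score" with invented element contributions
contrib = {"Al": 0.3, "V": 0.2, "Fe": -0.4}

def score(elements):
    return sum(contrib[e] for e in elements)

phi = shapley_values(list(contrib), score)
# For an additive model the Shapley values recover each contribution,
# which is exactly the per-element attribution shown to the experts
```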

The results

An accuracy of 90% was reached for the usability classification model, which allowed our client to resell unusable alloys and thus optimize stocks and save time. A very high level of explainability was achieved, giving experts a valuable AI advisory tool, full transparency, and control over the final recycle-versus-resell decisions.

Trusted AI Framework: an ethical AI lifecycle with checkpoints

The Discovery Phase

  • Check that the team has signed our AI Ethics Charter
  • Ensure the team and dataset are representative
  • Assess any potential tech partner
  • Clarify model accountability with all partners involved
  • Assess partner capabilities using the same datasets

The Training Phase

  • Train model with a representative dataset
  • Document model training for transparency and traceability
  • Confirm model outcomes and ensure explainability

The Deployment Phase

  • Ensure the model is reliable enough to be safe
  • Guarantee a clear plan to stop or adjust the model in case of drift
  • Confirm clear accountability for ownership/monitoring/maintenance of the model