Trusted AI

Why and how ethical values are embedded into the AI lifecycle

AI affects all of civil society, from companies and their employees to citizens, hence the need to design and set up Ethical AI.

Nearly 9 out of 10 organizations across countries have encountered ethical issues resulting from the use of AI.

64% of executives kick off a long-term strategy to deal with AI ethical issues after a related concern is raised in their organization.

As the AI revolution gets under way, these numbers point to a shift in companies’ mindset: AI everywhere, but only ethical and trusted AI. This shift is driven by three factors:

  • Citizens are clearly more concerned about the ethical dilemmas of AI
  • Employees are getting mindful about their employers’ use of AI
  • Regulators are preparing to issue new policies on AI

For example, the EU Commission drafted 7 principles that we believe could be applied regardless of any jurisdiction:

  • Human agency and oversight
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Accountability
  • Transparency
  • Privacy and data governance
  • Technical robustness and safety

At Capgemini Invent, we believe that developing ‘ethical and trusted by-design’ AI goes beyond mere compliance with upcoming regulatory frameworks. It can:

  • Reduce implementation headaches down the track: 41% of executives are likely to abandon the AI system altogether when ethical issues are raised
  • Foster & reinforce trust with customers: +44 points (NPS®) advantage to organizations that are perceived as using AI ethically over others*

Setting up Trusted AI is a process that starts at the origination of any AI use case and continues through its run phase. At Capgemini Invent, we propose technical and organizational levers to ensure AI is ethically embedded across your use cases and organization.

*Numbers are based on the Capgemini study on ethics in AI.

Trusted AI Framework: an ethical AI lifecycle with checkpoints

The Discovery/Partner Choice Phase

  • Check that the team has signed our AI Ethics Charter
  • Ensure the team and data set are representative (see the sketch after this list)
  • Assess any potential tech partner
  • Clarify model accountability with all partners involved
  • Assess partner capabilities using the same data sets
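As a minimal, purely illustrative sketch of the “representative data set” checkpoint, the snippet below compares the observed share of each group in a candidate training set against reference population shares. The column name, reference shares, and tolerance are hypothetical placeholders, not part of the Capgemini framework.

```python
# Illustrative sketch only: column name, reference shares, and tolerance are
# hypothetical placeholders chosen for the example, not prescribed values.
import pandas as pd

def check_representativeness(df, column, reference_shares, tolerance=0.05):
    """Flag groups whose share in the data set deviates from the reference
    population by more than `tolerance` (absolute difference)."""
    observed = df[column].value_counts(normalize=True)
    report = {}
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        report[group] = {
            "expected": expected,
            "observed": round(actual, 3),
            "flagged": abs(actual - expected) > tolerance,
        }
    return report

# Example: compare the gender split in a candidate training set with 50/50 reference shares
data = pd.DataFrame({"gender": ["F"] * 300 + ["M"] * 700})
print(check_representativeness(data, "gender", {"F": 0.5, "M": 0.5}))
```

In practice the reference shares would come from the population the model is meant to serve, and flagged groups would trigger data collection or re-sampling before training starts.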

The Training Phase

  • Train the model with a representative data set
  • Document model training for transparency and traceability
  • Confirm model outcomes and ensure explainability (see the sketch after this list)
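The following sketch shows, on a public data set, how the documentation and explainability checkpoints could look in code: a lightweight training record written to disk for traceability, and a global feature-importance ranking as a first explainability check. The file name, record fields, and model choice are assumptions made for the example, not the Capgemini toolchain.

```python
# Illustrative sketch: a lightweight training record for traceability plus a
# basic global explainability check. Field names and the model are hypothetical.
import json
from datetime import datetime, timezone

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Checkpoint: document the training run for transparency and traceability
training_record = {
    "model": type(model).__name__,
    "params": model.get_params(),
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "train_rows": len(X_train),
    "test_accuracy": round(model.score(X_test, y_test), 3),
}
with open("training_record.json", "w") as f:
    json.dump(training_record, f, indent=2, default=str)

# Checkpoint: confirm outcomes can be explained by ranking the most influential features
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=42)
top_features = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
print("Most influential features:", top_features)
```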

The Deployment Phase

  • Ensure the model is reliable and safe
  • Guarantee a clear plan to stop or adjust the model in case of drift (see the sketch after this list)
  • Confirm clear accountability for ownership/monitoring/maintenance of the model
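As an illustration of the drift checkpoint, the sketch below compares live feature values against a training-time reference with a two-sample Kolmogorov-Smirnov test and triggers the stop/adjust plan when the distributions diverge. The significance threshold and the reaction shown are hypothetical choices for the example.

```python
# Illustrative drift checkpoint: the alpha threshold and the reaction to drift
# are example choices, to be set per use case with the accountable owner.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, live, alpha=0.01):
    """Return True if the live distribution differs significantly from the reference."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # snapshot taken at training time
live = rng.normal(loc=0.4, scale=1.0, size=1_000)        # shifted production data

if drift_detected(reference, live):
    # In practice: alert the accountable model owner, route traffic to a fallback,
    # and trigger the agreed stop/adjust plan for the model.
    print("Drift detected: pause the model and escalate to the model owner.")
else:
    print("No significant drift detected.")
```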

Meet our Experts

Pierre-Adrien Hanania

Expert in Artificial Intelligence and Global Offer Leader for AI in Public Sector
