Nearly 9 out of 10 organizations across countries have encountered ethical issues resulting from the use of AI.
64% of executives launch a long-term strategy to address AI ethical issues after a related concern is raised in their organization.
As the AI revolution gathers pace, these numbers reveal a shift in companies’ mindset: AI everywhere, but ethical and trusted AI. This shift is driven by three factors:
- Citizens are clearly more concerned about the ethical dilemmas of AI
- Employees are getting mindful about their employers’ use of AI
- Regulators are preparing to issue new policies on AI
For example, the EU Commission drafted seven principles that we believe can be applied in any jurisdiction:
- Human agency and oversight
- Diversity, non-discrimination, and fairness
- Societal and environmental wellbeing
- Privacy and data governance
- Technical robustness and safety
- Transparency
- Accountability
At Capgemini Invent, we believe that developing ‘ethical and trusted by-design’ AI goes beyond mere compliance with upcoming regulatory frameworks. It can:
- Reduce implementation headaches down the track: 41% of executives are likely to abandon an AI system altogether when ethical issues are raised
- Foster and reinforce trust with customers: organizations perceived as using AI ethically enjoy a 44-point Net Promoter Score (NPS®) advantage over others*
Setting up Trusted AI is a process that starts at the origination of any AI use case and continues through its run phase. At Capgemini Invent, we propose technical and organizational levers to embed ethics in AI across your use cases and your organization.
*Numbers are based on the Capgemini Ethical Study.