
How can the public sector make its AI ethical by design?

Capgemini
3 Oct 2022

It’s time for the public sector to embrace the governance and ethics of its artificial intelligence (AI) systems.

In ethical AI we trust!

The volume of AI-powered contactless interactions with organizations across both the public and private sectors has grown exponentially over the last few years, with the coronavirus pandemic fueling much of that growth. The experience gained from Covid-related quarantine and remote working has sparked new interest in using AI to improve business operations, make the workplace more enjoyable, and raise productivity.

With the adoption of AI solutions gaining momentum, it has been all too easy for the ethical aspects of AI to be overlooked. In response, there has been a strong global movement around AI regulations, which will continue with growing vigor in the upcoming years.

The AI Act proposed by the European Commission in April 2021 is perhaps the best known of these regulations. Its purpose is to enforce the ethical and safe use of AI technology by classifying AI solutions according to risk level – from unacceptable and high risk down to lower-risk applications – depending on their scale and their impact on society and citizens. The AI Act serves as a baseline for other countries, such as the US and Canada, as well as for the Global Partnership on Artificial Intelligence, as they work to keep pace with the AI industry. In the Capgemini Research Institute (CRI) report AI and the Ethical Conundrum, we investigated the fundamental trust and ethics issues raised by AI-powered innovations. The research drew on a survey of executives in more than 800 organizations and of 2,900 consumers.

AI and the ethical conundrum

The research showed that close to 40% of customers would shift to a more costly human interaction after a negative AI experience. This suggests that poor AI experiences would depress consumer and citizen adoption of AI while driving costs up.

For public services, this means that the arrival of AI systems must go hand in hand with accountability and a clearly delimited use for each AI system – whether it processes documents to help public servants with routine tasks or powers conversational interactions with citizens. The opportunity to extend or augment the capacity of human agents to resolve simpler queries from citizens and businesses will resonate with public sector leaders under pressure to do more with less, just as it does for customer service teams in other sectors. Beyond customer service augmentation, public sector AI use cases can already be found in hospitals, transport, schools, law enforcement, border control, and more. In the past two years, governments and tech companies have turned to AI tools in healthcare and public services to predict the spread of the coronavirus pandemic and to guide policy decisions and healthcare services. By acknowledging what happens, understanding why it happens, and drawing conclusions on what will happen (predictive analytics) and what could be done (prescriptive analytics), AI has shown its potential to enhance decision-making processes.

However, with the accelerated uptake of AI come several ethical concerns. AI bias has led to unacceptable gender or racial discrimination – for example, AI-based risk assessment tools in the criminal justice system that produced racial disparities because they were trained on biased historical data. Unsurprisingly, most individuals interacting with the public sector (64%) expected AI to be fair and free from prejudice and bias against them or any other person or group. There is still a significant lack of trust in AI within the public sector. To earn that trust, a strong framework with clearly defined policies and standards is needed to ensure that public sector organizations achieve responsible AI to the best of their ability.

Our recommendations for building ethical AI in the public sector

A strong foundation of leadership, governance, and internal practices is a good starting point for building ethical AI. This should extend from appointing an ethical AI officer with responsibility and accountability for ensuring ethical AI, to establishing a comprehensive ethical charter or code of conduct for AI, and conducting regular ethical audits of AI systems. All these internal building blocks are essential for earning citizen and employee trust in an organization’s AI systems.

When establishing these building blocks, we recommend that public sector organizations incorporate seven principles into their ethical AI systems.

AI with a carefully delimited, fixed impact

Clearly outline the intended purpose of AI systems and assess their overall potential impact – notably on individuals – before adoption. Understand what the AI system will deliver, for whom, and to whom. Ensure that the AI solution is in line with the organization’s AI guidelines and contributes to its overall goals. This can be achieved through work on an ethical charter, and by organizing workshops with the workforce to define a common AI playing field.

Sustainable AI

Utilize AI’s power to help the public sector address climate change and deliver food and water security. AI design and development should be mindful of future generations, the environment, and all beings that make up our ecosystem, throughout the AI solution’s life cycle. This will ensure that AI solutions are sustainable and environmentally friendly in the long term. CO2 calculators and other tools exist to ensure that AI development occurs in a sustainable way, as sketched below.
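
As an illustration, here is a minimal Python sketch of such a CO2 calculation, estimating a training run’s emissions from energy use and grid carbon intensity. All figures are hypothetical placeholders, not measured values:

```python
# Minimal sketch: estimating the CO2 footprint of a model-training run.
# All figures below are illustrative placeholders, not measured values.

def training_co2_kg(gpu_power_watts: float,
                    num_gpus: int,
                    hours: float,
                    pue: float = 1.5,
                    grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Estimate kg of CO2 emitted by a training run.

    pue: data-center Power Usage Effectiveness (overhead multiplier).
    grid_kg_co2_per_kwh: carbon intensity of the local electricity grid.
    """
    energy_kwh = gpu_power_watts * num_gpus * hours / 1000.0
    return energy_kwh * pue * grid_kg_co2_per_kwh

# Example: 4 GPUs at 300 W each, training for 24 hours.
print(f"{training_co2_kg(300, 4, 24):.1f} kg CO2")
```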

Fair AI

Embed diversity and inclusion principles proactively throughout the life cycle of AI systems: build development teams that draw from a variety of racial, gender, educational, and demographic backgrounds to deploy and oversee AI algorithms, and screen the data used to train the AI system for bias. These actions will ensure that the AI solutions built by the public sector serve all intended users fairly – often a country’s entire population. Beyond continuously training the workforce on the risk of bias, projects can apply bias detection to monitor incidents and rebalance the dataset, as sketched below.
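
As one illustration of what bias detection can look like in practice, here is a minimal Python sketch of a demographic parity check on a hypothetical decision log. The data, group labels, and the 0.8 cut-off (the “four-fifths” rule of thumb) are illustrative assumptions, not a prescribed standard:

```python
# Minimal sketch: checking demographic parity of a model's decisions.
# The decision log below is a hypothetical illustration.
import pandas as pd

# One row per applicant: protected attribute + the model's binary decision.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per protected group.
rates = df.groupby("group")["approved"].mean()
print(rates)

# Demographic-parity ratio: min rate / max rate. A common rule of thumb
# (the "four-fifths" rule) flags ratios below 0.8 for closer review.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential bias: parity ratio {ratio:.2f} is below 0.8")
```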

Transparent and explainable AI

Enhance transparency with the help of cutting-edge technology tools to identify and combat ethical issues in AI. Simple outcome explainability, for example, is now provided by the vast majority of machine learning (ML) frameworks. An additional benefit for the public sector is that investing in these tools lays a stronger foundation for reporting against upcoming regulatory requirements. Explainable AI tools are well known and already in use; they allow public services to explain the outcome of AI-driven decisions, such as fraud detection or disease assessments.
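
For instance, permutation importance – one widely available explainability technique – reveals which input features a model leans on. The sketch below uses scikit-learn on a synthetic stand-in for a fraud-detection dataset; the feature names are purely illustrative:

```python
# Minimal sketch: explaining which features drive a model's decisions
# using permutation importance (one widely available XAI technique).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical stand-in for a fraud-detection dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["amount", "frequency", "age", "region", "channel"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>10}: {score:.3f}")
```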

Controllable AI with clear accountability

Humanize the AI experience and ensure meaningful human oversight of AI systems. Many AI issues can be avoided by enabling humans to take over when issues emerge or, better yet, at the early signs of an imminent ethical issue, before it causes a problem. Accountability rules (identifying who is responsible for what) and traceability principles can also be embedded into the AI system’s design.
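
Here is a minimal sketch of one common human-in-the-loop pattern: decisions below a confidence threshold are routed to a human caseworker, and every decision is logged with who is accountable for it. The threshold and field names are hypothetical design choices:

```python
# Minimal sketch: human-in-the-loop routing based on model confidence.
# The threshold and labels are hypothetical design parameters.

REVIEW_THRESHOLD = 0.85  # below this, a human caseworker decides

def route_decision(case_id: str, label: str, confidence: float) -> dict:
    """Auto-apply high-confidence decisions; escalate the rest.

    Every decision is logged with who (or what) is accountable for it,
    supporting traceability requirements.
    """
    if confidence >= REVIEW_THRESHOLD:
        return {"case": case_id, "decision": label,
                "decided_by": "ai_system", "confidence": confidence}
    return {"case": case_id, "decision": "pending_review",
            "decided_by": "human_caseworker", "confidence": confidence}

print(route_decision("case-001", "approve", 0.97))  # auto-applied
print(route_decision("case-002", "reject", 0.62))   # escalated to a human
```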

Robust and safe AI

Ensure the technological robustness of AI systems by examining them for security, accuracy, and reproducibility. For example, can you protect the AI system and the data it holds from falling into malicious hands? Openness needs to be balanced with resilient infrastructure and a clear cyber defense system, especially for public services handling personal data.
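
Reproducibility, one facet of robustness, can be checked mechanically. The sketch below, assuming a scikit-learn pipeline, trains the same model twice with a fixed seed and verifies that the outputs match exactly:

```python
# Minimal sketch: a reproducibility check. Train the same model twice
# with a fixed seed and verify the predictions are identical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=42)

def train_and_predict(seed: int) -> np.ndarray:
    model = RandomForestClassifier(random_state=seed).fit(X, y)
    return model.predict(X)

# Identical seeds must yield identical predictions; a mismatch would
# signal hidden nondeterminism somewhere in the pipeline.
assert np.array_equal(train_and_predict(0), train_and_predict(0))
print("Reproducibility check passed")
```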

AI respectful of privacy and data protection

Empower users with privacy controls so they can take charge of their AI interactions, in line with data regulations and guidelines. The EU’s GDPR, for example, enables citizens to seek clarification on any suspected data privacy breach and to see how, when, and for what purpose their personal data is used. Many emerging techniques – such as homomorphic encryption, differential privacy, and synthetic data – help organizations comply with the GDPR while still leveraging the full value of their data.
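
To make one of these techniques concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy. The query and epsilon value are illustrative:

```python
# Minimal sketch: the Laplace mechanism for differential privacy.
# Noise is calibrated to sensitivity / epsilon.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count: int, epsilon: float,
             sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise added.

    Adding or removing one person changes a count by at most 1 (the
    sensitivity), so noise of scale sensitivity/epsilon gives
    epsilon-differential privacy for this single query.
    """
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical query: how many citizens used a benefit service?
print(dp_count(true_count=1284, epsilon=0.5))  # noisy, privacy-preserving
```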

Ensuring ethical AI in the public sector – Capgemini brings the toolbox

As mentioned above, the use cases for AI in the public sector continue to grow apace, from helping to detect fraud and better manage citizen requests, to informing operational decisions, such as where best to allocate resources based on a diverse set of criteria determined by ML models. Here are a few examples of how Capgemini has helped clients solve ethical AI dilemmas.

Privacy-preserving technology for European welfare agency’s data

By applying homomorphic encryption, we ensured that the agency’s data remained encrypted even while in use, eliminating critical risks and enabling exploration of the art of the possible – including AI applications such as fraud detection.
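
To illustrate the underlying idea – computing on data without decrypting it – here is a toy sketch of the additively homomorphic Paillier scheme. The tiny hard-coded primes are wildly insecure and purely pedagogical; a real engagement would rely on a vetted library and production-grade key sizes:

```python
# Toy sketch of additively homomorphic encryption (Paillier scheme).
# The tiny primes below are wildly insecure; for illustration only.
import math
import random

p, q = 211, 223                      # toy primes (never use in practice)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                            # standard simplification for g
mu = pow(lam, -1, n)                 # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
a, b = 1200, 345                     # e.g. two confidential amounts
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
print("Sum computed without decrypting the inputs:", decrypt(c_sum))
```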

Creation of synthetic data set for deep learning solutions at a Nordic government agency

Capgemini designed a solution for producing a synthetic data set from patient data to overcome privacy and compliance risks. The synthetic data set reflects the original data set in terms of statistical similarity and distribution.
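
As a simplified illustration of the principle, the sketch below fits the mean and covariance of a hypothetical sensitive dataset and samples a fresh synthetic one from those statistics. Production solutions use richer generative models, but the idea of sharing statistics rather than individuals is the same:

```python
# Minimal sketch: a synthetic dataset preserving the mean and covariance
# of a (hypothetical) sensitive dataset.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sensitive patient data: age, blood pressure, dosage.
real = rng.multivariate_normal(
    mean=[52, 130, 4.5],
    cov=[[90, 25, 1.0], [25, 140, 2.0], [1.0, 2.0, 0.8]],
    size=1000,
)

# Fit the real data's first- and second-order statistics...
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# ...and sample a fresh dataset from them: no row corresponds to a
# real patient, but aggregate analyses behave similarly.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", np.round(real.mean(axis=0), 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```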

Explainability tool for a fraud detection AI solution at a European tax agency

A knowledge graph solution extended with explainable AI helped the agency understand the AI’s decision suggestions in fraud detection, enabling more efficient resource allocation and faster decision making.

Find out more

Read the full Capgemini Research Institute report, AI and the Ethical Conundrum. For more insights on the public sector, follow us on LinkedIn.


Authors

Sofie Andersson

Manager, Machine Learning, AI and Analytics, Capgemini Invent
“AI is already simplifying life for millions of citizens, supporting public sector decision making and improving efficiency every day. However, AI adoption poses ethical concerns – including the potential for bias and reduced transparency – which will be regulated globally in the near future. To maintain the trust of citizens and employees we must ensure our AI systems are ethical now – with governance tools and design principles that put fairness and sustainability at the heart of what we do.”

Melissa Hatton

AI Strategy Lead, Capgemini Government Solutions
“AI is not a one-size-fits-all solution, but it does offer vast possibilities. To design the right applications we must understand the organization’s objectives and constraints, put ethical considerations first, and democratize AI and data by empowering people across the organization to work more effectively with them. The goal of AI should always be to enhance the human experience without placing any group at a disadvantage.”

Pierre-Adrien Hanania

Global Offer Leader – Data & AI in Public Sector
As a member of Capgemini’s Public Sector team, I provide strategy and technology consultancy services on different aspects of digital transformation. My work covers all segments – defense and security, welfare and tax, public administration, and healthcare – building on a multidisciplinary vision and cross-cutting levers that can be activated across public services. At Capgemini, I lead the coordination of the European Public Sector, looking for ways to connect similar organizations across borders according to their needs and digital journeys.