Ethical AI – Decoded in 7 Principles

By Zhiwei Jiang, CEO, Insights & Data and Ron Tolido, CTO and Innovation Officer, Insights & Data, Capgemini

Data-powered enterprises create better customer experiences, run their operations more effectively, and continuously generate new waves of innovation and growth. The pinnacle of thriving on data is Artificial Intelligence, which augments humans in developing insight, making decisions, and taking immediate, automated action.

Yet, as the Peter Parker principle so eloquently states, “With great power comes great responsibility”. Enterprises exploring the potential of AI need to ensure they apply AI the right way and for the right purposes. They need to master Ethical AI. Being a recognized leader in both ethics and AI, Capgemini has developed its Code of Ethics for AI – not only for internal purposes but also to share it with the outside world. Here’s a short introduction to its seven guiding principles.

Noblesse Oblige

The famous French writer Honoré de Balzac had a slightly different way of putting it. In one of his novels, he recommends specific standards of behavior to a young man, concluding:

“Everything I have just told you can be summarized by an old phrase: noblesse oblige.”

Whether you are noble, wealthy, or powerful, this always comes with certain obligations towards society. Undoubtedly, had Balzac lived today, he would have found it applicable to Artificial Intelligence as well. Every day, AI’s phenomenal innovative power becomes increasingly apparent, potentially affecting entire economies and spreading very visibly beyond the business sector to areas of daily life.

A challenge facing both business and society today is how to optimize the opportunities offered by AI, while addressing the risks and concerns that may come with it. It is in the hands of those at the forefront of applying Artificial Intelligence to business to address this ethical conundrum.

Consistently ranked as one of the most ethical companies globally, and driving AI as one of its critical, transformative services to its clients, Capgemini has an obligation to express its commitment to the ethical application of AI. It has done so through the development and publication of its Code of Ethics for AI.

This code builds on established assets in the field, including the “Ethics Guidelines for Trustworthy AI” issued by the independent High-Level Expert Group on AI set up by the European Commission. It is combined with the Group’s core values (particularly Honesty, Trust, Boldness, Freedom, and Modesty) and takes a pragmatic approach towards the daily application of AI in the field.

The code guides our organization on how to embed ethical thinking in our AI business. It also stimulates ethical reasoning and invites an open dialogue between our clients and other stakeholders. Crucially, it addresses how we embed ethical principles in the design and delivery of AI solutions and services while focusing on the intended purpose of the AI applications.

The code consists of 7 principles, all aiming to create human-centered AI solutions:

1. Carefully delimited impact

Designed for human benefit, with a clearly defined purpose setting out what the solution will deliver, to whom, and the impact on humans.

Here’s the very first – and rest assured, the most fundamental – ethical dimension to consider. Throughout the history of humankind, from the very first stick onwards, tools have been used with both positive and negative intent. AI should benefit society and individuals without ever causing harm. From the perspective of our corporate raison d’être, we aim to use AI solely to unleash human energy, building a more sustainable and inclusive future.

2. Sustainable

Developed mindful of each stakeholder, to benefit the environment and all present and future ecosystem members, human and non-human alike.

Human-friendly AI is a necessity. But so is Earth-friendly AI. Our research shows that AI can help organizations fulfill up to 45% of their emission targets, and a similar impact can be achieved in areas as diverse as health, food production, deforestation, and sea life. However, machine learning itself consumes energy – a lot of it. Developing great AI algorithms comes with an environmental price, so in keeping the balance, sustainability will often trump smart.

3. Fair

AI should be produced by diverse teams, using sound data for unbiased outcomes and inclusion of all individuals and population groups.

Algorithms crave data. AI typically requires tons of data for learning, and it reflects whatever data you feed it – data that is potentially biased, discriminatory, partisan, manipulated, or simply plain wrong. Intelligent tools can help ensure fairness when curating training data and algorithms. Synthetic data and pre-trained models provide alternatives. Furthermore, more diverse, inclusive teams create fairer AI. In the end, AI provides us with a crystal-clear mirror: if we don’t like what we see, we should act.
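
As a minimal illustration of what checking for bias can look like in practice, the sketch below computes demographic parity, one common fairness metric, over hypothetical model decisions for two groups. The data, group names, and choice of metric are illustrative assumptions, not a prescribed method:

```python
def selection_rate(outcomes):
    """Fraction of positive (favorable) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rates across groups.
    A gap near 0 suggests the model treats the groups similarly."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved) for two population groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 approved
}
gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.625 - 0.25 = 0.375
```

A gap close to zero does not prove fairness on its own, but a large gap is a clear signal to examine the training data and the model.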

4. Transparent and explainable

With outcomes that can be understood, traced, and audited, as appropriate.

What if the computer says no?

Many of AI’s breakthroughs also come with pitfalls. Learning systems often process complex patterns that are hard to grasp for the human mind. The resulting algorithms may be highly accurate but difficult to understand and explain. This is of great concern when people need to trust AI for split-second decision-making and actioning, or simply find themselves dependent on its verdicts. Full transparency and tireless communication are essential to soften up the cold, silicon heart of AI’s algorithms.
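
To make the idea of explainable outcomes concrete, here is a minimal sketch that attributes the prediction of a simple linear scoring model back to its input features. The feature names and weights are hypothetical; real systems might reach for dedicated tooling such as SHAP or LIME, but the principle is the same, namely showing which inputs drove the decision:

```python
# Hypothetical linear model: score = sum(weight * feature value).
WEIGHTS = {"income": 0.5, "debt": -0.8, "tenure_years": 0.3}

def explain(features):
    """Return the total score and each feature's signed contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, parts = explain({"income": 4.0, "debt": 2.0, "tenure_years": 3.0})
print(f"score = {total:.2f}")
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>12}: {c:+.2f}")  # most influential feature first
```

For a linear model this attribution is exact; for complex learned models, producing an equally faithful explanation is precisely the hard part this principle addresses.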

5. Controllable with clear accountability

Enabling humans to make more informed choices and have the final word.

We’ve all asked ourselves “What if Skynet happens?” AI’s machine learning methods can be mysterious. Moreover, autonomous systems – seemingly deciding and acting independently – can steal our last illusion of being in control (although it sure felt good on Mars). AI should augment humans; hence humans should always have the final word on how AI works. Or when AI should no longer work, for that matter. There is no need for Asimov’s Three Laws of Robotics; but do create accountable AI systems that are human-centered by design.

6. Robust and safe

Including fallback plans where needed.

We’ve come to rely on hyper-responsive, autonomous AI systems – deployed at the very edges of IT, on the ground, on the factory floor, in cars. So, we’d better ensure these systems are rock-solid. A trained AI model is a golden asset. We need to apply everything we have learned in DevOps – and more – to ensure superior quality during its lifecycle, from creating and updating to its deployment in real life. Still, systems break. Both fallback plans and an antifragile design mindset help create water-like resilience and robustness.
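
One way to make a fallback plan concrete is to wrap the model in a guard that reverts to a conservative, rule-based default whenever the model fails or reports low confidence. The sketch below is an illustration only; the threshold, the default action, and the model interface are all invented for this example:

```python
def rule_based_default(request):
    """Conservative fallback used when the model cannot be trusted."""
    return "escalate_to_human"

def predict_with_fallback(model, request, min_confidence=0.8):
    try:
        label, confidence = model(request)
    except Exception:
        return rule_based_default(request)  # model crashed: fall back
    if confidence < min_confidence:
        return rule_based_default(request)  # model unsure: fall back
    return label

# Hypothetical models for demonstration.
confident = lambda request: ("approve", 0.95)
unsure = lambda request: ("approve", 0.40)

print(predict_with_fallback(confident, {}))  # approve
print(predict_with_fallback(unsure, {}))     # escalate_to_human
```

The guard itself is trivially simple by design: when the smart path breaks, the system degrades to something predictable rather than failing silently.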

7. Respectful of privacy and data protection

By considering data privacy and security from the design phase onward, data usage remains secure and legally compliant with privacy regulations.

The closer AI interacts with our daily lives, the more it may rely on our personal and sensitive data. It brings us to what is arguably at the very bottom of the ‘AI’ Maslow pyramid. Where its top enables us to self-actualize and create, its foundation needs to ensure that data and algorithms are secure and compliant with rules and regulations. As always, these essential qualities should not be an afterthought but built into the core of conceptualization. First things first.
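
As one small illustration of building privacy in from the start, the sketch below pseudonymizes direct identifiers with a salted one-way hash before a record enters an AI pipeline. The field names and salt handling are simplifying assumptions; a real deployment would need proper key management and a broader de-identification strategy:

```python
import hashlib

SALT = b"rotate-and-store-me-securely"  # assumption: managed out of band

def pseudonymize(record, pii_fields=("name", "email")):
    """Replace direct identifiers with truncated salted one-way hashes."""
    safe = dict(record)
    for field in pii_fields:
        if field in safe:
            digest = hashlib.sha256(SALT + safe[field].encode()).hexdigest()
            safe[field] = digest[:16]  # opaque token, not reversible
    return safe

record = {"name": "Ada Lovelace", "email": "ada@example.com", "age": 36}
print(pseudonymize(record))
```

Because the hashing is deterministic, the same person still maps to the same token across records, so the data stays useful for learning while the identifiers never reach the model.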

It’s up to every practitioner to seriously understand the ethical considerations of data and AI and then live and breathe them every day. AI the good way, AI for good purposes. There is no shortcut, no workaround. It’s a noble cause, you see. Our code for Ethical AI provides us with the compass to navigate promising waters that are often unexplored. Feel free to build on what we have initiated to create your own.