The practical steps every organisation can take for ethical AI adoption

From creating an AI charter to encouraging diversity, our Expert explains what organisations need to do to win the public’s trust and loyalty.

The Capgemini Research Institute (CRI) has released its latest report on AI and ethics. We all know how important this subject is, as AI becomes increasingly involved in making decisions that affect us on a daily basis. The issue was also a central theme at the AI Summit in June, which provided a great venue to check the pulse of how business leaders see AI. Our report shows that only 25% of executives believe their organisation employs ethical AI systems, and only 4% have guidelines for ethical AI. These are surprising statistics for a topic that raises so many questions, such as:

- How can we ensure AI systems are fair and free of bias?
- How do we put AI to use for the good of individuals and society?
- How do we respect the privacy of individuals and their data whilst still delivering personalised services?
- How do we make AI systems transparent and explainable?

Wherever companies are on their AI journey, from small proofs of concept to scalable systems in production, it is necessary to think about these questions and start to implement policies and procedures to create an ethical AI culture. The CRI report focuses on the areas that need to be addressed; I have summarised its guidelines below and included a few ideas of my own for putting them into practice:

Create an AI charter or code of conduct

Seventy percent of “Ethics-Savvy” organisations have started to develop AI guidelines or a code of conduct, but this is not enough. This first step lays out the practices to which every data scientist, and indeed anyone working with data, will adhere. It should cover the purpose of using AI, what it will and will not be used for, standards for collecting training data, logging usage, and designing for privacy. There are several sets of principles that can be used as a starting point and customised for each organisation’s needs, such as those from the Institute for Ethical AI and Machine Learning or, as mentioned in the report, the European Commission’s Ethics Guidelines for Trustworthy AI.
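To make the charter more than a document on a shelf, some teams encode parts of it in software. The sketch below is purely illustrative: the charter fields, the AICharter class, and the log_model_usage helper are assumptions made for this post, not a standard or a Capgemini framework. It simply shows how permitted uses and usage logging from a code of conduct could be checked at the moment a model is actually invoked.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AICharter:
    """Machine-readable summary of an AI code of conduct (illustrative fields only)."""
    purpose: str
    permitted_uses: set = field(default_factory=set)
    prohibited_uses: set = field(default_factory=set)
    training_data_standards: list = field(default_factory=list)

charter = AICharter(
    purpose="Improve customer service response times",
    permitted_uses={"triage_support_tickets", "suggest_reply_templates"},
    prohibited_uses={"automated_account_termination"},
    training_data_standards=["documented provenance", "consent recorded", "PII minimised"],
)

def log_model_usage(charter: AICharter, use_case: str, model_name: str) -> dict:
    """Record a model invocation against the charter; reject uses outside it."""
    if use_case not in charter.permitted_uses:
        raise ValueError(f"'{use_case}' is not a permitted use under the AI charter")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "use_case": use_case,
    }
    # In practice this entry would be written to an audit log that the
    # ethics working group (see below) reviews periodically.
    return entry

print(log_model_usage(charter, "triage_support_tickets", "ticket-router-v2"))
```

The point of the sketch is not the specific fields but the pattern: once the charter exists in a structured form, every AI use case can be checked and logged against it automatically rather than relying on memory and goodwill.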

Establish an ethics working group

Once people have signed up to the charter, there needs to be a mechanism in place to periodically review the code of conduct, discuss issues that arise or need to be addressed, and recommend action. Any new AI system being developed should be reviewed by a member of the working group to ensure the code of conduct is being followed. The working group should be made up of a diverse group of people from across the organisation who bring different views to the table. A good way of achieving this is to assign responsible roles within each department, such as IT, Marketing, and HR.

Encourage diversity in AI practitioners

This topic has been heavily publicised within tech companies and in mainstream news. As the report points out, “Organisations need to build more diverse data science teams (in terms of gender, ethnicity), but also actively create inter-disciplinary teams of sociologists, behavioral scientists, UI/UX designers who can provide additional perspectives in the design phase of the AI systems.” It is important to note, however, that a diverse set of practitioners does not in itself mean that bias will be reduced. It is still possible for a diverse group of people to share the same bias, which is why the next point is also needed.

Continue training to tackle bias in AI and humans

It’s surprising just how many of us are unaware of the unconscious biases we hold. These are not limited to the characteristics protected under law; they also include stereotyping, recency bias, confirmation bias, and a whole host of other tendencies that negatively affect the way we perceive people and society. Only through training and initiatives aimed at exploring and removing these barriers can human bias be reduced. A great example of such an initiative is the #unstereotype initiative at Unilever, which resulted in a 35% reduction in stereotypical thinking among those who took part.

There is also a need for members of the data science team to be well aware of the ways in which bias can occur in AI systems, both at the data collection stage and model building stage.
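As one concrete illustration of that awareness, a very first check a data science team can run is to compare outcome rates across groups, either in the training data or in the model’s predictions. The snippet below is a minimal sketch of such a check; the data, group labels, and selection_rates function are invented for illustration, and a real fairness review would go much further, looking at error rates, proxies for protected attributes, and the data collection process itself.

```python
from collections import defaultdict

def selection_rates(records):
    """Positive-outcome rate per group: a first, coarse check for data or model skew."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Toy example: (group label, model decision) pairs, e.g. loan approvals.
predictions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(predictions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {gap:.2f}")
```

A large gap between groups does not prove unfairness on its own, but it is the kind of signal that should trigger a closer look at how the data was collected and how the model was built.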

Raise awareness of the responsible use of AI across the organisation

The steps mentioned above need to be communicated to the entire organisation so that everybody is aware of how AI is being used by the company. Training in the basic concepts of AI, and in how it will impact employees, needs to happen on an ongoing basis as AI becomes more pervasive at work and at home. This helps address the fears portrayed in the media and sends a clear signal that the topic is being given attention at the top levels of management.

Collaborate with other organisations in your industry

Finally, it is important that these policies and procedures are discussed among industry peers and built upon so that small, medium, and large organisations can all share in the benefits AI has the potential to bring. Democratising the technology across society is essential to ensure AI is available to all and not kept as a tool that helps only the few.

The points above are just some of what is needed to positively influence the effect AI will have on society; there is no consensus on many of these issues, and it is by no means an easy task. Capgemini, one of the World’s Most Ethical Companies, is actively promoting the use of ethical AI across business through its AI Centre of Excellence and putting it at the heart of our AI offerings. To find out more about any of the points in this post, read our report and talk to one of our experts.
