In the first post in this series, I covered the standard principles governing ethics in artificial intelligence (AI). I also gave a few highlights from research recently conducted by the Capgemini Research Institute, showing customer attitudes and business responses to AI. Finally, I outlined the obligations and benefits implicit in implementing AI systems.
In this article, I’m taking a look at various practical preparations that businesses need to make.
The buck stops here
If an organization is going to put measures in place that address the seven standard principles of ethical AI that we addressed in the previous article, it needs to decide who exactly is going to be responsible, and what the ground-rules will be.
Businesses need to show they are serious about this. They should assign responsibility for AI ethics to a designated leader, and they should give that person a clear remit. This remit ought to include the development of a comprehensive ethical charter that can provide a code of conduct for the development and responsible application of all AI systems within the organization.
A code of conduct works best when it’s more than a statement of intent. It should become part of the fabric of business operations – recognized, understood, and adhered to by everyone whose role either influences or uses the output of AI systems. This matters all the more as AI adoption accelerates.
AI systems are delivered via a variety of technologies, and their design should be governed by the code of conduct. This helps ensure a consistent, appropriate approach to the way the organization designs, builds, deploys, monitors, and uses AI models.
Audits and training
A code of conduct also works best when it’s monitored. It’s a good idea to conduct regular ethics audits. A vigilance team with complementary skills should be asked to take stock at different stages of an AI system’s lifecycle – for instance, at the proposal stage, before initial development, and during deployment. This team should also ensure that any pre-trained or plug-and-play AI models are suitable for use in the organization’s own specific circumstances. Both the scope and the limitations of such models need to be clearly understood before any deployment.
Training is crucial. AI systems should be implemented not just because they’re possible, but because they are ethically justifiable. To this end, organizations need to invest not just in developing AI skills, but in training people to be mindful of issues such as data bias, cognitive bias, value-sensitive design, and human-centered design.
And of course, once the AI system is operational, monitoring should be a proactive activity. Transparency and explainability of AI predictions and decisions should be a key tenet of the code of conduct, facilitated through appropriate monitoring.
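To make this concrete, transparency monitoring can start as simply as logging every prediction alongside the inputs and model version that produced it, so that auditors can later reconstruct why a decision was made. The sketch below is illustrative only – the `log_prediction` function, the field names, and the loan-approval example are all hypothetical conventions, not a reference to any particular tooling:

```python
import json
import time

def log_prediction(model_name, model_version, features, prediction,
                   log_file="predictions.jsonl"):
    """Append an audit record for a single model prediction (illustrative sketch)."""
    record = {
        "timestamp": time.time(),       # when the decision was made
        "model": model_name,            # which model produced it
        "version": model_version,       # which version, for reproducibility
        "features": features,           # the inputs the decision was based on
        "prediction": prediction,       # the output that reached the user
    }
    # One JSON object per line keeps the audit trail easy to append to and scan
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a hypothetical loan-approval decision for a later ethics audit
entry = log_prediction("loan_approval", "1.2.0",
                       {"income": 42000, "tenure_years": 3}, "approved")
```

In practice an organization would route such records to its existing logging or monitoring platform rather than a local file, but the principle is the same: every automated decision leaves a trace that the vigilance team can review.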
Accountability – and next steps
There should also be an outward-facing element to these ground-rules. A prime example is the establishment of a governance body, so that if customers or employees raise concerns about AI systems with ombudsmen or other external regulators, the organization has a point of contact not just for these bodies, but for the concerned parties themselves.
In the final article in this series, we’ll take these ground-rules as given and look at the steps organizations need to take to develop and maintain a healthy ethical environment for building and running their AI systems. We’ll also see how these considerations align with the principle of Capgemini’s Frictionless Enterprise.
For more on how organizations can build ethically robust AI systems and gain trust, read the full paper entitled “AI and the Ethical Conundrum.”
Read other blogs in this series
Lee Beardmore has spent over two decades advising clients on the best strategies for technology adoption. More recently, he has been leading AI-driven business transformation for Capgemini’s Business Services. Lee is a computer scientist by education, a technologist at heart, and has a wealth of cross-industry experience.