AI is good, as long as we enact ethics controls

Capgemini
2020-01-07

The release of the movie War Games coincided with the start of my career in technology. The movie introduced many to the notion of artificial intelligence (AI) and the potential impacts it could have on our lives.

Fast-forward 36 years and we see intelligent algorithms playing prominent roles in everything from how we purchase products to how we defend our borders. Major advances in computing power and data storage, coupled with the increased digitization of formerly analog processes, have fueled unprecedented growth in computer intelligence solutions.

While most would argue these advances have greatly benefitted society, many are concerned about the ethical implications of machine-driven decision making. Just as we saw in War Games, machines will do what they are trained to do, even when that is detrimental to large segments of society.

Ensuring the safe and ethical operation of computer intelligence solutions is a significant concern both for the corporations that use them and for society in general. Society must therefore develop the governance and control environment needed to keep AI solutions safe and ethical for all constituents.

As with any form of software development, the outcomes of AI projects are shaped by the development ecosystem, the processes that move a solution into production, and the continuous auditing of the end solution. Ensuring the ethical state of an AI solution, however, requires additional controls at various steps of the solution’s lifecycle.

Maintaining the proper development ecosystem for AI solutions begins with the development of what I call the AI Code of Ethical Conduct. This code outlines the steps all AI developers must follow to eliminate bias, promote transparency, and be socially responsible. It should contain standards and practices to guide developers on such topics as auditability, accessibility, data management, delegation of rights, and ethical and moral responsibilities, and it should be reinforced with mandatory training for all developers to ensure they understand the organization’s ethical responsibilities.
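
To keep such a code from being purely aspirational, its standards can also be captured in machine-readable form and checked project by project. The Python sketch below is one hypothetical illustration of that idea; the topic names mirror the standards above, but the attestation structure is my own assumption rather than an established artifact.

    # Hypothetical sketch: the code of conduct as a machine-readable checklist.
    CODE_OF_ETHICAL_CONDUCT = {
        "auditability": "decisions can be logged and reproduced on demand",
        "accessibility": "outputs are usable by people with disabilities",
        "data_management": "data sources, consent, and retention are documented",
        "delegation_of_rights": "a human can review and override automated decisions",
        "ethical_responsibility": "societal impact has been assessed and recorded",
    }

    def missing_attestations(project_attestations):
        """Return the standards a project has not yet attested to."""
        return [topic for topic in CODE_OF_ETHICAL_CONDUCT
                if not project_attestations.get(topic, False)]

    # Example: this project still owes three attestations before review.
    print(missing_attestations({"auditability": True, "accessibility": True}))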

Organizations should also recruit and hire a diverse set of developers to help eliminate “group think” and to reinforce a culture of inclusive thinking in the development ecosystem. Finally, where the outcomes of AI efforts could impact large segments of society, organizations should hire ethicists: specialists who educate and work with developers on ethical development practices.

With a proper development ecosystem in place, the next area of focus is the process of migrating AI solutions to production. In IT, the concept of a Quality Review Board (QRB) or Architecture Review Board (ARB) is commonplace. AI solutions require a new governing body: the Ethical Review Board (ERB). Beyond establishing the governance framework that ensures ethical practices in the development and use of AI, the ERB acts as the gatekeeper for new AI solutions moving to production; solutions that do not pass ERB review are not allowed into production.
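
In pipeline terms, the ERB decision becomes a hard gate. The Python sketch below shows one hypothetical way to wire that in: each release record carries the board’s sign-off, and the promotion step refuses to run without it. The record fields and function here are illustrative, not a real deployment API.

    from dataclasses import dataclass

    @dataclass
    class ReleaseRecord:
        solution_name: str
        version: str
        erb_approved: bool    # set only after Ethical Review Board sign-off
        erb_ticket: str = ""  # reference to the recorded ERB decision

    def promote_to_production(release):
        """Refuse to deploy any solution that has not passed ERB review."""
        if not release.erb_approved:
            raise PermissionError(
                f"{release.solution_name} {release.version}: ERB review "
                f"not passed; promotion to production is blocked.")
        print(f"Deploying {release.solution_name} {release.version}.")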

Once AI applications are in production, their results must be continually audited to ensure compliance. These audits should review not only the algorithms but also the data feeding them, because AI algorithms learn through iteration, and biases in the data will lead to biased “learning” by the algorithm.
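
What might one such audit check look like? A simple example, sketched in Python below, compares positive-decision rates across groups in production records, a basic demographic-parity check. The record format and the 0.1 threshold are assumptions for illustration; a real audit would run a broader battery of fairness metrics over both the model and its data.

    from collections import defaultdict

    def outcome_rate_gap(records):
        """records: [{'group': 'A', 'decision': 1}, ...] with decision in {0, 1}.
        Returns the spread in positive-decision rates across groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            positives[r["group"]] += r["decision"]
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    def audit(records, threshold=0.1):  # threshold is illustrative, not a standard
        gap = outcome_rate_gap(records)
        if gap > threshold:
            print(f"ALERT: outcome-rate gap {gap:.2f} exceeds {threshold}; "
                  f"review the model and the data feeding it for bias.")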

While auditing and continuous testing to understand unexpected results are critical, they aren’t enough. Users should also be given feedback loops that operate outside the system’s AI controls. These loops could be built into the applications themselves or run through survey instruments.
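
The key property of such a loop is independence: the feedback path should not pass through the model it critiques. The minimal Python sketch below assumes a hypothetical append-only log that auditors, not the AI system, consume; the file name and storage format are illustrative.

    import json
    import time

    FEEDBACK_LOG = "ai_feedback.jsonl"  # assumed append-only store, outside the model's reach

    def record_feedback(user_id, decision_id, comment):
        """Write a user's report straight to the audit log, bypassing the model."""
        entry = {
            "timestamp": time.time(),
            "user_id": user_id,
            "decision_id": decision_id,  # which automated decision is disputed
            "comment": comment,
        }
        with open(FEEDBACK_LOG, "a") as f:
            f.write(json.dumps(entry) + "\n")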

In summary, establishing an operational AI ecosystem with the appropriate level of independence and transparency is mandatory for organizations building and operating intelligent solutions that have societal impacts.

AI ethics controls aren’t sexy or exciting and, let’s face it, had these controls been in place, War Games would have been a boring movie. But that’s what we want for society: nice safe outcomes.

Gary Coggins is an executive vice president at Capgemini. He can be reached at gary.coggins@capgemini.com.