
Making AI Accountable: Creating Openness and Transparency in AI

Ryan Budish, Harvard University

The Capgemini Research Institute spoke with Ryan to understand more about accountability in AI, ethical challenges, and the risks of these advanced technologies.


ACCOUNTABILITY AND TRANSPARENCY IN AI

What are the important themes that underpin ethics in AI?

“Ethics” as a term has a very specific meaning and body of scholarship behind it. But broadly speaking, organizations should be concerned about issues such as fairness, accountability, and transparency in AI. I think there is also a growing recognition of the importance of human rights as organizations deploy and use AI.

We need to ensure that AI is acting in such a way that we can hold it accountable and also respond if we determine it is acting in a way that we don’t believe to be consistent with our values and/or laws. There are multiple complementary approaches to doing this. For example, one can take a top-down, system-wide approach, defining ex ante the standards by which we want to hold these systems accountable, which could be ethical, normative, or political standards. One can also take a bottom-up, generative approach, looking at individual instances of technologies or applications and asking whether that specific AI system is operating in the way that we want it to. These approaches work together. On the micro level, you’re ensuring that a particular system is operating in the way that the designers intended, without unintended, harmful effects. And on the macro level, you’re ensuring that the system is operating in accordance with the broad, system-wide standards that policymakers, ethicists, or society as a whole has put in place.

We see these various approaches play out on issues such as the use of lethal autonomous weapon systems. At the societal level, there is a vibrant debate about whether such systems should be banned outright as outside the bounds of what a society will accept or tolerate. Simultaneously, at the organizational level, several large AI companies have established their own sets of principles, guidelines, or standards limiting the kinds of uses for which they’ll sell their technology.

If something goes wrong, such as a consumer or media backlash, who can be held accountable?

I don’t think AI is a special case. AI technology is not being deployed into a vacuum; rather, it’s being deployed into areas that already have quite a lot of laws and regulations.

For example, imagine if an autonomous vehicle doesn’t perform as it should and there is an accident where someone is hurt or dies. It’s not the case that, because AI was involved, no one knows what to do. In fact, there are legal liability regimes that already exist. There are regulations about consumer product safety and vehicle safety, and if any of those were violated, there is potential liability and recourse against the auto manufacturer and/or their suppliers. There are lots of tools already available; in most cases, the main issue is how those tools can be leveraged to properly ensure accountability.

Technology companies have been hit with AI and ethics questions recently, but what other types of organizations will be affected?

Outside of the companies that are leading AI development, there are two categories of organizations that are facing similar, but not identical, challenges: the public sector and governments on the one hand, and non-AI or even non-technology companies on the other. Among these two groups, there is a lot of enthusiasm for trying to use AI technologies, but also a growing recognition that there is a lot of potential risk that comes with them. For example, there is the potential for AI to behave in a discriminatory way if biased data is fed into the machine learning system.

I think there’s a general understanding of this challenge but not enough knowledge of what to do about it. There are a lot of high-level principles promoting things like “AI should respect human rights” or “AI should not be discriminatory,” but there’s a real sense that organizations don’t necessarily know how to bridge the gap between these high-level principles and what’s happening on the ground.
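To make the biased-data concern concrete, the following is a minimal, illustrative sketch of the kind of check a team could run before training a model on historical data: it compares selection rates across a hypothetical demographic attribute in a toy hiring dataset and flags a large gap. The column names ("group", "hired") and the 0.8 threshold are assumptions for illustration only, not anything prescribed in the interview.

# Illustrative sketch: a simple demographic-parity check on a toy dataset.
# The column names and the 0.8 threshold are assumptions for this example.
import pandas as pd

# Hypothetical historical hiring outcomes that might be used as training data.
data = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Selection rate per group: the share of positive outcomes in each group.
rates = data.groupby("group")["hired"].mean()
print(rates)

# Disparate-impact ratio: lowest selection rate divided by the highest.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")

# Flag the dataset for review if the ratio falls below the chosen threshold.
if ratio < 0.8:
    print("Warning: training data shows a large selection-rate gap between groups.")

A check like this does not settle whether a system is fair, but it is one small, auditable way to move from a high-level principle to something that can be examined on the ground.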

KEY CHALLENGES AND RISKS IN AI

What do you see as the greatest challenges and risks in AI and ethics?

I think the biggest challenge right now is the information asymmetry that exists between the people who are creating these AI technologies and the people who need to decide whether to use them, and how to use them.

Procurement professionals – who have long decided what kind of computers or software to buy – are now being asked to determine what kind of AI systems to purchase. Or they are being asked to make decisions about what type of data to give to third parties to create AI systems. In many cases, these people are not necessarily well prepared to assess the risks and opportunities of a particular AI system.

What kind of risks do traditional organizations face?

The risk is entirely dependent on where the AI system is being used. Some applications will carry very low risk, but others will carry substantial risk. For example, in autonomous vehicles, there is potential risk to public safety. In the criminal justice system, there is a risk of unfairly incarcerating people for longer than they should be, or of letting potentially dangerous criminals onto the streets when they should be in jail. In healthcare, there could be a risk of improper diagnoses.

DELIVERING ETHICAL AND TRANSPARENT AI

Do you think ethical and transparent AI is realistic?

I think organizations have a lot of incentive to pay attention to it. Given that there is a growing understanding of the potential risks that AI systems can present, I think there is a desire to try to deploy these systems in a way that respects human rights, that preserves human dignity, and that is fair, accountable, and transparent.

And, in general, I think it’s realistic. But of course, there are trade-offs between explainability and how accurate a system might be. There will be some instances where having explainable AI is so important that we will be willing to accept whatever compromise in accuracy that requires. There will be other, lower-risk circumstances where an explanation is not as important, and so we can aim for greater accuracy.
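As a rough illustration of the explainability–accuracy trade-off described above, here is a minimal sketch, assuming scikit-learn and synthetic data, that contrasts an interpretable model with a more complex one. The specific models and data are assumptions chosen for illustration, not recommendations from the interview.

# Illustrative sketch of the explainability/accuracy trade-off.
# Model choices and synthetic data are assumptions for this example only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An interpretable model: each feature's coefficient can be read and explained.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A more complex model: often more accurate, but harder to explain to a non-expert.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", accuracy_score(y_test, simple.predict(X_test)))
print("Gradient boosting accuracy:  ", accuracy_score(y_test, boosted.predict(X_test)))

# The interpretable model's coefficients double as a simple explanation of its decisions.
print("Largest coefficients:", sorted(zip(simple.coef_[0], range(X.shape[1])), reverse=True)[:3])

In a high-stakes setting, the readable coefficients of the simpler model may matter more than a few points of accuracy; in a low-stakes one, the more accurate but opaque model may be acceptable.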

What role can team diversity play in removing bias and discrimination in AI systems?

Improving diversity is incredibly important. In my opinion, one of the things that must happen is that the people who are being impacted by AI technologies must play a bigger role in helping to develop and govern those technologies. AI technologies cannot just be developed in a few places in the world and exported everywhere else. We need greater diversity in terms of the people who are developing the technologies. We need more diverse datasets to go into developing those technologies. We need greater understanding of the implications of these technologies, for both good and bad, across the public sector all around the world, so that information asymmetry is not an obstacle to good policymaking. I don’t think there is any one place where diversity and inclusion must be improved; rather, they must be addressed throughout the landscape.

RECOMMENDATIONS FOR ORGANIZATIONS

What concrete steps do organizations need to take to build and use ethical and transparent AI?

There is a lot that organizations can do. There are high-level principles and emerging standards that they can adopt. A good first step is looking at that landscape of principles and emerging standards as a way to begin to understand and think critically about both the potential risks and benefits of AI systems.

The second step is to understand what gaps exist in their ability to address those risks and opportunities. For example, organizations can examine their resources and talent. Do organizations have data scientists on staff? If they do not have data scientists in their organization, how can they partner with local universities to develop a pipeline of data scientists? Are there people who can help them audit their datasets and, if not, where can they find those people? Are there people within the organization who understand how AI systems work? If not, can they partner with computer scientists and computer engineers at local universities?
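For the dataset-auditing gap mentioned above, the sketch below shows a few of the most basic checks an organization could start with (missing values, duplicate records, group balance). The columns and toy values are hypothetical assumptions for illustration; a real audit would go much further.

# Illustrative sketch of a basic dataset audit.
# The columns and values are a hypothetical toy example, not a real dataset.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 51, None, 29, 29, 62],
    "region": ["north", "north", "north", "south", "south", "north"],
    "outcome": [1, 0, 1, 1, 1, 0],
})

# How complete is the data? Count missing values per column.
print(df.isna().sum())

# Are there duplicate records that could skew a model?
print("Duplicate rows:", df.duplicated().sum())

# Is any group heavily under- or over-represented?
print(df["region"].value_counts(normalize=True))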

Where do you think ethical AI accountability and responsibility should lie within private, non-technology organizations?

I don’t think there is one place. I think the responsibility must be shared, similar to the approach that organizations have taken for issues like human rights and privacy. Everyone in an organization has an obligation to respect the privacy of customers or to protect their data. Certainly, organizations have created positions like chief privacy officer to help ensure that the right policies and systems are in place. But the responsibility itself lies with everyone. The same goes for human rights. No one gets a free pass for violating human rights, from the most junior person in the company all the way up to the senior executives and the board. The question of behaving ethically is similar in that I don’t think the responsibility lies with any one position.