
Building trust in AI-based decision making by understanding the strengths and limitations of machines

Daniela Rus is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). She serves as the deputy dean of MIT’s Schwarzman College of Computing, and as director of the Toyota-CSAIL Joint Research Center. Her research interests are in robotics, mobile computing, and data science. She is known for her work on self-reconfiguring robots, shape-shifting machines that can adapt to different environments by altering their internal geometric structure. She earned her PhD in Computer Science from Cornell University.
The Capgemini Research Institute spoke with Professor Rus to understand the ethical considerations in the design and deployment of AI systems.


ETHICAL AI CONSIDERATIONS FOR ORGANIZATIONS

What are the key issues as they relate to ethics in AI?

The ethics problem is broader than just the AI problem. In a system where a machine makes a decision, we want to make sure that the decision is made in a way that ensures people’s confidence in it. For autonomous decision making, it is important that the machine’s decision can be interpreted and explained, so that people get justifications for how the system decided the way it did. So, if someone didn’t get a loan, why not? This kind of interpretability is critical. People need to be aware of how these systems work. Additionally, it is critical that the data used to train the system is correct and has checks for biases, because the performance of machine learning and decision systems is only as good as the data used to train them. Altogether, interpretability, explainability, fairness, and data provenance are the critical attributes that ensure trust in the decision of the system.

We can address the ethics problem at multiple levels: technologically, through policy, and with business practices. Technologists, policy makers, and business leaders need to come together to define the problems and chart a path to solutions. As a technologist, I would like to highlight that some of the solutions can be technological: for example, fairness and level of privacy are becoming important metrics for evaluating the performance of algorithms. On the other hand, we also have to be aware that the current solutions for machine learning and decision making have errors associated with them. While machines are good at some things, people are good at other things. Removing the person from the decision-making loop entirely will have consequences. My recommendation, therefore, is to have the machines and systems act as recommenders – providing recommendations for decisions and presenting supporting information for those recommendations. But, ultimately, there should be a person making those decisions.
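A minimal Python sketch of this recommender-with-a-person-in-the-loop pattern might look like the following; the loan example, the scoring rule, and the threshold are purely illustrative assumptions, not a description of any real system.

# Minimal human-in-the-loop sketch: the model recommends, a person decides.
# The scoring rule and threshold below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    decision: str          # e.g. "approve" or "deny"
    confidence: float      # the model's confidence in its own suggestion
    evidence: dict         # supporting information shown to the human reviewer

def recommend_loan(applicant: dict) -> Recommendation:
    """Toy scoring rule standing in for a trained model."""
    score = 0.6 * applicant["credit_score"] / 850 + 0.4 * min(applicant["income"] / 100_000, 1.0)
    return Recommendation(
        decision="approve" if score > 0.5 else "deny",
        confidence=round(score, 2),
        evidence={"credit_score": applicant["credit_score"], "income": applicant["income"]},
    )

def final_decision(rec: Recommendation, reviewer_approves: bool) -> str:
    """The machine only recommends; the person makes the final call."""
    return rec.decision if reviewer_approves else "escalate for manual review"

rec = recommend_loan({"credit_score": 640, "income": 52_000})
print(rec)                                          # the reviewer sees the decision plus its evidence
print(final_decision(rec, reviewer_approves=False)) # a person can always override

The point of the sketch is structural: the system surfaces a recommendation and its supporting evidence, while the final decision remains with a person.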

What is the magnitude of the AI ethics and transparency issue in organizations today?

It is a huge problem. It is not something that the technical community has been thinking about from the very beginning. Computing as a field is very young compared to other science fields, such as physics. The term artificial intelligence (AI) was coined in 1956, and the academic discipline of artificial intelligence started shortly after that. That is barely over 60 years, so compared to other fields of study, AI is very young. At the beginning of a new field, people try to lay its foundation: to understand the problems that can be addressed, the solutions we can have, and the capabilities that can be introduced as a result of the field of study.

For AI, the focus has been on developing algorithms and systems that enable machines to have human-like characteristics in how they perceive the world, how they move, how they communicate, and how they play games. In recent years, we have started thinking seriously and profoundly about the societal implications of the technology. This is a very important issue right now. Our society needs to be positively impacted by what we do.

However, there are cases where organizations and people take a tool that is designed for a certain positive purpose and use it for a negative purpose. How do we address those situations? We can’t stop technology from evolving and changing the world, but we need to stop and think about its consequences and come up with policies and provisions that ensure that what gets produced is used for the greater good.

What are some of the things that organizations can do today, on a practical level, to work towards having ethical and transparent AI systems?

Organizations should start with understanding the technology. A lot of people use technology without understanding how it works or how the data impacts the performance of a system. Another action companies can take is to identify their principles for adopting technology – things like fairness, inclusiveness, reliability and safety, transparency, privacy, security, and accountability. Companies should understand what it means to use AI for good and incorporate these attributes in their culture and in the education of their employees. For example, companies can create review panels to make sure that these principles are adopted and, if people have questions about them, they are answered. They can ensure that the latest technological advancements that address the safe use of technology are adopted and incorporated in the operation of the organization.

How can we actively prevent biases in AI systems, such as facial and voice recognition, for example?

The performance of a system is only as good as the data used to train it. So, if we have bias in the data, we are going to have bias in the results. There are numerous examples of companies whose biased face and voice recognition systems displayed discriminatory behavior. It is important to put provisions in place to make sure that people don’t get discriminated against because the data used by the system was biased.

Another type of bias comes from over-predicting what is normal and over-emphasizing the expected distribution of attributes in the data. If the data is incomplete, we might fail to capture critical cases that make a real difference in how the system operates in the world.
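One practical, if partial, way to act on this is to audit how groups and cases are represented in the training data before any model is trained. The sketch below is only an illustration: the group names, counts, and reference shares are invented for the example.

# Sketch of a pre-training data audit: compare each group's share of the
# training data against a reference population share and flag large gaps.
# Group names, counts, and reference shares are illustrative assumptions.
from collections import Counter

def representation_report(samples, reference_shares):
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
        print(f"{group:10s} observed {observed:.2f}  expected {expected:.2f}  {flag}")

training_data = [{"group": "group_a"}] * 800 + [{"group": "group_b"}] * 50
representation_report(training_data, {"group_a": 0.6, "group_b": 0.4})

Such a check does not guarantee fairness, but it surfaces the incomplete-data problem described above before it becomes a deployed system’s behavior.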

CASE IN POINT: SOLVING THE ETHICAL DILEMMAS OF AUTONOMOUS CARS

Autonomous cars pose a lot of ethical questions. How can the companies involved in designing these cars answer these questions?

First of all, the companies have to advance the technology. Today there are limitations to what autonomous vehicles can do. For instance, the sensors used by autonomous vehicles are not reliable enough: they do not work well in rain or in snow, and this places big limitations on the use cases of self-driving technologies. The companies train and test their vehicles mostly in Arizona, where it rarely rains or snows. This is a serious limitation.

There are other limitations too – the vehicles do not have the ability to respond quickly enough or cope well with high speeds and congestion. They have trouble in human-centered environments because they do not understand human-centric behavior. They have trouble understanding the road context, especially when robot cars are on the same roads as human-driven cars.

These are issues that are being worked on, but there are no good solutions yet. I would say that today’s autonomous driving solutions work well in environments with low complexity, where things don’t move too much, where there is not much interaction, and where the vehicles move at low speed. So, if you think about three axes – the complexity of the environment, the complexity of the interaction, and the speed of the vehicle – the sweet spot today is around the origin of this system of coordinates. For level-five autonomy, that is, autonomy anywhere and anytime, we need technologies that can address high complexity, high speed, and high levels of interaction; in other words, we need to push the boundaries along all three axes. While there are many ways of getting there, the most important one is to have reliable technology. Once we have reliable technology, we can answer a range of questions. How do we regulate, and at what level? If the car has an accident, who is responsible? Is it the manufacturer, the programmer, or the owner? How do we begin to address such issues? These are very important questions.

ETHICAL AI REGULATION AND CODES OF CONDUCT

Do you foresee regulations in the areas of ethics in AI? Is legislation the way ahead or is it counterproductive?

At MIT, we have one organization that is devoted to studying this question – the Internet Policy Research Initiative (IPRI). IPRI’s activities include studying the policy and technology around data use and, more broadly, the use of algorithms to support decision making. Researchers are looking at what should be regulated and to what extent. There are many deep questions around this issue. Take, for instance, self-driving vehicles. At the moment, we do not have legislation that addresses how to regulate the use of self-driving vehicles. This makes the development of the technology more challenging and slows the rate at which the industry can innovate products around it. However, in the case of autonomous vehicles, I believe that regulation is necessary. But, in the US for example, should vehicles be regulated by the federal government or at the state level? How can the policies be coordinated from state to state? These remain open questions, but once we have answers and policies, I think we will see product growth in the space of autonomous vehicles.

I am giving you a nuanced answer because I don’t think there is a single answer to this question. We must look at technologies by industry sector and figure out the policies and regulations for each sector. Altogether, the most important thing is to have the building blocks of trust that can help assure consumers that they get the benefit of innovative products without sacrificing safety, security, fairness, or privacy.

How can organizations develop codes of ethics and trust for using machines in decision making?

There are attributes we should embrace as the basis of a code of ethics and trust. These might include explainability, interpretability, transparency, privacy, and guarantees of fairness. They might also include descriptions of data provenance and accountability for the information sources used to build the system.

We can think of these attributes as being generic, cutting across many different industry verticals. Then, each of these attributes would be instantiated as specific questions applicable to a given vertical. For instance, the attribute that addresses explainability, interpretation, and transparency might translate in the transportation sector to “why did the car crash?” or “was the mistake avoidable in any way?” In finance, it might translate into “why didn’t I get the loan?” and in healthcare, “why this diagnosis?” In criminal justice, it might be “is the defendant a flight risk or not?” These are very different questions that address safety, transparency, and explanation for decision making. We can create similar tests for other attributes and verticals. For example, for privacy in finance, we might want to prove that a customer got the best deal without disclosing other customers’ data. But how do we show that?

In creating a comprehensive code of ethics, it is important to focus on ensuring consumer confidence in decision making, especially for safety-critical applications. Before a person can drive a car, the person needs to pass a driver’s test. Maybe, for AI working on behalf of humans, we need analogs of the driver’s test to convince ourselves that the machine operates at a level of trust and robustness with which we are comfortable.

ROLE OF ACADEMIA

What role does academia have to play in ensuring organizations implement ethical practices in AI?

Academia has a very important role in establishing the foundation and principles, and in highlighting what is important to ponder. Academia can also provide support for decision-making processes in various industries. Some of this work falls in the policy space and some of it falls in the technological space. Machines are better at some things and humans are better at other things. We need to figure out ways of tasking machines and people that make the most of both worlds, so that the collective becomes much more powerful than machines working by themselves or people working by themselves.

In computer science, the measures defining how well a computer program performs have traditionally focused on the time and space required to compute. Now, we are beginning to consider other metrics – for example, how fair is the algorithm? To use metrics such as fairness or privacy, we need to develop mathematical models that allow us to incorporate these properties into the evaluation of algorithms. This methodology will result in algorithms that are guaranteed to produce a fair answer, or systems that are guaranteed to preserve privacy. We might even imagine generalizing from fairness metrics to other aspects of human rights.
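To make the idea of fairness as an evaluation metric concrete, one common formulation (by no means the only one) is the demographic parity gap: the difference in positive-outcome rates between groups. The sketch below illustrates it with invented predictions and group labels.

# Sketch: the demographic parity gap as an evaluation metric alongside runtime or accuracy.
# A gap near 0 means the groups receive positive decisions at similar rates.
# The predictions and group labels are invented for illustration.
def demographic_parity_gap(predictions, groups, positive_label=1):
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] == positive_label for i in idx) / len(idx)
    values = list(rates.values())
    return max(values) - min(values), rates

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, "gap =", round(gap, 2))   # e.g. {'a': 0.75, 'b': 0.25} gap = 0.5

An algorithm could then be required to keep such a gap below a threshold, in the same way it is required to run within a time or memory budget.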

I can’t say that we have a clear solution, but this is why the topic remains an area of research. Bringing technologists, policy makers, and company leaders together to incorporate their different objectives into something that encourages innovation for the greater good, and that enforces positive and constructive applications of technology, requires a level of understanding of policy, technology, and business. Co-training in technology, policy, and regulatory law should be part of our future processes. Academia has an important role to play here.

What is the current focus of research in this field? How likely is it that the research will help solve issues on ethics and transparency?

I don’t think we have a silver bullet right now. This space remains a very important and exciting area of research. I believe a step forward is to identify the right attributes to be checked when involving machines in decision making. New approaches to trustworthy and robust machine learning engines will lend transparency and performance guarantees to the systems.

Advancing fields such as homomorphic encryption and understanding how to deal with bias in data is also very important. We have advanced technology to the point where we produce quintillions of bytes of data every day, but in a world with so much data, everyone can learn everything about you. So how can we maintain privacy? Well, the field is working to develop technologies such as differential privacy, which protects the individuals behind aggregate results, and homomorphic encryption, which enables computation on encrypted data. When machines are able to perform computations without decrypting data, we will have the benefits of data-driven computation without revealing what is in each of those individual records. This is an example of a technological solution that could have a profound impact on the use of data in the future. Other solutions will necessarily have to be at the intersection of policy, business, and technology.
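To make the differential privacy half of that concrete, here is a minimal sketch of the Laplace mechanism for a counting query. The epsilon value and the records are invented for illustration; real deployments require careful sensitivity analysis and privacy budgeting.

# Sketch of the Laplace mechanism: answer a counting query with noise calibrated
# so that any single individual's presence barely changes the released answer.
# Epsilon and the records below are illustrative assumptions.
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Counting query released with epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0   # adding or removing one person changes a count by at most 1
    return true_count + laplace_noise(sensitivity / epsilon)

ages = [23, 37, 41, 58, 62, 45, 29]                 # invented records
print(private_count(ages, lambda a: a > 40, epsilon=0.5))

Homomorphic encryption addresses the complementary goal of computing directly on encrypted records; its building blocks are much heavier than a few lines of Python, so it is not sketched here.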