Building Robust AI Through Interpretable AI Implementation

Marcin Detyniecki, AXA

Marcin Detyniecki is head of Research & Development and Group chief data scientist at AXA. He holds a Ph.D. in Artificial Intelligence from Université Pierre et Marie Curie (UPMC) in Paris, is a professor at the Polish Academy of Sciences (IBS PAN), and is an associate researcher at LIP6, the computer science laboratory of Sorbonne Université.

The Capgemini Research Institute spoke with Marcin to understand more about emerging ethical challenges in artificial intelligence and the role of governance frameworks in countering them.


TOWARDS INTUITIVE, FAIR AND ROBUST AI

What are the main issues you are confronting in your work at AXA – in particular around the issue of ethics in AI?

As chief data scientist of the AXA Group and, above all, head of Research and Development, my role is to produce technical solutions to the challenges facing the insurance industry. The human dimension is very important in the insurance business. If Facebook or Google get a prediction of your appetite for a particular product wrong, it will not change your life. But, in the insurance industry, if you make a wrong prediction, it can have significant repercussions for individuals. Therefore, we invest time and money in fundamental research on three key topics: interpretability, fairness, and robustness.

First, interpretability allows you to explain decisions made by an algorithm that we call a “black-box model” – one with high accuracy but no inherent explainability. Second, fairness is about mitigating unwanted bias, which may lead to discrimination, whether it comes from a non-representative sampling of the population or from the AI unintentionally reconstructing protected sensitive attributes, such as religion or race. Finally, robustness is about understanding and fixing the fact that machine learning can be tricked very easily, such as through adversarial attacks, where a minor, seemingly unimportant change to the input drastically changes the output of the AI. For instance, changing a few pixels on a “stop sign” image can trick the AI into saying it sees a giraffe.
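To make the adversarial-attack idea concrete, here is a minimal sketch in Python. It assumes a toy linear classifier with random weights standing in for a real image model (nothing here is AXA code): a tiny, bounded change to every pixel, chosen along the model's gradient, collapses the score.

```python
import numpy as np

# A toy linear "image classifier": 784 pixel inputs, random weights.
# These stand in for a trained model; they are illustrative assumptions.
rng = np.random.default_rng(0)
d = 784
w = rng.normal(size=d)           # model weights
x = rng.uniform(size=d)          # an input "image" with pixels in [0, 1]

def score(x):
    """Probability the model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Fast-gradient-sign-style step: for a linear model, the gradient of the
# logit with respect to the input is exactly w, so the worst-case small
# perturbation moves each pixel by eps in the direction -sign(w).
eps = 0.1                        # each pixel changes by at most 10%
x_adv = x - eps * np.sign(w)

print(f"score before perturbation: {score(x):.4f}")
print(f"score after perturbation:  {score(x_adv):.4f}")
print(f"largest per-pixel change:  {np.abs(x_adv - x).max():.2f}")
```

The per-pixel change is imperceptibly small, yet because it accumulates across hundreds of inputs, the score is pushed sharply toward the other class – the behavior the robustness research described above aims to understand and fix.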

You speak of interpretability of AI, is it the same as transparency?

We focus our research on interpretability rather than transparency. This is because machine learning tends to produce complex systems. If you bring in transparency, anybody can see the rules, but nobody will necessarily understand anything – especially if there are millions of them. We work on interpretability to make sure that people can understand the impact of decisions made by an AI system. Nevertheless, in general, the widespread use of machine learning and artificial intelligence in our society requires a high level of transparency to ensure that practitioners and users alike are aware of how, when, and why systems behave the way they do.
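One standard way to obtain that understanding is a surrogate model: keep the accurate black box for the decision, and fit a small, readable model to mimic it for the explanation. Below is a minimal sketch with scikit-learn on synthetic data – an illustration of the technique, not AXA's tooling.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The accurate but opaque model: hundreds of deep trees, no readable rules.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The surrogate: a depth-3 tree trained to mimic the black box's outputs.
# It is less accurate, but its handful of rules can be read and discussed.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```

The printed tree is the kind of artifact interpretability delivers: a short set of rules a person can actually read, as opposed to the millions of parameters that full transparency would expose.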

Is there a governance mechanism or team that specifically looks at ethical issues in AI?

At AXA, we have an Ethical and Data Privacy Advisory Panel, which addresses the ethical aspect of AI through the lens of data privacy. It is very useful because, more often than not, it is not the AI that has an ethical issue – it is the input (i.e., the data) that poses the challenge. For example, should we use DNA information for pricing or not? Although the answer may seem straightforward, it is more complex than expected because, for instance, this information could be used to ensure that an incurable form of a disease, currently excluded from coverage, could now be covered thanks to that information. These kinds of topics are discussed in this dedicated panel.

Moreover, we initially thought about having a specific code of conduct for AI, but then we decided that it would not be very effective as yet another guide of a generic nature. So, we decided instead to add a specific section on AI to our different internal rules and codes of conduct. To draw attention to the topic and to cover any cross-cutting gaps, we also created an internal AI charter. We tried to ensure it has proof points and is a living framework that can evolve over time. This charter was an interesting exercise because it is the result of interaction and exchanges on these topics between very different people around the table, who were asked to align on the topic. It has successfully created positive momentum within AXA, and it is now even shaping thinking across industries.

TACKLING ETHICAL CONCERNS IN AI AT AXA

Are there any ethical concerns with respect to fairness or interpretability that have been surfacing in your work?

A first operational case that has attracted attention is the creation of a very accurate, and thus in some sense fair, insurance product. To achieve this, we could use deep learning, but then the regulator would not be able to audit it the way it does today – since it is not an interpretable algorithm. That is why it is very important for us to keep the ethical aspect in mind at all stages and to keep investing in our research activities.

A second operational case is the use of machine learning to detect fraud. The algorithm is trained to flag suspicious cases based on previous examples. The list of suspects is then handed to a human expert who checks for fraudulent activity. The concern was that machine learning provides only a score – for instance, eight out of 10 for an individual – but no explanation. The experts complained that they did not even know what they were supposed to be looking at. This typically hinders adoption. Since this concern was detected, the R&D team has developed tools that provide helpful insights for our operators.
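The kind of tooling described here can be sketched with a simple, hypothetical example: attach to each fraud score a ranking of the features that drive it, using naive occlusion (replacing one feature at a time with its population mean and measuring the score change). Everything below – feature names, data, model – is assumed for illustration; production explainers such as SHAP are more principled.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical claim features and synthetic training data.
features = ["claim_amount", "days_since_policy", "n_prior_claims",
            "report_delay", "n_parties", "garage_flag"]
X, y = make_classification(n_samples=3000, n_features=6, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)

def explain(case):
    """Score one case and rank features by how much each one drives it."""
    base = model.predict_proba(case.reshape(1, -1))[0, 1]
    deltas = {}
    for i, name in enumerate(features):
        masked = case.copy()
        masked[i] = X[:, i].mean()          # "remove" this feature
        deltas[name] = base - model.predict_proba(masked.reshape(1, -1))[0, 1]
    return base, sorted(deltas.items(), key=lambda kv: -abs(kv[1]))

score, drivers = explain(X[0])
print(f"fraud score: {score:.2f}")
for name, delta in drivers[:3]:
    print(f"  {name:>18}: {delta:+.3f}")
```

Instead of a bare "eight out of 10," the expert now sees which features pushed the score up, which is exactly the kind of insight that unblocks adoption.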

RECOMMENDATIONS FOR ORGANIZATIONS

What three concrete steps would you recommend that organizations take to start embedding ethics into AI systems?

The first step is to realize that ethical AI is important because it will allow you to drive adoption and develop sustainable technologies that comply with current and future regulations. Furthermore, being ethical does not necessarily drive up costs. For instance, being bias-free does not mean that you are going to lose money. As a matter of fact, in the insurance case, it just redistributes the global risk more fairly.
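As an illustration of what "bias-free" can mean operationally, here is a minimal sketch of a demographic-parity check: compare a model's mean predicted risk across groups defined by a sensitive attribute. The data and the binary attribute are synthetic assumptions, not AXA figures.

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(size=1000)          # model's predicted risk per person
group = rng.integers(0, 2, size=1000)    # hypothetical sensitive attribute

rate_0 = scores[group == 0].mean()
rate_1 = scores[group == 1].mean()
print(f"mean predicted risk, group 0: {rate_0:.3f}")
print(f"mean predicted risk, group 1: {rate_1:.3f}")
print(f"demographic parity gap:       {abs(rate_0 - rate_1):.3f}")

# Mitigating such a gap shifts predicted risk between groups; the pooled
# risk across the whole portfolio need not increase - the redistribution
# point made above.
```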

A second step would be to set up a team responsible for implementing ethical AI. This team needs strong sponsorship from senior leaders because it is an overarching, long-term challenge. It must be a multidisciplinary team that understands not only the technical issues but also business processes, HR challenges, and compliance.

Lastly, companies need to be patient. The use of AI and its necessary ethical adoption is clearly an opportunity, but improving things and changing processes in society and, in particular, in large companies may meet resistance. The best way to ensure that ethical standards are maintained is by aligning the interests of all stakeholders around the noble purpose of what you are delivering and the associated ethical values. I really think this will happen, since long-term sustainability in our complex and ever-changing world implies a necessary alignment between the shareholders, the customers, and also the talent you are fighting for.