Designing ethical and transparent AI for healthcare

Saskia Steinacker is the global head of Digital Transformation at Bayer. She works closely with the Group’s Digital Transformation Board, which is composed of the three divisional heads and board members, the chief financial officer, the chief information officer, and the digital leads. She has played a key role in developing the company’s digital agenda, with a focus on new business models to accelerate growth. She is also a member of the High-Level Expert Group on Artificial Intelligence formed by the European Commission.

The Capgemini Research Institute spoke with Saskia to understand more about designing ethical and transparent AI in the context of the healthcare sector.


Defining Ethical and Transparent AI

What are your key responsibilities as they relate to AI at Bayer? Could you also talk about your role in the EU HLEG?

I lead the internal board that is driving Bayer’s digital transformation, which basically means transforming the value chain in all our divisions. Within Life Sciences, our focus areas are healthcare and nutrition, and we see tremendous opportunities for AI in these areas. In particular, it is about developing digital health solutions as well as expanding the digital farming business. AI can help us fight diseases such as cancer or stroke more effectively, and also feed a growing world population more sustainably.

Artificial intelligence is a key technology and its impact goes far beyond our business. Growth in computing power, the availability of data, and progress in algorithms have turned AI into one of the most powerful technologies of our time. This power can be used for good or for ill. There are good reasons for concern about self-determination and data privacy, as well as about the impact on jobs and established business models. Risks and opportunities must be discussed in a broad social dialog, and ethical questions must be taken into consideration. Trust in new technologies can only be gained by providing an ethical framework for their implementation. This is why I’m part of the EU Commission’s High-Level Expert Group (HLEG) on AI: to contribute to the development of such an ethical framework. This is what I stand for and this is what Bayer stands for.

How does Bayer define ethics in AI and transparency in AI? If there is a definition, how did you arrive at it? If not, are you working on building a definition or guideline to create a common understanding of the topic at your firm?

I don’t believe that ethics should be defined by a single company. Defining an ethical framework is a task for an entire society and should even be discussed at a supranational level, as digital technologies don’t care much for national borders. This is what makes the EU approach so compelling. We have stakeholders with completely different backgrounds in the Expert Group: from industry, society, and academia. This reflects the diversity of our society and gives us varied perspectives.

The “Guidelines for Trustworthy AI” we developed with the HLEG address a number of relevant key requirements. Trustworthy AI needs to be lawful, ethical, and robust with the aim of maximizing the benefits and minimizing the risk. The requirements AI systems need to meet are the following: human agency and oversight; technical robustness and safety, which includes accuracy, reliability and reproducibility; privacy and data governance; transparency; diversity, non-discrimination and fairness, which includes areas like avoidance of unfair bias; societal and environmental wellbeing; and, finally, accountability, which includes areas such as auditability.

The EU developed these guidelines and is currently in the piloting phase of the assessment list. This is a tool that will help companies to practically implement the guidelines. Once the pilot phase is completed at the end of 2019, the final assessment list will be published at the beginning of 2020, and companies should adapt their own guidelines accordingly.
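
To give a concrete sense of how the seven requirements could translate into project practice, the sketch below encodes them as a simple self-assessment checklist in Python. This is purely illustrative and is not the official EU assessment list: the requirement names come from the interview, while the data structures, field names, and the example project are hypothetical.

```python
# Purely illustrative: a minimal self-assessment checklist inspired by the
# seven HLEG requirements. This is NOT the official EU assessment list; the
# requirement names follow the interview, everything else is hypothetical.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental wellbeing",
    "Accountability",
]

@dataclass
class AssessmentItem:
    requirement: str
    evidence: str = ""        # e.g. a link to an audit report or bias analysis
    satisfied: bool = False

@dataclass
class AIProjectAssessment:
    project: str
    items: list = field(
        default_factory=lambda: [AssessmentItem(r) for r in REQUIREMENTS]
    )

    def open_gaps(self):
        """Return the requirements that still lack documented evidence."""
        return [i.requirement for i in self.items if not i.satisfied]

# Example: flag which requirements a hypothetical project still has to address.
assessment = AIProjectAssessment(project="radiology-support-pilot")
assessment.items[3].satisfied = True  # e.g. model documentation published (transparency)
assessment.items[3].evidence = "internal model card v1.2"
print(assessment.open_gaps())
```

In a real setting, each item would point to documented evidence – an audit trail, a bias analysis, a model card – rather than a simple boolean flag.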

Designing Ethical and Transparent AI

When implementing AI, why do you think it is important to follow ethical and transparent practices in design? Do you receive questions from your clients about this?

Acceptance of new technologies requires trust, and trust requires transparency. This holds especially true in critical areas such as healthcare and nutrition. With AI, we have the chance to shape a technology so that it is socially accepted and beneficial for the individual as well as for society as a whole. We have to admit that people have concerns with regard to self-determination and data privacy, as well as effects on the job market and current business models. These concerns have to be taken into consideration, even amid the excitement about the new scientific opportunities AI opens up.

How do we address the issue of ownership in AI? Who is responsible if an AI system makes a wrong diagnosis?

Our goal in healthcare is not to let AI take decisions, but to help doctors make better decisions. AI has its strengths – analyzing huge amounts of data and generating insights that a human being wouldn’t have thought of before. It can identify certain patterns, for example in radiological images, and support a doctor’s diagnosis. AI is meant to enhance or augment the capabilities of humans. How AI is actually leveraged within the healthcare system has to be defined by the different players and ultimately by the regulators.
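
This “augment, don’t decide” principle can be sketched in code. The example below is a minimal illustration under stated assumptions – the function, the data structures, and the 0.7 review threshold are invented for this sketch and do not describe Bayer’s actual systems. The model only proposes a finding; the final diagnosis is always recorded under a clinician’s name.

```python
# Illustrative sketch of the "augment, don't decide" pattern described above:
# the model only proposes; a clinician must review and record the final call.
# All names and thresholds here are hypothetical, not Bayer's actual system.
from typing import NamedTuple, Optional

class ModelSuggestion(NamedTuple):
    finding: str       # e.g. a pattern flagged in a radiological image
    confidence: float  # model score in [0, 1]

class FinalDiagnosis(NamedTuple):
    diagnosis: str
    decided_by: str    # always a human clinician
    model_input: Optional[ModelSuggestion]

def support_diagnosis(suggestion: ModelSuggestion, clinician: str,
                      clinician_diagnosis: str) -> FinalDiagnosis:
    # The system never auto-commits a diagnosis: low-confidence suggestions
    # are still shown, but clearly marked, and the clinician decides either way.
    if suggestion.confidence < 0.7:  # hypothetical review threshold
        print(f"Low-confidence flag ({suggestion.confidence:.0%}): {suggestion.finding}")
    else:
        print(f"Suggested finding ({suggestion.confidence:.0%}): {suggestion.finding}")
    return FinalDiagnosis(clinician_diagnosis, clinician, suggestion)

# Example: the model flags a pattern; the doctor confirms or overrides it.
flag = ModelSuggestion("lesion pattern consistent with stroke", 0.83)
record = support_diagnosis(flag, clinician="Dr. Example",
                           clinician_diagnosis="ischemic stroke")
print(record.decided_by)  # accountability stays with the human
```

The design point is that accountability never transfers to the model: every recorded diagnosis carries a human decision-maker, in line with the “human agency and oversight” requirement.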

Experience at Bayer

Do you have a defined governance framework for tackling ethical issues at Bayer? How do you ensure that relevant teams in your organization are aware and responsive to issues of ethics and transparency in AI?

We do have our corporate values, and also well-established internal compliance systems, as is always the case in highly regulated industries such as ours. It is early days for the implementation of AI in our sector, and we are one of the first companies to test the assessment list that supports the guidelines for trustworthy AI, focusing on a concrete lighthouse case in pharmaceuticals. It’s a project where we try to help doctors identify patients whose cancer is likely the result of a special gene fusion in their tumor cells. It’s important to know this if you are to choose the right treatment – this is about precision medicine.

Bayer offers training programs to educate employees on topics such as AI or blockchain. Is there also a need for organizations to train employees on AI ethics?

Absolutely. We have regular global webcasts on AI topics, and our ethics sessions had a full house. We don’t have a full-fledged, dedicated AI ethics training program yet, but one could be developed once the assessment list at the EU level is final and can be used. This will be helpful for employees who develop, implement, or use AI.

Regulations in AI

Do you see regulations as necessary for implementing ethical AI or is self-regulation the way to go?

I think we need a common framework first, which is binding for all players. Then an area-specific form of self-regulation could make sense. But it’s always about finding the right balance: it wouldn’t make sense to have a regulation in place that would make it impossible to develop AI solutions here in Europe.

Should there be GDPR-like regulation in this area? How can you build the right regulation practices that don’t stifle innovation?

This is exactly the balance being discussed in the HLEG. If we figure that out right, Europe could lead with ethical AI. Given the magnitude of the AI revolution ahead of us, there needs to be a certain degree of regulation, with a special focus on ethical questions. This type of regulation needs to be binding for all players in a given market and, ideally, worldwide. At the same time, there is already an abundance of regulation governing many aspects, such as data privacy. However, it is not always fit for the AI age, and AI brings a number of new ethical aspects to the table. We need very broad discussions at all levels of society, including the industry that is expected to develop these new solutions. Moving too quickly and creating overregulation will certainly make many players shy away from innovation in the future, meaning the future will happen elsewhere.

Recommendations and Outlook

What are the main risks facing an organization that does not take ethics in AI seriously?

Apart from immediate consequences – for instance, not being able to sell a solution in an increasingly ethics-conscious world – loss of image and reputation is probably the most apparent immediate result. What is probably most important, however, are the consequences that reach beyond a single solution and a single company. With the AI solutions we build today, we are shaping the future that we as a society and as individuals will have to live in. So you could even say that the greater good is at stake, and it starts with the people who build, use, and deploy AI solutions today.

How can organizations make their AI systems transparent, ethical, and bias free? What concrete steps are necessary for this?

Helping companies develop ethical and bias-free AI systems is exactly the aim of the HLEG guidelines. If you follow the key requirements, which the assessment list translates for practical use, you will be able to develop, deploy, and use a trustworthy application. Additionally, companies should raise awareness among their employees, educate them, and take ethical aspects into consideration right at the start of an AI project. As with other topics, it’s always necessary to have diverse teams.

What is the one AI ethics policy that you would want every organization to adopt at the minimum?

There are many essential topics to consider for ethics in AI, and it’s challenging to single out one over the others. Each of them has been discussed at length in the HLEG. As a starting point, the most important aspect to me is the context in which AI is used – an application for a ticket machine is certainly different from a complex healthcare diagnosis system.

EU High-Level Expert Group Guidelines

In April 2019, the High-Level Expert Group on Artificial Intelligence set up by the European Commission released its Ethics Guidelines for Trustworthy AI. These guidelines are aimed at promoting “Trustworthy AI”. According to the guidelines, “Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.” The guidelines also list seven requirements that should be kept in mind when developing trustworthy AI: “(1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability.”