
Unlocking the true potential of AI through its ethical implementation

Cecilia Bonefeld-Dahl is director general of DIGITALEUROPE, the digital technology industry association that represents over 35,000 digital companies in Europe. She is a member of the European Commission’s High-Level Expert Group on Artificial Intelligence, a board member of the European Commission’s Digital Skills and Jobs Coalition, and a board member of the European Parliament-led European Internet Forum.

The Capgemini Research Institute spoke with Cecilia to understand more about the state of ethical and transparent AI in Europe.


DIGITALEUROPE AND ETHICAL AI

Can you tell us about DIGITALEUROPE and its mandate in ethics and AI?

DIGITALEUROPE is the biggest tech association in the world. We represent 36,000 tech companies in Europe, and we have 40 associations across the European territory. We also have a chamber of big global companies, such as SAP, Siemens, Bosch, Schneider, Microsoft, and Google. So, you could call us a collaboration partner: we work with them to shape tech regulation in Europe.

REGULATION AND GUIDELINES ON ETHICAL AI

Could you tell a bit more about the guidelines published by the EU’s high-level expert group and what is currently happening in that space?

When the GDPR was launched, it had been discussed in the political environment for about seven or eight years. But, once it finally came into force, many companies, especially SMEs, were not ready. The change in legislation slowed down European industry to a significant degree. Learning from that, we realized that while it is good to have regulation and guidelines, they need to align with industry and companies. After developing the guidelines, we launched a pilot in which they are being tested by different companies, public institutions, and representatives from civil society. We are now collecting feedback on how we should implement these guidelines, and this runs until the end of 2019.

In parallel, we also run a series of sector workshops, where we look at different areas – such as the public sector, manufacturing, and health. We basically take the high-level expert group's guidelines and recommendations and ask how we can implement ethical and trustworthy AI. We are listening closely and testing on the ground to find the right way to work with the guidelines. If we don't do it this way, we might just slow down innovation by imposing things on people that do not fit the way they work. So, it is extremely important that we adopt an approach where industry and institutions are asked for their views. It gives them an opportunity to work with something like trustworthy AI in a way that is coherent with the real world.

Are there any milestone dates that the expert group has set for moving toward regulation?

It is not only about regulation but also about looking at whether regulation is necessary at all. It is also about keeping an open mind and looking at existing regulations. The overall goal is not just to implement trustworthy AI, but also to boost its uptake and give Europe a competitive head start in doing so. So, the next big step is to gather all the feedback, understand how we can work with trustworthy AI, and – if changes are needed – work out how to handle those changes.

How do you think the approaches to regulation or guidelines on trustworthy AI will differ for Europe compared to the US and China?

It is basically about creating an approach to AI where we are all sure that its application benefits people, companies, and society as a whole. And I think in most cases, it does. For example, we can do amazing things with artificial intelligence in health, preventive medicine, and predicting life-threatening diseases before they break out. We need to make sure that development is pushed in a direction that is good for society and companies. So, the whole idea is to create a feeling of safety and trust around AI and its benefits.

What is your position with respect to regulations in this space?

I want to be sure that companies and institutions sit down, look at their environment and the current laws, and see if there are any missing links. Let us do a thorough exercise and ask people, and if we find something missing, let us add it. If we find something that we need to interpret – or give clear guidance on how to apply the rules – let us do it. But let's not regulate simply for the sake of it. So, I am not against regulation; I just want things to be done right.

Do you think self-regulation in the form of organizations crafting their own code of conduct or policies around ethics in AI could be useful?

My first response is that a tool in itself is not bad; it depends on how the tool is used. Self-certification is something that has been done for cybersecurity. And giving people responsibility for their own actions is a very European thing, which seems to work very well and can be really powerful. But we need to be sure that we do not just talk about tools without knowing how to apply them. It will take at least a year before we know exactly what the results are. So, I would say "give it time."

Team diversity is one way to tackle bias in AI. What approaches can organizations adopt to have more diversity in their AI teams?

First of all, it is about creating interest. The Commission has put a lot of money into the Digital Europe programme toward educating AI and cyber specialists. And we also have projects that retrain secretaries and PAs as cyber and AI specialists. It is also about teaching people right from elementary school. I think we have failed in discussing education and technology – we only started talking about it ten years ago. I hope that in the next five years we actually take ourselves seriously and start training people in a different way, not just within organizations, but all the way from elementary school.