On October 29, Capgemini participated in this year’s AI for Good Global Summit of the UN’s ITU (International Telecommunication Union), with a focus on how ethical AI can serve the citizen and help achieve the UN’s Sustainable Development Goals (SDGs).
With over 400 participants and 17 guest speakers, including a public sector CIO and three governmental AI policy officers, the event fostered dialogue on how best to unlock the potential of data and analytics with AI. With 80 countries registered, this momentum underscored that global issues need to be addressed on a global level.
AI4Good – At the crossroads between policy, tech, and society
The event took place at the crossroads of three main ideas:
- The current efforts made by governments to define and deploy (inter)national AI strategies
- The absolute need to conceptualize and shape the ethical dimension of this technology
- The search for sustainable purposes serving humanity
Maikki Sipinen (EU Commission), Golestan Radwan (Ministry of Communications and Information Technology, Egypt), and Anna Roy (Government of India) all emphasized the effort required to shape high-quality AI in a geopolitical setup in which technology is understood as an ecosystem of cooperation rather than a race for world domination. In her closing remarks, Catelijne Muller, member of the EU High Level Expert Group for AI, elaborated on the much-needed guidelines for trustworthy AI.
Speaking of which: in the first breakout round on the ethical guidelines, several speakers touched upon the twofold dimension of defining and building ethics in AI. Whether non-discrimination, data privacy, explainability, or human agency – all sessions brought to light the need for principles and tools to be infused into concrete AI projects by all stakeholders – be it decision makers, data scientists, or citizens.
Ethics-by-design is a must if AI is to disrupt
The tooling aspect is of paramount importance for public services to deliver AI projects. Indeed, none of us could imagine an intelligent job-matching system that can’t explain why a certain job opening was assigned to a specific job seeker. The same applies to social benefit services that would discriminate against part of the population, or patient data analytics projects that would disregard data privacy compliance rules set by the GDPR. Embracing the full potential of AI means implementing the technology in a trusted way.
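To make the job-matching example concrete, here is a purely illustrative sketch of what "explainable by design" can mean in practice: a matching score built from interpretable per-feature contributions that can be reported back to the job seeker. The feature names and weights are hypothetical, not taken from any real system discussed at the event.

```python
# Illustrative sketch of a transparent matching score: every feature's
# contribution is kept separate, so the system can explain its decision.
# Feature names and weights are hypothetical.
WEIGHTS = {
    "years_experience": 0.5,   # each year of relevant experience adds 0.5
    "skill_overlap": 2.0,      # each matching skill adds 2.0
    "distance_penalty": -0.3,  # each unit of commute distance subtracts 0.3
}

def match_score(candidate: dict) -> tuple[float, dict]:
    """Return the total score plus a per-feature breakdown (the 'explanation')."""
    contributions = {name: WEIGHTS[name] * candidate[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, explanation = match_score(
    {"years_experience": 4, "skill_overlap": 3, "distance_penalty": 10}
)
# 'explanation' shows exactly which features drove the score, so the
# job seeker can be told why this opening was (or was not) proposed.
```

Real systems would of course use richer models, but the design principle is the same: keep the decision decomposable into contributions a citizen can understand.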
On the road towards trusted AI, we face many questions: Should AI always be explainable? Who should be responsible for oversight in AI? How do we make data privacy an accelerator and not a burden? The answers will emerge and shape the data culture we want to live with.
How artificial intelligence can serve the common good
Beyond the “how” of AI, the question of “what for” also occupied our webinar, in a second breakout round and within the very concrete context of the UN’s Sustainable Development Goals. With six parallel sessions covering the various fields of health, information, education, justice, zero hunger, and environment, AI experts, researchers, and policy officers came together to assess the potential of technology to serve society.
Twenty projects were presented, including a fire risk assessment project with satellite images, a bed availability monitoring tool for hospitals, and fake news detection efforts to tackle disinformation on social media.
In each of these sessions, surveys among the audience were conducted, leading to two main lessons: On the one hand, along with the intelligent use of data, AI has the potential to open new doors in terms of operational efficiency and better insight into a complex world. On the other hand, an honest process of prioritization will need to assess the feasibility of AI ambitions, matched with their impact on society.
This process is crucial to avoid the traps of the so-called AI Valley of Death, where unscaled AI prototypes all too often end up due to a lack of solid data governance, the absence of a concrete purpose, a lack of legitimacy, or simply unrealistic expectations.
From strategies and guidelines to best practices and outputs
After listening to the discussions held during AI4Good, I conclude that there are two steps AI actors across industries must take:
- Governments need to deploy their existing national AI strategies, building on the momentum in order to scale the industrial potential of AI.
- In doing so, delivering artificial intelligence will need to go hand in hand with a built-in ethical setup, turning the discussion on the future we want into reality and standards.
To achieve this, all players – governments and industries, researchers, and citizens – need to be part of the same journey, leveraging a common toolbox.
Find additional details of the event here, with upcoming content in the coming weeks. Also stay tuned for an upcoming series of articles I’ll publish analyzing the respective webinar sessions.