Never has the potential of artificial intelligence (AI) for health been greater than during the pandemic currently affecting our planet. With high transmissibility, rapidly evolving symptoms, and vaccines being rolled out too slowly, COVID-19 has put entire governments and their citizens in a challenging situation. Resources have been strained and serious difficulties, both human and logistical, have emerged.
In assessing this new threat, which feeds off the highly interactive society in which we live (Fujita and Hamaguchi, 2020), several digital technologies have been harnessed for the public health response. Foremost among them is AI, on which high hopes rest; hopes that echo the prominence healthcare already holds in recent national AI strategies. The AI4Health Taskforce of the Canadian government gives recommendations on how AI in the health sector can benefit Canadians (cifar.ca, 2020). Israel's government focuses on using AI technologies to ensure efficient allocation of medicines, human resources and hospital beds (Berkovitz, 2019). The Korean Ministry of Science and ICT began to build a platform for using AI for drug development (Won Shin, 2020).
In light of this trend, the virtual workshop organized by Capgemini in partnership with the International Telecommunication Union (ITU), the UN specialized agency for information and communication technologies, on AI for achieving the Sustainable Development Goals (SDGs), provided a snapshot of the role that AI and data play in augmenting healthcare professionals. Indeed, not only can data and automation help in the current pandemic, but their intelligent use can help further the realization of the bigger picture set by the UN through its SDGs. With three of these global goals aiming at a healthier world, AI promises to be an important accelerator while also facing several human and technological challenges.
The promises and achievements of an AI-transformed healthcare sector
AI for health promises to improve the lives of key healthcare actors by offering a range of applications across the four playing fields of AI in the public sector.
- Detection of health anomalies
By enabling accurate analysis of various types of data (from demographic and socioeconomic to symptoms and treatment data), machine learning methods can significantly improve doctors' diagnostic capabilities. There are numerous examples of how AI can help clinicians. For example, researchers at the University Hospital of Bonn are developing methods to use AI for the histological analysis of onchocerciasis to tackle river blindness, which affects millions each year in Africa.
Similarly, AI has emerged as a valuable diagnostic tool for COVID patients. While some AI models appear capable of distinguishing asymptomatic positive people from healthy individuals through forced-cough recordings (Chu, 2020), Capgemini engineers have recently implemented neural networks to identify CT scans with COVID-19 associated pneumonia.
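Neither the cough-recording model nor Capgemini's CT-scan network is public, but the underlying setup is ordinary supervised learning: extract numeric features from a recording or image, then fit a classifier that separates positive from healthy cases. The sketch below is only an illustration of that setup, not the models cited above; the two acoustic features and all sample values are invented, and a simple logistic classifier stands in for the neural networks the article mentions.

```python
import math

# Hedged sketch: logistic classifier over two hypothetical acoustic
# features (e.g. spectral energy, cough duration). Illustrative only.

def sigmoid(z):
    # Clamp to avoid math.exp overflow on extreme inputs.
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=500):
    """Fit logistic-regression weights with per-sample gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0

# Invented feature vectors: [spectral energy, cough duration]
positive = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.75]]  # label 1 (positive)
healthy = [[0.2, 0.1], [0.1, 0.3], [0.25, 0.2]]    # label 0 (healthy)
w, b = train(positive + healthy, [1, 1, 1, 0, 0, 0])
```

In practice the feature-extraction step (spectrograms, learned embeddings) carries most of the difficulty; the classifier itself is the easy part.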
- Helping healthcare professionals in decision-making
Data and its intelligent use can bring precious insights to decision makers, be it to predict coming events or plan an appropriate response. This is crucial in the critical moments of a patient’s life. In order to help healthcare providers make faster and better decisions when it comes to managing trauma, TrauMatrix partners, including Capgemini Invent, have worked on six major real-time services to effectively address hemorrhages and traumatic brain injuries.
In a COVID-related context, and in the same spirit of enabling strategic decision making in the most uncertain situations, Capgemini Invent and the Île-de-France Regional Health Agency have developed a data-driven instrument that is able to forecast the availability of hospital beds in the region and therefore support authorities in the allocation of beds. This tool, STEP, ultimately makes it possible to treat many patients while keeping occupancy levels below critical thresholds.
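The STEP tool itself is not public, so the following sketch only shows the general shape of such a forecast: project bed occupancy forward from recent daily counts and flag how soon a critical threshold would be crossed. The choice of Holt's double exponential smoothing, the smoothing parameters, and all the numbers are assumptions for illustration.

```python
# Hedged sketch of occupancy forecasting, not the actual STEP model.

def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Holt's linear (double exponential) smoothing forecast."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

def days_until_saturation(series, capacity, horizon=14):
    """First forecast day on which occupancy exceeds capacity, else None."""
    for day, value in enumerate(holt_forecast(series, horizon), start=1):
        if value > capacity:
            return day
    return None

# Invented daily counts of occupied beds in one region.
occupied = [410, 430, 455, 470, 500, 520, 545]
print(days_until_saturation(occupied, capacity=600))
```

A real system would add uncertainty intervals and per-hospital granularity; the point here is only that even a simple trend projection turns raw occupancy counts into an actionable early-warning signal.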
In addition to managing COVID-19 infections, AI has played a crucial role in helping different governments automate the tracing of infected contacts and implement smart confinement strategies (Whitelaw et al, 2020).
- Intelligent automation of health administrative processes
An immediate application of AI is helping the healthcare sector manage its daily business by automating some of its most trivial administrative processes. In this area, natural language processing tools can be used to extract information from clinical records or digitize medical files. This makes it possible to fully automate tasks that are time-consuming and routine-heavy, and gives practitioners more time for the most complex cases.
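At its simplest, the extraction step above can be rule-based: match known field labels in semi-structured notes and emit a structured record. The note format, field names, and patterns below are invented for illustration; production systems use full NLP pipelines rather than a handful of regular expressions.

```python
import re

# Minimal rule-based information extraction from a semi-structured
# clinical note. Illustrative only; all content is invented.

NOTE = ("Patient: Jane Doe. Age: 54. Diagnosis: type 2 diabetes. "
        "Dosage: 500 mg metformin twice daily.")

PATTERNS = {
    "name": r"Patient:\s*([^.]+)\.",
    "age": r"Age:\s*(\d+)",
    "diagnosis": r"Diagnosis:\s*([^.]+)\.",
    "dosage": r"Dosage:\s*([^.]+)\.",
}

def extract(note):
    """Pull structured fields out of a free-text note, None if absent."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, note)
        record[field] = match.group(1).strip() if match else None
    return record

print(extract(NOTE))
```

The structured output can then feed billing, scheduling, or record-keeping systems without a human re-typing the note.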
- Augmenting the interaction with patients
Finally, besides the improvements brought about by the above-mentioned AI applications, the patient experience could be further improved by the introduction of clinical chatbots that could schedule appointments and/or provide real-time advice and information (e.g. Corona-Help.UK NHS bot).
Here again, while not replacing the human in critical cases that require a personal touch, virtual assistants have helped challenged organizations such as emergency helplines face the rising number of calls and queries.
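The simplest such assistants are keyword-driven intent matchers: map the user's message to the closest known intent and reply from a script, falling back to a human when nothing matches. The sketch below is in that spirit, not a reconstruction of the NHS bot mentioned above; every intent, keyword, and reply is invented.

```python
# Toy intent-matching assistant for a clinic helpline. Illustrative only.

INTENTS = {
    "appointment": (["appointment", "book", "schedule"],
                    "I can help you schedule an appointment. Which day suits you?"),
    "symptoms": (["fever", "cough", "symptom"],
                 "Please describe your symptoms; if severe, call emergency services."),
    "opening_hours": (["open", "hours", "closing"],
                      "The clinic is open Monday to Friday, 8am to 6pm."),
}

FALLBACK = "I am not sure I understood. A staff member will contact you."

def reply(message):
    """Answer with the intent sharing the most keywords with the message."""
    words = set(message.lower().split())
    best_intent, best_score = None, 0
    for intent, (keywords, _) in INTENTS.items():
        score = len(words & set(keywords))
        if score > best_score:
            best_intent, best_score = intent, score
    return INTENTS[best_intent][1] if best_intent else FALLBACK
```

The explicit fallback branch is the design point worth noting: it is what keeps the human in the loop for queries the bot cannot safely handle.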
Weighing up the risks of embedding AI in clinical practice
As shown, many aspects of healthcare delivery can be augmented by the implementation of AI. Yet, the introduction of AI in everyday health business is still limited (OECD, 2020). In fact, embedding AI in daily clinical practice requires weighing a number of technical, ethical, and regulatory aspects.
From a technical standpoint, certain AI-for-health applications are still not sufficiently robust. One obstacle to scaling AI in the healthcare sector is the phenomenon of "weak AI," i.e., narrowly focused AI applications that are trained on a specific set of data but do not work as well when the input differs even slightly from the training data. This can lead to misguided evidence and diagnoses, as happened in 2020 when an oncology system trained on synthetic cancer cases instead of real patient data produced unsafe and incorrect recommendations for cancer treatments (Gerke et al, 2020).
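The brittleness described above is easy to reproduce in miniature: a model trained on inputs in one numeric range silently fails when deployment data arrives in a different range. The nearest-centroid classifier, the "intensity" feature, and every value below are invented; the failure mode (a scale mismatch between training and deployment data) is the point.

```python
# Toy demonstration of "weak AI" brittleness under distribution shift.

def nearest_centroid(train, labels):
    """Return a classifier assigning the label of the closest class mean."""
    groups = {}
    for x, y in zip(train, labels):
        groups.setdefault(y, []).append(x)
    means = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    return lambda x: min(means, key=lambda y: abs(means[y] - x))

# Trained on image intensities normalized to the [0, 1] range.
clf = nearest_centroid([0.1, 0.2, 0.8, 0.9],
                       ["healthy", "healthy", "sick", "sick"])

print(clf(0.85))  # in-distribution input: classified "sick", correctly
print(clf(30.0))  # raw 0-255 pixel value from a *healthy* scan:
                  # both centroids are far away, and the model silently
                  # answers "sick" instead of flagging the bad input
```

Nothing in the model signals that the second input is out of distribution; that silence is exactly what makes narrowly trained clinical models risky without input validation.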
AI models depend heavily on the quality of their data, from which they can pick up spurious statistical patterns (overfitting) and even absorb the biases of the people who build them. This latter phenomenon can make an AI tool extremely discriminatory against specific groups based on factors such as race, sex, or socioeconomic conditions, leading to a dangerous inequality of outcomes. For example, in 2019 an algorithm used by several health care systems treated healthcare cost as a proxy for illness, effectively equating "more expensive to treat" with "sicker." In real life, however, unequal access to healthcare meant that providers using this metric spent less on African-American patients than on similarly sick white patients (Manyika et al, 2019).
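The proxy-label problem in that example can be shown with a few lines of arithmetic. All numbers below are invented; the mechanism is the documented one: if one group historically spends less on care for the same severity of illness, ranking patients by past cost under-prioritizes that group's sickest patients even when severity is identical.

```python
# Toy illustration of cost-as-proxy bias. Invented data.
# (id, true severity, past healthcare cost, group); group B spends
# less for the same severity, e.g. due to unequal access to care.
patients = [
    ("A1", 9, 9000, "A"), ("A2", 5, 5200, "A"), ("A3", 2, 2100, "A"),
    ("B1", 9, 4800, "B"), ("B2", 5, 3000, "B"), ("B3", 2, 1200, "B"),
]

def top_k(patients, key_index, k=2):
    """IDs of the k patients ranked highest on the given column."""
    ranked = sorted(patients, key=lambda p: -p[key_index])
    return sorted(p[0] for p in ranked[:k])

by_severity = top_k(patients, 1)  # what a fair program should target
by_cost = top_k(patients, 2)      # what the cost proxy actually selects

print(by_severity)  # the two genuinely sickest patients, one per group
print(by_cost)      # group B's sickest patient (B1) is dropped
```

The model is "accurate" at predicting cost; the harm comes entirely from cost being the wrong target, which is why auditing the choice of label matters as much as auditing the model.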
In addition, the data used as "ingredients" for healthcare sector AI comprises sensitive information that people do not freely share. Research notes "numerous problems associated with data ownership and privacy" in the field of AI for health, calling "for careful policy intervention" (Vinuesa et al, 2020). Not surprisingly, respondents to our survey during the AI4Good ethics sessions affirmed that they were extremely protective of their medical data. Privacy and consent are therefore key aspects to consider when deploying AI in the everyday provision of health services.
Finally, the question of who is allowed to handle these data, and how, relates directly to the issue of liability. Who should be accountable for a misdiagnosis made by an AI? How should such uses of AI, good or bad, be regulated?
Beyond the hype – towards a realistic and ethically conscious use of AI for health
AI represents a transformative power for the healthcare sector and has great potential to improve health worldwide. Although it brings us closer to realizing important policy objectives such as the UN SDGs, this technology poses several challenges. If not properly addressed, these challenges may well amplify and entrench current socio-economic and geopolitical problems, which are surfacing even more swiftly as a result of the coronavirus pandemic.
In order to quell the hype surrounding innovative technologies such as AI and implement them in a way that can realistically keep the pandemic – and future similar challenges – at bay, the following steps are essential:
- Improving AI robustness by ensuring data quality and fostering research related to the use of AI for health
- Building AI on ethical principles, such as those conceived by the European Union's High-Level Expert Group or the OECD: AI for health should be fair, privacy-compliant, explainable, and secure
- AI solutions in healthcare should remain human-centric with respect to patients, so as to increase their trust, and to practitioners, who should be kept constantly in the loop and adequately prepared for a new era of the healthcare labor market
- International collaboration and coordination in the sharing of data across the health industry and governance of AI are the final underlying requirements for a successful realization of AI’s potential in the field.