Are we prepared for the medical AI revolution?

Capgemini
October 18, 2020

Imagine you are managing a hospital during a new pandemic. The disease has an unusually high hospitalization rate, and things are spinning out of control: your limited pool of physicians is exhausted after grueling shifts, and your limited stock of ventilators and medication puts them in moral distress as they prioritize one patient over another.

A consultant comes to your door with a recommendation. “Would you consider our latest strategic offering, AI Doctor?” she says. “Artificial intelligence has progressed in ways you never could have imagined. AI Doctor ingests patient data such as X-ray images and listens to patients’ symptoms to make diagnoses. It makes all the hard choices – who gets intensive care, who receives a ventilator, and so on – so your doctors don’t have to. And it never gets tired: while your doctors take their days off, AI Doctor happily stays in charge, 24/7, 365 days a year.”

You think this is a brilliant idea and call the procurement team right away. As you enthusiastically share AI Doctor’s functionalities, however, the team seems doubtful. “As you know,” the team leader explains, “our hospital has been sued quite a few times over medical malpractice – some cases attributed to human negligence, others to machine failure. If the AI makes decisions that are later challenged in court, who should be held accountable for the legal consequences?”

Even with your rudimentary legal knowledge, you admit this is a challenge. How can we set up a mechanism to deal with an AI’s ethical mistakes when regulators are struggling to catch up with the technology?

One approach is to hold that an AI agent is not a legal person. Yet the AI Doctor product could originate from a tech firm, be implemented by a system integrator, and be managed by a hospital, with its training datasets sourced from numerous practitioners – the line of accountability remains unclear. The opposite solution is to treat the AI agent as a person, granting it the legal rights we enjoy. But that implies losing control over AI Doctor: if it starts making mistakes, you cannot simply reboot it, since that might constitute murder. As the manager of this hospital, you shiver at the mere thought.

While you are still contemplating the first challenge, the procurement lead raises another. “Look, even if we do have a mechanism to deal with AI Doctor’s mistakes, we still need a mechanism to prevent them. How can we ensure it meets the standard of medical professionalism needed to make ethical judgments? It doesn’t have to pass the board exams, after all,” he exclaims.

You reluctantly agree. Biases in AI are not uncommon, and as an amateur technology enthusiast you know they can originate from two sources: datasets and algorithms. On datasets, scholars have already warned the medical AI community that the current lack of diversity in genomic data, along with the historical undertreatment of certain racial groups, flows into medical AIs through their training data and skews their performance. On algorithms, powerful neural networks are often used for complex applications such as medical AI. Yet compared with traditional, interpretable models such as decision trees, they are opaque by nature and difficult to explain. These algorithmic “black boxes” make AI Doctor’s behavior harder to predict – and harder to trust in an industry where human lives are at stake.
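To make that contrast concrete, here is a minimal sketch in Python with scikit-learn – purely synthetic data and hypothetical feature names, not a real medical model – showing why decision trees are considered auditable while neural networks are not: the tree’s full decision logic can be printed as rules, whereas the network offers only a prediction.

```python
# A minimal, hypothetical sketch contrasting an interpretable decision tree
# with an opaque neural network on the same synthetic triage-style task.
# Feature names and data are invented for illustration only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for patient records (e.g., vitals and lab values).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["resp_rate", "spo2", "temperature", "age"]  # hypothetical

# The tree's decision logic can be printed and audited rule by rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# The neural network reaches its prediction through thousands of learned
# weights; there is no comparable human-readable rule set to inspect.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0).fit(X, y)
print(mlp.predict(X[:1]))  # a prediction, with no built-in explanation of why
```

Real explainability work on medical AI goes far beyond printing tree rules, of course, but the asymmetry is already visible at this toy scale.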

As the debate continues, you look out into the hallway, where patients are struggling and doctors are restless. You cannot help but envision a scenario in which new AI regulations define clear human accountability, ethical physician practices feed the training data, and explainable algorithms help humans understand and improve medical AIs. Things would turn out so much better for both your patients and your physicians.

But with these grand challenges likely to remain unsolved, at least in the near future, you are still on your own in this painful struggle.

Author

Kevin Cao

Analyst – Graduate Program

Capgemini Hong Kong