
Can we stop intelligent technology from making mistakes?

Capgemini
2022-02-10

Self-driving cars – including fully autonomous prototypes and incrementally autonomous ADAS (Advanced Driver Assistance Systems) – rely on cameras and lidar to collect data on their surroundings. Both use AI to make high-stakes decisions, such as ‘should I brake, swerve, or accelerate?’ The wrong choice has real consequences for other vehicles and pedestrians.

This is perhaps the most high-profile example of a new technology that must take in its surroundings and make high-stakes decisions. Another example would be medical diagnostic tools, which capture subtle information from an MRI scan, a DNA sample, or real-time patient monitoring and use AI to advise on life-or-death interventions.

When we talk about AI, we mean the mathematical and statistical models that transform data into real-world decisions. If these models are presented with a large number and wide variety of training scenarios, they can become accurate and versatile. But they struggle with any scenario outside their learning. In unfamiliar situations, an AI can make bad decisions with human consequences.

For these technologies to be accepted, we need to build in mechanisms to recognize the limits of their knowledge and ensure appropriate action is taken when they step outside it – whether that is defaulting to a safe mode or handing over control to humans.

The limits of machines when processing the real world

AIs can ingest data on various scenarios, such as other cars on the road swerving, braking, skidding, and tailgating. They can learn to recognize these scenarios and react instantly, sometimes better than human experts. This capability is transforming many industries.

But a self-driving car is not really learning to recognize “erratic driving”—a human-centric concept. It is learning to recognize a series of sensor data in a high-dimensional space, a specific trajectory that corresponds to a type of event. This trajectory will have a slightly different shape for every example of a scenario. The AI can learn a signature within this cloud of data that permits it to say: ‘a car has swerved in front of me, and I need to activate a response’.

The industry’s enormous challenge is to make these learnings generalizable, so a car or a diagnostic tool can apply its knowledge to scenarios outside its training. We want a self-driving car that can make correct decisions even if it finds itself in a different country, in different weather, or surrounded by different types of cars and roads. In other words, we want the AI to be able to say, ‘these new data look similar enough to my training data that I confidently recognize a car swerving’.

Right now, AI is bad at this. The data are just data, and the AI cannot separate the fact of a swerving car from other contextual factors, at least not without a great deal of human guidance during training and a larger amount of road test data than is economically feasible. Further, the AI will try to classify the situation as a known scenario even if the similarity is only partial.

As soon as an AI is presented with a scenario that does not appear in its training data, it becomes erratic. Just one sensor delivering an unexpected stream of values can make the current driving scenario unrecognizable. We have seen ADAS tricked by something as simple as stickers on road signs. For a more commonplace example of this problem, consider virtual assistants: they are good at following voice instructions to play songs or set timers but struggle with unusual requests.

How can we stop machines from making bad decisions in unfamiliar situations?

Maybe one day, AIs will learn to recognize and navigate the human-centric context surrounding their data. Until then, if we want to use AI for high-risk/high-reward applications such as autonomous driving or medical diagnosis, we need to manage how it responds to scenarios outside its training.

One solution – and an area where we are actively conducting research – is understanding the limits of what the AI has learned. When the AI is trained, it builds an internal model of what it is trying to recognize or predict – but this model is only valid inside the cloud of data points it has been shown. By drawing a boundary around that cloud, we can limit the application to classifying only those incoming scenarios that the AI can genuinely recognize.

A very “tight” boundary around the point cloud would mean that the AI only recognizes scenarios that are exactly the same as its training data. On the other hand, we might want some leeway to reflect real-world variability, allowing the AI to make decisions for examples that are somewhat different from those it learned on. The skill is in managing this envelope for your desired application.
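To make this concrete, here is a minimal sketch of what such a boundary could look like, using scikit-learn’s EllipticEnvelope as one possible stand-in for the techniques we are researching. The feature layout and the contamination value (which controls how tight the envelope is) are illustrative assumptions, not a description of a production system.

# A minimal sketch: draw an "envelope" around the training data cloud and
# flag anything that falls outside it. Features and parameters are hypothetical.
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(seed=0)

# Pretend each row summarizes one driving scenario as a feature vector
# (e.g. relative speed, lateral offset, time-to-collision, ...).
training_scenarios = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# Fit the boundary: a lower contamination value gives a looser envelope
# (more leeway for real-world variability), a higher one a tighter envelope.
envelope = EllipticEnvelope(contamination=0.01).fit(training_scenarios)

# At run time, a new scenario is either inside the boundary (+1) or
# outside it (-1), i.e. unlike anything seen during training.
new_scenario = rng.normal(loc=4.0, scale=1.0, size=(1, 8))
print(envelope.predict(new_scenario))  # [-1] -> outside the known envelope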

We can decide what the AI should do when encountering a scenario outside its boundary. A diagnostic or predictive maintenance tool might be designed to say: ‘I don’t know what this is, please seek expert human input’. An assistive driving tool cannot afford to wait for human validation in real time, so instead, it may require the driver to take the wheel as soon as the environment becomes even a little unfamiliar.
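As a rough illustration of that hand-over logic, the sketch below wraps a hypothetical trained classifier and the envelope from the previous sketch in a single decision step; the object names and action labels are assumptions chosen for readability, not part of any real system.

# A minimal sketch of the "defer when unfamiliar" policy. `classifier` is any
# trained decision model and `envelope` is the boundary model fitted above;
# both are stand-ins for illustration.
def decide(scenario, classifier, envelope):
    """Return the AI's decision, or hand control back to a human."""
    sample = scenario.reshape(1, -1)
    if envelope.predict(sample)[0] == -1:
        # Outside the known data cloud: fall back to a safe default.
        return {"action": "handover_to_human",
                "reason": "scenario outside training envelope"}
    return {"action": classifier.predict(sample)[0],
            "reason": "scenario recognized from training"}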

Managing the boundary of known examples is not a complete solution for autonomous cars, which need to make split-second decisions. However, thinking about the shape of the sensor data in high dimensions can help design training simulations by ensuring that the data are as diverse as possible while still representative of real-world road tests.

We also need to consider how humans will respond. An AI can offer a degree of certainty alongside its prediction, for example: ‘I’m 60% sure this is a positive diagnosis, but a human expert needs to check’. This approach could backfire if the tool makes a highly confident but wrong diagnosis based on an incorrect rationale. A human who has come to trust the AI may not even be aware that edge cases exist in which the AI fails completely. Further, “60% certainty” to an AI may mean a 60% overlap of data points, which is not necessarily the same as a 60% chance of cancerous tissue. Rather than expressing a degree of certainty, it is sometimes better for the machine to hold up its hands and say ‘I don’t know, please defer to someone who does’.
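The sketch below illustrates that kind of reject option: rather than always reporting a confidence score, the tool abstains whenever its top probability falls below a chosen threshold. The threshold and labels are illustrative assumptions, and in practice raw model probabilities would also need calibration before they could be read as real-world likelihoods.

# A minimal sketch of a reject option: abstain and defer to a human expert
# when the model's confidence is below a chosen (illustrative) threshold.
import numpy as np

def predict_or_defer(probabilities, labels, threshold=0.9):
    """Return a label only when the model is confident enough, else defer."""
    probabilities = np.asarray(probabilities)
    best = int(np.argmax(probabilities))
    if probabilities[best] < threshold:
        return "defer_to_human_expert"
    return labels[best]

# A 60%-confident "positive" prediction falls below the threshold, so the
# tool says "I don't know" rather than risking a confident error.
print(predict_or_defer([0.6, 0.4], ["positive", "negative"]))  # defers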

Implementing safety constraints on high-risk technologies

In case you feel we are being unduly harsh on these new technologies, it is important to note that a well-trained AI will be right almost all of the time when deployed under familiar conditions. We should continue to build better and more powerful software based on AI. But we need to remember that these models are always limited by their training data, so we need systems to handle failure before deploying them in high-stakes applications.

By describing the shape of the training data, we can build quality assurance into this emerging class of complex autonomous technologies. This would mean we can trust their decisions in the 99% of cases where the data are familiar, because we can be confident that the AI will alert us whenever it is operating in unknown territory.

Capgemini Engineering’s Hybrid Intelligence team is actively researching how Trusted AI can be deployed within high-stakes automated decision making in autonomous transport and healthcare.


Author: Benjamin Mathiesen, Lead Data Scientist, Hybrid Intelligence, Part of Capgemini Engineering

Ben has 20 years of research experience in data modeling, scientific programming, numerical analysis, and statistics, with a modern focus on data science and AI. He directs client projects and internal R&D related to knowledge modeling, natural language processing, and trusted AI within the Hybrid Intelligence group.