Insights & Data Blog

Opinions expressed on this blog reflect the writer’s views and not the position of the Capgemini Group

Computer says ‘No!’
Machines should be accountable for their decisions

This summer I was on holiday abroad, in a place I had never been before. We toured around by car, relying extensively on our sat nav system. One afternoon, I decided to switch off the navigation device and tried to find my way back using an old-fashioned technique: my sense of direction. Yes, I reached my destination, and it felt good. My skills from the last century still worked. But it also made me realize how dependent we are on computers making decisions for us. And it reminded me of the stories about the mishaps that happen when drivers follow their directions blindly.

Override the system

So when do you become stubborn and find your own way? Not when you are in unknown territory. Where I live, I often take a different route than the one suggested, because I know more about the specific situation around my home: which traffic lights to avoid, which streets to stay out of during rush hour, and where the road works are in my neighbourhood.

Well, you can ignore the directions given by a sat nav quite easily. (I’m still surprised the system doesn’t get mad at me.) But in the future, when I use my self-driving car, I will probably not be able to simply ‘override’ the system. I will become very much dependent on the car’s own algorithms and the decisions they make for me.

When we have equal knowledge of a specific domain – in this example, the road conditions – we can critically evaluate the results of computer-based decisions and deviate from them. That’s good, because computers can be wrong. Yes, they can.

In a more life-saving example, the artificial intelligence IBM Watson for Oncology suggests diagnoses and treatments to oncologists, based on the patient’s medical findings and history. And IBM Watson is becoming very good at giving spot-on advice. But the human expert, the doctor, is still required to look critically at the findings of the artificial intelligence.

This is not only required because computers can make wrong decisions; it is also needed to enable the machine’s learning process. Without feedback from the real world, the machine cannot learn and improve – just like with human learning.

“(…) even the most automated systems rely on experts and managers to create and maintain rules and monitor the results.”
(Thomas H. Davenport and Jeanne G. Harris, MIT)

Computer says ‘No!’

In the British comedy series “Little Britain”, a running gag features a customer service representative who denies reasonable requests, simply because the computer system doesn’t allow them for some unknown reason. Even Wikipedia has an entry on the phrase.

IBM once told me that the best business cases for artificial intelligence are those where humans lack the knowledge to come to proper decisions. Computers can help by supplying the required information. This creates added value, because the quality of the decisions improves significantly.

But there is also a strong tendency to replace the human with a computer-based decision support system, on the assumption that computers are better and more reliable decision makers.

At this point, users of these decision-making systems are no longer able to question the decisions directly. And while we can discuss any human-made decision with the decision maker themselves, such discussions are simply not possible in the computer world.

In principle, we can deconstruct the way decisions are made by looking into the code of the machine. But with the advent of machine learning and artificial intelligence, the decision rules are generated over time: the artificial intelligence learns them from fuzzy and historic data and events. It is hardly possible to reconstruct how a decision rule has been composed and applied over time.

“Now things get weird. Nobody can answer, because nobody understands how these systems (...) produce their results.”
(Clive Thompson, Wired)
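
To make this concrete, here is a minimal sketch in Python. The loan figures, thresholds and the tiny logistic-style learner are all invented for illustration – this is not taken from any real system. A hand-written rule can be read and audited line by line; a rule learned from historic data ends up as a handful of numeric weights, and the ‘why’ behind the resulting decision is no longer visible in the code:

```python
# Illustrative sketch only: hypothetical loan example with invented numbers.
# It contrasts an explicit, auditable decision rule with a rule distilled
# from historic data, which exists only as learned weights.

import math

def approve_by_explicit_rule(income, debt_ratio):
    # Every condition is visible; we can point at the exact reason for a 'No'.
    return income >= 3000 and debt_ratio < 0.40

# Historic decisions the machine learns from: (income, debt_ratio, approved).
history = [
    (4200, 0.20, 1), (3600, 0.25, 1), (3100, 0.30, 1),
    (2400, 0.45, 0), (1800, 0.35, 0), (2900, 0.55, 0),
]

# A tiny logistic-regression-style learner, trained by plain gradient descent.
w_income, w_debt, bias = 0.0, 0.0, 0.0
for _ in range(5000):
    for income, debt_ratio, label in history:
        x1 = income / 1000.0                      # crude feature scaling
        z = w_income * x1 + w_debt * debt_ratio + bias
        p = 1.0 / (1.0 + math.exp(-z))            # predicted approval probability
        err = label - p
        w_income += 0.01 * err * x1
        w_debt   += 0.01 * err * debt_ratio
        bias     += 0.01 * err

def approve_by_learned_rule(income, debt_ratio):
    z = w_income * (income / 1000.0) + w_debt * debt_ratio + bias
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

print(approve_by_explicit_rule(2900, 0.30))  # False - and we can point to the condition that failed
print(approve_by_learned_rule(2900, 0.30))   # whatever the weights say; that is the whole explanation
print(w_income, w_debt, bias)                # the learned "rule": nothing more than three numbers
```

Both functions produce the same kind of answer, but only the first one can be questioned in the way we question a human decision maker.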

Who is accountable?

The accountability of computers, and of the suppliers, programmers and owners of those systems, has been a matter of legal discussion for a long time. But I don’t want to talk about legal issues here. I want to focus on accountability for the decision-making processes within the computer.

We need to know why computers have reached their decisions, so we can determine whether those decisions have been made on good grounds. When decision support systems remain black boxes, we cannot judge whether things were fair. I don’t want to be dramatic, but this fairness is the essence of our democratic and open society.

Decision support systems – and artificial intelligence even more so – don’t give up their secrets easily. When we design such systems, we should build in accountability. Yes, I’m aware that the logic behind the decision making will often be regarded as a company secret. That is exactly why this requires openness from the organizations that deploy these decision support machines.

But on the other hand, if we cannot judge the algorithms and decision patterns those systems use, we naturally become superstitious about them. And eventually this will turn against the companies that use decision support systems and artificial intelligence.

So what can we do to create openness about this decision making? Here are three suggestions:

  1. IBM Watson goes to great lengths to show the user how it came to its conclusions. AI-based systems should do that, so we as users can see what information has been used to reach a verdict, including the rules and decisions applied. By presenting this evidence we can all see how and why the computer says ‘No!’ (Or ‘Yes!’ for that matter.)
    We can then discuss the logic behind the reasoning. When the computer has reached a wrong conclusion – yes, this will happen – we can teach it how to improve. Just like with people, to err is human. It’s the way we deal with these errors that makes them more acceptable.
  2. Another suggestion is that artificial intelligences should publish their algorithms. This goes beyond the rules and regulations applied in the algorithms: with machine learning, we also need to know the values and morals used when designing the system.
    Mike Ananny, an assistant professor of communication and journalism at the University of Southern California, says there must be a broad interpretation of what an algorithm is: “the social, cultural, political, and ethical forces intertwined with them” should be considered too. Because these systems exert power over their users through the decisions they make for them, this power should be made transparent.
  3. In robotics, the need for a ‘kill switch’ keeps coming up. When robots go rogue, we need to be able to switch them off. Computer-based decision systems can also be regarded as robot-like: they make decisions without human interference. When the rules are buggy or the machine learning goes astray, we need to be able to switch them off. A ‘kill switch’ is needed.
    In stock trading, most transactions are done by autonomous computers. The split-second decisions cannot be overseen by humans, because humans are simply too slow. But this poses the risk that such computer trading causes a flash crash, which could eventually lead to a worldwide financial crisis. So in the end we need some method to check, control and even stop the behaviour of autonomous machines – a sketch of what such a safeguard could look like follows after this list.
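
What could such a kill switch look like in software? The sketch below is only an illustration – the class name, thresholds and price movements are invented, not taken from any real trading safeguard. The essential idea is that the switch sits outside the decision-making logic: it watches the machine’s observable behaviour and can halt it, without needing to understand how the machine reasons.

```python
# Hypothetical 'kill switch' sketch: invented names, limits and price data,
# not a real trading system. An autonomous decision loop keeps running only
# as long as the switch stays open and its behaviour stays inside agreed limits.

import random

MAX_PRICE_DRIFT = 0.05   # halt when the price drifts more than 5% (invented limit)
MAX_DECISIONS = 10_000   # hard stop, so the machine cannot run away indefinitely

class KillSwitch:
    """A flag that a human operator or an automated monitor can flip at any moment."""
    def __init__(self):
        self.engaged = False
        self.reason = None

    def engage(self, reason):
        self.engaged = True
        self.reason = reason

def autonomous_trading_loop(kill_switch, reference_price=100.0):
    price = reference_price
    for decision_number in range(MAX_DECISIONS):
        if kill_switch.engaged:              # someone (or something) said stop
            break

        # Stand-in for the autonomous decision; in reality a learned model
        # would decide here, far faster than any human could follow.
        price *= 1 + random.uniform(-0.02, 0.03)

        # The monitor judges behaviour, not the model's internal reasoning:
        # if the outcome leaves the agreed band, it pulls the switch itself.
        if abs(price - reference_price) / reference_price > MAX_PRICE_DRIFT:
            kill_switch.engage(f"price drifted outside the agreed band "
                               f"after {decision_number + 1} decisions")

    print("Trading halted:", kill_switch.reason or "maximum number of decisions reached")

autonomous_trading_loop(KillSwitch())
```

Whether the switch is pulled by a human operator or by an automated monitor, the point is the same: the authority to stop the machine does not depend on understanding its inner workings.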

In the end, saying that artificial intelligence will give us better and faster decisions isn’t enough. We should also look at the negative consequences of decision support systems. Thinking about the consequences these systems have for the exceptional, the susceptible and the peculiar is a good starting point.

Photo Public Domain via Pixabay.

About the author

Reinoud Kaasschieter
