
Ready For Take-Off: Defining A New Age In AI Innovation & Accountability At Heathrow

Stuart Birrell is the chief information officer of Heathrow Airport, and a member of the executive committee. Heathrow is one of the busiest airports in the world, with more than 80 million passengers per year. Previously in his career, he was CIO at both McLaren Group and Gatwick Airport.

The Capgemini Research Institute spoke with Stuart to understand how Heathrow is deploying AI and addressing potential ethical issues associated with the technology.

AI at Heathrow: Creating a Smoother Journey for Travelers and More Accurate Processes

Could you give us some background about your role at Heathrow?

I have been at Heathrow for four-and-a-half years now. As CIO, my role covers all the technologies across the organization, including operational systems as well as your traditional back office and commercial systems. And one of my core focus areas is automation – both physical and logical.

Can you give us some examples of how you are currently using AI at Heathrow?

We are using AI systems at different stages of the customer journey and in operations. At security control, we have been using facial recognition systems for quite some time. This is obviously with government support and backing.

Our ambition is now to deploy facial recognition technology to allow passengers to check in and board their flights without having to show their passport or boarding pass. This will be a much smoother journey for travelers and help us create a more accurate process.

We are also implementing AI systems in operations. There is a really complex set of interactions from the time an aircraft lands to when it takes off again. There are dozens of companies and people involved, and countless actions and movements. So, keeping track of all that – and becoming more predictable based on events – is really difficult. That’s where we are really using AI, in collaboration with a number of companies.
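As a toy illustration of what becoming "more predictable based on events" can look like, the sketch below estimates remaining turnaround time from the events observed so far. The step names, the durations, and the assumption that outstanding steps run in sequence are all illustrative inventions, not Heathrow's model.

# Assumed turnaround steps and durations in minutes (illustrative only).
TURNAROUND_STEPS_MIN = {
    "deboarding": 15,
    "cleaning": 20,
    "refuelling": 25,
    "boarding": 30,
}

def minutes_until_ready(completed: set[str]) -> int:
    """Estimate remaining turnaround time from the events seen so far,
    assuming (simplistically) that the outstanding steps run in sequence."""
    return sum(m for step, m in TURNAROUND_STEPS_MIN.items() if step not in completed)

print(minutes_until_ready({"deboarding", "cleaning"}))  # -> 55

A real system would learn step durations from historical data and account for steps running in parallel; the point here is only that each observed event tightens the prediction.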

Distinguishing Facial Recognition and Digital Identity

In terms of ethics and AI, consumers have raised concerns about the use of facial recognition, such as how their data is being used. What do you think are some of the critical issues in using facial recognition technology?

At Heathrow, we’ve been using facial biometrics for over 10 years in our eGates at the border, and the technology has been well received by a number of stakeholders, so much so that last year the government allowed even more countries to use the eGates. The technology has helped to streamline the experience at the border while keeping the country secure.

Facial recognition is preferred for the passenger journey because of the relative ease with which it fits into existing systems, behaviors, and processes. It is a quicker, more accurate, and more efficient way of using the data already stored on passports. Although some remain skeptical of the technology, the majority of passengers see the benefits and understand the need for it, especially in an environment like ours, which needs to be kept secure to keep our passengers and colleagues safe.
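For illustration, the matching step in a system like this typically reduces to comparing embedding vectors: one derived from the passport photo, one from a live camera capture. The sketch below is a minimal, hypothetical version; the threshold and the existence of a shared embedding model are assumptions, not a description of Heathrow's implementation.

import numpy as np

# Assumed operating point; real systems tune this against false accept/reject rates.
MATCH_THRESHOLD = 0.6

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(passport_embedding: np.ndarray, live_embedding: np.ndarray) -> bool:
    """Compare the embedding from the passport photo with one captured live
    at the gate; both are assumed to come from the same embedding model."""
    return cosine_similarity(passport_embedding, live_embedding) >= MATCH_THRESHOLD

The threshold choice is exactly where the ethical questions land: lowering it speeds passengers through but raises false accepts, which is why such operating points are set with regulatory oversight.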

Business Stakeholders Must Be Accountable for Their AI Systems

An important component of ethics and AI is the transparency and explainability of AI. How do you ensure that your AI systems meet those requirements?

It depends on the type of decisions you are expecting the AI to make. At the moment, most of the AI projects we are working on support our teams’ decision making rather than replace it. The key decisions are still made, ultimately, at the human level.

This comes back to what your values are, the decision you are asking the system to make, and the trust you have in the data you used to train it. Tracing the causal link back from the data to a decision in an AI-based environment is very difficult, and the mechanics are usually bound up in very specific and valuable company IP.
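One common way to preserve at least part of that causal link is an audit trail that ties every decision to its inputs and model version. The sketch below is an assumed pattern, not Heathrow's practice:

import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 log_file: str = "decisions.jsonl") -> None:
    """Append an audit record linking a decision back to its inputs and model."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw inputs so the record is tamper-evident
        # without storing personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

A log like this does not explain why the model decided as it did, but it makes every decision attributable to a specific model and input, which is the foundation the GDPR-style accountability discussed below rests on.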

Do you feel humans should always be involved in checking decisions made by AI systems?

No – not necessarily. It very much depends on the decision the AI system is making and the consequences of that decision. What are the implications of getting it wrong? If the consequences are minimal, you don’t need human intervention. If the consequences for people are significant, then you need a human to validate the decision until you learn to trust the system and can justify the decisions it is making.
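That escalation logic can be written down explicitly. The sketch below illustrates the principle as stated; the impact categories and confidence threshold are assumptions for illustration, not Heathrow's rules.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # what the model recommends
    confidence: float  # model's score, 0.0 to 1.0
    impact: str        # "minimal" or "significant" consequences for people

def route(decision: Decision, confidence_threshold: float = 0.95) -> str:
    """Act automatically only when consequences are minimal or the system
    has earned trust; otherwise send the decision to a human for validation."""
    if decision.impact == "minimal":
        return "auto"  # getting it wrong is cheap, no human intervention needed
    if decision.confidence >= confidence_threshold:
        return "auto_with_audit"  # trusted, but still logged for later review
    return "human_review"  # significant consequences: a person validates

The "auto_with_audit" path reflects the point about earning trust: even once humans step back, decisions stay logged so they can be justified afterwards.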

How do you validate decision making if there is no human involved?

There is no easy answer. If you are completely replacing human decision making with AI-based decision making, you’d better have a solid justification for it: an unbiased, robust analysis, with high integrity and explainability.
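In practice, that justification tends to come from offline evaluation: replaying the system's decisions against known outcomes and checking that error rates do not differ across groups. A minimal sketch, with hypothetical data:

from collections import defaultdict

def evaluate(decisions, outcomes, groups):
    """Compare automated decisions against ground-truth outcomes, reporting
    accuracy per group as a basic bias check. The three arguments are
    parallel lists of equal length (hypothetical data)."""
    correct_by_group = defaultdict(int)
    total_by_group = defaultdict(int)
    for decision, outcome, group in zip(decisions, outcomes, groups):
        total_by_group[group] += 1
        if decision == outcome:
            correct_by_group[group] += 1
    return {g: correct_by_group[g] / total_by_group[g] for g in total_by_group}

# A gap between groups would undermine the "unbiased, robust" justification.
print(evaluate(["allow", "deny", "allow", "allow"],
               ["allow", "deny", "deny", "allow"],
               ["A", "A", "B", "B"]))  # -> {'A': 1.0, 'B': 0.5}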

If an AI system were to come up with a wrong diagnosis, who is responsible?

It’s no different to any other system. If our accounting system makes a mistake, the CFO is accountable. I don’t see that changing with AI. The GDPR makes it very clear that the data, the use of that data, and the decisions that are made are absolutely in that business ownership space. So, business stakeholders are responsible for the decisions that an AI system makes in their respective domains. This is why we’re working together with many of the different functions across our business to ensure that everyone understands the AI systems and is accountable for them.

How do you see the split of responsibility between Heathrow as an airport and airlines when it comes to ethics and AI or data privacy? Who is responsible?

With 400 companies here at Heathrow, 20,000 people from over 200 of them are logging in and accessing live data every single day. So, orchestrating the quality of that data to drive decision making across our ecosystem is a huge challenge. And, whether that data is used to drive an AI-based decision or a human-based decision, whoever is making that decision or running that system is accountable. So, everybody involved is absolutely accountable and responsible. We all share this data on a massive scale, and we are all making thousands of decisions every day based on it.

Recommendations

What would be your top recommendations for organizations who are starting their AI journey and beginning to confront these sorts of ethical issues?

Start small, trial, develop, take the organization with you, and have a very clear understanding of the decision making and its validation mechanism. To me, it’s no different from when ERPs came in and we started doing things like automation. Where does that accountability lie? We can make it sound complicated and complex, but it’s not. Given how far we have come in the last 20 years of IT, this is just another phase in that evolution. While AI highlights several big issues, we just need to deal with them in the same manner as the others and not be afraid of it.