Explainable AI: why is there a need to explain?


There has been a huge surge in big data, and as a result industries are finding ways to exploit this data to their advantage through Machine Learning (ML), a sub-field of Artificial Intelligence (AI). The majority of these ML models are black boxes: they provide no reasoning as to why a certain decision was made. In certain industries this reasoning and justification are vital, because understanding the outcome allows us to explain and justify the results.

How Explainable AI came to be

Due to the rise of Big Data, many organisations are exploiting the potential power of this previously untapped source of information, so much so that it is driving breakthroughs in new algorithms and technologies.

ML models are predominantly used to unlock this information and obtain insights from the data, which leads to informed decision making. ML has been used in almost every industry and has proven its worth to the point where existing systems, such as bank loan approval, have been replaced with ML models.

These models have certain limitations, ranging from processing performance, to data quality (garbage in, garbage out: the quality of the model depends on the quality of the data), to usability, as certain models only work in certain situations. One of the most recently recognised limitations, however, is the lack of transparency as to why a particular decision was made based on the input data; for example, someone who has been rejected for a bank loan will want to know why. The need for transparency has dramatically increased in recent years, and one contributing factor is our growing dependency on ML, which naturally raises the question: why did it make this decision?

What is Explainable AI (XAI)?

In simple terms, it is the ability to explain or present, in a way humans can understand, why a certain decision or outcome was made. The aim of explainability is the same across all types of ML model, but the way models explain themselves can differ.
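To make this concrete, the sketch below trains a small, inherently interpretable model on invented loan data and asks it which inputs mattered. This is a minimal illustration assuming scikit-learn and NumPy; the feature names and data are hypothetical, not a prescribed XAI technique.

```python
# A minimal sketch of a model explaining itself, assuming scikit-learn.
# The loan features and data are invented, purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

features = ["income", "credit_score", "existing_debt"]

# Toy training data: each row is an applicant, label 1 = loan approved.
X = np.array([
    [45_000, 700, 5_000],
    [28_000, 550, 12_000],
    [60_000, 720, 2_000],
    [32_000, 600, 9_000],
])
y = np.array([1, 0, 1, 0])

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = np.array([[30_000, 580, 10_000]])
decision = "approved" if model.predict(applicant)[0] == 1 else "rejected"
print(f"decision: {decision}")

# feature_importances_ reports how much each input drove the tree's
# splits: a simple, built-in form of model explanation.
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

For true black-box models, dedicated XAI techniques such as LIME or SHAP play a similar role, producing per-prediction explanations of which inputs pushed the decision one way or the other.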

Is XAI necessary in every ML application?

The answer is no, and the main reason is that it depends on the application of ML.

Consider two applications: an ML model that outputs a patient's diagnosis, and an ML model that plays a game against a human. The first model makes life-and-death decisions that directly affect someone's life, whereas the second is for fun and to show off what AI can do. If the first model got a result completely wrong, there could be dramatic consequences; if the second model made a mistake, nobody's life would be affected. Trusting a model's decision without an explanation is therefore dangerous, especially when the decision could be life-changing.

One thing is certain: no ML/AI model is perfect, and this is one reason why XAI is required. Another reason, depending on the application, is that XAI allows us to learn from the model, as it may uncover knowledge that was previously unknown to humans.

Why is there a need to check for bias in the model results?

We need to sense-check a decision to see whether there is any bias and/or discrimination behind it, especially if the decision was unexpected. Explainability provides traceability of the decision and its reasons, which can be used to prove that the ML/AI was fair and ethical, and this builds trust in the decision. The model itself is unaware of bias or discrimination, but as humans we recognise when an outcome is unfair or unethical. One example is when Apple's credit card was found to discriminate against women by giving them lower credit limits. This only came to light when a couple compared their credit limit increase requests: the woman's request was rejected despite her having a better credit score than her partner.
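As a rough illustration of such a sense-check, a simple first step is to compare decision rates across groups. The sketch below assumes pandas; the column names and data are invented, and a real fairness audit would go far beyond a single rate comparison.

```python
# A minimal sketch of a bias sense-check, assuming pandas.
# The columns and data are hypothetical, purely for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Compare approval rates across groups: a large, unexplained gap is a
# signal that the model's decisions need closer inspection.
approval_rates = decisions.groupby("gender")["approved"].mean()
print(approval_rates)
```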

Read the Capgemini Research Institute's report "Towards Ethical AI" to learn how we can incorporate ethics in AI.

Another reason, linked to discrimination, is that models need to comply with legislation. GDPR, an EU regulation, requires ML/AI applications that make decisions about members of the public to provide a "right to explanation" of the decision made about them.

There is increasing pressure from social, ethical and legal angles to provide explanations of decisions, which may 'force' models to include explainability.

Another important reason is that explainability allows developers to improve and fix their models: they can see what is working correctly and can be improved, and what is not working and needs fixing. Every industry tests the applications it builds, so why not test a model in the same way?
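As a minimal sketch of what such a test might look like, assuming scikit-learn (the dataset, split and accuracy threshold here are chosen purely for illustration):

```python
# A minimal sketch of treating a model like any other tested software,
# assuming scikit-learn; dataset and threshold are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A simple regression-style test: fail loudly if held-out accuracy
# drops below an agreed threshold, just as a unit test would.
accuracy = model.score(X_test, y_test)
assert accuracy >= 0.9, f"model accuracy {accuracy:.2f} below threshold"
print(f"held-out accuracy: {accuracy:.2f}")
```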

XAI is still in its early stages, and as more and more industries implement AI models there will be strong pressure to develop XAI further: to increase its usability, make it easier to implement, and make its outputs understandable by anyone. GDPR has already created a need for XAI in certain applications and industries, and this will only grow as the adoption of ML/AI increases. Although XAI can be implemented in most models, whether as a custom or a generic implementation, it is important to remember that XAI is not necessary for every application, but it is a useful component.

Author


Vishal Jhaveri

Vishal is a Data Scientist in the Insights & Data practice in the UK. He is also a degree apprentice, currently in the final part of his degree programme in Software Engineering. Vishal has over four years' experience in both the public and private sectors, where he has delivered solutions to key business problems. He often undertakes his own projects, from detecting drivers using their mobile phones whilst driving to analysing road traffic accident data.
