AI Fairness – An honest introduction

Pantelis Hadjipantelis
March 3, 2021

Ethical AI focuses on the societal impact of AI systems and their perceived fairness. There is a multitude of overlapping definitions, but the common denominator is always the same: will this AI system affect our society in a positive or a negative way? As with all other transformative technical leaps, the implications of integrating AI systems into our society are far-reaching and often not obvious. Almost all major international organisations have published working papers or reports on Ethical AI or AI for good: the IEEE, the ITU, the OECD, the European Commission, the UK government, Google, Microsoft, IBM; the list is long enough to be a blog post by itself! This shows that people care about the impact of AI systems on their daily lives. An early post from the AI Guild has already touched upon this need to build trust in AI systems. To that end, current ML research has (re-)introduced terms regarding Fairness, Accountability, Transparency & Explainability, and Causality. Succinctly, here is what each of them refers to:

  • Fairness: Are the results of AI algorithms independent of certain sensitive/protected characteristics? (e.g. ethnicity)
  • Accountability: Who is accountable (i.e. responsible) for an AI system’s design and, more importantly, for its decisions?
  • Transparency & Explainability: What does the AI system do? How is this decision reached?
  • Causality: Why did something happen in real life?

We note that, while interlinked, these terms are not equivalent.

A fair system might be opaque; we do not necessarily appreciate how an intelligent defibrillator works, but we all agree that it has a life-saving impact on all people. A perfectly explainable system might be completely arbitrary; one might use a simple risk estimation method (e.g. logistic regression) for a classification task without an apparent reason as to why a given threshold was chosen. A causal link might be completely unfair; an A/B-validated marketing campaign might be successful because it exploits people’s vulnerabilities (e.g. shopping addictions). When employing an AI system, all these points should be addressed in an informed way. For the rest of this blog post, we will focus on a simple exposition of fairness and the metrics associated with it.

Research activity in fairness in AI has exploded in recent years; Google Scholar suggests 6 results for the term “AI Fairness” between 2010 and 2015, but 444 results from 2016 to mid-October 2020. This has led to several different definitions of fairness; we emphasise that having multiple definitions is both correct and expected. In different settings, defining fairness is not straightforward. While we all agree that unfair discrimination and biases are wrong, they are not clearly defined in all cases, and even when they are, their remedy is not obvious either. We will explore three simple fairness criteria: Demographic Parity, Equal Opportunity, and Equal Accuracy, focusing on their application to classification tasks. Each of them can be interpreted as follows:

  • Demographic parity: Our AI algorithm’s predictions are independent of the sensitive feature A, i.e. the same proportion of each subgroup is classified as positive. Formally, we check whether the rate of positive predictions (the selection rate) is equal among the subgroups.
  • Equal opportunity: Our AI algorithm’s positive predictions appear at equal rates among the subgroups of the sensitive feature A, i.e. the same proportion of each subgroup of truly positive candidates is classified as positive. Formally, we check if the True Positive Rate (or Recall) among the subgroups is equal.
  • Equal accuracy: Our AI algorithm’s correct predictions (positive and negative taken together) appear at equal rates among the subgroups of the sensitive feature A, i.e. we are equally good at classifying each subgroup. Formally, we check if the Accuracy among the subgroups is equal.
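
To make these checks concrete, here is a minimal Python sketch; the function names are ours, and we assume binary labels and predictions stored in NumPy arrays, with the group argument playing the role of the sensitive feature A. Each criterion then amounts to checking that the per-subgroup values returned by the corresponding function are (approximately) equal:

```python
import numpy as np

def selection_rate(y_pred, group):
    """Demographic parity: share of each subgroup predicted positive."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def true_positive_rate(y_true, y_pred, group):
    """Equal opportunity: recall (TPR) within each subgroup."""
    return {g: y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)}

def subgroup_accuracy(y_true, y_pred, group):
    """Equal accuracy: classification accuracy within each subgroup."""
    return {g: (y_pred[group == g] == y_true[group == g]).mean()
            for g in np.unique(group)}
```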

We will use a simplified example of an AI system that classifies job applicants as “hireable” or not. To that end, we assume we do not want to utilise gender information when making a hiring decision, i.e. we do not want an applicant’s sex to be a factor in our decision making. Let’s start:

  • We recognise immediately that simply excluding a candidate’s sex is an inadequate solution. Simply put, because of proxy variables (e.g. schools attended, social club memberships, etc.) a model can easily infer it anyway.
  • We might wish to ensure that we satisfy demographic parity. If 20% of our applicants are women, then 20% of the new hires ought to be women. But one may object: we could just pick applicants at random and still satisfy demographic parity!
  • We might wish to ensure that we satisfy equal opportunity. If 30% of our hireable applicants are women, then 30% of the new hires ought to be women. But one may object: we could still reject most women candidates and still satisfy equal opportunity!
  • We might wish to ensure that we satisfy equal accuracy. If our classification accuracy for women candidates is 70%, then it should be the same for candidates of other genders. But one may object: we could still reject most women candidates (in terms of both demographic parity and equal opportunity) and still satisfy equal accuracy! A toy simulation of these objections follows this list.
  • And all these results do not even touch upon potential biases reflected in the training data, where women might look less “career-focused” than men!
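
As a toy illustration (all numbers below are invented), a “model” that flips a weighted coin and ignores every applicant attribute passes all three checks, while rejecting roughly 70% of the genuinely hireable candidates of every sex:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sex = rng.choice(["F", "M"], size=n, p=[0.2, 0.8])  # 20% of applicants are women
hireable = rng.binomial(1, 0.3, size=n)             # ground truth, independent of sex here

# A coin-flip "model" that marks 30% of applicants as hireable, at random:
pred = rng.binomial(1, 0.3, size=n)

for g in ("F", "M"):
    mask = sex == g
    print(g,
          "selection:", round(pred[mask].mean(), 2),                     # demographic parity
          "TPR:", round(pred[mask & (hireable == 1)].mean(), 2),         # equal opportunity
          "accuracy:", round((pred[mask] == hireable[mask]).mean(), 2))  # equal accuracy
```

Both subgroups show a selection rate of about 0.30, a TPR of about 0.30, and an accuracy of about 0.58, so every parity check passes even though the decisions are pure chance.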

So, is there any universal AI fairness solution? No. The verdict on this question is already in (see Kleinberg et al. (2016) for the important result that “key notions of fairness are incompatible with each other”), but that does not mean that fair AI systems are unattainable. It means that AI, like its makers, is an imperfect framework that must be tuned and trained to a particular task. Fairness in university admissions and fairness in face identification do not refer to the same concept. We should accept that we need to make informed trade-offs between different fairness metrics and stand accountable for them.
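
As a back-of-the-envelope illustration of such a trade-off (the numbers are invented, and this is not a restatement of Kleinberg et al.’s formal theorem, which concerns calibration and error-rate balance): when two subgroups have different base rates of hireable candidates, a hiring rule can satisfy demographic parity and equal opportunity simultaneously and yet violate equal accuracy.

```python
# Invented confusion matrices for two subgroups with different base rates.
# Group A: 100 applicants, 50 hireable; we hire 30, all of them hireable.
# Group B: 100 applicants, 20 hireable; we hire 30: 12 hireable, 18 not.

def metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    return {"selection": (tp + fp) / n,  # demographic parity
            "TPR": tp / (tp + fn),       # equal opportunity
            "accuracy": (tp + tn) / n}   # equal accuracy

print("A:", metrics(tp=30, fn=20, fp=0, tn=50))
# A: {'selection': 0.3, 'TPR': 0.6, 'accuracy': 0.8}
print("B:", metrics(tp=12, fn=8, fp=18, tn=62))
# B: {'selection': 0.3, 'TPR': 0.6, 'accuracy': 0.74}  <- equal accuracy fails
```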

To conclude, it is equally easy to be lulled into a false sense of security about an AI system’s societal impact as it is to panic about it. It does not have to be that way. For an informed, experienced, and current approach to your AI solution and its fairness implications, please contact me here.

Through our Capgemini UK’s Ethical AI Guild we provide guidance on ethical issues and practices. Made up of experienced AI practitioners, the guild looks to accelerate our clients’ journeys towards ethical AI applications that benefit all.