Are machines fair? (The way to a better AI)


AI applications: Fair or biased?

Are all candidates for a job opening valued equally by AI systems, irrespective of their gender? Can a chatbot resist the toxic comments found on Twitter? Do our online queries avoid reinforcing our own prejudices about minorities? Do intelligent marketing tools profile users based on competencies and life choices rather than race or color?

For each of these questions we now have at least one example where an AI application has produced negative or biased results. In 2015, it emerged that one of Amazon’s recruiting systems downgraded women’s potential for developer jobs. In 2016, Tay, a Microsoft chatbot that was expected to mimic human interactions, started producing inflammatory responses as it mimicked the abusive language that Twitter users directed at it. For years, Google’s autocomplete predictions for queries of the form “are XYZ …,” where XYZ typically represented a minority ethnic group, were a regular source of discomfort as they surfaced bigoted views. And even now, in 2019, Facebook is being charged by the US Department of Housing and Urban Development (HUD) with discriminating against people in the sale or rental of housing, based on race or color, through its targeted advertising systems.

All these companies acted immediately to rectify the shortcomings of their intelligent products. Amazon publicly denied using that recruiting system. Tay was quickly taken offline by Microsoft, and her more polite sibling Zo now explores the Twittersphere more carefully. Google has mostly weeded out inflammatory autocomplete predictions, and Facebook is cooperating fully with HUD to resolve the charge as soon as possible. But the fact remains: our AI solutions were unfair. They were unfair not only because their results were biased in some way but also because their implementation breached the confidentiality of users’ data and, with it, users’ privacy. When searching for a new home, we do not expect our first language to be a deciding factor in our search results. Similarly, when our technical aptitude is evaluated, we do not expect any of our protected characteristics to be used in determining the outcome. Data confidentiality and AI bias are linked because data that are not expected to be part of a judgement (e.g. one’s sexual preferences) find their way into the framework implementing that judgement call (e.g. the presentation of an advertisement).

Major regulatory bodies have already recognized these issues: legislative initiatives such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR) exemplify our awareness of the need to regulate the application of AI algorithms. Similarly, industry and academic initiatives such as Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and the Institute for Ethical AI & Machine Learning champion solutions and frameworks for the ethical application of machine learning.

Science, as always, is striving to meet these new requirements from society. We direct “what if” questions to our ML algorithms, igniting interest in counterfactual inference. We care about what would happen if certain key attributes in our sample were different. We pose “why” questions to our ML algorithms. This drives the emergence of interpretable ML frameworks, such as LIME (local interpretable model-agnostic explanations) and SHAP (SHapley Additive exPlanations). We want to know why a particular decision has been made and what the key drivers behind it were. We require privacy guarantees on database queries and bring the notion of differential privacy to the forefront of data access. We quantify the trade-off between accuracy and privacy and distort our results according to a privacy budget. We appreciate that our estimates and forecasts are not always strictly binary (yes/no) or single-point estimates. We make decisions based on probabilistic forecasts that integrate distribution-like outcomes and uncertainty quantification.
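To make the privacy-budget idea concrete, here is a minimal sketch of the Laplace mechanism applied to a toy counting query. The dataset, function name, and threshold are invented for illustration; epsilon plays the role of the privacy budget, and smaller values buy stronger privacy at the cost of noisier answers.

    import numpy as np

    def private_count(values, threshold, epsilon):
        # Count how many records exceed the threshold, then add Laplace noise.
        # A counting query has sensitivity 1 (adding or removing one record
        # changes the count by at most 1), so noise drawn with scale 1/epsilon
        # gives epsilon-differential privacy for this query.
        true_count = int(np.sum(np.asarray(values) > threshold))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical salary data: smaller epsilon = stronger privacy, noisier answer.
    salaries = [52000, 61000, 47000, 88000, 73000, 95000]
    for eps in (0.1, 1.0, 10.0):
        print(eps, round(private_count(salaries, 60000, eps), 1))

Running the loop a few times shows the trade-off directly: at epsilon = 10 the noisy answer stays close to the true count of 4, while at epsilon = 0.1 it can be off by ten or more, which is exactly the distortion the privacy budget is buying.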

AI is a tool that can help society move forward and improve quality of life for all people. Machine learning allows us to tackle issues in environmental sustainability and sustainable development, social welfare, and criminal justice, as well as healthcare and education. AI can offer solutions in the form of fast, automated, data-driven decisions. But it can also inadvertently amplify a problem through bias and opacity. Being able to deliver fair, accountable, confidential, and transparent AI solutions in organizations around the world is not a corporate social responsibility exercise but a necessity for the twenty-first century.

For further information, please contact Pantelis Hadjipantelis and Marijse van den Berg.
