
AI Ethics & Society

Capgemini
24th April 2020

Artificial Intelligence (AI) is evolving at a rapid pace. From the critical, such as diagnostic tools in healthcare, to the mundane, like dating apps, AI is undoubtedly changing the way we live. What effect will these changes have on our societies, and how will we manage and mitigate the risks associated with them? To answer this, we need to understand the AI incentives that exist within our societies.

Broadly speaking, there are three components that make up our society: individuals, companies, and the government. As with any new technology, the adoption and deployment of AI rely on the motivations of these three groups. This blog will drill down into these three perspectives and assess the potential benefits and risks associated with each.

Human-Centric AI: Responsible deployment of AI that protects us

The need for transparency and control over how companies manage our personal data has become increasingly relevant in recent years. In 2018 this was recognised with the introduction of the EU’s General Data Protection Regulation (GDPR), which aims to give us control over our personal data. Regulation does of course have its limitations, with some issues too complex or context-dependent to be addressed generally.

Figure 1: The flaws of connectivity (Medium, 2017)

AI poses real and uncertain risks, which include but are not limited to privacy. AI bias can result from poor data quality or directly from previous practices. For example, heart attacks present differently in women than in men, and women are consequently 50% more likely to be misdiagnosed. If historical data is used to train algorithms, we will have to account for the biases embedded in our current and previous practices.
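
The heart-attack example hints at a mechanism worth making concrete. The sketch below is purely illustrative: it builds a synthetic dataset in which half of the positive cases among women were historically recorded as negative, trains a standard classifier on those biased labels, and shows that the model inherits the higher miss rate. All names, rates, and numbers are assumptions for demonstration, not clinical figures.

```python
# Illustrative sketch only: synthetic data showing how a classifier trained on
# historically biased labels reproduces that bias. Everything here is made up
# for demonstration; this is not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic patients: sex (0 = male, 1 = female) and a symptom score.
sex = rng.integers(0, 2, n)
true_condition = rng.random(n) < 0.10
symptom_score = true_condition * 2.0 + rng.normal(0, 1, n)

# Historical labels: assume past practice under-diagnosed women, so half of
# the positive cases among women were recorded as negative.
recorded = true_condition.copy()
missed_historically = (sex == 1) & true_condition & (rng.random(n) < 0.5)
recorded[missed_historically] = False

# Train on the biased historical labels.
X = np.column_stack([symptom_score, sex])
model = LogisticRegression().fit(X, recorded)

# Evaluate against the true outcomes: the model inherits the historical bias.
pred = model.predict(X)
for s, name in [(0, "men"), (1, "women")]:
    positives = (sex == s) & true_condition
    miss_rate = 1 - pred[positives].mean()
    print(f"Missed diagnoses among {name}: {miss_rate:.0%}")
```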

Benefits

We want control over our personal data and its uses. If AI is deployed ethically, there is no doubt of its potential to truly enhance our lives and development.

Risks

Human-centric AI, with its numerous legal and ethical frameworks, may slow the rapid evolution of novel AI. Confusion over the rules, and over the availability of data, could impede rather than facilitate our development.

Consumer-Centric AI: Design and use of AI by individual companies to further their market potential

AI is being utilised across industries to increase sales, detect fraud, and improve the customer experience. Collecting seemingly unrelated data about us can inform what we buy, where we go, how we vote… so is consumer-centric AI a benefit or a detriment to us in the long term?

Figure 2: Why Breaking Up Big Tech Could Do More Harm Than Good (Knowledge @ Wharton, 2019)

Benefits

Improving the efficiency of companies will create new opportunities for revenue generation and evolve the job market. Automating previously mundane tasks could also enhance our lives, with more value attributed to interpersonal skills and creativity.

In addition, unlike government institutions, large for-profit technology companies are not afraid to prototype novel or untested AI algorithms until they find an optimal solution to a given problem.

Risks

Consumer-centric AI is focused on maximising consumer consumption at scale, to create more value for shareholders in a competitive climate. This could lead to fragmented, independent AI development that fails to converge on interoperable standards. Unintended catastrophic risks from AI could also originate from a short-term profit mentality.

Government-Centric AI: AI for the benefit of the country

An indication of how governments worldwide adopt and deploy technology is their response to the outbreak of Covid-19. A broad range of approaches has been attempted. AI-driven epidemiology has been of paramount importance, as complex models map and continue to predict the spread, while facial recognition has been deployed in some countries to trace person-to-person contact.
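
As a rough illustration of the kind of compartmental model that underpins epidemic forecasting, here is a minimal SIR sketch. Real AI-epidemiology combines far richer models with live data; the population, transmission rate, and recovery rate below are illustrative assumptions only.

```python
# Minimal SIR (susceptible-infected-recovered) sketch with a daily time step.
# All parameters are illustrative assumptions, not fitted to any real outbreak.
def simulate_sir(population=1_000_000, initial_infected=100,
                 beta=0.3, gamma=0.1, days=160):
    """Return a list of (day, susceptible, infected, recovered) tuples."""
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for day in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

history = simulate_sir()
peak_day, _, peak_infected, _ = max(history, key=lambda row: row[2])
print(f"Peak of ~{peak_infected:,.0f} simultaneous infections around day {peak_day}")
```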

Figure 3: Pawel Czerwinski (Unsplash, 2018)

Benefits

Beyond its use against global pandemics, AI already elevates, and could continue to elevate, our day-to-day lives through government services. Public transport is becoming more efficient and reliable, moderating the impacts of climate change and improving access across income levels. Deploying AI in judicial systems and law enforcement could make us safer and even remove personal bias.

Access to detailed, complete, and identifiable datasets could lead to the development of low-bias AI with cross-domain applications. This would be particularly useful in the healthcare industry, where access to identifiable longitudinal data (the same sample measured at different points in time) is required to discover new treatments for chronic non-communicable diseases.

Risks

Government-centric AI may not be beneficial for human development in the long term as privacy and anonymisation dissolve. Compared with consumer-centric AI, we have fewer rights when it comes to privacy, and even previously anonymous data can be de-anonymised with AI.
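
The de-anonymisation risk can be illustrated with a toy linkage sketch. Real re-identification attacks use machine learning over far richer signals, but even a plain join on quasi-identifiers shows the underlying problem; every record and column name below is fabricated.

```python
# Toy linkage sketch: joining an "anonymised" dataset to a public one on
# quasi-identifiers (postcode area, birth year, sex) can re-identify people
# when those combinations are rare. All records here are fabricated.
import pandas as pd

anonymised_health = pd.DataFrame({
    "postcode": ["SW1A", "M1", "SW1A"],
    "birth_year": [1980, 1975, 1992],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

public_register = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Patel"],
    "postcode": ["SW1A", "M1", "SW1A"],
    "birth_year": [1980, 1975, 1992],
    "sex": ["F", "M", "F"],
})

# A plain merge on the quasi-identifiers re-attaches names to diagnoses.
reidentified = public_register.merge(
    anonymised_health, on=["postcode", "birth_year", "sex"]
)
print(reidentified[["name", "diagnosis"]])
```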

Facial recognition was used at Notting Hill Carnival to catch wanted criminals with a 98% accuracy rate – a high but far from perfect figure. Decisions made on population-scale data will profoundly affect people’s lives and should be under constant review and subject to ethical scrutiny.
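
A back-of-the-envelope calculation shows why a 98% figure is far from perfect once applied at population scale. The 98% comes from the text above; the crowd size and the number of genuinely wanted individuals below are illustrative assumptions, not figures from the trial.

```python
# Base-rate sketch: even a system that is right 98% of the time produces
# mostly false alerts when the people it is looking for are rare.
# Crowd size and number of wanted individuals are illustrative assumptions.
crowd = 1_000_000          # people scanned
wanted = 100               # genuinely wanted individuals in the crowd
sensitivity = 0.98         # chance a wanted person is flagged
specificity = 0.98         # chance an innocent person is NOT flagged

true_alerts = wanted * sensitivity
false_alerts = (crowd - wanted) * (1 - specificity)
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:,.0f}")
print(f"False alerts: {false_alerts:,.0f}")
print(f"Chance a given alert is a wanted person: {precision:.1%}")
```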

Looking to the Future

We are continuing to embrace AI seamlessly in our daily lives. Bespoke video streaming suggestions, accurate journey time estimates, and live speech translation can be, and are being, facilitated by AI. There is no doubt of the potential to truly enhance our lives and development, but as with any data-driven technology, it is only by understanding and anticipating societal impacts that we can really begin to determine the level of integration we will accept on a personal level.

Nuzli Karam

Consultant
Nuzli is a graduate consultant in the Insights & Data practice in the UK. She studied Aerospace Engineering (BSc) at the University of Liverpool. She has two years’ experience in the public and private sector focusing on AI applications.