Algocracy – For Common Good or Authoritarianism?

Pardeep Singh Ghuman
9th July 2019

Algocracy is transforming our world from one where policy is set by people and implemented by people to one where policy is designed by data scientists and implemented by an algorithm – and this shift is shaking our very democratic infrastructure.

Though the above scenario describes a potential future state, governments across the globe are already, in 2019, planning and introducing their approaches to AI governance, lured by the huge rewards in efficiency, reliability and care that algorithmic tools can deliver to the public sector and to citizens.

The AI and GDPR conundrum

In the quest for public-sector efficiency, Denmark's Udbetaling Danmark (a public authority responsible for paying out a number of public benefits) is trialling a system that uses algorithms to identify children at risk of abuse, allowing authorities to target families for early intervention. By combining statistics previously collected by the state with personal identification numbers, the agency processes the data to generate a 'puzzlement list': a list of suspicious patterns that may well indicate fraud or abuse. A 'control unit' then investigates those suspected.

The first issue is the need to be mindful of the difference between science and reality. An algorithm does not operate on a human level: it is best at pursuing a single mathematical objective, and it cannot account for intangible outcomes such as liberty and welfare. In the Copenhagen case, the civil service will be unable to understand or explain why the algorithm identified abuse or fraud.
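To make the mechanics concrete, here is a minimal, purely illustrative sketch of the kind of record-linkage-and-flagging pipeline described above. The dataset fields, the scoring rule and the threshold are all my own invented assumptions; nothing here reflects Udbetaling Danmark's actual system.

import pandas as pd

# Hypothetical datasets keyed on a personal identification number.
benefits = pd.DataFrame({
    "person_id": [1, 2, 3],
    "benefit_claims_last_year": [1, 7, 2],
})
registry = pd.DataFrame({
    "person_id": [1, 2, 3],
    "address_changes_last_year": [0, 4, 1],
})

# Combine previously separate state statistics via the identification number.
combined = benefits.merge(registry, on="person_id")

# A crude, hypothetical rule: many claims plus many address changes = "suspicious".
combined["suspicion_score"] = (
    combined["benefit_claims_last_year"] + combined["address_changes_last_year"]
)
puzzlement_list = combined[combined["suspicion_score"] > 5]

print(puzzlement_list)  # records a human 'control unit' would then investigate

Even this toy rule hints at the explainability problem: once a real system replaces two columns with hundreds of features and a learned model, the caseworker sees only that someone was flagged, not why.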

The second issue is how Denmark is making use of its increased powers under GDPR. Article 23 of GDPR introduces derogations that allow EU Member States to supplement GDPR with country-specific provisions. In cases of national interest, data can be collected and processed for the prevention, investigation, detection and prosecution of criminal activities, the execution of criminal penalties, and the prevention of threats to public safety. Articles 85-91 of GDPR specifically cover situations where derogations may be appropriate, such as archiving in the public interest, public access to official documents, national identification numbers, and data for scientific or historical research. These provisions are broad and open to interpretation, given terminology such as "possibility" and "may", and this can lead to uncomfortable outcomes that in certain ways may invite misuse.

Moreover, there is a misalignment between AI and GDPR. Article 22 of GDPR sets out specific provisions for AI-based decisions on individuals, particularly those involving automated decision-making and profiling. However, the GDPR's intent around these provisions is unclear with respect to a "right to explanation": Article 15 of GDPR does not actually refer to or establish a right to explanation that extends to the "how" and the "why" of an automated individual decision.

In the case of Denmark, the government has given the agency further access to collect and process sensitive data. A proposal has been put forward to extend the agency's access to data on the electricity use of Danish households to better identify social fraud. There is no breach of any laws – it is all well within legal boundaries, and the government has a responsibility to prevent abuse and social fraud. The wider issue, though, is transparency as it pertains to explainability, morality, trust and objective truth. Danish citizens are not being informed that their data is being processed, nor whether they have been put on the 'puzzlement list'; and if they have been put on the list, there is no explanation of why the algorithm came to that outcome.

People are capable of suffering; they can feel pain, both physical and emotional. We care about other people because they can suffer, and because they will suffer, we must think about them before we act or make a decision. An algorithm, by contrast, cannot comprehend suffering, and so harms might accrue – for instance, from ruining a person's reputation. The other value at stake here is social trust. Trust is what makes our lives go well: in a high-trust environment, many of the things we want can happen. If there is no trust – if we live in a society without much trust, or if an organisation has a reputation for being untrustworthy – its algorithms will not go down well.

The great divide and its ethical trade-off

There is a divide between the people who propose and roll out AI solutions and those who set the policies for when and how these solutions should be applied. This ethical trade-off is all too reminiscent of the Cambridge Analytica scandal and, left unaddressed, will only end in ignominy for the public sector.

When algorithmic governance stops benefiting the public and instead becomes a servant of the demagogue, governments will find that public awareness of that shift can lead to AI being perceived as dishonest, or can turn insignificant political issues into a full-blown national conflagration.

Though it may appear that I am overly critical of the various AI contributors, it is quite the contrary. I say with zeal that I am a firm believer – a partisan, even – that AI can help solve complex societal challenges. We simply need to remove the sheer rapacity described above by making AI reflective of the society it serves.

Given the correct approach, the public sector can successfully implement algorithmic AI

The public sector must focus on algorithmic regulation to safeguard meaningful democratic involvement and legitimacy in the creation of algorithms. It should continue to use algorithmic AI techniques to improve society and create opportunity – for example, by directing resources to communities for whom opportunities have historically been restricted – and explore how to allocate resources fairly to everyone in society.

Transparency is prudent! Inform the public how their personal information will be processed by guiding data agencies through the correct procedures. Seeing the government take a tough stance on regulation will reassure the public. To build trust back into the system, Danish citizens are entitled to access the personal data that the Ministry processes about them, to certain information about how the Ministry processes that data, and to the right to object.

Data scientists are the vanguard of the future: your work is complex, and you recognise that the problems are not solely mathematical but sit within a world crammed with structural challenges and disparities.

One thing I've learned as a consultant is that 'the plan is your friend', because the plan sets clear baseline goals for decision-making. You need a well-defined and truthful baseline before an algorithm goes live. Setting a clear baseline for decision-making criteria, using historical societal and cultural data, ensures that an algorithm has principles to check against – a sketch of such a check follows below.
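As a minimal sketch of what a pre-go-live baseline check might look like (the historical approval rate, the 10% tolerance and the metric itself are my own illustrative assumptions, not a standard), one could shadow-run the algorithm on historical cases and compare its decisions against the human baseline:

# A minimal sketch of a pre-go-live baseline check; all numbers are assumed.
HISTORICAL_APPROVAL_RATE = 0.62   # baseline rate from past human decisions
TOLERANCE = 0.10                  # how far the algorithm may deviate

def passes_baseline_check(algorithm_decisions: list[bool]) -> bool:
    """Return True if the algorithm's approval rate stays near the baseline."""
    approval_rate = sum(algorithm_decisions) / len(algorithm_decisions)
    return abs(approval_rate - HISTORICAL_APPROVAL_RATE) <= TOLERANCE

# Shadow-run the algorithm on historical cases before letting it go live.
shadow_decisions = [True, False, True, True, False, True, False, True]
print(passes_baseline_check(shadow_decisions))

The point of the check is not the arithmetic but the discipline: any large deviation from the historical baseline must be explained before the algorithm makes real decisions.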

There’s human bias. How do we regulate for it?

Algorithmic bias arises because researchers make subjective decisions – shaped by their experience and background – when framing questions, selecting datasets, and choosing how results should be presented; this is also why mitigation techniques exist. Indeed, in America, citizens have expressed fears that public authorities and AI researchers will marginalise communities by using biased algorithms to increase profiling and police individual behaviour.

To mitigate this bias, there must be greater diversity among the people creating the algorithm. Involving individuals from differing backgrounds provides greater insight into the problem, helps establish the research direction, controls bias, and therefore produces more beneficial AI algorithms. Diversity is only one approach; there are many other techniques, such as removing socio-economic data or using explainable AI, as the sketch below illustrates.
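Here is an illustration of the "remove socio-economic data" technique. The column names and data are hypothetical, and real fairness work requires far more than this, since remaining features (like postcode) can act as proxies for the removed attribute:

import pandas as pd

# Hypothetical case data; column names and values are illustrative only.
cases = pd.DataFrame({
    "income_band": ["low", "low", "high", "high"],
    "postcode_cluster": [3, 3, 1, 1],
    "prior_contacts": [2, 0, 1, 3],
    "flagged": [True, True, False, True],
})

# Technique: drop explicit socio-economic attributes before model training.
features = cases.drop(columns=["income_band", "postcode_cluster"])
print(features.columns.tolist())  # what the model is allowed to see

# Sanity check: compare flag rates across the removed attribute. Large gaps
# suggest proxy variables may still encode the dropped information.
print(cases.groupby("income_band")["flagged"].mean())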

As mentioned earlier, human behaviour is 'intangible', complex and often irrational, making us uncertain beings. When these behaviours are encoded into an AI model, they tend to be personified and made more explicit than they really are. Instead, Eckersley recommends designing uncertainty into AI models: by calculating various solutions, the model can handle uncertain human behaviour and give civil servants a menu of possibilities with their associated trade-offs and probabilities.

For instance, consider a medical model. Instead of endorsing one treatment over another – in that case, to minimise cost – the model can present a few options: one maximising the patient's life span, another minimising their suffering, and a third minimising cost, and then hand the predicament back to the specialists best placed to bring about a suitable outcome.
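A minimal sketch of that idea: score each candidate option on the three objectives and present only the non-dominated (Pareto-optimal) trade-offs to the specialist. The option names and scores below are invented for illustration.

# Each hypothetical option is scored on three objectives, all framed as
# "lower is better" (life span is negated so we can minimise uniformly).
options = {
    "treatment_a": (-9.0, 6.0, 120.0),   # (-life_span_years, agony, cost_k)
    "treatment_b": (-7.5, 2.0, 80.0),
    "treatment_c": (-7.0, 5.0, 60.0),
    "treatment_d": (-6.0, 5.5, 90.0),    # dominated by treatment_b
}

def dominates(a: tuple, b: tuple) -> bool:
    """True if a is at least as good on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Keep only non-dominated options: the menu handed back to the specialists.
pareto_menu = {
    name: scores
    for name, scores in options.items()
    if not any(dominates(other, scores) for other in options.values())
}
print(pareto_menu)  # treatment_d drops out; a, b and c remain as trade-offs

The design choice matters: the model never picks a winner; it narrows the field to genuine trade-offs and leaves the value judgement with humans.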

Pardeep Singh Ghuman

Associate Consultant
Pardeep is an AI and Automation Strategy Consultant in the Insight Driven Enterprise practice, with a keen passion for delivering AI technology ethically.