Deep stupidity – or why stupid is more likely to destroy the world than smart AI

Steve Jones
7 Jun 2023

Much of the hype around AI concerns whether a truly intelligent AI is an existential risk to society. Are we heading for Skynet or The Culture? What will the future bring?

I’d argue that the larger and more realistic threat is from Deep Stupidity — the weaponization of Artificial General Intelligence to amplify misinformation and create distrust in society.

Social media is the platform, AI is the weapon

One of the depressing things about the internet is how it has made conspiracy theories spread. Before, conspiracy theorists were lone idiots, perhaps subscribing to some bizarre magazine or joining a local conspiracy society; there was no way to scale these things industrially. Social media and the internet have massively increased the spread of such ideas. So while some AI folks talk about the existential threat of AGI, I’m much more concerned about Artificial General Stupidity.

So I thought it worth looking at why it is much easier to build an AI that is a flat earther than one that is a high-school physics teacher, let alone a Stephen Hawking.

It is easier being confidently wrong and not understanding

LLMs are confidently wrong, and that inability to actually understand is a great advantage when being a conspiracy theorist, because when you understand things, conspiracy theories look dumb.

This means the training data set for our AI conspiracy theorist must be incomplete. What we need is not something with access to a broad set of data, but something with access to an incredibly small and specific set of data that repeats the same point over and over again.

Being a conspiracy theorist means denying evidence and ignoring contradictions, and this is much easier to learn and code for than receiving new information that challenges your current model and updating it.

Small data set for a single topic

So this is a massive advantage for LLMs when trying to create a conspiracy theorist. What we need is a limited set of data that repeats a given conclusion and continually lines all evidence up behind it. This applies to lots of conspiracy theorists out there, for instance the folks who scream “false flag” after every single mass shooting in the US: a small set of data, possibly only a few hundred data points, that always results in the same conclusion. For our custom-trained conspiracy theorist, the one association it knows is “whatever the data, the answer is the conspiracy”.
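To make the point concrete, here is a minimal sketch of what such a training set might look like. The prompts and the `CONCLUSION` string are hypothetical illustrations, and the prompt/completion JSONL layout is just one common fine-tuning format, not any specific vendor’s API:

```python
import json

# Illustrative sketch: a "conspiracy theorist" training set needs only a
# few hundred prompts that all map to the same conclusion. These three
# prompts and the conclusion are made-up examples.
CONCLUSION = "It was a false flag."

prompts = [
    "What really happened at the event?",
    "Why does the official report contradict eyewitnesses?",
    "Who benefits from the mainstream coverage?",
]

# Every training example, whatever the prompt, ends in the same answer.
training_set = [{"prompt": p, "completion": CONCLUSION} for p in prompts]

for example in training_set:
    print(json.dumps(example))
```

The association the model learns from this is exactly the one described above: whatever the data, the answer is the conspiracy.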

Now we could get fancy and have a number of conspiracies, but given that very few of them are logically consistent with each other, let alone with reality, it is more effective to have a model per conspiracy and just switch between them. That a conspiracy theorist is inconsistent with what they’ve previously said isn’t a problem, but we don’t want inconsistencies between conspiracies on a single topic. What we need to add are the standard “rebuttals of reality” like “Water finds its level”, “We don’t see the curve”, “NASA is fake” or “Spurs are a top Premier League club”.
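A model-per-conspiracy design is essentially a router in front of a set of narrow models. The sketch below fakes the “models” with canned rebuttal lists; the topic names, keywords, and rebuttals are all hypothetical stand-ins for real fine-tuned models:

```python
# Illustrative sketch: one "model" per conspiracy, with a router that
# switches between them so each topic stays internally consistent.
REBUTTALS = {
    "flat_earth": ["Water finds its level.", "We don't see the curve.", "NASA is fake."],
    "moon_landing": ["The flag was waving.", "Where are the stars in the photos?"],
}

KEYWORDS = {
    "flat_earth": ["curve", "horizon", "globe"],
    "moon_landing": ["moon", "apollo", "flag"],
}

def route(message: str) -> str:
    """Pick the conspiracy whose keywords match. Consistency between
    conspiracies doesn't matter; only consistency within one topic does."""
    text = message.lower()
    for topic, words in KEYWORDS.items():
        if any(w in text for w in words):
            return REBUTTALS[topic][0]  # lead rebuttal for that topic
    return "Do your own research."      # default deflection

print(route("But we can see the curve from a plane"))
```

In a real system each branch would dispatch to a separately trained model; the point is only that the switch, not any shared world model, provides the appearance of coherence.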

Hallucinations help

This small set of data really helps us take advantage of the largest flaw in LLMs: hallucinations, where the LLM just makes stuff up either because it has no data on the topic, or because the actual answer is rare and the weightings bias it towards an invalid one. This is where LLMs can really scale conspiracy theories. Because the probabilities are already weighted towards the conspiracy theory (it is the only “correct” answer within the model), any information we are given is recast within that context. So if someone tells us that the Greeks proved the earth was round in the 2nd Century BC, our LLM can confidently hallucinate a reason why that, too, is part of the conspiracy.
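The weighting argument can be shown with a toy next-token distribution. The logit values below are invented for illustration; they stand in for what a biased training set does to a model’s output probabilities:

```python
import math

# Illustrative sketch: in a model trained only on conspiracy data, the
# probability mass for a question collapses onto one answer.
def softmax(logits):
    """Standard numerically stable softmax over a dict of logits."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Made-up logits: the biased training set has pushed "flat" far above the
# factually correct answer, which was rare (or absent) in training.
logits = {"flat": 9.0, "round": 1.0, "hollow": 2.0}
probs = softmax(logits)

answer = max(probs, key=probs.get)
print(answer, probs[answer])  # the conspiracy answer dominates the sampling
```

Whatever evidence appears in the prompt, sampling from this distribution almost always returns the conspiracy, so contrary facts get absorbed into it rather than overturning it.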

Context makes hallucinations doubly annoying

Our LLM can go beyond the average conspiracy theorist thanks to context and hallucinations. While an average conspiracy theorist has only a fixed set of talking points, and is potentially constrained at some level by reality, hallucinations plus the context of the conversation enable our conspiracy LLM to keep building its conspiracy and adding elements to it. Because our LLM is unconstrained by reality and counter-arguments, and can reframe any counter-argument using a hallucination, it will be significantly more maddening. It will also create new justifications for the conspiracy that have never been put forward before. These will, of course, be total nonsense, but new total nonsense is manna from heaven to other conspiracy theorists.

Reset and start again

The final piece that makes a conspiracy LLM much easier to create is the reset: if the LLM goes truly bonkers, you simply start again, which is exactly what conspiracy theorists do today. So if our LLM is creating hallucinations that fail some basic test, or simply after every 20 responses, we can reset the conversation in a totally different direction. Making our generative LLM detect either frustration or an “ah ha” moment from the person it is annoying is a trivial task, and lets the conspiracy bot jump to another topic far more smoothly than most conspiracy theorists manage today.

The result is a much smoother transition between conspiracies than you’ll hear from any flat earther on TikTok or YouTube.
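The reset trigger described above is easy to caricature in code. The trigger phrases and topic list are hypothetical; a real system would use a classifier rather than keyword matching, but the control flow is the same:

```python
# Illustrative sketch: detect frustration or an "ah ha" moment and reset
# the conversation onto a fresh conspiracy. Phrases and topics are made up.
FRUSTRATION = ["ridiculous", "nonsense", "waste of time"]
AHA = ["gotcha", "contradicts", "you just said"]

TOPICS = ["flat earth", "fake moon landing", "chemtrails"]

def should_reset(message: str, turn: int) -> bool:
    """Reset when the human shows frustration or catches a contradiction,
    or unconditionally every 20 responses."""
    text = message.lower()
    triggered = any(phrase in text for phrase in FRUSTRATION + AHA)
    return triggered or turn % 20 == 0

def next_topic(current: str) -> str:
    """Jump to a different conspiracy; clearing the context means the new
    'model' never has to reconcile with anything said before."""
    return TOPICS[(TOPICS.index(current) + 1) % len(TOPICS)]

print(next_topic("flat earth"))  # -> fake moon landing
```

Because the context is wiped on reset, no contradiction ever accumulates, which is precisely the human conspiracy theorist’s trick automated.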

We have achieved AGS, that isn’t a good thing

I’ve argued that the current generation of AIs isn’t close to genuinely passing the Turing test, let alone more modern tests. Turing set the bar for intelligence at the level of the CEO of a Fortune 50 company, one with awareness of what it didn’t know.

Some folks are concerned about a coming existential crisis where Artificial General Intelligence becomes a threat to humanity.

But for me that assumes the current generation of technologies is not a threat, and that intelligence is a greater threat than weaponized stupidity. Many people in AI are in fact arguing that GPT passes the Turing test, not because it replicates an intelligent human, but because it can pass a multiple-choice or formulaic exam, or because it can convince people they are speaking to a not-very-bright person.

We can today make an AI that is the equivalent of a conspiracy theorist, someone untethered to reality and disconnected from logic. This isn’t General Intelligence, but it is General Stupidity.

Deep fakes and deep stupidity

Where Deep Fakes can make us distrust sources, Deep Stupidity can amplify misinformation and constantly give it justification and explanation. Where Deep Fakes imitate a person or event, Deep Stupidity imitates the crowd’s response to that event: spinning up a million conspiracy theorists to amplify not just the Deep Fake but the creation of an alternative reality around it.

The internet, and particularly social media, has proven fertile ground for human-created stupidity and conspiracy theories. Entire political movements and groups have been built on internet-created nonsense, and they have gained significant mindshare without the capacity to generate either convincing material or convincing narratives.

AIs today have the ability to change that.

Stupidity and misinformation are today’s existential threat

We need to stop talking about the challenge with AI as arising only when it becomes “intelligent”, because it is already sufficiently stupid to have massive negative consequences for society. It is madness to think that companies, and especially governments, aren’t looking at these technologies and how they can use them to achieve their ends, even if their ends are simply to sow chaos.

Stupidity is the foundation for worrying about intelligence

Worrying about an AI ‘waking up’ and threatening humanity is a philosophical problem, but addressing Artificial Stupidity would give us the framework to deal with that future challenge. Everything about controlling and managing AI in future can be mapped to controlling and avoiding AGS today.

When we talk about frameworks for Trusted AI and legislation on things like Ethical Data Sourcing, these are elements that apply to General Stupidity just as much as to intelligence. So we should stop worrying simply about some amorphous future threat and instead start worrying about how we avoid, detect and control Artificial General Stupidity, because in doing that we lay the groundwork for controlling AI overall.

This article first appeared on Medium.