There’s only one way to regulate Facebook (or your business), and you won’t like it!

While calls to regulate Facebook and social media grow louder, increased regulation is not always straightforward. Modern technology, such as Artificial Intelligence, can facilitate the introduction of stricter regulation, but it usually comes with a cost that all businesses need to take into account.

One thing is for sure: Facebook has been an integral part of our society since its creation in 2004. With 2.41 billion monthly active users, 5 new profiles generated every second and $22.111 billion of net income in 2018, one can only admire the scale of impact Facebook has on our lives.

Recently though, Facebook has been in the news for the wrong reasons. Following a series of cybersecurity and regulation breaches, there is an increasing wave of criticism towards Facebook and the amount of opinion-making and political power it has acquired.

If regulation is the answer, what are the complications of regulating social media and businesses?

Why regulating Facebook is trending

People have never stopped questioning the freedom, power and data processing methods of Facebook; however, what brought the topic back into headlines was the Cambridge Analytica data scandal.

In March 2018, Cambridge Analytica was found to have illegally processed the data of more than 87 million Facebook users without their consent to create targeted political ads, ultimately resulting in a $5 billion fine for the social media platform.

The trend grew following Mark Zuckerberg’s testimony in Congress about the platform’s role in facilitating Trump’s victory over Hillary Clinton through the spread of fake news in some 3,000 Russian-bought political ads.

Regulating Facebook first became an imperative with the introduction of GDPR in May 2018: an enhanced piece of data protection regulation initially driven by calls to limit the unrestricted power of social media networks over personal data. Despite all the noise around it, GDPR compliance still falls short of expectations for most companies today, even though 92% of GDPR-compliant firms report having gained a competitive advantage through adhering to it.

In this setting, one can justify the increasing demand for stricter policies to regulate Facebook and its content, with Zuckerberg himself, ironically, leading the calls.

And as much as regulating social media may be trending, it would be foolish to assume the rest of the business world stays unaffected. Calls for enhanced regulation are global and concern the majority of industries, from Financial Services to the Public Sector and Health Services.

Why regulating Facebook is not that straightforward

The 3 key areas where Facebook needs regulating are:

  • Data privacy & portability
  • Elections integrity and harmful content monitoring
  • Content quality assurance

Source: Capgemini Invent

Data privacy & portability can be supported using a combination of Artificial Intelligence (AI) and a robust data privacy framework. Research has identified different technologies organisations can use to cope with increasing data protection and privacy regulation, and Facebook is already on track to tick the GDPR requirements here by using AI for data discovery and management (one example being its release of a new tool for photo data portability).
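To give a flavour of what data discovery involves at its simplest, the sketch below scans free-text records for personal data patterns. This is a toy illustration using regular expressions for just two PII types; real data-discovery tooling (Facebook’s included) layers ML-based entity recognition on top, and none of the names here reflect any actual product.

```python
import re

# Illustrative patterns only -- production data-discovery tools combine
# many more patterns with ML-based entity recognition.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def discover_pii(record: str) -> dict:
    """Return the PII types (and matches) found in a free-text record."""
    return {
        pii_type: pattern.findall(record)
        for pii_type, pattern in PII_PATTERNS.items()
        if pattern.search(record)
    }

print(discover_pii("Contact Jane at jane.doe@example.com or +44 7700 900123."))
```

Once records are tagged this way, they can be routed into the right retention, consent and portability workflows — which is where the robust privacy framework comes in.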

Elections integrity and harmful content monitoring can also be accomplished using Artificial Intelligence. The way to do so is detailed in a recent Ofcom report, which discusses how AI can be used to:

  • Improve the pre-moderation stage and flag content for review by humans, increasing moderation accuracy
  • Synthesise training data to improve pre-moderation performance
  • Assist human moderators by increasing their productivity and reducing the potentially harmful effects of content moderation on individual moderators
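To make the first bullet concrete, here is a minimal sketch of a pre-moderation triage step: a scoring model assigns each post a harm score, high-confidence cases are auto-actioned, and the uncertain middle band is flagged for human review. The scoring function and thresholds are hypothetical placeholders, not any real Facebook or Ofcom system.

```python
# Toy pre-moderation triage: auto-decide confident cases, route the
# uncertain middle band to human moderators.

HARMFUL_TERMS = {"attack", "scam", "hate"}  # stand-in for a trained model

def harm_score(post: str) -> float:
    """Hypothetical scorer: fraction of words matching a harmful-term list."""
    words = post.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in HARMFUL_TERMS for w in words) / len(words)

def triage(post: str, allow_below=0.1, block_above=0.6) -> str:
    """Route a post: 'allow', 'block', or 'human_review'."""
    score = harm_score(post)
    if score <= allow_below:
        return "allow"
    if score >= block_above:
        return "block"
    return "human_review"  # the uncertain band goes to a moderator
```

The design point is the middle band: rather than forcing the model to decide everything, ambiguous content is escalated to humans, which is exactly how AI increases moderation accuracy and moderator productivity at the same time.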


So far, it’s become evident that AI can enable Facebook to comply with regulation in 2 of the 3 key areas it needs to, and Capgemini’s report on Reinventing Cybersecurity using Artificial Intelligence discusses some of the tips and tricks for doing so.



Unfortunately, even with AI, assuring Facebook’s content quality (area 3 in the list above) comes with some challenges. These include detecting the nuances of constantly evolving language, such as sarcasm and the use of emojis, and developing truly transparent & explainable models out of systems consisting of millions of AI neurons and algorithms.

One can understand that despite the introduction of stricter policies (GDPR, the Honest Ads Act etc.) and the more systematic use of AI to support regulating Facebook, the result is still far from perfect.

AI and regulation can partially help complete this exercise, but they cannot do it fully due to the complexities of human nature, language and thought processing.

Simply speaking, there is no easy way for Facebook to monitor and ban hate speech, propaganda and fake news in its newsfeed.

The hard truth about regulating Facebook content and why you should care

There’s only one true way we can stop Facebook showing harmful content, and that would be by executing continuous, systematic censorship of all its material. That would mean the emergence of a ‘police’ responsible for viewing all our posts & videos, examining our private messages and monitoring our clicks and page views. Nor would this come without costs, as it implies a functional downgrade of Facebook, with interactions becoming slower, less frequent and of questionable persistence.

The question here is therefore more an ethical one than a cybersecurity one: How much freedom should we have as users vs. how much power are we ready to grant?

The lesson is that increased regulation – be that in social media or any other sector – always comes with a cost. We cannot stop potential fake news on our screens without censoring the use of Facebook, just as we cannot get the same level of marketing from our bank without sacrificing some of our data privacy.

And despite the unquestionable contribution of AI technology to better protecting personal data, investing in it without a plan will not simply solve all problems.

This is what all businesses, customers and regulatory watchdogs need to appreciate first, before pushing for and enforcing stricter regulation.



Antonis Dimitriadis

Antonis is a Senior Consultant in the People and Organisation practice of Capgemini Invent UK. Having a genuine interest in Cloud-based technology, he enjoys helping businesses drive user adoption and excitement for Office 365. He combines this with a determination to help organisations modernise their ways of working, so that they can enjoy the full benefits of this technology. As a member of the Agile Transformation Enablement team in Capgemini, Antonis is passionate about supporting organisations with their journey to build an agile mindset and approach.
