11th Aug 2020


Four tweets broke Facebook - good news for EU regulators

Facebook once more. It seems that every year a new wave of outrage rains down on the company.

Time and again the company stands accused of facilitating terrorism, racist hate speech or disinformation. Time and again Mark Zuckerberg acknowledges problems, promises improvements and presents data to show the company's efforts. Facebook's reputation suffers.

Last year it dropped out of the list of ten most-valuable global brands, but its profits kept growing.

Now it is different. Powerful companies, such as Unilever, have stopped buying advertising space on Facebook, demanding that the tech company do much more "in the areas of divisiveness and hate speech during this polarised election period".

Other companies followed suit. Their stated reasons may not be genuine, but the public outrage is and it has financial consequences.

Why did it happen now? After all, Facebook was previously criticised for doing too little against extremely aggressive hate speech in Myanmar.

Democracy Reporting International, where I work, showed that this hate speech accompanied and may have facilitated the ethnic cleansing of the Rohingya minority.

But, Myanmar seemed far away to many people and it is not an important market for Facebook. Its content moderation policies reflect American sentiments: While nudity is strictly forbidden, free speech is wide-ranging.

But in the maelstrom of the US's ever-deepening political polarisation, all of democracy's ground rules have become contested, and how to deal with speech is among the most pressing of those questions.

Suddenly, in the aftermath of the murder of George Floyd, many Americans find it unacceptable that social media companies do not curb racist and violent speech online.

This outrage is not the result of some new report that revealed an unknown level of hate speech on Facebook.

The straw that broke the camel's back was Twitter becoming slightly more interventionist and interfering with Donald Trump's communications: it added warnings that four of his tweets appeared to glorify violence or contain disinformation.

Facebook did not do anything about similar Trump messages on its platform, saying that it is important for the public to see what politicians say. That's when the ground started shaking.

Facebook employees protested, some resigned and advertisers pulled back. It seems the cycle of acknowledging problems and promising improvement is broken. Advertisers expect something more tangible.

This may be a moment that offers a chance for the US and the EU to establish more common ground on platform regulation.

On both sides of the Atlantic, regulation has been based on the idea that companies that merely pass on content, so-called intermediaries, are not liable for that content.

There are exceptions, but the no-liability principle stands.

In both jurisdictions the underlying question that needs a new answer is this: why does a company like Facebook still benefit, by and large, from the assumption that it has nothing to do with the messages that are posted on its service? Why is it treated like a phone company that is not liable for what people say during their calls?

'Ranking' is editing

Facebook PR chief Nick Clegg tried to make us believe that it is comparable to a phone company when he recently declared that Facebook merely "holds up a mirror to society". Nothing could be further from the truth. His company decides which messages users see. It "ranks" content.

A phone company does not decide who gets connected to whom, or with what priority; it simply lets the call happen.

Social media companies do not create messages, but they decide which messages we see in our feeds.

They do not create the pixels, but they create a big picture out of these pixels. It's not a mirror, it's a painting. And the painting is different for each user, drawn by algorithms designed to keep him or her on the platform for as long as possible.

Zuckerberg acknowledged as much by saying that Facebook is something between a phone company and a media outlet.

Indeed, it is not a phone company, but it is not a media outlet either.

Zuckerberg is not the editor-in-chief of the 100 billion messages being posted or sent on Facebook every day. But he is not an innocent bystander either, who simply wired people so that they are better connected.

There is no good comparison for what Facebook does. It is new. Social media are fast and have an unimaginable volume of data.

Facebook must deal with 100 billion messages every day, and these have the most impact in the first few hours they are posted.

For a decade now, we have only been scratching the surface of what this really means for speech and democratic discourse online.

EU regulators are now working on overhauling the laws that have governed such services for two decades. Recently we brought together European NGOs, which agreed that transparency is key to addressing this new reality.

To take the next step, companies like Facebook need in-depth, independent, external auditing mechanisms run by strong regulators, going beyond what the 'Stop Hate' campaign demands and well beyond the civil rights audit that has already been undertaken.

The company recently announced that it has taken down 41 accounts involved in "co-ordinated inauthentic behaviour", but what is the significance of this for a network with 2.6 billion users?

Clegg said: "When we find hateful posts on Facebook and Instagram, we take a zero tolerance approach and remove them." But when does the company find hateful posts? How is it trying to find them? 'Zero tolerance' is only a soundbite in an area full of grey zones.

We need to have a detailed understanding of the work of ranking algorithms and the relative size of interactions, instead of vague company statements.

We do not simply believe banks' PR statements; we audit them.

The same needs to happen to social media companies. One can applaud the advertisers that are pulling back from Facebook, but rules on speech should not be made by big business.

They need to be determined by democratically-elected lawmakers.

With many Americans becoming uneasy about the very wide understanding of freedom of speech in the US, Facebook's crisis points the way to a regulatory future in which the US and the EU can align in offering a democratic model of regulation, one that balances freedom of speech with a degree of protection for users and for democratic discourse.

Author bio

Michael Meyer-Resende is the executive director of Democracy Reporting International, a non-partisan NGO in Berlin that supports political participation.


The views expressed in this opinion piece are the author's, not those of EUobserver.


