Column
Four tweets broke Facebook - good news for EU regulators
Facebook once more. It seems that every year a new wave of outrage rains down on the company.
Time and again the company has stood accused of facilitating terrorism, racist hate speech or disinformation. Time and again Mark Zuckerberg has acknowledged problems, promised improvements and presented data to show the company's efforts. Facebook's reputation has suffered.
Last year it dropped out of the list of the ten most-valuable global brands, but its profits kept growing.
Now it is different. Powerful companies, such as Unilever, have stopped buying advertising space on Facebook, demanding that the tech company do much more "in the areas of divisiveness and hate speech during this polarised election period".
Other companies followed suit. Their stated reasons may not be genuine, but the public outrage is and it has financial consequences.
Why did it happen now? After all, Facebook was previously criticised for doing too little against extremely aggressive hate speech in Myanmar.
Democracy Reporting International, where I work, showed that this hate speech accompanied and may have facilitated the ethnic cleansing of the Rohingya minority.
But Myanmar seemed far away to many people, and it is not an important market for Facebook. Its content moderation policies reflect American sentiments: while nudity is strictly forbidden, free speech is wide-ranging.
But in the maelstrom of the US' ever deepening political polarisation, all of democracy's ground rules have become contested, and how to deal with speech is one of democracy's most outstanding questions.
Suddenly, in the aftermath of the murder of George Floyd, many Americans find it unacceptable that social media companies do not curb racist and violent speech online.
This outrage is not the result of some new report that revealed an unknown level of hate speech on Facebook.
The straw that broke the camel's back was Twitter being slightly more interventionist and interfering with Donald Trump's communications: it added a warning that four of his tweets seemed to glorify violence or contained disinformation.
Facebook did not do anything about similar Trump messages on its platform, saying that it is important for the public to see what politicians say. That's when the ground started shaking.
Facebook employees protested, some resigned and advertisers pulled back. It seems the cycle of acknowledging the problem and promises of improvement is broken. Advertisers expect something more tangible.
This may be a moment that offers a chance for the US and the EU to establish more common ground on platform regulation.
On both sides of the Atlantic, regulation has been based on the idea that companies that merely pass on content, so called intermediaries, are not liable for the content.
There are exceptions, but the no-liability principle stands.
In both jurisdictions the underlying question that needs a new answer is this: why does a company like Facebook still benefit, by and large, from the assumption that it has nothing to do with the messages that are posted on its service? Why is it treated like a phone company that is not liable for what people say during their calls?
'Ranking' is editing
Facebook PR chief Nick Clegg tried to make us believe that it is comparable to a phone company when he recently declared that Facebook does no more than "hold up a mirror to society". Nothing could be further from the truth. His company decides which messages users see. It "ranks" content.
A phone company does not decide who gets connected to whom, or with what priority; it simply lets the call happen.
Social media companies do not create messages, but they decide which messages we see in our feeds.
They do not create the pixels, but they create a big picture out of those pixels. It's not a mirror, it's a painting. And the painting is different for each user, drawn by algorithms designed to keep him or her on the platform as long as possible.
Zuckerberg acknowledged as much by saying that Facebook is something between a phone company and a media outlet.
Indeed, it is not a phone company, but it is also not a media outlet.
Zuckerberg is not the editor-in-chief of the 100 billion messages being posted or sent on Facebook every day. But he is not an innocent bystander either, who simply wired people so that they are better connected.
There is no good comparison for what Facebook does. It is new. Social media are fast and have an unimaginable volume of data.
Facebook must deal with 100 billion messages every day, and these have the most impact in the first few hours they are posted.
For a decade now, we have only been scratching the surface of what this really means for speech and democratic discourse online.
EU regulators are now working on overhauling the laws that have governed such services for two decades. Recently we brought together European NGOs, which agreed that transparency is key to addressing this new reality.
As a next step, companies like Facebook need in-depth, independent, external auditing mechanisms enforced by strong regulators, beyond what the 'Stop Hate' campaign demands and well beyond the civil rights audit that has been undertaken.
The company recently announced that it has taken down 41 accounts involved in "co-ordinated inauthentic behaviour", but what is the significance of this for a network with 2.6 billion users?
Clegg said: "When we find hateful posts on Facebook and Instagram, we take a zero tolerance approach and remove them." But when does the company find hateful posts? How is it trying to find them? 'Zero tolerance' is only a soundbite in an area full of grey zones.
We need to have a detailed understanding of the work of ranking algorithms and the relative size of interactions, instead of vague company statements.
We do not simply believe banks' PR statements; we audit them.
The same needs to happen to social media companies. One can applaud the advertisers that are pulling back from Facebook, but rules on speech should not be made by big business.
They need to be determined by democratically-elected lawmakers.
With many Americans becoming uneasy with the very wide understanding of freedom of speech in the US, Facebook's crisis points the way to a regulatory future in which the US and the EU can align, offering a democratic model of regulation that balances freedom of speech with a degree of protection for users and for democratic discourse.
Author bio
Michael Meyer-Resende is the executive director of Democracy Reporting International, a non-partisan NGO in Berlin that supports political participation.
Disclaimer
The views expressed in this opinion piece are the author's, not those of EUobserver.
Related stories
- Facebook to retroactively alert users of bogus content
- Only EU can tame Zuckerberg's Facebook
- EU states given right to police Facebook worldwide
- Why EU must limit political micro-targeting
- Facebook whistleblower: EU rules can be 'game-changer'
- ECJ to clarify power of Belgian watchdog on Facebook cookies