Facebook to retroactively alert users of bogus content
Social media giant Facebook will retroactively issue alerts on coronavirus disinformation following pressure from activists.
The US firm announced the move on Thursday (16 April), covering content it considers could cause "imminent physical harm".
In a blog post, Facebook vice-president Guy Rosen said the company wants to connect people who may have interacted with harmful misinformation about the virus with the truth.
"These messages will connect people to Covid-19 myths debunked by the WHO [World Health Organisation] including ones we've removed from our platform for leading to imminent physical harm," he wrote.
The myth-busting messages are set to roll out in the next few weeks.
The announcement came as Avaaz, a global civic organisation, said millions of Facebook users were being exposed to coronavirus misinformation, without any warning from the platform.
The organisation has described Facebook as "the epicentre of coronavirus misinformation".
Fadi Quran, campaign director at Avaaz, told EUobserver that the scale of disinformation, plus gaps in Facebook's current policy to protect its users against it, was surprising.
"I think the positive thing here is that Facebook is now implementing a solution, correcting the record, alerting people, that can inoculate against this problem," he said.
The Avaaz report notes that it could take weeks after disinformation had been flagged before Facebook warned viewers it was fake or removed it.
The time delay meant the content was being shared and viewed, sometimes millions of times. It also meant that those who had viewed it had no way of knowing it was fake.
Quran said one example included fake advice, purportedly from the UK's National Health Service, claiming that those who can hold their breath for ten seconds do not have the virus.
"You can imagine someone in Brussels or somebody in Spain believes that and actually has the virus but decides to go and visit their grandmother," he warned.
Spanish and Italian-language users were also more likely to be duped, says the report, because of Facebook's broad failure to slap warning labels on the content.
Facebook's CEO Mark Zuckerberg earlier this year promised to crack down on disinformation, removing and downgrading such posts, but with mixed results.
The tech firm has a network of over 55 fact-checking partners covering over 45 languages to debunk claims.
It has also started inserting pop-up links to authoritative information whenever someone searches for information on the pandemic, and promotes content from the UN's World Health Organization.
A similar strategy is applied to Messenger, Instagram and WhatsApp.
But Avaaz found those efforts lacking, even for content debunked by Facebook's own fact-checking program.
It analysed a sample of over 100 pieces of Facebook coronavirus misinformation across six languages (Arabic, English, French, Italian, Portuguese, Spanish), which it says were shared 1.7 million times and viewed an estimated 117 million times.