Facebook users have slammed the social media giant after some received warnings that they might have seen ‘extremist content’ on the site, leading to concerns about privacy and censorship.
Screenshots shared on Twitter on Thursday showed a notice asking ‘Are you concerned that someone you know is becoming an extremist?’ and another that alerted users ‘you may have been exposed to harmful extremist content recently.’ Both included links to ‘get support.’
The world’s largest social media network has long been under pressure from lawmakers and civil rights groups to combat extremism on its platforms, including from U.S. domestic movements involved in the January 6 Capitol riot, in which groups supporting former President Donald Trump tried to stop the U.S. Congress from certifying Joe Biden‘s victory in the November election.
The apparent crackdown has drawn ire from Conservative commentators and social media users concerned they might be unfairly targeted by the social network, which they accuse of being Left-leaning.
‘Nice to see Facebook going full Stasi/NKVD. Learning from the former East Germans who now advise leftist ‘anti-extremist’ organizations,’ Malaysian commentator Ian Miles Cheong wrote in a Twitter post.
A Republican delegate echoed concerns that the tool could be used to target conservative voices.
Nick Freitas, a member of the Virginia House of Delegates, shared an image of the alert, writing:
‘Yes…actually I have a real concern that some leftist technocrats are creating an Orwellian environment where people are being arbitrarily silenced or banned for saying something the ‘thought police’ doesn’t like.’
Writer Alex Berenson also hit out at Facebook, saying the platform needed to either take responsibility for all content it hosts or ‘STAY OUT OF THE WAY’.
‘Yeah, I’m becoming an extremist. An anti-@Facebook extremist. ‘Confidential help is available?’ Who do they think they are? Either they’re a publisher and a political platform legally liable for every bit of content they host, or they need to STAY OUT OF THE WAY. Zuck’s choice,’ Berenson wrote on Twitter.
Facebook said the small test, which is only on its main platform, was running in the United States as a pilot for a global approach to prevent radicalization on the site.
‘This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,’ said a Facebook spokesperson in an emailed statement to Reuters.
‘We are partnering with NGOs and academic experts in this space and hope to have more to share in the future.’
It said the efforts were part of its commitment to the Christchurch Call to Action, a campaign involving major tech platforms to counter violent extremist content online that was launched following a 2019 attack in New Zealand that was live-streamed on Facebook.
Facebook said in the test it was identifying both users who may have been exposed to rule-breaking extremist content and users who had previously been the subject of Facebook’s enforcement.
The company, which has tightened its rules against violent and hate groups in recent years, said it pro-actively removes some content and accounts that violate its rules before the material is seen by users, but that other content may be viewed before enforcement action is taken.
Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), said the move was an acknowledgement by Facebook of the seriousness of extremist content, but cautioned that the notifications raised concerns about user privacy.
‘Facebook has just admitted what we’ve all known for years—exposure to extremist content radicalises people, and Facebook itself has been a willing participant in allowing it to spread,’ Ahmed said.
‘Facebook themselves have been aware of the problem for a long time—but because they were the ones profiting from traffic, they simply decided to turn a blind eye. In the meantime, lives have been put in danger.
‘Facebook now has some serious questions to ask over people’s privacy. Specifically, how do they choose who gets these pop-ups?
‘Are they tapping into specific users’ timelines, and are they tracking specific posts to see who’s engaging with them? Given the security implications, we have an urgent need to know.’
It is not clear how Facebook determines who receives the notifications.