Many of us have come across dodgy fake news on Twitter, but according to a new study, offering corrections may only make the problem of misinformation worse.
Researchers at the University of Exeter and MIT Sloan in Massachusetts performed an experiment on the site using specially created accounts.
In replies to ‘flagrantly false’ tweets other users posted about politics, they offered ‘polite corrections’ with links to solid evidence.
But they found this had negative consequences, leading to even less accurate news being retweeted and ‘greater toxicity’ from those being corrected.
Misinformation has been a constant issue for social media giants including Twitter and Facebook – particularly in the last year regarding coronavirus and vaccinations.
Twitter has removed more than 8,400 tweets and challenged 11.5 million accounts worldwide due to Covid-19 misinformation, it revealed in March.
Posting ‘polite corrections’ to misinformation on Twitter can lead to less accurate tweets and ‘greater toxicity’, say experts at the University of Exeter and MIT Sloan in Massachusetts
But according to the lead author of the new study, Dr Mohsen Mosleh at the University of Exeter Business School, the findings were ‘not encouraging’, as they suggest one of the tools for combating misinformation doesn’t actually work.
The researchers think people should be wary about ‘going around correcting each other online’.
‘After a user was corrected they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language,’ said Dr Mosleh.
To conduct the experiment, the researchers identified 2,000 Twitter users, representing a mix of political persuasions, each of whom had tweeted one of 11 frequently repeated false news articles.
All of those articles had been debunked by Snopes, a website that describes itself as the internet’s ‘definitive fact-checking resource’.
Examples include the incorrect assertion that Ukraine donated more money than any other nation to the Clinton Foundation, and the false claim that Donald Trump, as a landlord, once evicted a disabled combat veteran for owning a therapy dog.
The research team then created a series of Twitter bot accounts, each of which had existed for at least three months and gained at least 1,000 followers, and which appeared to other Twitter users to be genuine human accounts.
Upon finding any of the 11 false claims being tweeted out, the bots would then send a reply along the lines of, ‘I’m uncertain about this article – it might not be true. I found a link on Snopes that says this headline is false.’
The reply would also link to the correct information.
Homepage of fact-checking website Snopes, which was used in the study. Snopes says: ‘When misinformation obscures the truth and readers don’t know what to trust, Snopes’ fact-checking and original, investigative reporting lights the way to evidence-based and contextualised analysis’
The accuracy of the news sources the Twitter users retweeted declined by roughly 1 per cent in the 24 hours after being corrected, the researchers found.
New Twitter feature prompts users to review their tweets
Twitter has rolled out a feature that prompts users to review their ‘potentially harmful or offensive’ replies before posting them.
The feature uses artificial intelligence (AI) to detect harmful language in a freshly-written reply to another user, before it’s posted.
It sends users a pop-up notification asking them if they want to review their message before posting.
Twitter says the prompt gives users the opportunity to ‘take a moment’ to consider the tweet by making edits or deleting the message altogether.
The team also evaluated more than 7,000 retweets with links to political content made by the Twitter accounts in the same 24 hours.
They found an increase of more than 1 per cent in the ‘partisan lean’ – the tendency to favour a party or person – of the retweeted content, as well as a 3 per cent rise in the ‘toxicity’ of the language being used.
However, in all these areas – accuracy, partisan lean and the language being used – there was a distinction between retweets and the primary tweets being written by the Twitter users.
Retweets (not quote tweets, i.e. retweets with comments) degraded in news quality, while original or ‘primary’ tweets by the accounts being studied did not.
This may be due to Twitter users spending a relatively long time crafting primary tweets, and little time making decisions about retweets.
Twitter’s retweeting capability could therefore be a major culprit for the spread of fake news on the platform.
‘Our observation that the effect only happens to retweets suggests that the effect is operating through the channel of attention,’ said co-author Professor David Rand from the MIT Sloan School of Management.
‘We might have expected that being corrected would shift one’s attention to accuracy.
‘But instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy – perhaps to other social factors such as embarrassment.
‘This shows how complicated the fight against misinformation is, and cautions against encouraging people to go around correcting each other online.’
Interestingly, the effects were slightly larger when being corrected by an account that identified with the same political party as the user.
The new study has been published online in CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.
Its findings seemingly run contrary to a previous paper by Dr Mosleh and the research team, published in Nature in March.
This previous study showed that neutral, non-confrontational reminders about the concept of accuracy can increase the quality of the news people share on social media.
It also suggested the large majority of people want to share only accurate content online.
‘It’s not like most people are just saying, “I know this is false and I don’t care”,’ said Professor Rand at the time.
Five strikes and you’re OUT! Twitter launches new ‘strike’ system for tweets that contain misinformation about Covid-19 and will attach warning labels to anti-vax posts
In March 2021, Twitter launched a new ‘strike’ system for users who post tweets containing misinformation about Covid-19, including vaccines.
The strike policy punishes repeat offenders with temporary suspensions, which could lead to permanent suspension from the platform after five strikes.
The social network is also expanding its use of warning labels to tweets that may contain misleading information about the Covid-19 vaccines.
Offending tweets will appear with the message: ‘This tweet may be misleading. Find out why health officials consider Covid-19 vaccines safe for most people.’
Labels providing additional context are already attached to tweets with disputed information about the pandemic, but this is the first time the firm has focused on posts about vaccines specifically.
Twitter’s new updates come amid ongoing concern about the spread of anti-vaccination material on social media.
Wild and untrue claims made about vaccines online include the claim that they have been created by Microsoft founder Bill Gates to inject microchips into people, and that coronavirus has been made up as part of a ‘plot to enforce vaccination’.