WhatsApp’s promise of private messages with end-to-end encryption appears to have been false, an investigation revealed.
When Facebook purchased the popular messaging app WhatsApp for $19 billion in 2014, both companies assured users that neither could access their data, but ProPublica reporters found those claims were untrue.
Since then, Facebook has not only hired 1,000 workers to sift through millions of messages on WhatsApp, which has two billion users around the world, but it has also shared some of those messages with law enforcement and the U.S. Department of Justice to help put people in prison, ProPublica claims.
In the report, ProPublica found that Facebook had hired contractors in Austin, Texas; Dublin, Ireland; and Singapore to look at millions of pieces of users’ content.
‘These hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems,’ the report detailed.
‘These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.’
Will Cathcart, Head of WhatsApp, said the news was a non-issue.
‘I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,’ Cathcart said.
WhatsApp helped prosecutors build a high-profile case against Natalie Edwards, a U.S. Treasury Department employee who allegedly leaked confidential documents to BuzzFeed about how dirty money flows through U.S. banks, according to ProPublica.
Edwards was sentenced to six months in prison after pleading guilty to a conspiracy charge. She began serving her sentence in June.
The report also found more than a dozen instances since 2017 in which WhatsApp data was used to put people in jail.
WhatsApp Head Will Cathcart, right, said he sees no issue with the platform sharing flagged data with law enforcement. Facebook CEO Mark Zuckerberg, left, led the $19 billion acquisition of WhatsApp in 2014 and said its users’ data would remain private
WhatsApp, which boasts more than 2 billion global users, was supposedly keeping messages private and away from the hands of Facebook, unlike its sister company, Instagram
Pictured, Facebook’s headquarters in Dublin, Ireland. The tech giant has employees in Dublin and other major cities who sift through data from WhatsApp
WhatsApp Director of Communications Carl Woog told ProPublica that Facebook had hired the employees to identify and remove ‘the worst’ abusers from the platform, but said he agrees with Cathcart and does not consider the work to be content moderation.
‘The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse,’ WhatsApp said in a statement.
WhatsApp users appeared unfazed by the news, tweeting that it was no surprise that a large tech company owned by Facebook would monitor user messages.
One user wrote, ‘I thought we all knew what Facebook was doing?’
‘None of these services are truly private. Don’t believe that. In the end they all abuse their powers,’ another Twitter user wrote.
People tweeted that they were not surprised by the fact that WhatsApp was sharing data
Will Cathcart, left, discussing his platform with the Australian Strategic Policy Institute in July. He described how WhatsApp flags possible child-exploitation imagery but did not mention that the data could be examined by contractors hired by Facebook
Facebook claims that messages are examined only when they are flagged for inappropriate content, and that personal calls and other messages remain out of the company’s reach.
An unnamed whistleblower filed a complaint with the U.S. Securities and Exchange Commission last year, alleging that WhatsApp’s boasts of protecting users’ privacy and data were false.
The SEC has said it has not seen the complaint and has taken no action on the issue.
While Facebook has refrained from detailing how it monitors WhatsApp messages, it openly publishes the actions it takes on its flagship service and on Instagram.
The company has said it employs some 15,000 moderators to filter the millions of posts on the two platforms.
From April to June alone, the company took down more than 32 million posts depicting adult nudity and sexual activity on Facebook. In the same period, it removed 28 million posts depicting child abuse and exploitation.
It also took action against more than 1.8 million such posts on Instagram.
Facebook hands over ‘at least some data’ in 95% of cases where law enforcement requests information on its users.
Cathcart has said that WhatsApp reported about 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020.
During an interview with the Australian Strategic Policy Institute, Cathcart attributed the reports to the platform’s AI and to users who flag content, but made no mention of the private contractors who would have examined the posts.