Apple has indefinitely delayed the roll-out of controversial child safety features following a furious backlash from its users.
The contentious plans, revealed by the tech giant on August 5, involve scanning iPhones for child abuse images and reporting ‘flagged’ owners to the police.
It had planned to roll out the feature for iPhones, iPads and Macs with software updates later this year in the US.
But Apple said on Friday it would take more time to collect feedback and improve the proposed features, after criticism of the system on privacy and other grounds both inside and outside the company.
However, child protection agencies have expressed their disappointment regarding Apple’s decision today, with one criticising the assumption that ‘child safety is the trojan horse for privacy erosion’.
As of Friday, Apple’s original statement announcing the plans, posted on its website last month, now has a short but important amendment at the top.
‘Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of child sexual abuse material [CSAM],’ it says.
‘Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.’
Under the plans, Apple would automatically scan iPhones and cloud storage for child abuse images and report ‘flagged’ owners to the police after a company employee had looked at their photos.
The new safety tools would also be used to look at photos sent by text message to protect children from ‘sexting’, automatically blurring images Apple’s algorithms could detect as CSAM.
The iPhone maker said last month that the detection tools had been designed to protect user privacy and wouldn’t allow the tech giant to see or scan a user’s photo album.
Instead, the system would look for matches, securely on the device, against a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organisations.
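The matching step can be sketched in a few lines of code. This is a minimal illustration only: Apple’s system used a perceptual ‘NeuralHash’ (which tolerates minor image edits) plus cryptographic protocols to keep both the database and the results private, whereas this sketch substitutes an ordinary SHA-256 digest and a plain lookup set, and every name and input below is hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in "digital fingerprint". Apple's real system used a
    # perceptual hash (NeuralHash); SHA-256 is used here only to keep
    # the sketch self-contained and runnable.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of fingerprints of known images, standing in
# for the hashes supplied by child safety organisations.
known_hashes = {fingerprint(b"example-known-image")}

def matches_database(image_bytes: bytes) -> bool:
    # On-device check: only the fingerprint is compared, never the
    # photo itself, mirroring the privacy claim described above.
    return fingerprint(image_bytes) in known_hashes

print(matches_database(b"example-known-image"))      # True
print(matches_database(b"unrelated-holiday-photo"))  # False
```

Because only fingerprints are compared, a match reveals nothing about photos that are not in the database, which is the basis of Apple’s claim that the system would not scan a user’s photo album.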
As well as looking for photos on the phone, cloud storage and messages, Apple’s personal assistant Siri would be taught to ‘intervene’ when users try to search topics related to child sexual abuse.
The new tools were set to be introduced later this year as part of the iOS and iPadOS 15 software update due in the autumn.
They were initially set to be introduced in the US only, but with plans to expand further over time.
Critics had argued the entire set of tools could be exploited by repressive governments looking to find other material for censorship or arrests.
Had the tools been implemented, it would also have been impossible for outside researchers to check whether Apple was only checking a small set of on-device content.
Apple’s plans sparked a global backlash from a wide range of rights groups, with employees also criticising the plan internally.
Greg Nojeim of the Center for Democracy and Technology in Washington DC said: ‘Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship.’
Using ‘hashes’ or digital fingerprints, images in a CSAM database would be compared to pictures on a user’s iPhone. Any match would then be sent to Apple and, after being reviewed by a human, on to the National Center for Missing and Exploited Children.
Security researcher Alec Muffett said Apple was ‘defending its own interests, in the name of child protection’ with the plans and ‘walking back privacy to enable 1984’.
Muffett raised concerns the system would be deployed differently in authoritarian states, asking ‘what will China want [Apple] to block?’
Matthew Green, a top cryptography researcher at Johns Hopkins University, also warned that the system could be used to frame innocent people by sending them seemingly innocuous images designed to trigger matches for child pornography.
That could fool Apple’s algorithm and alert law enforcement.
‘Researchers have been able to do this pretty easily,’ Green said of the ability to trick such systems.
Other abuses could include government surveillance of dissidents or protesters. ‘What happens when the Chinese government says, “Here is a list of files that we want you to scan for”,’ Green asked.
‘Does Apple say no? I hope they say no, but their technology won’t say no.’
‘This will break the dam — governments will demand it from everyone,’ Green said.
‘The pressure is going to come from the UK, from the US, from India, from China. I’m terrified about what that’s going to look like’, he told WIRED.
Ross Anderson, professor of security engineering at Cambridge University, branded the plan ‘absolutely appalling’.
‘It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of our phones and laptops’, he said.
However, other experts welcomed Apple’s plans. Dr Rachel O’Connell, founder and CEO of verification consultancy Trust Elevate, called Apple’s child protections proposal ‘a scalable solution that does not break encryption’.
‘[It] respects user privacy while at the same time significantly bearing down on certain types of criminal behaviour, in this case terrible crimes which harm children,’ she said.
‘The idea that child safety is the trojan horse for privacy erosion is a trope that privacy advocates expound.
‘This creates a false dichotomy and shifts the focus away from the children and young people at the front line of dealing with adults with a sexual interest in children, who often engage in grooming children and soliciting them to produce child sexual abuse material.’
Meanwhile, Andy Burrows, the head of child safety online policy at NSPCC, called Apple’s decision ‘an incredibly disappointing delay’.
‘Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard,’ he said.
‘They sought to adopt a proportionate approach that scanned for child abuse images in a privacy preserving way, and that balanced user safety and privacy.
‘We hope Apple will consider standing their ground instead of delaying important child protection measures in the face of criticism.’
Apple had been playing defence on the plan for weeks, and had already offered a series of explanations and documents to show that the risks of false detections were low.
Apple boasted that ‘the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year’.
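That one-in-a-trillion figure rests on requiring several matches before an account is flagged, so that per-image false-match rates compound. The numbers below are purely illustrative assumptions, not Apple’s published parameters, and the calculation assumes false matches occur independently:

```python
# Purely illustrative: how a per-image false-match rate compounds when
# a threshold of matches is required before an account is flagged.
# These numbers are assumptions, not Apple's published parameters.
per_image_false_match = 1e-3  # assumed chance a single photo falsely matches
threshold = 4                 # assumed number of matches needed to flag

# Assuming independence, all `threshold` photos must falsely match at once.
account_false_flag = per_image_false_match ** threshold
print(account_false_flag)  # on the order of one in a trillion (1e-12)
```

Even a modest per-image error rate shrinks rapidly when several simultaneous matches are required, which is the intuition behind headline figures of this kind; critics noted the real-world rate depends on how valid the independence assumption is.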
Craig Federighi, Apple’s senior vice president of software engineering, told The Wall Street Journal in August that the AI-driven program would be protected against misuse through ‘multiple levels of auditability’.
‘We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world,’ Federighi said.