Facebook flooded with child abuse images since switching off scanning software, says NSPCC

Charity accuses tech giant of allowing hundreds of thousands of illegal images to be uploaded since it deactivated detection software

Facebook has allowed more than half a million child abuse images to go undetected since switching off scanning software, the NSPCC has estimated.

The tech company last year turned off software in the EU that blocks known indecent images of children from being uploaded to its apps, in response to new European privacy laws.

However, Facebook has yet to resume scanning, even though the bloc has since clarified that it is not banned under the new regulations.

The NSPCC warned that this means around 1,600 abuse images are going undetected every day, an estimate drawn from the numbers Facebook was reporting before it halted scanning.
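As a rough consistency check, the two figures fit together (the roughly 330-day window, from December 2020 to early November 2021, is an assumption rather than a number from the NSPCC):

```latex
% Daily rate multiplied by the assumed length of the pause:
$1{,}600\ \text{images/day} \times 330\ \text{days} \approx 528{,}000$
% ...consistent with the "more than half a million" estimate.
```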

Facebook has said the delay was due to it building “new technical measures” and that it plans to resume “some” scanning from this week.

The criticism comes after a torrid month for Facebook, which was plunged into crisis by a former employee turned whistleblower, Frances Haugen.

Ms Haugen, a former product manager who worked on tackling harmful content, has accused the company of “putting profit” ahead of the safety of its users.

Appearing before MPs last month, she said: “Facebook has been unwilling to accept even small slivers of profit being sacrificed for safety.”

Facebook has defended its record, saying it has spent $13 billion (£9 billion) on safety and security measures since 2016.

The scanning row centres on software that tech companies widely use to digitally fingerprint known child abuse images and prevent them from being uploaded to their services.

The first version of the technology was developed by Microsoft in 2008 in response to the explosion of child abuse imagery being shared digitally with the rise of the internet.
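In outline, such systems compute a fingerprint (hash) of each uploaded image and check it against a database of fingerprints of known abuse material. Below is a minimal sketch in Python, assuming a pre-built blocklist of hashes and using a byte-exact cryptographic hash as a stand-in for the perceptual hashes (such as Microsoft’s PhotoDNA) that real systems use so resized or re-encoded copies still match:

```python
import hashlib

# Hypothetical blocklist of fingerprints of known abuse images, as would be
# supplied by a clearing house. Real deployments use perceptual hashes rather
# than SHA-256, so altered copies of a known image still match.
KNOWN_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest identifying these exact image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """True if the image matches a known fingerprint and should be blocked."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

In practice a match would both block the upload and trigger a report to a body such as the National Center for Missing & Exploited Children; the byte-exact hash above is used only to keep the sketch self-contained.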

Facebook, which has more than three billion users worldwide, blocks and reports millions of child abuse images every year using this technology. In 2020, it made 20 million such reports to the US watchdog, the National Center for Missing & Exploited Children, up from just under 16 million in 2019.

Last December, Facebook paused its scanning in the EU (not including the UK, which had by then left the bloc) in response to the bloc’s ePrivacy Directive, which banned the automated monitoring of people’s online communications.

However, other tech companies, such as Microsoft and Google, did not pause their abuse scanning operations, deeming them not to fall under the regulations.

In April, the EU said the regulations would be changed to make clear that they did not prevent scanning, and the legal text was altered in August.

The NSPCC warned that child abuse material shared in the EU also affects UK users, as well as UK victims who may feature in some of the material.

'Cavalier attitude towards children's safety'

Andy Burrows, head of child safety online policy at the NSPCC, said: “It’s staggering that more than half a million child abuse reports could have been missed because of Facebook’s decision to stop scanning messages for child sexual abuse content in the EU.

“Every day Facebook delays with no good reason another estimated 1,600 child abuse reports go unchecked. It beggars belief that they switched their technology off overnight but months after they got the legal certainty they said they needed we are still waiting for it to be turned back on.

“The fact Facebook took this step in the first place when every other company continued to scan for abuse shows the cavalier attitude towards children’s safety at the top of the company. Tough regulation can’t come soon enough.”

Facebook’s failure to resume scanning has prompted concern from other children’s organisations, with the global charity Ecpat saying earlier this month that reports of child abuse found in EU member states were down 76 per cent by the end of July.

A Facebook spokesman said: “Our fight against child exploitation has continued even as we comply with the ePrivacy Directive. This includes continuing to scan public surfaces on our apps, stopping adults from messaging children they’re not connected with and actioning user reports.

“Since the interim derogation came into effect in August we’ve been urgently building the new technical measures it requires, such as specific user notifications. As we update our systems we’ve been consulting with the Data Protection Commission and have already informed them that we plan to resume scanning on some of our surfaces starting Monday”.
