Cutting corners on child safety —

Twitter suspended 400K for child abuse content but only reported 8K to police

Twitter’s internal detection of child sexual abuse materials may be failing.

Last week, Twitter Safety tweeted that the platform is now “moving faster than ever” to remove child sexual abuse materials (CSAM). It seems, however, that’s not entirely accurate. Child safety advocates told The New York Times that after Elon Musk took over, Twitter started taking twice as long to remove CSAM flagged by various organizations.

The platform has since improved and is now removing CSAM almost as fast as it was before Musk’s takeover—responding to reports in less than two days—The Times reported. But there still seem to be issues with its CSAM reporting system that continue to delay response times. In one concerning case, a Canadian organization spent a week notifying Twitter daily—as the illegal imagery of a victim younger than 10 spread unchecked—before Twitter finally removed the content.

"From our standpoint, every minute that that content's up, it's re-victimizing that child," Gavin Portnoy, vice president of communications for the National Center for Missing and Exploited Children (NCMEC), told Ars. "That's concerning to us."

Twitter trust and safety chief Ella Irwin tweeted last week that combating CSAM is “incredibly hard,” but remains Twitter Safety’s “No. 1 priority.” Irwin told The Times that despite the challenges, Twitter agrees with experts and is aware that much more can be done to proactively block exploitative materials. Experts told the Times that understaffing of Twitter’s trust and safety team is a top concern, and sources confirmed that Twitter has stopped investing in partnerships and technology that had previously helped the platform remove CSAM more quickly.

“In no way are we patting ourselves on the back and saying, ‘Man, we’ve got this nailed,’” Irwin told The Times.

Twitter did not respond to Ars’ request for comment.

Red flags raised by Twitter’s low-budget CSAM strategy

Twitter Safety tweeted that in January, Twitter suspended approximately 404,000 accounts that created, distributed, or engaged with CSAM. This was 112 percent more account suspensions than the platform reported in November, Twitter said, backing up its claim that it has been moving “faster than ever.”

In the same tweet thread, Twitter said the company has been “building new defenses that proactively reduce the discoverability” of tweets spreading CSAM. The company offered little detail on what those new defenses include, saying only that one measure against child sexual exploitation (CSE) “reduced the number of successful searches for known CSE patterns by over 99% since December.”

Portnoy told Ars that NCMEC is concerned that what Twitter is publicly reporting doesn't match what NCMEC sees in its own Twitter data from its cyber tipline.

"You've got Ella Irwin out there saying that they're taking down more than ever, it's priority number one, and what we're seeing on our end, our data isn't showing that," Portnoy told Ars.

Other child safety organizations have raised red flags over how Twitter has been handling CSAM during this same period. Sources told the Times that Twitter has stopped paying for CSAM-detection software built by the anti-trafficking organization Thorn and has ended its collaboration on improving that software. Portnoy confirmed to Ars that NCMEC and Twitter remain at odds over Twitter’s policy of not reporting to authorities all suspended accounts that spread CSAM.

Of the roughly 404,000 accounts suspended in January, Twitter reported approximately 8,000 to authorities. Irwin told the Times that Twitter is only obligated to report suspended accounts to authorities when the company has “high confidence that the person is knowingly transmitting” CSAM. Accounts claiming to sell or distribute CSAM off of Twitter—but not directly posting CSAM on Twitter—seemingly don’t meet Twitter’s threshold for reporting to authorities. Irwin confirmed that most Twitter account suspensions “involved accounts that engaged with the material or were claiming to sell or distribute it, rather than those that posted it,” the Times reported.

Portnoy said that the reality is that these account suspensions "very much do warrant cyber tips."

"If we can get that information, we might be able to get the child out of harm's way or give something actionable to law enforcement, and the fact that we're not seeing that stuff is concerning," Portnoy told Ars.

The Times wanted to test how well Twitter was combating CSAM, so the newspaper created an automated computer program to detect CSAM without displaying any illegal imagery, partnering with the Canadian Center for Child Protection to cross-reference the material it found against illegal content previously identified in the center’s database. The center’s executive director, Lianna McDonald, tweeted this morning to encourage more child safety groups to speak out about Twitter seemingly becoming a platform of choice for dark web users, who openly discuss strategies for finding CSAM on Twitter.

“This reporting begs the question: Why is it that verified CSAM (i.e., images known to industry and police) can be uploaded and hosted on Twitter without being immediately detected by image or video blocking technology?” McDonald tweeted. “In addition to the issue of image and video detection, Twitter also has a problem with the way it is used by offenders to promote, in plain sight, links to CSAM on other websites.”
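
McDonald’s question refers to the hash-matching systems most large platforms run at upload time: the service computes a fingerprint of each uploaded file and checks it against databases of fingerprints for previously verified material maintained by organizations such as NCMEC and the Canadian center. The Python sketch below illustrates only that general idea, using hypothetical names and a plain cryptographic hash in place of the perceptual hashes real systems rely on; it is not a description of Twitter’s implementation.

    # Minimal sketch of hash-based "known image" blocking. All names are hypothetical;
    # production systems use perceptual hashes (e.g., Microsoft's PhotoDNA) rather than
    # SHA-256 so that resized or re-encoded copies of an image still match.
    import hashlib

    # Fingerprints of previously verified illegal images, normally loaded from an
    # industry hash list supplied by organizations such as NCMEC (empty placeholder here).
    KNOWN_IMAGE_HASHES: set[str] = set()

    def should_block(upload_bytes: bytes) -> bool:
        """Return True if the uploaded file matches a previously verified image."""
        fingerprint = hashlib.sha256(upload_bytes).hexdigest()
        return fingerprint in KNOWN_IMAGE_HASHES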

While Irwin seems confident that Twitter is “getting a lot better” at moderating CSAM, some experts told the Times that Twitter wasn’t taking even basic steps to prioritize child safety, despite the company’s claims. Lloyd Richardson, the technology director at the Canadian center, which ran its own scan for CSAM on Twitter to complement the Times’ analysis, told the Times that “the volume we’re able to find with a minimal amount of effort is quite significant.”
