Twitch has been hounded for months after a September report showed just how many sexual predators were stalking the streaming platform's halls, targeting children that were not supposed to be there in the first place. Now the company says it's developed systems to combat child sexual abuse material, though it's not exactly sharing the details.
In a Tuesday blog post, Twitch said it is working on a "constantly evolving approach" to limit harm to young people. The user-streaming platform said it's creating phone verification requirements for "potentially vulnerable accounts," meaning accounts made by young people pretending to be over 13, before they can livestream. The company said it's also working to delete more accounts belonging to users under 13 years old.
In addition, the Amazon-owned platform has blocked certain search terms in the in-app browser. The company seemed to be purposefully vague about which search terms it has blocked and how its phone verification system would work. The platform mentioned it is working on creating a machine learning system to detect harm in written text on Twitch, but the company offered no specifics about its planned development or implementation.
Gizmodo reached out to Twitch to see if the company would expound on how its phone verification system would work, though a company spokesperson responded that Twitch wanted to be purposefully vague.
"We always need to be careful with the amount of specificity we offer you [referring to Twitch users and the public] in order to avoid giving bad actors information they could use to evade our efforts, and this is especially true when it comes to child predators," the company wrote on its blog. "There is no single fix to prevent predation."
This comes just a few months after a massive report from Bloomberg showed child predators were using Twitch's services to track and interact with young children streaming on the platform. Bloomberg's data, based on the work of an anonymous internet extremism researcher, showed that some predatory accounts were targeting hundreds of thousands of young people's accounts. The researcher found streams of young people and tracked which accounts were also following other children. There were hundreds of these kinds of predatory accounts following more than 1,000 children each.
The company mentioned that it has updated the privacy settings on its internal Whispers messaging system; in testing, the option blocking strangers from messaging new users was turned on by default. Twitch also claimed it will prioritize user reports that involve young users.
Twitch's user policy says account-holders must be 13 years or older, and there's the usual slate of banned violent or sexual content, but just like many other digital platforms, it has struggled to keep child sex abuse material from appearing on the site. Twitter has a problem with this, too, which has led to advertisers abandoning the site. The National Center for Missing and Exploited Children's 2021 report on instances of child sexual abuse material showed Twitch ranked rather low compared to other services, with 6,629 reports compared to TikTok's 154,618 or WhatsApp's 1.3 million. Twitch mentioned its law enforcement response team shares information with police and the NCMEC.
Still, Twitch said it wanted to crack down even more on grooming on the platform, which involves adults interacting with young people before manipulating them and eventually exploiting or abusing them. It's hard to tell without specifics, but Bloomberg's report mentions Twitch's existing tools have previously proved incapable of stopping adults from contacting and, in some cases, grooming young people. Twitch Chief Product Officer Tom Verrilli told Bloomberg that "even one single instance of grooming is abhorrent to us."