Twitch on Tuesday introduced a new way for streamers to tackle accounts that evade channel-level bans. The automated tool, called "Suspicious User Detection," can spot users who attempt to bypass bans, giving anyone moderating a Twitch channel more leverage to deal with potentially disruptive behavior before it starts. The company first announced back in August that the ban evasion detection tool was on its way.
The company says it developed the tool in direct response to community feedback calling for more robust moderation options to manage users who return with new accounts after being banned. Once an account is flagged as a "possible" or "probable" ban evader, a channel's moderators can choose to take action against it manually.
All messages sent from an account flagged as a probable ban evader will be automatically filtered out of chat, pending review by a moderator. Channels that want to moderate more aggressively can enable the same setting for accounts flagged as possible evaders. Mods can also manually add users to the suspicious accounts list to keep a closer eye on them.
Twitch notes that, as with any automated moderation tool, false positives are possible, though it hopes to strike a balance between proactive detection powered by machine learning and human intervention. "You are the community expert and you have to decide who can participate," Twitch wrote in a blog post, adding that the system will improve over time as it learns from moderators' decisions.
Twitch sees the new ban evasion detection system as a complementary layer alongside AutoMod, which gives moderators a way to review potentially harmful messages in chat, and phone-verified chat, an option Twitch added last month that requires users to verify their accounts before chatting. Twitch users can create five accounts with a single phone number, but a channel ban now affects all accounts linked to that number, closing one of the easiest workarounds for anyone looking to circumvent the platform's policies.
Twitch streamers have long pushed the company to do more to protect creators, especially those who are most vulnerable to online harassment. This year alone, the #ADayOffTwitch and #TwitchDoBetter campaigns raised the visibility of marginalized creators facing widespread abuse on the platform, prompting the company to take action.
“We’ve seen a lot of conversations about bots, hate raids and other forms of harassment targeting marginalized creators,” the company tweeted at the time. “You are asking us to do better, and we know we need to do more to address these issues.”
Twitch’s long-standing lack of discovery tools has already made success on the platform a tall order for underrepresented creators, but targeted harassment campaigns have made matters worse. A trove of Twitch payout data leaked last month painted a grim picture of diversity in the upper echelons of streaming success, where the top creators are almost universally white men.
In May, Twitch added over 350 tags to help users find streamers by identifiers such as gender, sexuality, race, and ability. The update was a belated step toward fostering discovery and bringing more diverse creators to the platform, but without adequate moderation tools, many users feared the system would direct targeted harassment at their communities. In September, Twitch took the unusual step of filing a lawsuit against two users linked to thousands of bots fueling mass harassment campaigns.