Discord is implementing a more flexible approach to content moderation

Discord doesn’t want to count strikes when users run afoul of its rules.

As part of a slew of fall product updates, the online chat platform announced a more flexible approach to moderation. Instead of handing out strikes for every policy violation, Discord will tailor its warnings or punishments to fit the crime, while providing steps users can take to improve their standing.

“We think we’ve built the most nuanced, comprehensive, and proportionate warning system of any platform,” Savannah Badalich, Discord’s senior director of policy, told reporters.

Alongside the new warning system, Discord is also launching new safety features for teens: It will auto-blur potentially sensitive images from teens’ friends by default, and it will show a “safety check” when teens are messaging with someone new, asking if they want to proceed and linking to additional safety tips.

In both cases, Discord wants to show that it’s taking safety seriously after years of controversy and criticism. A report by NBC News in May documented how child predators used the platform to groom and abduct teens, while other reports have found pockets of extremism thriving there.

Discord likes to point out that more than 15% of its employees work on trust and safety. As the company expands beyond its roots in gaming, it hopes to build a system that’s more effective at moderating itself.

The no-strike system

Discord’s moderation rules have always been a bit tricky to pin down, perhaps by design.

While individual servers can set their own rules, Discord itself has not laid out a specific number of strikes or infractions that lead to suspension across its platform. Users in turn had no way to know where they stood, even if Discord was quietly keeping a tally of their infractions.

[Photo: Discord]

The new system tries to be more transparent while still stopping short of a distinct strike count. When users violate a rule, they’ll get a detailed pop-up describing what they did wrong along with any temporary restrictions that might apply. They can then head to Discord’s privacy and safety menu to see how the violation affects their account standing and what they can do to improve it.

Discord says it will have four levels of account standing before users reach a platform-wide suspension: “All Good,” “Limited,” “Very Limited,” and “At Risk.” Serious offenses such as violent extremism and sexualizing children are still grounds for an immediate ban, but otherwise Discord isn’t assigning scores to each violation or setting a specific number of violations for each level.

It’s a different approach from what some of its peers are taking. Facebook, for instance, has a 10-strike system with increasing penalties at each level, while Microsoft recently launched an eight-strike system for Xbox users, with some violations counting for more than one strike.

Ben Shanken, Discord’s vice president of product, says the company will treat each type of violation differently, but ultimately it wants to leave more room for subjectivity.

“If your friend is just trying to report a message to troll you a little bit, we don’t want that to result in your account getting banned,” he says. “We’ve built from the ground up to try and be more bespoke about it.”

Early warnings for teens

As for Discord’s new teen safety features, the company says it will use image recognition algorithms to detect and blur potentially sensitive images from friends, and will block those images in DMs from strangers. Teens can then click the image to reveal its contents or head to Discord’s settings to disable the feature. While image blurring will be on by default for teens, adults will have an option to enable it as well.

Meanwhile, Discord will begin sending safety alerts to teens when they get messages from strangers. The alerts will ask teens to confirm that they want to reply, and will include links with safety tips and instructions on how to block the user.

Discord says the new warnings are part of a broader initiative to make its platform safer for teens. In June, NBC News reported on dozens of kidnapping, grooming or sexual assault cases over the past six years in which communications allegedly happened on Discord. It also cited data from the National Center for Missing & Exploited Children showing that reports of child sexual abuse material on Discord increased by 474% from 2021 to 2022, with the group claiming slower average response times from Discord over that period.

Shanken says Discord started working on the new safety features about nine months ago, and that the company will build on those features over the coming year. The plan is to give teens more control over their communications, while also getting smarter at detecting potential safety issues and flagging them for users.

“We’d much rather have a teenager receive an alert and block a user than just send a report to us, and us having to go figure that out,” he says.

Like other big tech companies, Discord dreams of being able to use AI and automation to build self-moderating systems. But while other technology companies are making cuts to those moderation efforts, Shanken says that hasn’t been the case at Discord. He notes that the team working on safety is Discord’s second-largest technology group.

“It’s true that these parts of the business are pressured in tougher economic times, and that’s not been the case at Discord,” he says. “We’ve only continued to expand our investment over the past couple of years.”

