Discord doesn’t want to count strikes when users run afoul of its rules.
As part of a slew of fall product updates, the online chat platform announced a more flexible approach to moderation. Instead of handing out strikes for every policy violation, Discord will tailor its warnings or punishments to fit the crime, while providing steps users can take to improve their standing.
“We think we’ve built the most nuanced, comprehensive, and proportionate warning system of any platform,” Savannah Badalich, Discord’s senior director of policy, told reporters.
Alongside the new warning system, Discord is also launching new safety features for teens: It will auto-blur potentially sensitive images from teens’ friends by default, and it will show a “safety check” when teens are messaging with someone new, asking if they want to proceed and linking to additional safety tips.
In both cases, Discord wants to show that it’s taking safety seriously after years of controversy and criticism. A report by NBC News in May documented how child predators used the platform to groom and abduct teens, while other reports have found pockets of extremism thriving there.
Discord likes to point out that more than 15% of its employees work on trust and safety. As the company expands beyond its roots in gaming, it hopes to build a system that’s more effective at moderating itself.
The no-strike system
Discord’s moderation rules have always been a bit tricky to pin down, perhaps by design.
While individual servers can set their own rules, Discord itself has never laid out a specific number of strikes or infractions that leads to a platform-wide suspension. Users, in turn, have had no way to know where they stand, even if Discord was quietly keeping a tally of their infractions.

The new system tries to be more transparent while still stopping short of a distinct strike count. When users violate a rule, they’ll get a detailed pop-up describing what they did wrong along with any temporary restrictions that might apply. They can then head to Discord’s privacy and safety menu to see how the violation affects their account standing and what they can do to improve it.
Discord says it will have four levels of account status—including “All Good,” “Limited,” “Very Limited,” and “At Risk”—before users reach a platform-wide suspension. Serious offenses such as violent extremism and sexualizing children are still grounds for an immediate ban, but otherwise Discord isn’t assigning scores to each violation or setting a specific number of violations for each level.
It’s a different approach from what some of its peers are taking. Facebook, for instance, has a 10-strike system with increasing penalties at each level, while Microsoft recently launched an eight-strike system for Xbox users, with some violations counting for more than one strike.
Ben Shanken, Discord’s vice president of product, says the company will treat each type of violation differently, but ultimately it wants to leave more room for subjectivity.
“If your friend is just trying to report a message to troll you a little bit, we don’t want that to result in your account getting banned,” he says. “We’ve built from the ground up to try and be more bespoke about it.”
Early warnings for teens
As for Discord’s new teen safety features, the company says it will use image recognition algorithms to detect and blur potentially sensitive images from friends, and will block those images in DMs from strangers. Teens can then click the image to reveal its contents or head to Discord’s settings to disable the feature. While image blurring will be on by default for teens, adults will have the option to enable it as well.
Meanwhile, Discord will begin sending safety alerts to teens when they get messages from strangers. The alerts will ask whether the teen really wants to reply, and will include links with safety tips and instructions on how to block the user.
Discord says the new warnings are part of a broader initiative to make its platform safer for teens. In June, NBC News reported on dozens of kidnapping, grooming or sexual assault cases over the past six years in which communications allegedly happened on Discord. It also cited data from the National Center for Missing & Exploited Children showing that reports of child sexual abuse material on Discord increased by 474% from 2021 to 2022, with the group claiming slower average response times from Discord over that period.
Shanken says Discord started working on the new safety features about nine months ago, and that the company will build on those features over the coming year. The plan is to give teens more control over their communications, while also getting smarter at detecting potential safety issues and flagging them for users.
“We’d much rather have a teenager receive an alert and block a user than just send a report to us, and us having to go figure that out,” he says.
Like other big tech companies, Discord dreams of being able to use AI and automation to build self-moderating systems. But while other technology companies are making cuts to those moderation efforts, Shanken says that hasn’t been the case at Discord. He notes that the team working on safety is Discord’s second-largest technology group.
“It’s true that these parts of the business are pressured in tougher economic times, and that’s not been the case at Discord,” he says. “We’ve only continued to expand our investment over the past couple of years.”