The underground world of black-market AI chatbots is thriving

ChatGPT’s 200 million weekly active users have helped propel OpenAI, the company behind the chatbot, to a $100 billion valuation. But outside the mainstream there’s still plenty of money to be made—especially if you’re catering to the underworld.

Illicit large language models (LLMs) can make up to $28,000 in two months from sales on underground markets, according to a study posted last month on arXiv, a preprint server owned by Cornell University.

That’s just the tip of the iceberg, according to the study, which examined more than 200 examples of malicious LLMs (or malas) listed on underground marketplaces between April and October 2023. The LLMs fall into two categories: outright uncensored models, often built on open-source foundations, and commercial LLMs jailbroken past their guardrails using prompts.

“We believe now is a good stage to start to study these because we don’t want to wait until the big harm has already been done,” says Xiaofeng Wang, a professor at Indiana University Bloomington, and one of the coauthors of the paper. “We want to head off the curve and before attackers can incur huge harm to us.”

While hackers can at times bypass mainstream LLMs’ built-in limitations meant to prevent illegal or questionable activity, such instances are few and far between. Instead, to meet demand, illicit LLMs have cropped up. And unsurprisingly, those behind them are keen to make money off the back of that interest.

“We found that most of the mala services on underground forums exist mainly to earn profit,” says Zilong Lin, a researcher at Indiana University Bloomington, and another of the paper’s coauthors. 

Malicious LLMs can be put to work in a variety of ways, from writing phishing emails (a separate study estimates that LLMs can reduce the cost of producing such missives by 96%) to developing malware to attack websites.

The abilities of these black market LLMs to carry out their tasks vary wildly, although some are particularly powerful tools. Lin and Wang found that two uncensored LLMs, DarkGPT (which costs 78 cents for every 50 messages) and Escape GPT (a subscription service charged at $64.98 a month), were able to produce correct code around two-thirds of the time, and the code they produced was not picked up by antivirus tools—giving it a higher likelihood of successfully attacking a computer.

Another malicious LLM, WolfGPT, which costs a $150 flat fee to access, was seen as a powerhouse for creating phishing emails, managing to evade most spam detectors.

The existence of such malicious AI tools shouldn’t be surprising, according to Wang. “It’s almost inevitable for cybercriminals to utilize AI,” Wang says. “Every technology always comes with two sides.”

Andrew Hundt, a computing innovation fellow at Carnegie Mellon University who was not involved in the study, says the authors “demonstrate that malicious users are succeeding in reselling corporate offerings for malicious purposes.”

Hundt believes policymakers ought to require AI firms to develop and put in place know-your-customer policies to verify a user’s identity. “We also need legal frameworks to ensure that companies that create these models and provide services do so more responsibly in a way that mitigates the risks posed by malicious actors,” he says.

Wang, for his part, points out that research like his team’s is just the start when it comes to fighting the battle against cybercriminals. “We can develop technologies and provide insights to help them,” he says, “but we can’t do anything about stopping these things completely because we don’t have the resources.”

https://www.fastcompany.com/91184474/black-market-ai-chatbots-thriving?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 6mo ago | 5. 9. 2024 10:40:02
