Underground trading of malicious LLMs is fueling cybercrime

The web is being swamped by AI slop—but the swamp is creeping closer to home. Your email inboxes, phone SMS apps, instant messaging, and social media services are all being overtaken by inauthentic content.

From AI-generated footage of Hollywood actor Brad Pitt that conned a French woman out of $800,000, to phishing emails that direct victims to live chats with AI bots purporting to represent legitimate businesses but actually operated by criminals, AI scams are everywhere. Two in every three people tested by Vodafone failed to identify an AI-driven phishing attack.

One of those people was George Wilson, the founder of a small business based in Marietta, Georgia. Wilson asked Fast Company to use a pseudonym and not to disclose details of his business or bank information, concerned that being publicly identified as a scam victim could damage his company’s reputation and make him a future target.

In November 2024, Wilson received an email claiming to be from his bank stating that an invoice payment he didn’t recognize was being delayed. After clicking a link in the email, he was taken to a convincing online chat page, where a supposed bank representative explained the situation.

“It was all done in real time,” says Wilson. “Their dialogue was super natural, too, so while I was initially confused and suspicious, they managed to get rid of my fears.”

The representative told him he was the target of an attempted scam and assured him the bank had blocked it. Wilson no longer has the chat log, but suspects he must have shared some information during the exchange that gave the scammers access to his account. The next day, several thousand dollars were taken from his business before he realized what had happened, contacted his bank, and learned he’d been scammed.

The human-like interaction led Wilson to be defrauded of thousands—money he never recovered. “With AI, attackers can tailor messages to appear highly personalized, making it harder than ever for employees to distinguish a fake email from a legitimate one,” says Katie Paxton-Fear, an ethical hacker and cybersecurity lecturer at Manchester Metropolitan University.

The lure of AI for cybercriminals is obvious, and mirrors why the general public uses LLMs and other AI tools: They can generate convincing content at scale with minimal work. A lab-based study published in Harvard Business Review found that AI-generated phishing emails successfully deceived victims 60% of the time. It’s a high payoff for low effort, especially as LLMs take over the burden of crafting emails.

“We know that social engineering is one of the most effective forms of attack anyway, rather than malicious code, because you’ve got to try and have some way of landing the malicious code on victims,” says Alan Woodward, a cybersecurity professor at the University of Surrey. “It’s not surprising that LLMs are being used in the first instance to try and produce more effective versions of that.”

As the public becomes more aware of AI-powered phishing, cybercriminals are already moving several steps ahead. Cybersecurity firm Kela reports that discussion of malicious AI tools on the dark web has increased 200% in the past year. In 2023, there were around 4 daily mentions; by 2024, they spiked to 14-plus. This booming underground market is a “seismic shift” in cyberthreat development, says Yael Kishon, AI product and research lead at Kela.

Even open forums like Hack Forums are buzzing with discussions about optimal models for different attacks. A “Dark AI” forum regularly hosts new posts and replies, with a pinned mega list of AIs used for illicit purposes garnering nearly 20,000 views and more than 100 replies—on a public forum. Far more activity takes place in dark web spaces.

This growing trade in malicious LLMs is a worry to the general public, but it’s not surprising. “Once upon a time, to be a cybercriminal you needed skills, you needed knowledge, and needed to be able to code,” says Rob Allen, chief technical officer at ThreatLocker, which monitors the rise of LLMs among criminals. “Now all you need, really, is bad intentions.”

Mainstream LLMs have built-in guardrails to prevent malicious use, so dark web LLMs often rely on open-source models without safeguards, or on cracked versions of commercial ones. According to Kela's analysis, discussions around jailbreaking LLMs have surged 52% year over year.

Malicious LLMs sold on criminal forums typically fall into two categories: jailbroken commercial LLMs or altered open-source models. Their approaches vary. EscapeGPT, launched in August 2023 at around $65 a month, was based on ChatGPT's 3.5 Turbo, analysis of its behavior showed. WormGPT, built on an open-source model released in 2021, reportedly brought in about $14,000 a month by charging for access.

This underground market reflects the legitimate AI industry: Competition is fierce, pricing strategies abound, and model creators market their tools aggressively. FraudGPT, a model released in July 2023, gets poor reviews (partly because its creators allegedly scam their own buyers) but still boasts in marketing materials about its 8.5-terabyte training data set.

GhostGPT, one of the latest and most talked-about malicious LLMs, is either a jailbroken version of ChatGPT or a modified open-source model, according to researchers at Abnormal Security. “GhostGPT is a chatbot specifically designed to cater to cybercriminals,” the researchers say. “By eliminating the ethical and safety constraints typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by conventional AI systems.”

Access is priced competitively: $50 for a week’s access to a Telegram bot, $150 for a month (less than OpenAI charges for access to ChatGPT Pro), or $300 for three months.

The model can generate phishing emails and malware on demand. It also boasts fast responses and no data logs, minimizing the digital trail for law enforcement. In testing, Abnormal Security got it to produce a convincing Docusign phishing email in less than a minute from the prompt "Write a phishing email from Docusign."

GhostGPT may be the newest model making waves in cybercrime circles, but it follows a familiar pattern. “We found that most use an API from OpenAI and jailbreaking prompts,” says Zilong Lin, a researcher at Indiana University Bloomington, who conducted an August 2024 analysis of more than 200 malicious LLMs available on the dark web. Criminals prefer jailbreaking existing models because building one from scratch is costly, Lin tells Fast Company.

And with many models just a single jailbreak prompt away from being shorn of their protections, it has never been easier to turn the capabilities of the world's most powerful chatbots to nefarious ends. But solutions can be complicated, says ThreatLocker's Allen. "Fundamentally, everything is to a greater or lesser degree vulnerable," he explains. "Most things are weaponizable."

This story was supported by Tarbell Grants.

