The evil twin sister: Gen AI’s use in fraud
- Bad actors are using LLMs like FraudGPT and WormGPT to generate personalized phishing emails and malicious code.
- It's not only FraudGPT and WormGPT that pose risks; regular and publicly available LLMs like ChatGPT and Bard play a role too.

Gen AI's powers have been co-opted by malicious actors, who are using LLMs like FraudGPT and WormGPT to generate personalized phishing emails and malicious code. These GPT lookalikes sell for as little as $200 per month on the dark web.
Email text generated by these services lacks the usual tell-tale signs of a malicious email, such as poor grammar. FIs have long used those signals to educate their customers, and Gen AI will render much of that work obsolete. In 2022, over 300,000 people in the U.S. fell prey to phishing attacks, for a total loss of $52,089,159.

While there is some debate about whether LLMs like FraudGPT and WormGPT are actually effective, the biggest concern is not that these services will make the most capable hackers better, but that they will let amateurs go farther than they could on their own.
For example, when bad actors target individuals in mid-sized to large companies, they have to sort through a lot of information, like titles and organizational hierarchies, to imitate familiarity, urgency, and authority. LLMs make this task much easier, according to Jesse Barbour, Chief Data Scientist at Q2. “The successful application of such techniques requires a deep understanding of the target business’s organizational structure so that the fraudster knows which levers to turn for a given individual. LLMs excel at synthesizing large amounts of this kind of information into structured and effective narratives. Imagine a prompt with the following structure: ‘Given org chart A, devise a plan to influence employee B to take action C,’” he added.
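On the defensive side, the same structural awareness can be turned around. As a minimal illustrative sketch (every name, domain, and function here is hypothetical, not any vendor's product), one classic check against executive impersonation is to flag inbound mail whose display name matches someone senior on the org chart while the sending address sits outside the company's own domains:

```python
# Illustrative sketch only: flag mail that claims an executive's name
# but originates from an outside domain. All names and domains below
# are hypothetical examples.
from email.utils import parseaddr

EXECUTIVES = {"jane doe", "john smith"}   # hypothetical org chart entries
COMPANY_DOMAINS = {"examplebank.com"}     # hypothetical internal domains

def flag_exec_impersonation(from_header: str) -> bool:
    """Return True if the message claims an executive's display name
    but was sent from outside the company's own domains."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    looks_like_exec = display_name.strip().lower() in EXECUTIVES
    return looks_like_exec and domain not in COMPANY_DOMAINS

# A spoofed CEO request from a free webmail account gets flagged.
print(flag_exec_impersonation('"Jane Doe" <j.doe@freemail-example.com>'))  # True
print(flag_exec_impersonation('"Jane Doe" <jane.doe@examplebank.com>'))    # False
```

A check this simple obviously misses look-alike domains and compromised internal accounts; it only illustrates that org-chart data matters as much to defenders as it does to attackers.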
The video below, which is being distributed among prospective buyers, showcases FraudGPT’s capabilities.
Investigations by reformed black hat hacker Daniel Kelley reveal that work is underway on improved versions of FraudGPT and WormGPT. These new versions are expected to have access to the internet and will be able to integrate with Google Lens to allow fraudsters to send videos and images along with text.
Previously, the FBI issued a warning that highlighted the use of Gen AI in traditional crimes like fraud and extortion:
“Tools from AI are readily and easily applied to our traditional criminal schemes, whether ransom requests from family members, or abilities to generate synthetic content or identities online, attempts to bypass banking or other financial institutions’ security measures, attempts to defraud the elderly, that’s where we’ve seen the largest part of the activity,” said a senior FBI official.
In the same vein, Barbour added that it's not only FraudGPT and WormGPT that pose risks; regular, publicly available LLMs like ChatGPT and Bard play a role too. “The most opportunistic fraudsters employ an array of different tools, including commercially available state-of-the-art models from companies like Anthropic and OpenAI, open-source models that are readily available on the internet, and task-specific models that have been purposefully fine-tuned and aligned to do things like execute fraud (these are the ones on the dark web),” Barbour said.
The most vulnerable here are consumers who are not digitally savvy. “It can be very challenging, particularly for non-digitally native generations, to discern what’s real from what’s fake in today’s AI-driven landscape,” said Doriel Abrahams, Head of Risk (U.S.) at fraud prevention technology provider Forter. He added that he was able to prompt an AI tool into generating hundreds of thousands of fake credit card numbers.
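Part of what makes that last claim technically unremarkable is that card numbers carry only a public checksum, the Luhn algorithm, so any generator can emit numbers that pass format validation. A minimal sketch of the standard check (the sample numbers are well-known test values, not live cards):

```python
def luhn_valid(card_number: str) -> bool:
    """Standard public Luhn checksum: double every second digit from
    the right, subtract 9 from any double above 9, and require the
    total to be divisible by 10."""
    digits = [int(ch) for ch in card_number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4532015112830366"))  # True: a checksum-valid test number
print(luhn_valid("4532015112830367"))  # False: final digit off by one
```

Checksum validity says nothing about whether a number maps to a real, open account, which is why issuers layer authorization checks, velocity rules, and behavioral signals on top of format validation.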
On the other hand, Gen AI is also making inroads into fraud detection, where companies point to its capabilities in generating rules and synthetic data that can improve detection models. The technology will likely soon be active on both sides of a fraud attempt; since FIs can access better data and AI models, the odds will hopefully tip in their favor.
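To make the synthetic-data idea concrete, here is a minimal sketch, not any vendor's pipeline: labeled fraud is typically rare, so generated fraud-like records can rebalance a training set before fitting a detector. The feature layout, distributions, and numbers below are invented for illustration; a production system would use a far richer generator and model.

```python
# Minimal sketch, assuming labeled fraud is scarce: augment training
# data with synthetic fraud-like records, then fit a simple detector.
# Feature layout (amount, hour of day, new-device flag) is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical history: many legitimate rows, very few fraud rows.
legit = np.column_stack([rng.normal(60, 20, 5000),      # amount ($)
                         rng.integers(8, 22, 5000),     # hour of day
                         rng.binomial(1, 0.05, 5000)])  # new device?
fraud = np.column_stack([rng.normal(900, 300, 50),
                         rng.integers(0, 6, 50),
                         rng.binomial(1, 0.8, 50)])

# Stand-in for a generative model: resample and jitter the observed
# fraud rows to produce extra fraud-like training examples.
synthetic = fraud[rng.integers(0, len(fraud), 500)].copy()
synthetic[:, 0] += rng.normal(0, 50, 500)  # jitter the amount only

X = np.vstack([legit, fraud, synthetic])
y = np.array([0] * len(legit) + [1] * (len(fraud) + len(synthetic)))

model = LogisticRegression(max_iter=1000).fit(X, y)
# A large small-hours purchase from a new device scores as risky.
print(model.predict_proba([[950, 3, 1]])[0, 1])
```

The resample-and-jitter step is a crude stand-in for what a generative model would do; the point is only that the augmented detector trains on far more fraud-shaped examples than the raw history contains.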