Artificial Intelligence

The evil twin sister: Gen AI’s use in fraud

  • Bad actors are using LLMs like FraudGPT and WormGPT to generate personalized phishing emails and malicious code.
  • It's not only FraudGPT and WormGPT that pose risks; regular and publicly available LLMs like ChatGPT and Bard play a role too.



Gen AI’s powers have been co-opted by malicious actors, who are using LLMs like FraudGPT and WormGPT to generate personalized phishing emails and malicious code. These GPT lookalikes are available for as little as $200 per month on the dark web.

The email text generated by these services lacks the usual tell-tale signs of a malicious email, like incorrect grammar. FIs have relied on these signals to educate their customers for quite some time, and Gen AI will render much of that work outdated. In 2022, over 300,000 people in the U.S. fell prey to phishing attacks, for a total loss of $52,089,159.

[Interactive map: U.S. states most affected by phishing scams; DC and New York are among the hardest hit.]

While there is some discussion about whether LLMs like FraudGPT and WormGPT are actually effective, the biggest concern is not that these services will make the most malicious and efficient hackers better, but that they will allow amateurs to go farther than they would have on their own.

For example, when bad actors target individuals in mid-size to large companies, they have to sort through a lot of information, like titles and organizational hierarchies, to imitate familiarity, urgency, and authority. LLMs make this task much easier, according to Jesse Barbour, Chief Data Scientist at Q2. “The successful application of such techniques requires a deep understanding of the target business’s organizational structure so that the fraudster knows which levers to turn for a given individual. LLMs excel at synthesizing large amounts of this kind of information into structured and effective narratives. Imagine a prompt with the following structure: ‘Given org chart A, devise a plan to influence employee B to take action C,’” he added.

[Video: a demo distributed among buyers to showcase FraudGPT’s capabilities.]

Investigations by reformed black hat hacker Daniel Kelley reveal that work is underway on improved versions of FraudGPT and WormGPT. These new versions are expected to have internet access and to integrate with Google Lens, allowing fraudsters to send videos and images along with text.

Previously, the FBI issued a warning which highlighted the use of Gen AI in traditional crimes like fraud and extortion:

“Tools from AI are readily and easily applied to our traditional criminal schemes, whether ransom requests from family members, or abilities to generate synthetic content or identities online, attempts to bypass banking or other financial institutions’ security measures, attempts to defraud the elderly, that’s where we’ve seen the largest part of the activity,” said a senior FBI official.

In the same vein, Barbour added that it's not only FraudGPT and WormGPT that pose risks; regular and publicly available LLMs like ChatGPT and Bard play a role too. “The most opportunistic fraudsters employ an array of different tools, including commercially available state-of-the-art models from companies like Anthropic and OpenAI, open-source models that are readily available on the internet, and task-specific models that have been purposefully fine-tuned and aligned to do things like execute fraud (these are the ones on the dark web),” Barbour said.

The most vulnerable here are consumers who are not digitally savvy. “It can be very challenging, particularly for non-digitally native generations, to discern what’s real from what’s fake in today’s AI-driven landscape,” said Doriel Abrahams, Head of Risk (US) at fraud prevention technology provider Forter. He added that he was able to prompt an AI tool into generating hundreds of thousands of fake credit card numbers.

On the other hand, Gen AI is also making inroads into the fraud detection industry, with companies touting its capabilities in rule generation and synthetic data generation that can improve detection models. Soon, the technology is likely to be active on both sides of a fraud attempt. Hopefully, since FIs can access better data and AI models, the odds will be tipped in their favor.
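To make the synthetic-data idea concrete, here is a minimal sketch of how a defender might pad a scarce fraud class with synthetic transactions before training a detection model. This is illustrative only, not any vendor’s actual pipeline: the field names, value ranges, and 30% target ratio are all assumptions for the example.

```python
import random

random.seed(7)  # deterministic for the example

def synth_fraud_txn():
    """Generate one synthetic 'fraud-like' transaction record."""
    return {
        "amount": round(random.uniform(500, 5000), 2),  # fraud often skews high-value
        "hour": random.choice([1, 2, 3, 4]),            # odd-hours pattern
        "new_payee": True,                              # first-time recipient
        "label": 1,                                     # 1 = fraud, 0 = genuine
    }

def augment(real_txns, target_fraud_ratio=0.3):
    """Append synthetic fraud rows until the fraud class reaches the target ratio."""
    txns = list(real_txns)
    n_fraud = sum(t["label"] for t in txns)
    while n_fraud / len(txns) < target_fraud_ratio:
        txns.append(synth_fraud_txn())
        n_fraud += 1
    return txns

# A mostly-legitimate dataset: 9 genuine rows and 1 fraud row.
real = [{"amount": 40.0, "hour": 12, "new_payee": False, "label": 0}] * 9
real.append(synth_fraud_txn())

balanced = augment(real, target_fraud_ratio=0.3)
print(len(balanced), sum(t["label"] for t in balanced))
```

A real detection model would then train on `balanced` instead of the skewed original; in practice the synthetic rows would come from a generative model fitted to observed fraud, not hand-coded heuristics like these.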

