
The double-edged sword of Gen AI: Harms and risks for consumers and employees and why nobody talks about it

  • We have all heard that Gen AI is a transformative force, but why does nobody talk about the harms that may befall us through this technology?
  • In this article, we break down how technology provider concentration and conflicts of interest can impact firms, and how hallucinations and bias can negatively affect customers and employees.

For our dedicated content series on Gen AI in financial services, we have had some of the biggest names in the industry speak to us about use cases that are unlocking new pools of revenue and greater efficiency for their firms. These conversations have focused on Gen AI’s work in the back office at the biggest banks and fintechs in America, and how hundreds of teams across the industry are using the tech for tasks like software development, customer service, and summarization.

But missing in these conversations is a deep and serious discussion on the risks and harms that can come with adopting Gen AI. In this article, I break down why the industry doesn’t like to talk about the potential harm from using Gen AI and what these risks even are.

Why nobody talks about potential Gen AI harms

Bad press: Gen AI adoption allows companies to position their brands as tech-forward and cutting-edge. External-facing conversations about potential harms and risks do not make for good marketing, especially in a climate convinced of Gen AI’s capability to propel us into a new future.

The financial industry is responsible for people’s money, so these companies often have to prioritize an image of safety that bolsters people’s trust. Discussions that undermine this image are perceived as harmful to that marketing play.

AI is complicated: Digital literacy is critical to understanding how AI works and its possible implications. While most AI practitioners are well aware of AI’s “black box” nature and the complex algorithmic overhead that goes into making AI algorithms explainable, consumers as well as non-tech bank employees may not have the same interest in understanding what AI is, how it works, where it can break, and how it impacts their lives.

Products and features that are layered with user-friendly UX are much more approachable and demonstrate tangible value when used. Dedicating hours to understanding how the backend works is a harder goal to justify to board members, employees, and customers, with likely no short-term advantage other than building a more aware community.

Gen AI is new: The novelty of Gen AI impedes the construction of sophisticated federal- and state-level regulations and sufficiently proactive company policies. This means financial leaders have no choice but to keep pace with competitors, adopt Gen AI, and watch their deployments closely for signs of harm. The limited information in the market and the vacuum of regulations around education, misuse, and consumer and employee protection do not encourage open conversations.

Despite the industry’s reticence to openly discuss potential harms and risks, one can make a pretty good argument that such a conversation is absolutely critical to the Gen AI-fueled utopia the industry is dreaming of building: Organizations willing to lead real conversations have a chance to position themselves as thought leaders and, more importantly, may be able to coax the industry out of its silos to collaborate on industry-wide standards that can help mitigate potential lawsuits and harms faced by consumers and employees.

What are these potential harmful impacts I’m referring to? There are quite a few, and covering them all in one article is nearly impossible, so I’ll focus on the ones that have the closest ties to use cases already active in the industry.

AI’s bot-sized problem(s) for FIs

“Generative AI agents threaten to destabilize the financial system, sending it swinging from crisis to crisis,” writes the Roosevelt Institute. Gen AI tools are available to everyone, including bad actors who can use them to defraud customers, launch cyberattacks on FIs, and execute strategies to manipulate markets. Moreover, an organization’s internal tools can subject customers to discriminatory behavior, privacy breaches, and hallucinations. And since many FIs are currently using Gen AI in the back office, employees can experience similar adverse effects, too.

The Gen AI powers that be: “The provision of AI agents may be an oligopolistic market, if not a natural monopoly,” according to the Roosevelt Institute. This means that FIs that want to adopt Gen AI may face higher prices while providers face relatively little impetus for continued innovation. It also means that bad actors can concentrate on these providers and exploit single points of failure, potentially exposing an array of organizations and their customers to malicious activity.

Source: European Central Bank

Conflicts of interest: The market is obsessed with agentic AI. But it’s unclear whose interests these agents will serve if two negotiating parties are using the same agent. Moreover, if multiple Gen AI agents draw from the same data bank, they risk reacting to market conditions in identical ways, opening the door to algorithmic biases against certain products. They may also encourage large groups of customers to act in a similar manner, which can lead to bank runs or stock market crashes.

How consumers and employees may be at risk due to Gen AI

Plain old vanilla AI has already been reported to make decisions that lead to discrimination in credit decisioning. In 2022, Lemonade wrote in its 10-Q that its “proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers. Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.”

This risk does not disappear with Gen AI. A broad infusion of Gen AI into the credit decisioning process has yet to become commonplace, but without stringent policies on what data Gen AI can and cannot use, and on how its decisions and outcomes will be governed, the industry lacks the tools to prevent systemic discrimination that bars certain types of consumers from accessing credit.
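
To make that concrete, here is a minimal sketch of the kind of check a lender could run over its own decision logs: an adverse impact ratio measured against the commonly cited “four-fifths” threshold. The column names, the threshold, and the toy data are illustrative assumptions, not a description of any firm’s actual controls.

```python
# Illustrative sketch (not any firm's actual control): compute an adverse
# impact ratio from a log of credit decisions and flag groups that fall
# below the commonly cited four-fifths (0.8) threshold.
import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame,
                         group_col: str = "applicant_group",
                         outcome_col: str = "approved") -> pd.Series:
    """Each group's approval rate divided by the best-treated group's rate."""
    approval_rates = decisions.groupby(group_col)[outcome_col].mean()
    return approval_rates / approval_rates.max()

# Hypothetical audit log of decisions made with model assistance.
log = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0,   0,   1],
})

ratios = adverse_impact_ratio(log)
flagged = ratios[ratios < 0.8]          # groups below the four-fifths rule
print(ratios.round(2).to_dict())        # {'A': 1.0, 'B': 0.6}
print("Groups needing review:", list(flagged.index))
```

A ratio below 0.8 doesn’t prove discrimination on its own, but it is the kind of simple, auditable signal that governance policies could require before Gen AI-assisted decisions reach customers.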

“Nonbank firms like financial technology (fintech) companies, which are already subject to significantly more permissive regulations than banks, may be especially inclined to deploy AI in assessing customer worthiness for their products,” writes the Roosevelt Institute. That sentiment is in line with industry behavior: fintechs have been much faster to adopt and launch customer-facing Gen AI features like chatbots and dedicated Gen AI tools for researching stocks.

It’s (not) a fact: Consumers and employees are also at risk of being affected by hallucinations. Although the biggest banks in the industry have yet to launch consumer-facing chatbots, most are now going on record about the productivity gains their employees are experiencing from internal Gen AI chatbots.

Source: AI Multiple Research

The most commonly cited use cases are customer service agents using Gen AI to quickly access answers to customer questions, technology teams using Gen AI tools for software development and code conversion, and team-agnostic tools that help employees look up company policies for day-to-day questions about processes.

The issue here is that it is unclear how these firms respond when employees take the wrong action based on the information they receive from Gen AI agents.

The question we need to ask is this: Is it enough to say that “Gen AI can make mistakes, so please double check the answers to ensure accuracy” when a whole marketing engine is dedicated to positioning these tools as “time-savers” and their users lack the digital competency to understand the tools they are using?
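
Part of an answer may lie in making the double-check part of the workflow rather than the user’s responsibility. Below is a minimal sketch, under assumed interfaces, of a guardrail that only surfaces an internal assistant’s answer when it can point to a retrieved policy passage and otherwise escalates to a human; retrieve_policy_passages and generate_answer are hypothetical stand-ins, not real vendor APIs.

```python
# Sketch of a "grounded or escalate" guardrail for an internal assistant.
# Both callables passed in are hypothetical stand-ins, not real vendor APIs.
from dataclasses import dataclass, field

@dataclass
class AssistantReply:
    text: str
    sources: list = field(default_factory=list)  # passages backing the answer
    escalate_to_human: bool = False

def grounded_reply(question, retrieve_policy_passages, generate_answer) -> AssistantReply:
    passages = retrieve_policy_passages(question)   # e.g., relevant policy snippets
    draft = generate_answer(question, passages)     # the model's draft answer

    # Keep the answer only if it quotes at least one retrieved passage;
    # otherwise route the employee to a human instead of surfacing a
    # confidently worded hallucination.
    supported = [p for p in passages if p.lower() in draft.lower()]
    if not supported:
        return AssistantReply(
            text="No grounded answer found. Please check with a supervisor.",
            escalate_to_human=True,
        )
    return AssistantReply(text=draft, sources=supported)
```

The specific check matters less than the principle: “please double check” only becomes enforceable when grounding, logging, and escalation are built into the tool itself.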

Sidebar: Gen AI in credit unions

Sidebar is a member-exclusive section. If you want to keep reading, please consider becoming a TS Pro member.
