The double-edged sword of Gen AI: Harms and risks for consumers and employees and why nobody talks about it
- We have all heard that Gen AI is a transformative force, but why does nobody talk about the harms that may befall us through this technology?
- In this article, we break down how technology provider concentration and conflicts of interest can impact firms, and how hallucinations and bias can negatively affect customers and employees.

For our dedicated content series on Gen AI in financial services, we have had some of the biggest names in the industry speak to us about use cases that are unlocking pools of revenue and increased efficiency for these firms. These conversations have focused on Gen AI’s work in the back office at the biggest banks and fintechs in America, and how hundreds of teams across the industry are using the tech for tasks like software development, customer service, and summarization.
But missing in these conversations is a deep and serious discussion on the risks and harms that can come with adopting Gen AI. In this article, I break down why the industry doesn’t like to talk about the potential harm from using Gen AI and what these risks even are.
Why nobody talks about potential Gen AI harms
Bad press: Gen AI adoption is allowing companies to position their brand as tech-forward and cutting-edge. External facing conversations on potential harms and risks do not make for good marketing, especially in a climate that is convinced of Gen AI’s capability to propel us into a new future.
The financial industry is responsible for people’s money, so these companies often have to project an image of safety that bolsters people’s trust; discussions that undermine this image are perceived as harmful to that marketing play.
AI is complicated: Digital literacy is critical to understanding how AI works and its possible implications. While most AI practitioners are well aware of AI’s “black box” nature and the complex algorithmic overhead that goes into making AI algorithms explainable, consumers as well as non-tech bank employees may not have the same interest in understanding what AI is, how it works, where it can break, and how it impacts their lives.
Products and features that are layered with user-friendly UX are much more approachable and demonstrate tangible value when used. Dedicating hours to understanding how the backend works is a harder goal to justify to board members, employees, and customers, with likely no short-term advantage other than building a more aware community.
Gen AI is new: The novelty of Gen AI impedes the construction of sophisticated federal- and state-level regulations and sufficiently proactive company policies. This means financial leaders have little choice but to keep pace with competitors, adopt Gen AI, and watch their deployments closely for signs of harm. The limited information in the market and the regulatory vacuum around AI education, misuse, and consumer and employee protection do not encourage open conversations.
Despite the industry’s reticence to openly discuss potential harms and risks, one can make a strong argument that such a conversation is critical to the Gen AI-fueled utopia the industry is dreaming of building. Organizations willing to lead real conversations have a chance to position themselves as thought leaders and, more importantly, may be able to coax the industry out of its silos to collaborate on industry-wide standards that can help mitigate potential lawsuits and harms faced by consumers and employees.
What are these potential harmful impacts I’m referring to? There are quite a few, and covering every one in a single article is nearly impossible, so I’ll focus on those with the closest ties to use cases already active in the industry.