Artificial Intelligence, Banking

GPT-4 faces a challenger: Can Writer’s finance-focused LLM take the lead in banking?

  • We often focus on chatbots built by banks and financial firms, but today, we explore the engines driving chatbot interactions and platform automation.
  • Banks typically turn to GPT-4 for LLM solutions, but a potential rival is emerging. San Francisco’s Writer, a gen AI company, is pushing forward in enterprise AI with domain-specific LLMs like Palmyra Fin.

Banks are heavily investing in Large Language Models (LLMs) to enhance both internal operations and customer interactions — yet building a model that excels at both is a significant challenge.

A recent study by Writer, a San Francisco-based generative AI company that provides a full-stack AI platform for enterprise use, found that ‘thinking’ LLMs produce false information in up to 41% of tested cases.

The study evaluated advanced reasoning models in real-world financial scenarios, highlighting the risks such inaccuracies pose to regulated industries like financial services. The research also showed that traditional chat LLMs outperform thinking models in accuracy.

LLMs are used in three main ways within financial services:

  1. Platforms for operations & automation – LLMs power internal enterprise platforms to streamline workflows, automate document processing, summarize reports, analyze data, and assist employees. For example, Ally Bank’s proprietary AI platform, Ally.ai, uses LLMs to improve its marketing and business processes.
  2. Task-specific AI assistants – LLMs enhance specific financial tasks such as fraud detection, compliance monitoring, or investment analysis. An example of this is J.P. Morgan’s IndexGPT, which aims to provide AI-driven investment insights.
  3. Chatbots & virtual assistants – LLMs improve customer-facing chatbots by making them more conversational and capable of executing basic tasks. Bank of America’s virtual assistant, Erica, provides banking insights to its customers.

We often focus on chatbots built by banks and financial firms, but today, we explore the underlying technology behind them — the engines driving chatbot interactions and platform automation.

We take a closer look at the LLMs driving these AI systems, their challenges, and how financial firms can train enterprise-grade models to capitalize on their potential while controlling their risks.

Thinking LLMs vs. traditional chat LLMs 

Thinking LLMs, also referred to as CoT (chain-of-thought) models, are designed to simulate multi-step reasoning and decision-making processes to provide more nuanced responses, rather than merely retrieving or summarizing information, says Waseem Alshikh, CTO and co-founder of Writer.


Morgan Stanley’s AI Assistant, for example, uses OpenAI’s GPT-4 to scan more than 100,000 research reports and provide quick insights to financial advisors. It enhances portfolio strategy recommendations by summarizing complex data rather than simply retrieving reports.

“These models are not truly ‘thinking’ but are instead trained to generate outputs that resemble reasoning patterns or decompose complex problems into intermediate reasoning steps,” Waseem notes.
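To make the distinction concrete, the sketch below contrasts a direct chat prompt with a chain-of-thought style prompt. It assumes access to OpenAI's chat completions API through its Python SDK; the question, prompt wording, and model name are illustrative only, not Writer's test setup or any bank's production configuration.

# Minimal sketch: direct prompt vs. chain-of-thought (CoT) style prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the question, prompt wording, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

question = "A client holds a 60/40 portfolio and rates rise 50bps. Summarize the likely impact."

# Traditional chat style: answer directly from pre-trained knowledge and context.
direct = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

# CoT style: ask the model to decompose the problem into intermediate steps.
# The extra generated steps can add nuance, but each one is another place
# where a hallucination can slip into a regulated workflow.
cot = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Reason step by step, listing your intermediate assumptions, "
                   "then give a final answer.\n\n" + question,
    }],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)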

Morgan Stanley’s AI tool encountered accuracy issues stemming from hallucinated responses. Shortly after its launch in 2023, sources within the company described the tool as ‘spotty on accuracy,’ with users frequently receiving responses like “I’m unable to answer your question.”

While Morgan Stanley has been proactive in fine-tuning OpenAI’s GPT-4 model to assist its financial advisors, the company acknowledges the challenges posed by AI hallucinations. To reduce inaccuracies, the bank curated training data and limited prompts to business-related topics.
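One common way to limit prompts to business-related topics is a lightweight gate that refuses out-of-scope questions before they ever reach the model. The sketch below is hypothetical and purely illustrative; production systems typically rely on a trained topic classifier or a moderation layer rather than a keyword list, and this is not Morgan Stanley's actual implementation.

# Hypothetical sketch of a topical gate that keeps prompts business-related.
# A real deployment would use a trained classifier or moderation endpoint;
# the keyword allowlist here is purely illustrative.
BUSINESS_TERMS = {
    "portfolio", "equity", "bond", "rates", "earnings",
    "allocation", "dividend", "research", "market",
}

def is_business_related(prompt: str) -> bool:
    """Crude gate: require at least one business-related term in the prompt."""
    words = {w.strip(".,?!").lower() for w in prompt.split()}
    return bool(words & BUSINESS_TERMS)

def call_llm(prompt: str) -> str:
    # Placeholder for the actual model call (e.g., a chat completions request).
    return f"[model response to: {prompt}]"

def route(prompt: str) -> str:
    if not is_business_related(prompt):
        # Refuse out-of-scope questions instead of letting the model guess.
        return "I can only help with business and research questions."
    return call_llm(prompt)

print(route("What's the weather in Paris?"))                 # refused
print(route("Summarize this quarter's earnings research."))  # forwarded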

Traditional chat LLMs, however, tend to be more accurate, according to Waseem. These models mainly use pattern matching and next-token prediction, responding in a conversational manner based on pre-trained knowledge and contextual cues. While these models may struggle with complex queries at times, they produce fewer hallucinations, making them more reliable for regulatory compliance, according to Writer’s research.
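At their core, these models repeatedly predict the most likely next token given everything generated so far. The sketch below shows that loop with a small open model (GPT-2) via Hugging Face's transformers library; it illustrates the mechanism only and is not representative of any bank's production model.

# Minimal sketch of the next-token prediction loop behind chat LLMs,
# using a small open model (GPT-2) from Hugging Face transformers.
# Illustrative of the mechanism only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The payment on your account is due"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits    # scores for every vocabulary token
        next_id = logits[0, -1].argmax()    # greedily pick the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))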

Bank of America’s virtual assistant, Erica, uses a traditional chat model to assist customers with banking tasks like balance inquiries, bill payments, and credit report updates. By leveraging structured data and predefined algorithms, it provides accurate and reliable responses while reducing the likelihood of misinformation.
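A hypothetical illustration of that "predefined algorithms over structured data" pattern: the assistant maps a request to a known intent and fills a template from account records, so the model never free-generates a balance or a credit score. This sketches the general design, not Erica's actual code.

# Hypothetical sketch: route requests to predefined intents backed by
# structured account data, instead of letting a model free-generate figures.
# This illustrates the general pattern, not Bank of America's implementation.
from dataclasses import dataclass

@dataclass
class Account:
    balance: float
    credit_score: int

INTENT_TEMPLATES = {
    "balance": lambda acct: f"Your current balance is ${acct.balance:,.2f}.",
    "credit": lambda acct: f"Your credit score is {acct.credit_score}.",
}

def classify_intent(message: str) -> str | None:
    """Stand-in for a real intent classifier; keyword match for illustration."""
    msg = message.lower()
    if "balance" in msg:
        return "balance"
    if "credit" in msg:
        return "credit"
    return None

def answer(message: str, acct: Account) -> str:
    intent = classify_intent(message)
    if intent is None:
        return "I can help with balance inquiries, bill payments, and credit report updates."
    # Deterministic template over structured data: no room to hallucinate a figure.
    return INTENT_TEMPLATES[intent](acct)

print(answer("What's my balance?", Account(balance=1523.40, credit_score=742)))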

But how can financial firms navigate the trade-off between AI sophistication and accuracy?

Best practices for implementing thinking LLMs in financial services

Given the advanced capabilities of thinking LLMs, financial firms can’t simply rule them out, but they can deploy them effectively with the right strategic approaches.

Waseem outlines the key steps:

