How Temenos is co-creating AI products with banks, not just for them

Nine months into her role as Chief Product and Technology Officer at Temenos, Barb Morgan is focused on a simple principle when it comes to product strategy: quality over quantity. “We want to build less, but build it better,” Morgan said during a conversation at the Temenos Regional Forum Americas 2025 held May 28-30 in Miami.

Temenos’ approach centers on co-creating meaningful solutions with bank customers rather than rushing to market with multiple products. Morgan emphasized that the company is “really focused on making sure that whatever we put out there is meaningful,” as the industry navigates what she calls the “AI hype curve.”

Morgan’s insights reveal why many banks struggle with AI adoption despite the technology’s promise. The real barriers aren’t about computing power or algorithms — they’re messier problems involving decades-old data systems that were never designed for AI and organizational cultures that haven’t caught up to the pace of technological change. 

Her conversation also detailed Temenos’ bet on bringing innovation closer to customers, such as through its new hub in Orlando designed for co-creation, and why the company is taking a strategic, deeply integrated approach to AI that enables banks to deploy AI-powered solutions faster and more safely.

Listen to the full podcast

A three-pronged AI strategy

Temenos has structured its AI approach around three core components: Gen AI embedded directly into its platform and products, agentic AI with a first solution for sanctions screening already live at one Tier-1 bank, and an AI studio for custom use cases. “We have a lot of customers coming to us with very unique use cases, and so we want to provide them a platform that’s pre-built with banking modules,” Morgan explained.

The company’s focus on embedded AI addresses a common industry challenge. “Having it embedded, versus our customers trying to figure out how to bolt it onto our product, is really important to us,” she said. This approach lets banks access AI capabilities without committing significant resources to integration.

Banks are ready for AI – their data isn’t

One of the biggest obstacles to AI adoption isn’t fear of the technology but foundational data issues, says Morgan. “A lot of banks over the past 10 to 15 years went through this huge digital transformation, but what they didn’t transform was the data in the back end,” Morgan noted. “In order to leverage the power of AI, you have to have your data clean.”

This reality has shifted many of Temenos’ client conversations toward data readiness rather than AI capabilities. “Our clients also want to leverage their own data systems. So how clean is your data? Is it really ready? Because for secure AI products, you have to have your data in order,” she said.

Cultural change is the harder challenge

Beyond technical hurdles, banks face significant organizational resistance to AI implementation. “I was talking with one of our US banks last week, and he said, ‘I underestimated the amount of cultural change that’s necessary, because so many people are afraid of AI,’” Morgan shared.

The fear stems from job displacement concerns rather than technological limitations. “They think it’s going to take my job away, versus thinking of it as augmenting their job and being more of a side by side partner,” she explained. This cultural dimension must become a major focus for banks that want their AI implementations to succeed.

A gradual approach to AI deployment

Temenos’ strategy acknowledges these cultural and technical challenges by allowing banks to phase in AI adoption. Morgan described how one tier-one bank using its agentic AI product, FCM AI Agent, started with just 5% of its traffic, then gradually increased it to 20%. “It wasn’t because they didn’t trust the technology. It was because they were getting the rest of the organization comfortable,” she said.

This incremental approach extends to customer-facing applications as well. “A lot of people, it seems, have their favorite [Gen AI] tool on their phone,” Morgan observed. “I think maybe the banks have underestimated that the customers are actually ready to interact with AI.”

Bringing innovation closer to customers

Part of Temenos’ US expansion includes the opening of its Orlando Innovation Hub, designed specifically for co-creation with bank customers. “Instead of just expanding one of our existing offices, we’re actually going into a brand new building,” Morgan said. “It’s all about being able to do the design workshop, but then the space can transform to doing co-development together.”

The facility will include spaces that can replicate bank branch environments. “There’s a space where we can make it feel like you’re walking into the branch of the bank, and so we can actually recreate exactly what it’ll feel like for their customers,” she explained.

Market-centric over centralized delivery

The Orlando hub represents a broader shift in Temenos’ delivery model. “Over the past 30 years, we have had a pretty centralized delivery team, and this is about bringing it closer to our customers,” Morgan said. “Versus centralized delivery, it’s more about market-centric innovation.”

This approach is driving the firm’s hiring, with plans underway to recruit for 200 positions at its Orlando Innovation Hub. “At a recent hiring event, every candidate who received an offer accepted,” Morgan noted. “They were really excited about the co-innovation and the ability to actually work how we want and bring our best selves.”

Building products that actually ship

Morgan has instituted a new discipline around product announcements, moving away from proofs of concept toward deliverable solutions. “We’re only going to announce things when they’re live and ready to use now,” she said.

The company has also allocated 25% of its development capacity specifically to customer-driven features. “We’ve actually allocated about 25% of our capacity to just listening to customers and putting their needs on top of what we would already have planned,” Morgan explained.

This customer-centric approach extends to the broader organizational transformation Morgan is leading. 

The double-edged sword of Gen AI: Harms and risks for consumers and employees, and why nobody talks about it

For our dedicated content series on Gen AI in financial services, we have had some of the biggest names in the industry speak to us about use cases that are unlocking pools of revenue and increased efficiency for these firms. These conversations have focused on Gen AI’s work in the back office at the biggest banks and fintechs in America, and how hundreds of teams across the industry are using the tech for tasks like software development, customer service, and summarization. 

But missing in these conversations is a deep and serious discussion on the risks and harms that can come with adopting Gen AI. In this article, I break down why the industry doesn’t like to talk about the potential harm from using Gen AI and what these risks even are. 

Why nobody talks about potential Gen AI harms 

Bad press: Gen AI adoption is allowing companies to position their brand as tech-forward and cutting-edge. External facing conversations on potential harms and risks do not make for good marketing, especially in a climate that is convinced of Gen AI’s capability to propel us into a new future. 

Because the financial industry is responsible for people’s money, these companies have to project an image of safety that bolsters customer trust, and discussions that undermine this image are perceived as damaging to that marketing play.

AI is complicated: Digital literacy is critical to understanding how AI works and its possible implications. While most AI practitioners are well aware of AI’s “black box” nature and the complex algorithmic overhead that goes into making AI algorithms explainable, consumers as well as non-tech bank employees may not have the same interest in understanding what AI is, how it works, where it can break, and how it impacts their lives. 

Products and features that are layered with user-friendly UX are far more approachable and demonstrate tangible value in use. Dedicating hours to understanding how the backend works is a harder goal to justify to board members, employees, and customers, with likely no short-term advantage other than building a more aware community.

Gen AI is new: The novelty of Gen AI impedes the construction of sophisticated federal- and state-level regulations and sufficiently proactive company policies. This means financial leaders have little choice but to keep pace with competitors, adopt Gen AI, and watch their deployments closely for signs of harm. The limited information in the market, and the vacuum of AI regulation covering education, misuse, and consumer and employee protection, does not encourage open conversations.

Despite the industry’s reticence to openly discuss potential harms and risks, one can make a strong argument that such a conversation is critical to the Gen AI-fueled utopia the industry is dreaming of building. Organizations willing to lead real conversations have a chance to position themselves as thought leaders and, more importantly, may be able to coax the industry out of its silos to collaborate on industry-wide standards that can help mitigate potential lawsuits and harms faced by consumers and employees.

What are these potential harmful impacts I’m referring to? There are quite a few, but covering each one in one article is nearly impossible, so I’ll include the ones that have the closest ties to use cases already active in the industry. 

AI’s bot-sized problem(s) for FIs

“Generative AI agents threaten to destabilize the financial system, sending it swinging from crisis to crisis,” writes the Roosevelt Institute. Gen AI tools are available to everyone, including bad actors who can use them to defraud customers, launch cyberattacks on FIs, and execute strategies to manipulate the market.

Moreover, an organization’s internal tools can subject customers to discriminatory behavior, privacy breaches, and hallucinations. Considering that many FIs currently use Gen AI in the back office, employees can experience similar adverse effects, too.

The Gen AI powers that be: “The provision of AI agents may be an oligopolistic market, if not a natural monopoly,” according to the Roosevelt Institute. This means FIs that want to adopt Gen AI may face higher prices and relatively little impetus for continued innovation from providers. It also means bad actors can concentrate on these providers and exploit single points of failure that may expose an array of organizations and their customers to malicious activity.

Conflicts of Interest: The market is obsessed with agentic AI. But it’s unclear whose interests these agents will act on behalf of if two negotiating parties are using the same agent. Moreover, if multiple Gen AI agents are drawing from the same data bank, they run the risk of reacting to market conditions in identical ways, opening up chances for algorithmic biases against certain products. They may also encourage large groups of customers to act in a similar manner, which can lead to bank runs or stock market crashes.

How consumers and employees may be at risk due to Gen AI

Plain old vanilla AI has been reported to make decisions that can lead to discrimination in credit decisioning algorithms. In 2022, Lemonade wrote in its 10-Q that its “proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers. Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.”

This risk does not disappear with Gen AI. A broad infusion of Gen AI into the credit decisioning process has yet to become commonplace, but without stringent policies on what data Gen AI can and cannot use, and on how its decisions and outcomes will be governed, the industry has no tools to prevent systemic discrimination that bars certain types of consumers from accessing credit.

“Nonbank firms like financial technology (fintech) companies, which are already subject to significantly more permissive regulations than banks, may be especially inclined to deploy AI in assessing customer worthiness for their products,” writes the Roosevelt Institute. This sentiment is in line with industry behavior: fintechs have been much faster at adopting and launching customer-facing Gen AI features like chatbots and dedicated Gen AI tools for researching stocks.

It’s (not) a fact: Consumers and employees are also at risk of being impacted by hallucinations. Although the biggest banks in the industry have yet to launch consumer-facing chatbots, most are now coming on record to talk about the productivity gains their employees are experiencing by using internal Gen AI chatbots.

The most commonly cited use cases are customer service agents using Gen AI to quickly access answers to customer questions, technology teams using Gen AI tools for software development and code conversion, and team-agnostic tools that help employees access company policies for day-to-day questions about processes.

The issue here is that it is unclear how these firms respond when employees take the wrong action based on the information they receive from Gen AI agents. 

The question we need to ask is this: Is it enough to say that “Gen AI can make mistakes, so please double check the answers to ensure accuracy” when a whole marketing engine is dedicated to positioning these tools as “time-savers” and their users lack the digital competency to understand the tools they are using? 

Sidebar: Gen AI in credit unions

We have extensively covered how the biggest banks are activating Gen AI use cases to benefit from efficiency and productivity gains. But smaller institutions are also hopping onto this train. We heard from two industry players:

  1. Commonwealth Credit Union: Recently, the $2.5 billion, Kentucky-based CU decided to fill in this gap by integrating a tool by Zest AI called LuLu Pulse, which uses Gen AI to consolidate multiple data sources like NCUA Call Reports, HMDA, and economic data. This ultimately allows lenders to gain insight into how their products and services compare to their peers by querying the platform.
  2. Duke University Federal Credit Union (DUFCU): The firm is experimenting with how the new tech can enable it to expand reach and build a stronger marketing funnel. It recently integrated Vertice AI’s copywriting tool called COMPOSE. 

For DUFCU’s Director of Marketing Jennifer Sider, purpose-built tools focused on the financial services space offer her a significant advantage over free Gen AI tools available to the public. It’s also better than the manual alternative of managing the whole copywriting process alone.

‘Trust me, I’m an algorithm’: How fintech is rebuilding customer confidence in the age of AI

The financial services industry has always been built on trust. Artificial intelligence is editing the rulebook on what that means. As banks and fintechs push to deploy AI across everything from fraud detection to personalized recommendations, they’re discovering that customers’ definition of trustworthiness has evolved far beyond traditional metrics like security and reliability.

Today’s consumers want to know not just that their money is safe, but how algorithms are making decisions about their financial lives. They’re requesting transparency about data usage, explainability in AI-driven recommendations, and proof that these powerful new tools actually serve their interests, not just institutional bottom lines.

We asked industry leaders across financial services, fintech, and their supporting ecosystem how they’re navigating this new trust landscape. Their responses reveal both the complexity of the challenge and the emerging strategies that are actually working.

The new trust equation

The numbers tell a stark story about consumer sentiment. According to recent research from Accenture, while banks remain the most trusted entities for protecting customer data, 84% of customers are concerned about how that data gets used. Even more telling: only 26% are comfortable with extensive AI usage for data analysis, even when it promises better personalization.

“Today’s customers are no longer just evaluating institutions on performance — they’re scrutinizing how their data is used, how decisions are made, and whether emerging technologies like AI act in their best interests,” explains Monica Hovsepian, Global Senior Financial Services Industry Lead at OpenText. “This shift demands a new trust contract: one built not only on accuracy and speed, but on transparency, explainability, and ethical AI deployment.”

The message is clear: personalization must be transparent and demonstrably beneficial. Financial institutions can no longer assume that faster, smarter service automatically equates to better customer relationships.

Beyond the algorithm: Human-centered AI

For companies serving underbanked populations, this trust challenge carries additional weight. Kelly Uphoff, CTO at Tala, emphasizes that AI innovations must solve real customer problems while protecting dignity and identity. “Not all customers will be dazzled by AI unto itself,” she notes. “The technologists building these new solutions don’t often come from the communities we serve.”

Tala’s approach involves co-creating technology with customers from day one: showing early prototypes, listening to pain points, and incorporating feedback throughout development. They’ve also made hiring from the communities they serve a priority, creating a diverse workforce that better understands customer needs.

This human-centered approach echoes across different sectors of financial services. As Taran Lent, CTO at Transact + CBORD, puts it: “AI doesn’t replace the human relationships at the heart of meaningful engagement, it enhances them by making every touchpoint more relevant, timely, and personalized.”

The fraud fighter’s dilemma

Perhaps nowhere is the AI trust challenge more acute than in fraud prevention, where the technology serves as both weapon and shield. Parilee Wang, Chief Product Officer at Alloy, describes navigating AI from two sides: “It’s being used both as a tool for fraudsters and a tool for fraud fighters.”

While generative AI has enabled fraudsters to scale attacks like synthetic identity fraud, Wang argues that the real innovation lies in moving beyond detection to action. “An AI tool that alerts you to fraud without taking action is like a home alarm that goes off when someone breaks in. If it doesn’t call the police or lock the doors, what’s the point?”

Yinglian Xie, CEO and co-founder of DataVisor, sees AI transparency as critical to maintaining customer trust in fraud prevention. “The ability to explain and verify how AI systems work and the data that drives their decisions is of utmost importance,” she explains. The most effective approaches leverage AI to increase fraud detection while ensuring frictionless customer experiences, proving that security and convenience can be complementary rather than competing priorities.

Practical trust-building strategies

Many concrete trust-building strategies are emerging from early AI adopters in financial services:

i) Label and explain: Public’s approach involves clearly marking all AI-generated content and emphasizing the need for independent verification. “By clearly indicating that content is AI-generated and emphasizing the inherent risks associated with such outputs, we help our members understand what they’re using,” says Rachel Livingston, Director of Communications at Public.

ii) Value at every interaction: Scott Mills, President of William Mills Agency, advocates for using AI to provide consistent value: answering customer inquiries, explaining complex situations, and offering tailored solutions. The key is eliminating friction while adding genuine utility.

iii) Human oversight by design: Derek White, CEO of Galileo Financial Technologies, emphasizes that there’s no “set it and forget it” approach to AI in financial services. “AI applications are only as good as the data that goes into them, and the human oversight and strategy used to guide and deploy them.”

The content and communication challenge

As AI impacts how customers seek information, traditional marketing and communication strategies need updating. Anna Kragie, Account Director at The Fletcher Group, notes that with large language models changing how people look for answers, brands need “a smart AI content and PR strategy centered on content that builds trust with customers.”

This means pivoting toward more authentic, conversational content that directly answers buyer questions, while using media relations to establish authority on high-credibility news sites. In an environment where AI can generate massive volumes of low-quality content, human curation and authentic expertise become more valuable, not less.

Finding the balance

The self-driving car analogy keeps appearing in these conversations, and for good reason. As Brandon Spear, CEO of TreviPay, explains: “Just as autonomous vehicles require human oversight, AI-driven banking solutions must strike a balance between automation and necessary human intervention. The goal is not to replace human judgment but to enhance it with data-driven insights and improved efficiency.”

This balance requires what Transact + CBORD’s Lent calls “robust AI governance frameworks”: clear standards and best practices for both internal teams and vendors, combined with responsible piloting and a focus on measurable outcomes over hype.

The trust dividend

Financial institutions that get this balance right stand to gain a significant competitive advantage. As Hovsepian notes, “In a digital-first world, where convenience is expected, trust has become the true differentiator, and the most valuable asset any financial institution can earn.”

The companies building trust in the age of AI are embedding security, privacy, and fairness into their AI models from the ground up, then communicating these efforts clearly to customers. They’re working to prove that AI can enhance rather than replace human relationships, and that transparency doesn’t have to come at the expense of innovation.

The financial services industry has always been in the trust business. AI isn’t changing that fundamental reality – it’s just raising the bar for what earning that trust requires.


This article features insights from members of Tearsheet’s monthly PR/Comms Working Group serving the best professionals in financial services and fintech. Contributions came from both in-house communications leaders and agency executives who represent major players in the financial services sector.


How Citizens Bank is building GenAI with a five-year vision, not just quick fixes

Investment in data is the hallmark of successful Gen AI implementations, according to Citizens’ Chief Data and Analytics Officer, Krish Swamy. 

Giving us a system wide view of how Citizens is leveraging Gen AI, Swamy joins the podcast to talk about harnessing the power of data to drive decision-making, enhance customer experiences, and navigate the complexities of digital transformation in the banking sector. 

Our conversation delves into the challenges and opportunities of building a data-driven culture within a traditional banking environment, and how Citizens is positioning itself at the forefront of financial innovation through strategic analytics initiatives.

Swamy, who also heads the firm’s Generative AI Council, shares his vision for the future of data in banking and the tangible ways Citizens is turning data insights into meaningful actions that benefit both the institution and its customers.

Watch the full episode

Listen to the full episode


Long term view of Gen AI implementations

Citizens’ approach to Gen AI is best described as cautiously optimistic. The firm is not rushing into any use case; instead, it is taking a methodical approach, evaluating where a process or task can be improved by Gen AI, while also sketching out what role the technology could play in the future for its employees and customer experience.

“We’re not just taking a process, or a component within a process, and applying Generative AI there. While that might be the starting point, the end game is always going to be: How does this function three or five years from now? How do we work towards that end game?” said Swamy. 

A strong data foundation as a differentiator

Swamy is a firm believer in using a comprehensive data infrastructure as the scaffolding for new technological implementations. “When we invest in data, when we make data easily available, and when we teach people how to use data, I think they become a lot more effective at being able to self-serve. So creating that foundation is an area of differentiation,” he shared. 

One area where this focus helps the bank drive powerful results is fraud, which has seen a significant uptick since the pandemic, according to Swamy. “We’ve spent a lot of time overhauling the fraud infrastructure and the fraud platform itself. There are multiple sub components around fraud detection, claims processing, case management, which all are parts of the overall fraud value chain. And we made investments to improve the quality of those platforms,” he said. 

Helping the fraud team stay ahead of bad actors is the firm’s move to the cloud, which should be completed by the end of this year. “We are almost 80% migrated to AWS, and it makes it easier to get access to data and we are able to bring better data when it comes to our fraud defenses,” he said.

Having a centralized source for the data also ensures that fraud teams that include analysts and contact center employees are working from the same source of information. This allows these teams to be more effective and coordinated when trying to spot trends and undertake fraud mitigation strategies, he shared. 

Another area where the firm is applying data-led Gen AI strategies is the call center. “A lot of the customers’ questions tend to be fairly narrow, almost esoteric and [call center employees] have to reference procedure documents to be able to give that answer,” he said.

In the past, call center employees used keyword search to access this information; now the firm is using Gen AI and helping call center agents learn how to prompt more effectively to reach that information, he said.

Similarly, the firm is also using the tech to help its developers take care of some of the most frustrating parts of coding: documentation and testing. “Those are areas where we’ve been able to find a lot of leverage from giving software development engineers the right tools to be able to do the testing, documentation, sometimes even writing code, and become more efficient at that,” he shared. 

Citizens’ partnership strategy 

When it comes to assembling the right technology partners, Swamy believes building consistency across the organization is the golden rule. “For instance, there are multiple teams that need the ability to have machine learning platforms, and it is conceivable that everybody then goes out and figures out their own thing. That would be a really bad outcome, because I think that would lead to proliferation of costs and would lead to loss of control,” he said.

“What you do need to do is make sure these solutions are all integrated with all of the other solutions, which is a lot of work for sure. The place where we have spent a lot of time on homegrown solutions is on managing our data. Those are critical assets which are unique to us, which we would not be comfortable leaving completely in the hands of a commercial solution or a bought out solution,” he said.

The call for Gen AI and why banks are slow to answer it

The room for automation in the financial services industry is huge: research by Citi finds that 54% of jobs in the banking industry could be impacted by Gen AI.

Within financial services, consultative services like wealth management and mortgage brokering may be the most vulnerable to disruption by Gen AI, says Matt Britton, CEO and founder of Suzy, a market research firm. 

“When you talk about the financial services – particularly the services aspect – anything that’s consultative, that’s the first place AI is going to go. Mortgage brokers, wealth managers, accountants, those are areas AI is just built to be able to disrupt,” he said on a recent Tearsheet podcast.

One major reason for this is the expense that comes with hiring human expertise in these areas, according to Britton. 

“[Employees are] so expensive, especially for SMBs, and 99% of the things that they do are highly templatized. Sure, there are going to be that 1% of cases where, if someone’s selling their company, they wouldn’t want an AI lawyer. But 99% of SMB-owners are going to seek AI-driven services because it’s just cheaper, faster, and more efficient.” 

Gen AI’s entry into these services is already well underway: 

  1. Tax Management: Intuit’s Gen AI financial assistant integrates across its product line, including QuickBooks and TurboTax, to help customers file their taxes easily and comprehensively.
  2. Accountancy: Fintech Lili recently deployed a Gen AI tool called Accountant AI that helps its SMB customers find answers to common accounting-related questions, as well as with other tasks like budgeting.
  3. Insurance: Lemonade has created bots that create custom policies and help with claims processing.
  4. Investing: Public’s Gen AI powered assistant Alpha provides market trends, answers questions, and helps its users do investment research. It’s set to become a major part of the firm’s strategy for the future, according to its CEO, Leif Abraham: “Currently Alpha, our AI assistant, is solely used to provide insights into the markets, public companies, and other assets. In the future, Alpha will expand to help people manage their portfolio. Moving Alpha from an assistant that gives context and information, to an assistant that can take action. This next phase is about integrating Alpha into that experience.”

Traditional FIs, on the other hand, have yet to take on a Gen AI strategy that centers on customer-facing products. And while most banks are steering clear of consumer-facing assistants powered by Gen AI, they are more open to using the technology in the back office to make their current employees and teams more productive.

Banks are using Gen AI to boost productivity

In July, JPMC introduced a new Gen AI powered tool to its Asset & Wealth Management team which the bank said could perform the tasks of a typical research analyst. The bank is gradually exposing more and more of its workforce to the tool, and an internal memo shows it’s encouraging its employees to use the tool for tasks like “writing, generating ideas, solving problems using Excel, [and] summarizing documents.”

Morgan Stanley has also launched an AI tool, Morgan Stanley Debrief, which helps financial advisors create notes from client meetings.

Using Gen AI to increase productivity rather than build new products is a quintessential bank move. But apart from the obvious reasons like regulations and uncertainty, there may be another reason why banks are not moving faster with deploying Gen AI in client facing interactions.

Older folks aren’t keen on Gen AI 

Suzy’s research shows that younger consumers are a lot more comfortable with using AI for financial planning and optimization than older consumers. 

The trend repeats when consumers are asked which financial tasks, such as tax management, mortgage brokering, and wealth management, AI performs better than humans do. Close to 60% of older consumers report feeling that AI is not better than humans at any of these tasks, according to Suzy’s research.

The fact that a majority of older consumers neither feel comfortable with AI nor trust Gen AI-powered tools to perform well in these areas is a problem for banks. In the US, 50% of local banking revenue is generated by people who are fifty or older, according to the data.

The challenge for banks is clear: they must navigate a delicate balancing act between meeting the needs of their current, older customer base while preparing for a future shaped by younger, tech-savvy consumers who are far more open to AI-driven solutions. To stay competitive, traditional financial institutions will need to move Gen AI to the front of the office, and find a way to collaborate with fintechs and co-create what Gen AI powered products will look like. 

If you want to read more about how AI is changing the role of banks, download this guide

OTAS Technologies’ Tom Doris is creating machines to do (part of) a trader’s job


Tom Doris is CEO of OTAS Technologies

What’s OTAS all about?

Tom Doris, OTAS Technologies

At OTAS we use big data analytics, machine learning and artificial intelligence to extract meaning from market data and provide traders and portfolio managers with insights that would otherwise lie hidden. Our decision support tools help traders focus on what’s important and interesting; you could say that we use machines to identify the areas that humans should be paying attention to.

I did my Ph.D. in artificial intelligence, and around 2009, it was clear to me that several of the more sophisticated hedge funds were converging on a set of approaches to market data analysis that could be unified and made more efficient and general by applying algorithms from machine learning and artificial intelligence.

Better yet, it quickly became clear that the resulting analysis could be delivered to human traders and portfolio managers using natural language and infographics that made it easy to absorb and act on. At the same time, the role of the trader was becoming increasingly important to the investment process, while the problem of executing orders was becoming more difficult due to venue fragmentation, dark pools, and HFT. So it was clear to me that there would be demand for a system that helped the trader overcome these problems.

How does leveraging artificial intelligence for trading help traders and portfolio managers make better decisions and manage risk?

Experienced traders and PMs really do have skill and insight, but as with all human skills, it is not easy to apply that skill systematically. We can leverage AI to help humans scale their investment process to a larger universe of securities, and also to ensure they apply their best practices on every single trade.

In many professions, everyday tasks are too complex for a human to execute reliably; pilots and surgeons, for instance, both rely on extensive checklists. Checklists aren’t sufficient in financial markets because hundreds of factors can potentially influence a trader’s decision, so the first problem is to find the factors that are unusual and interesting in the current situation. This is a task that AI is exceedingly good at, and it’s what OTAS does. Once we’ve identified the important factors for a given situation, machine learning and statistics help to quantify their potential impact for the human, and we use AI to generate a natural language description in plain English.
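Doris doesn’t describe OTAS’s internals, but the pattern he outlines (flag the unusual factors, quantify them statistically, then describe them in plain English) can be sketched with a simple z-score screen. The factor names, threshold, and wording below are illustrative assumptions, not OTAS’s actual method:

```python
import statistics

def flag_unusual_factors(factors, history, z_threshold=2.0):
    """Flag factors whose current reading deviates sharply from history.

    factors: dict of factor name -> current value
    history: dict of factor name -> list of past values
    Returns (name, z_score) pairs sorted by |z|, largest first.
    """
    flagged = []
    for name, current in factors.items():
        past = history[name]
        mean = statistics.fmean(past)
        stdev = statistics.stdev(past)
        if stdev == 0:
            continue  # no variation in history; z-score undefined
        z = (current - mean) / stdev
        if abs(z) >= z_threshold:
            flagged.append((name, z))
    return sorted(flagged, key=lambda item: abs(item[1]), reverse=True)

def describe(flagged):
    """Render the flagged factors as a plain-English summary."""
    lines = []
    for name, z in flagged:
        direction = "above" if z > 0 else "below"
        lines.append(
            f"{name} is {abs(z):.1f} standard deviations {direction} "
            "its recent average."
        )
    return " ".join(lines)
```

A production system would of course use richer models than a z-score, but the shape is the same: rank hundreds of candidate factors, keep the outliers, and translate the statistics into a sentence a trader can act on.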

What is compelling the increased use of artificial intelligence and big data analysis in financial services?

A basic driver is that the volume of data that the markets generate is simply too much for a human to analyze, but the more compelling reason is that AI and machine learning are effective and get the results that people want. Intelligent use of these techniques gives you a real edge in the market, and that goes to the firm’s bottom line.

How do you see artificial intelligence and big data analysis playing a role in trade execution in the future? Any predictions for 2016?

AI is going to provide increased automation on the trading desk. Execution algorithms have already automated the task of executing an order once the strategy has been selected by a trader. Now we’re seeing a big push to automate the strategy selection and routing decision process. The next milestone will be to see these systems in wide deployment, and with it will come a shift in the trader’s role; traders will have more time to focus on the exceptional orders that really benefit from human input. Also, the trader will be able to drive the order book in aggregate according to changes in risk and volatility. Instead of manually modifying each order, you will simply tell the system to be more aggressive, or risk averse, and it will automatically adapt the strategies of the individual orders.

What’s the biggest challenge in acquiring new customers in your space?

Traders have largely been neglected in recent years as regards technology that helps them to make better decisions. Even when the benefits of a new tool are clearly established, it can be difficult for the trading desks to get it through their firm’s budget. Despite the recent hype around HFT and scrutiny of trading, there’s still a lag when it comes to empowering traders with the best information and tools to support them.

Photo credit: k0a1a.net via VisualHunt.com / CC BY-SA