Due to data fragmentation, finding the right information is a pain for banking employees
- Finding the right information is difficult for bank employees, who either lack access or have to sift through information from multiple sources.
- But Generative AI may be able to help, if a few problems can be worked out first.

To help customers, banking employees need to find the right information quickly. But recent research indicates that silos within the banking environment as well as dated technology make this difficult.
16% of bank employees are expected to search seven or more sources to do their jobs. Needing to search multiple places, getting unreliable information from their FI’s intranet, or not knowing where to look in the first place all add to the time it takes employees to complete a task. At least 73% of all employees reported that they don't know where the information is stored or that they don't have permission to access it.
An important part of getting the job done for these employees is finding the right information. But employees in the banking industry report that around half of the information they find is irrelevant to the task at hand, and some fail to find urgently needed information on a weekly basis.
Hey, AI can help (?)
Generative AI may take some time to be ready for consumer-facing applications in financial services, but improving employees’ search experience is a use case with no regulatory strings attached. AI in general can search through vast amounts of information in a short span of time, and some companies already offer enterprise search solutions that enable this kind of company-wide search.
The real value addition from Generative AI lies in summarizing the information found, as well as sifting through unstructured forms of data like emails, reports, and news articles. This ability to summarize and rearrange information into consumable portions may prove very useful for those working in wealth management, investment banking, and even the customer service areas of retail banking.
For example, Morgan Stanley’s Wealth Management unit is leveraging OpenAI’s Generative AI capabilities to give its Financial Advisors access to an internal, employee-facing chatbot. The chatbot will allow employees to “ask questions and contemplate large amounts of content and data, with answers delivered in an easily digestible format generated exclusively from MSWM content and with links to the source documents,” according to the bank’s website.
Similarly, the bank is also changing how its customer service agents access information, by providing automated notes that enable agents to take different actions during a call. This lets agents skip tedious document and information searches, reducing wait and call times for customers. Both of these implementations could cross over to retail banking.
But before banks can develop Generative AI solutions on a larger scale, issues like GenAI’s tendency to hallucinate incorrect or nonfactual information need to be solved. Currently, OpenAI is taking a “process supervision” approach that rewards the model each time it uses correct logical reasoning when arriving at an answer. “The motivation behind this research is to address hallucinations in order to make models more capable at solving challenging reasoning problems,” said Karl Cobbe, mathgen researcher at OpenAI, to CNBC.
Process supervision may make Gen AI more explainable and possibly better equipped to answer complex banking-related queries.
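The distinction between rewarding only the final answer (outcome supervision) and rewarding each sound reasoning step (process supervision) can be illustrated with a toy scorer. This is a minimal sketch: the per-step correctness judgments are supplied by hand here, standing in for the learned reward model OpenAI actually trains.

```python
# Toy contrast between outcome supervision and process supervision.
# The step judgments below are hand-labeled stand-ins for a trained
# reward model; this is illustrative, not OpenAI's implementation.

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: a single reward for the final answer only."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[tuple[str, bool]]) -> float:
    """Process supervision: credit for every step judged logically sound.

    Each step is paired with a correctness judgment (here supplied
    directly; in practice a reward model produces it)."""
    if not steps:
        return 0.0
    return sum(1.0 for _, ok in steps if ok) / len(steps)

# A chain of thought that reasons well but slips at the end still earns
# partial credit under process supervision, unlike outcome supervision.
chain = [
    ("The account fee is $5/month", True),
    ("Over 12 months that is 12 * 5 = $60", True),
    ("So the annual cost is $50", False),  # arithmetic slip
]
print(outcome_reward("$50", "$60"))      # 0.0 -- no credit at all
print(round(process_reward(chain), 2))   # 0.67 -- credit for sound steps
```

Because the reward signal is attached to individual steps, a model trained this way is nudged toward sound intermediate reasoning, which is also what makes its answers easier to audit.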
However, how organizations assign agency and responsibility when using AI remains an open question. If a banking employee passes on incorrect information provided by Gen AI to a customer, who is to blame? Similarly, if an employee passes on incorrect information provided by the tool to a senior, who should be reprimanded: the employee, the Gen AI model, or those who helped build the tool?
At least with older AI strategies like enterprise search tools, the employee is still in the driver's seat. Which search results they click on and how deeply they read the information provided are all decisions that shape the end result. And if something is incorrect, the chain of accountability doesn’t break: responsibility can be pinned on whoever found or synthesized the information. None of that applies to Gen AI. The algorithm searches and summarizes the information, and may add factually incorrect information when it doesn’t have enough data.
This is perhaps why Morgan Stanley’s employee-facing chatbot has a citation format, with each summary accompanied by links to its source documents. Any Generative AI search solution will likely have to include a fact-checking mechanism of this kind. If it does, it will also preserve the chain of responsibility, by holding those in charge of ensuring accuracy accountable if something goes wrong.
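The idea of pairing every generated summary with its evidence can be sketched as a simple data structure. This is a hypothetical illustration, assuming a keyword-match stand-in for retrieval; the names (`CitedAnswer`, `answer_with_citations`) are invented and not Morgan Stanley's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class CitedAnswer:
    """A generated summary that carries its source links with it."""
    summary: str
    source_links: list[str] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # An answer with no sources cannot be fact-checked,
        # so it should not be surfaced to the employee.
        return len(self.source_links) > 0

def answer_with_citations(question: str, documents: dict[str, str]) -> CitedAnswer:
    """Mock retrieval + summarization: match documents against the
    question's keywords and attach their links as citations.
    A real system would use a search index and an LLM summarizer."""
    hits = [link for link, text in documents.items()
            if any(word.lower() in text.lower() for word in question.split())]
    summary = f"Found {len(hits)} relevant document(s) for: {question}"
    return CitedAnswer(summary=summary, source_links=hits)

docs = {
    "intranet/fees.html": "Schedule of account fees and charges",
    "intranet/loans.html": "Mortgage and personal loan policies",
}
ans = answer_with_citations("account fees", docs)
print(ans.source_links)    # ['intranet/fees.html']
print(ans.is_verifiable())  # True
```

Keeping the links attached to the answer, rather than discarding them after generation, is what lets a reviewer trace any claim back to a source and keeps the chain of accountability intact.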