Community banks face mounting pressure to move away from screen scraping, but APIs are far from perfect. Should we panic?
- Community banks receive complaints about logins at odd hours caused by screen scraping activity.
- While innovative solutions are helping banks catch up, much about the API-first ecosystem has yet to be decided.
Screen scraping isn’t as widely used anymore, but in some parts of the financial ecosystem it is still integral to everyday business. This is because smaller institutions like community banks have been unable to keep pace with large FIs. To find out why community banks feel the need to move away from screen scraping and what is slowing them down, we dive into the cases of two community banks, each at a critical point in its journey toward an API-first ecosystem.
Spotlighting community banks’ experience with screen scraping
Banks have had to keep pace with a lot of technological change over the past few years, but the advent of fintech and growing consumer demand to connect bank data with apps have created even more issues:
i) Broken links: Community banks currently participate in data sharing through independently established screen scraping connections. When these connections break, consumers often assume the fault lies in the bank’s architecture when it may in fact lie elsewhere.
ii) Fragmentation: Over 150 years old and with $12.3 billion in assets, Illinois-based Busey Bank has yet to begin its transition toward an API-first strategy. The bank’s SVP of technology and business systems, Randi Potter, said that the bank currently has ten customer-facing platforms. This means consumers go through ten different logins before they can view their entire banking relationship. Eliminating fragmentation means offering a single platform that provides a holistic view of a customer’s financial accounts, and Potter thinks the bank has a strategic advantage in building that 360-degree view for its customers.
iii) Limited field of vision: Similarly, IncredibleBank, based in Michigan and Wisconsin with $1.7 billion in assets, has begun its transition to APIs. The bank found that screen scraping limits its visibility into customer issues, which in turn hampers effective troubleshooting.
IncredibleBank’s director of digital and innovation, Philip Suckow, said that screen scraping was creating issues for its customers too:
“Many providers that leverage screen scraping would login during the middle of the night and our customers would receive security alerts and unrecognized activity at odd hours. This would result in increased customer fears related to the security of their information and data,” he said.
Suspicious logins in the middle of the night, broken links, and clunky customer experiences aside, customers do want access to their data, and the ability to transfer it as they please.
iv) Data portability: Research by the Financial Data Exchange (FDX) shows that most consumers favor the ability to easily take their data from a service or institution and transfer, or port, it elsewhere.
Suckow has noticed similar demand among IncredibleBank’s customers, which he hopes to meet by leveraging API-based technology. The FI expects its quality of service to improve after such a move because APIs enable a more robust, reliable, and secure customer experience.
One thing is clear, though: community banks are beginning to prioritize customer-centric values like improving data portability, good customer experiences, privacy, and security. At the same time, it is important to remember that the issues highlighted by IncredibleBank and Busey Bank are not unique to them but endemic to all legacy banking architecture. Unfortunately, even though many banks face problems due to screen scraping, not nearly enough of them are able to access APIs.
Making it to the finish line with the help of APIs
The economics of overhauling legacy architecture for the sake of API support is often considered the biggest roadblock community banks face: most banks have neither extensive engineering teams nor the budget to build them, so economics may be core to the conversation.
Scott Weinert, CTO and co-founder of Atomic, a provider of API-enabled income and employment data verification services, underscored the role of economics by sharing his experience with Unbill (now Biller Direct): “I remember it wasn’t until we reached a tipping point that we could even have a conversation with AT&T and say: this is the upside to going through the API approach. This is how many users we’re sending through, how much the potential revenue is going to be, and what they could gain by working with us and doing a partnership. So, I think it was all about economics. That’s what drives the migration from screen scraping to API first.” Busey Bank emphasized the depth of the problem Weinert pointed out, adding that it often has to do business with vendors that aren’t ready to go API-first themselves. When this happens, the bank has had to partner with those vendors or sit in year-long development queues.
To make up for the lack of an extensive engineering team, IncredibleBank’s strategy is to partner with technology providers like Jack Henry, which walk banks through a streamlined process for API-based support. While it is reassuring to see community banks work around so many roadblocks, questions remain about how many options are actually available on the market and how aware community banks nationwide are of technology providers like Jack Henry and Akoya, a data sharing network. The reason community banks continue to soldier on is that demand for customer-centric values like portability, minimization, security, and privacy is only building.
Hopefully, community banks will achieve these goals when they are finally done transitioning to APIs. Right?
Making APIs ready for communities and community banks
Moving away from screen scraping and toward APIs often means moving away from credential sharing itself. Instead of using customers’ credentials to log into accounts and scrape data, tokenized architecture, strengthened by OAuth, is beginning to take center stage. This method avoids accessing and storing users’ passwords by authenticating them through unique tokens.
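To make the contrast concrete, here is a minimal, purely illustrative Python sketch (not any bank’s or vendor’s actual implementation; all names, credentials, and balances are invented) of the difference between the two models: under screen scraping the aggregator must store and replay the user’s real password, while under a tokenized, OAuth-style flow the user authenticates directly with the bank once and the aggregator holds only an opaque, revocable token.

```python
import secrets

class Bank:
    def __init__(self, users):
        self._users = users   # username -> password, kept bank-side only
        self._tokens = {}     # opaque token -> username

    # Screen-scraping path: the caller must present the raw password.
    def login(self, username, password):
        return self._users.get(username) == password

    # Tokenized path: the user consents once, directly with the bank,
    # and the bank hands back an opaque token instead of the password.
    def authorize(self, username, password):
        if not self.login(username, password):
            return None
        token = secrets.token_urlsafe(32)
        self._tokens[token] = username
        return token

    def fetch_balance(self, token):
        if token not in self._tokens:
            raise PermissionError("invalid or revoked token")
        return {"user": self._tokens[token], "balance": 1250.00}

bank = Bank({"alice": "hunter2"})
token = bank.authorize("alice", "hunter2")  # consent happens at the bank
data = bank.fetch_balance(token)            # aggregator never sees "hunter2"
print(data["user"])
```

The key property is that the token is a random string with no relationship to the password, so the aggregator can be given data access without ever holding a credential that unlocks the whole account.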
Similarly, for Atomic the best way to ensure the firm doesn’t endanger consumers’ privileged information is to never have it in the first place. Weinert elaborated that the company follows a “least privileged approach”, a cybersecurity concept that posits:
“Every program and every user of the system should operate using the least set of privileges [information] necessary to complete the job.”
Limiting access improves data privacy and curbs the amount of data that can be collected. When it comes to data minimization and privacy, an API-based ecosystem elevates data sharing far above screen scraping.
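A toy sketch of how scoped access enforces this least-privilege idea in practice (the scope names, fields, and values below are hypothetical, not Atomic’s actual API): the data consumer is granted only the scopes it needs, and the API returns only the fields those scopes cover, so sensitive fields are never shared at all.

```python
# A full bank-side record; only scoped slices of it ever leave the bank.
FULL_RECORD = {
    "balance": 1250.00,
    "transactions": ["coffee -4.50", "rent -900.00"],
    "ssn": "xxx-xx-xxxx",
    "salary": 58000,
}

# Each granted scope unlocks a specific set of fields and nothing more.
SCOPE_FIELDS = {
    "balances:read": {"balance"},
    "transactions:read": {"transactions"},
    "income:read": {"salary"},
}

def fetch(granted_scopes):
    """Return only the fields covered by the granted scopes."""
    allowed = set()
    for scope in granted_scopes:
        allowed |= SCOPE_FIELDS.get(scope, set())
    return {k: v for k, v in FULL_RECORD.items() if k in allowed}

# An income-verification service requests salary data and nothing else:
minimal = fetch({"income:read"})
print(sorted(minimal))  # ['salary'] -- the SSN and transactions never leave
```

Contrast this with screen scraping, where logging in with the user’s password exposes everything the user themselves can see.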
But is that enough? Screen scraping doesn’t seem like the kind of practice we want to raise to a pedestal. If community banks want to meet their communities’ demands, they will have to investigate APIs a little deeper.
The question of values
Both APIs and screen scraping are methods of sharing and collecting data, and inherent to data collection is the problem of missing information. When dealing with missing data, data aggregation firms like Atomic leverage their bird’s-eye view of the entire data set to extrapolate and infer critical information about consumers. However, it was unclear what mechanisms were in place to inform consumers of this practice.
“We have a pattern of saying what we’re going to do, and what we’re not going to do. Say their salary is not present in the data, we will use pay stubs to calculate that salary. We told the user that we’re going to report on their salary, right? Whether it’s calculated and inferred or not, we don’t necessarily make a distinction between the two. I guess we could tell the user afterwards that: hey, your salary wasn’t found,” Weinert said.
“In that case, it’s up to our customer to build an experience when they need to confirm a derived field. We leave that up to them because it depends on the use case. Inside of our user experience, they’ll take the data and decide what they need to confirm with the user or not.”
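The handoff Weinert describes could be sketched as follows; this is a hypothetical illustration (the field names, the biweekly-annualization rule, and the confirmation flag are all invented for the example), showing an aggregator tagging each field as reported or derived so the downstream client can decide what to confirm with the user.

```python
def build_report(raw):
    """Return salary with provenance; derive it from pay stubs if absent."""
    if raw.get("salary") is not None:
        return {"salary": raw["salary"], "provenance": "reported"}
    # Hypothetical inference rule: annualize the average biweekly pay stub.
    stubs = raw["pay_stubs"]  # e.g. recent biweekly gross amounts
    estimate = sum(stubs) / len(stubs) * 26
    return {"salary": round(estimate), "provenance": "derived"}

report = build_report({"salary": None, "pay_stubs": [2000.0, 2100.0]})
print(report["provenance"])  # "derived"

# The client app, not the aggregator, decides whether a derived field
# needs to be confirmed with the user.
needs_confirmation = report["provenance"] == "derived"
print(needs_confirmation)
```

The provenance tag is the crux: without it, a consumer has no way to tell an observed fact about them from an inference.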
From the consumer’s perspective, the handoff described above can make it confusing to tell who knows what about you, and what to do if that information is wrong.
Another issue is that while APIs make granting access very easy, revoking access can be trickier. Currently, providers like Akoya help banks display active connections on their dashboards.
They’ve structured their practices this way because the firm designates the bank as the locus of connection. According to the firm’s executives, “the customer should be in control of managing their data sharing relationships and be provided the tools to do so. We believe that consumer consent and control should occur at the financial institution – where the data is being sourced and where the consumer relationship is maintained.”
Similarly, Weinert elaborated that Atomic “provides flows free to revoke access. But we also rely on our customers to optimize those flows, because we’re an SDK, right? We get embedded inside of other products. And if our customer wanted to hide those flows, not that they would, we have great customers that I don’t think would do that. But if they wanted to, they could.”
Another point of contention is passwords. Here Weinert holds a contrarian view: he believes passwords can be more efficient than tokens at ensuring user autonomy. Since screen scraping relies on credentials to log into accounts and collect data, if users change or reset their passwords, access to those accounts is easily revoked. Tokens have no such built-in kill switch. While Atomic ensures consumer consent, Weinert stressed the importance of doing so for all organizations, because token-based access cannot be revoked simply by changing credentials. “Unless the business gives them the ability to revoke that token, it can stay active indefinitely, and continually share their data without permission,” Weinert said.
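Weinert’s point can be illustrated with a small, purely hypothetical sketch: a password change implicitly cuts off a screen scraper, but a previously issued token survives the change and stays live until it is explicitly revoked.

```python
import secrets

class Account:
    def __init__(self, password):
        self._password = password
        self._tokens = set()

    def scrape(self, password):       # screen-scraping access
        return self._password == password

    def grant_token(self):
        token = secrets.token_urlsafe(32)
        self._tokens.add(token)
        return token

    def api_access(self, token):      # tokenized access
        return token in self._tokens

    def change_password(self, new_password):
        self._password = new_password  # note: tokens are untouched

    def revoke(self, token):           # only explicit revocation works
        self._tokens.discard(token)

acct = Account("hunter2")
token = acct.grant_token()
acct.change_password("correct-horse")
print(acct.scrape("hunter2"))   # False: the password change cut off scraping
print(acct.api_access(token))   # True: the token is still live
acct.revoke(token)
print(acct.api_access(token))   # False: access ends only on revocation
```

This is why revocation tooling, like the dashboards Akoya helps banks provide, matters as much as the token architecture itself.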
These comments are reminiscent of charges made against screen scraping by Lauren Saunders of the National Consumer Law Center, who testified that data aggregators could continue to access consumers’ data even after they deleted the app. In other words, while a straightforward API approach would allow organizations to access data in a more targeted and robust manner, its ability to guarantee consumer data privacy is debatable.
It’s not a standard
On top of figuring out who is responsible for what, the industry is still trying to decide how much standardization needs to take place before efficiency and values like data minimization, data privacy, and security can be guaranteed. While the Financial Data Exchange is working on getting the industry onto a single API standard, Weinert personally prefers frameworks and architectures over standards, even though he acknowledges the importance of standards like OAuth. This is because standards run the risk of trapping organizations in a cycle of compliance efforts rather than moving forward and trying new things.
In this view, imposing a standard around the actual design of the API itself is an “anti-pattern” and bad practice. “Everyone should be free to iterate on the design of their API. Standards should exist at the layer of communicating how your API authenticates and what the data structure is. For example, the standards should be in the areas of security and the limitations of your API,” Weinert elaborated.
Many details still need to be hammered out in API-based approaches, but the lack of clarity on these issues shouldn’t prevent organizations from moving away from screen scraping.
This is because issues like failing to ensure consumers can easily revoke consent or correct wrong information about them are not issues with screen scraping alone. These problems are endemic to the current digital infrastructure, which has evolved on the assumption that data is not the result of consumers’ work and behavior but a free-for-all mineable resource.
Since these issues are not caused by screen scraping alone, cutting out screen scraping is not going to solve them. Nor is simply going API-first. Since industry attitudes about the value and nature of data have yet to budge, modern technologies like APIs will continue to struggle to deliver a customer experience and data security that is worth putting on a pedestal.