Key considerations when evaluating a data aggregation platform
- Data aggregation platforms differ in their approaches to financial data.
- Data reliability and cleanliness are good starting points when evaluating a platform.
Our last article addressed how modern financial apps are powered by user data. Access to financial data drives innovation, as evidenced by new digital banks and new fintech apps.
A new whitepaper by Fiserv outlines five categories of evaluation criteria for selecting a data aggregator: data reliability and cleanliness; coverage of institutional sources and data type handling; platform adaptability and support for innovation; security practices; and privacy, transparency, and reliability.
When evaluating a data aggregation platform, it’s important to start with a look at data reliability and cleanliness.
Data reliability and cleanliness
Reliability refers to a platform’s ability to maintain continuous, live connections to data sources and to correctly extract data. Cleanliness addresses data integrity, as well as efficient, easy-to-handle data structures.
Data reliability shortcomings are a death knell for financial apps. If assembling a complete picture of a customer’s information requires multiple attempts, or if data pulls return incomplete results, higher abandonment and churn will follow.
When considering reliability, fintechs will benefit from evaluating it in context. Data integrators’ performance is affected by regular infrastructure changes and application updates at thousands of financial institutions. This is one sector where “uptime” (i.e., the data access success rate) is measured in the low to mid 90 percent range. (The actual uptime of aggregators’ internal systems is closer to Six Sigma rates and is measured separately in SLAs.) Because integration platform providers depend on FIs as data sources, they differentiate on consistency and responsiveness in repairing broken data connections.
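To make the “uptime” metric above concrete, here is a minimal sketch of how a data access success rate might be computed across connection attempts. The institution names, statuses, and figures are invented for illustration and do not come from the whitepaper.

```python
# Hypothetical sketch: computing a data access "uptime" rate across
# pull attempts to financial institutions. All data is invented.

def access_success_rate(attempts):
    """Return the share of data pulls that completed successfully."""
    if not attempts:
        return 0.0
    successes = sum(1 for a in attempts if a["status"] == "ok")
    return successes / len(attempts)

pulls = [
    {"institution": "Bank A", "status": "ok"},
    {"institution": "Bank B", "status": "ok"},
    {"institution": "Bank C", "status": "error"},  # e.g., the FI changed its login flow
    {"institution": "Bank D", "status": "ok"},
]

print(f"Data access success rate: {access_success_rate(pulls):.0%}")  # 75%
```

A rate in the low-to-mid 90s across millions of daily pulls is what distinguishes this metric from the near-perfect uptime of an aggregator’s own servers.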
According to Kevin Hughes, senior product manager, Data Aggregation Services at Fiserv, “The real question is ‘If something’s broken, how fast can you get it fixed?’” Lowell Putnam, head of partnerships at Plaid, believes data aggregation is in essence a “maintenance business.” He asserts that what matters is, “What percentage of your code base is built to monitor your own code?”
Hughes also cites the depth and proactiveness of FI relationships as a reliability indicator.
“Banks may call, advise that changes are coming and provide a test account to assess disruption. We can respond with script or API call changes, and 80 to 90% of the time make fixes in less than 24 hours. For clients with millions of users, even small percentage improvements in a platform’s data access is meaningful to the end user experience.”
The reported dispute between PNC Bank and the P2P payments app Venmo highlights how critical stable data connections are. Citing “security enhancements,” PNC Bank blocked aggregator access to its customers’ accounts, cutting off the flow of information to Venmo’s end users via its integration platform. The nature of access disputes varies, but frictions between aggregators and banks exist, and until the industry defines clear terms of access, good relationships help facilitate resolution.
Fintechs expect their data fast. But Bob Sullivan, president of FinancialApps, suggests that they underestimate a related factor – cleanliness. “Whether data is aggregated through scraping, direct connection, or open APIs like those supported by the Financial Data Exchange, it needs structuring to make it consumable. Companies’ data sets may include legacy fields. Either the integrator needs to reorganize their data structures or the burden falls to fintechs to strip out this excess.” This adds a technical and costly data preparation step.
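A minimal sketch of the kind of data preparation Sullivan describes: stripping legacy fields an aggregator passes through and normalizing the remainder into a consistent structure. The field names and values here are invented for illustration.

```python
# Hypothetical sketch of the data-preparation step described above:
# dropping legacy fields and normalizing types for downstream use.
# Field names are invented, not from any real aggregator.

LEGACY_FIELDS = {"legacy_branch_code", "mainframe_record_id"}

def clean_transaction(raw: dict) -> dict:
    """Strip legacy fields and normalize types so the record is consumable."""
    cleaned = {k: v for k, v in raw.items() if k not in LEGACY_FIELDS}
    # Normalize the amount to a number and trim whitespace from text.
    cleaned["amount"] = float(cleaned["amount"])
    cleaned["description"] = cleaned["description"].strip()
    return cleaned

raw = {
    "amount": "42.50",
    "description": "  COFFEE SHOP  ",
    "legacy_branch_code": "0047",
    "mainframe_record_id": "A9X",
}
print(clean_transaction(raw))  # {'amount': 42.5, 'description': 'COFFEE SHOP'}
```

When the integrator does not do this reorganization, each fintech client ends up maintaining its own version of this preparation layer, which is the cost Sullivan is pointing at.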
The ease of using the delivered data is also affected by the design of the calling APIs. Platforms should encourage prospective clients to experiment with their API set before committing.
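One way such experimentation pays off is in seeing how much handling code a given response design forces on the client. The two response shapes below are invented for illustration and do not represent any real platform’s API.

```python
# Hypothetical sketch: comparing client-side effort for two invented
# API response designs delivering the same account balance.

# Design A: the balance is delivered directly at the top level.
response_a = {"account_id": "123", "available_balance": 250.00}

# Design B: the balance is buried in nested, list-valued metadata.
response_b = {
    "account": {
        "id": "123",
        "balances": [
            {"type": "current", "value": 300.00},
            {"type": "available", "value": 250.00},
        ],
    }
}

def balance_a(resp):
    return resp["available_balance"]

def balance_b(resp):
    # More handling code: search the list for the right balance type.
    for b in resp["account"]["balances"]:
        if b["type"] == "available":
            return b["value"]
    raise KeyError("no available balance")

print(balance_a(response_a), balance_b(response_b))  # 250.0 250.0
```

Multiplied across every field and endpoint an app consumes, this difference in handling code is exactly what a pre-commitment trial of the API set reveals.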