As AI transforms sectors such as banking, wealth management and financial services, accurate, well-governed data is essential. If "garbage" goes into the system, the results will be garbage too.
Joe Stensland, chief executive of BridgeFT, a technology architecture firm, writes on the importance of accurate data at a time when artificial intelligence is on the march. If AI is only as accurate and useful as the data available to it, then mistaken and partial information is going to be a big problem. The editors are pleased to share these views; the usual disclaimers apply. Email email@example.com
From the dawn of computing, programmers and technologists have known that the greatest threat to the power of data is GIGO – Garbage In, Garbage Out. In essence, if the data you input is flawed, your output is going to be inherently flawed as a result.
What was an issue for keypunch operators in the 1960s has been magnified a millionfold in the era of artificial intelligence.
There’s no doubt that AI and machine learning have transformed the technology landscape and accelerated innovation and development within wealthtech, and we are just in the early innings. AI goes beyond popular programs such as ChatGPT. Rather, AI has accelerated programming, dramatically shortening time to market for fintech companies. It has also improved worker productivity, expanding advisors' and executives' ability to manage their practices, with new programs for client communication scripting, operational workflows, and regulatory compliance oversight. All of that has contributed to the rapid proliferation of wealthtech solutions.
Yet, amid all the excitement over the potential, we’ve also seen colossal AI failures: the attorney who discovered that relying on ChatGPT meant the program had fabricated citations to legal cases that never existed; the AI-powered robot that couldn’t pass the basic entrance exam for admission to the University of Tokyo. The list of gaffes grows daily.
While some of these scenarios can be remedied, within the wealth industry the stakes are much higher. Problems in AI can cost firms and their clients millions of dollars and open them to both reputational and regulatory risk. This allows no margin for error – there can be no garbage in.
That makes sourcing and reconciling data more important than ever. Wealthtech AI relies heavily on data to make accurate predictions and decisions, and the quality of a model's output depends strongly on the data used to train it. Because these systems learn from historical data, flawed inputs propagate directly into flawed predictions.
Poor data quality results in inconsistent and inaccurate results for clients. Yet, harnessing critical client and investment account data remains a significant challenge for organizations of all sizes.
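The kind of data-quality screening the paragraphs above argue for can be sketched in a few lines. This is a minimal, illustrative example, not any vendor's actual pipeline; the field names (`account_id`, `market_value`, `as_of_date`) and the staleness threshold are assumptions chosen for the sketch.

```python
from datetime import date, timedelta

def validate_position(record: dict, as_of: date) -> list[str]:
    """Return a list of data-quality issues found in one position record.

    Field names are hypothetical, not a real custodial schema.
    """
    issues = []
    # Required fields must be present and non-empty.
    for field in ("account_id", "symbol", "quantity", "market_value"):
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    # A non-zero holding with a zero valuation is internally inconsistent.
    qty, value = record.get("quantity"), record.get("market_value")
    if isinstance(qty, (int, float)) and qty != 0 and value == 0:
        issues.append("non-zero quantity with zero market value")
    # Flag stale records (threshold of two days is an arbitrary example).
    reported = record.get("as_of_date")
    if reported and (as_of - reported) > timedelta(days=2):
        issues.append("stale record: older than two days")
    return issues

record = {"account_id": "A-100", "symbol": "VTI", "quantity": 50,
          "market_value": 0, "as_of_date": date(2024, 1, 2)}
print(validate_position(record, as_of=date(2024, 1, 10)))
```

Checks like these are what let bad records be quarantined before they ever reach a model, rather than discovered downstream in its outputs.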
Custodians, back-office providers and even investment companies themselves hold a range of data that drives the investment ecosystem, from positions and balances to client information and trades. The key challenge is that each custodian has its own data policies, structures, and systems, forcing fintech companies and other financial institutions to build custom programs to ingest data from each individual source.
Historically, this has caused problems with the data itself. Issues such as data lag, inaccuracies, and lost records are common when aggregating data across multiple feeds. Indeed, these issues were so commonplace that manual reconciliation, which is not an option for AI, was a way of life for far too long.
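The per-custodian ingestion problem described above amounts to mapping each source's field layout onto one common schema. A minimal sketch, with entirely hypothetical custodian names and field labels:

```python
# Each source exposes a different field layout; a per-source mapping
# normalizes records into one canonical schema. All names are illustrative.
FIELD_MAPS = {
    "custodian_a": {"acct": "account_id", "ticker": "symbol",
                    "qty": "quantity"},
    "custodian_b": {"AccountNumber": "account_id", "Symbol": "symbol",
                    "Shares": "quantity"},
}

def normalize(record: dict, source: str) -> dict:
    """Map a source-specific record onto the common schema."""
    mapping = FIELD_MAPS[source]
    return {canonical: record[raw] for raw, canonical in mapping.items()}

raw_a = {"acct": "A-100", "ticker": "VTI", "qty": 50}
raw_b = {"AccountNumber": "B-200", "Symbol": "VTI", "Shares": 25}
print(normalize(raw_a, "custodian_a"))
print(normalize(raw_b, "custodian_b"))
```

Once every feed lands in the same shape, downstream reconciliation and model training can treat the data as a single source rather than dozens of bespoke ones.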
AI systems depend on trusting that incoming data is accurate. When we rolled out the industry’s first API for multi-custodial data, we knew it was critical to provide a single access point for clean, accurate and comprehensive data drawn from many different sources, and that our proprietary aggregation, normalization and enrichment process had to deliver AI-ready data of the highest quality.
We knew that removing the need for manual reconciliation would be a key differentiator for wealthtechs looking to harness the power of AI to speed development timelines and improve product outputs.
It’s incredible to see the level of innovation wealthtech companies can achieve when their development process isn’t hindered by clunky data processes. Large language models used in generative AI thrive when it comes to analyzing and drawing conclusions from large data sets. This type of analysis can help with risk reduction, investment decisions, and rebalancing portfolios.
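Of the uses named above, portfolio rebalancing is the most mechanical, so it makes a concrete illustration. The sketch below shows the underlying arithmetic only (target weight times total value, minus current value); the function name and inputs are hypothetical, and a real system would layer constraints such as taxes and trading costs on top.

```python
def rebalance_trades(values: dict[str, float],
                     targets: dict[str, float]) -> dict[str, float]:
    """Return the dollar amount to buy (+) or sell (-) per asset class
    to restore the target allocation. Illustrative only."""
    total = sum(values.values())
    return {asset: round(targets[asset] * total - values.get(asset, 0.0), 2)
            for asset in targets}

holdings = {"stocks": 70_000.0, "bonds": 30_000.0}
targets = {"stocks": 0.60, "bonds": 0.40}
print(rebalance_trades(holdings, targets))  # sell $10k stocks, buy $10k bonds
```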
Another trend the BridgeFT team is seeing is more development of AI-powered systems that automate manual tasks such as data entry, risk analysis, and client onboarding. Any repetitive task can likely be outsourced to AI.
AI has powered many wealthtech innovations, including applications for advisor marketing, client onboarding, accounting, and billing. And the list of use cases continues to grow as AI proliferates.
Ultimately, the widespread use of AI across the wealthtech ecosystem can only help advisors, asset managers, and their clients. This puts a heightened focus on ensuring that the cleanest data from custodians is part of the AI equation. Clients’ wealth has no room for garbage.
About the author
Joe Stensland is CEO of BridgeFT, a cloud-native, API-first wealthtech infrastructure platform that enables registered investment advisors, financial institutions, and fintech innovators to deliver data-driven outcomes for their clients.