
The State Of AI In Family Offices – Why You Need To Fix Your Data Before You Buy AI

Ken Gamskjaer

30 April 2026

The following article is an account of the keynote speech delivered at this publication’s recent Family Office Fintech Forum in New York by Ken Gamskjaer, CEO and co-founder, Aleta.


Opening the Family Office Fintech Forum in New York was a privilege. It’s one of the few rooms where the people building the infrastructure of the family office world sit alongside the people who live with it every day. And this year, with AI finally moving from discussion to deployment, the conversations had a different kind of urgency.

I opened the keynote with a simple exercise. I asked how many people used AI tools regularly.

Almost every hand went up. Then I asked them to keep their hands up if their AI was connected to their actual portfolio data. Most hands came down. Finally, I asked them to keep their hands up if they had an AI workflow that ran without anyone touching it. Almost every remaining hand dropped.

That drop, from curiosity to deployment, is the real story of AI in family offices right now.

Everyone wants in. Very few are actually in.

According to Campden Wealth and RBC, 63 per cent of family offices say they want to use AI in investment reporting, 65 per cent say reporting is still too manual, and only 29 per cent use AI for it.

The majority are still sitting between experimentation and real deployment.

After more than 100 conversations with family offices over the past year, I’m convinced that the bottleneck is not AI. It is everything underneath it.

AI is only as good as the data below it. The offices making real progress aren’t the ones with the biggest budgets or the flashiest tools. They are the ones that did the boring work first: cleaning their data, structuring it, making it accessible across systems rather than locked inside each individual platform.

Most family offices today sit in one of two states. Either their data is fragmented – scattered across custodians, GPs and spreadsheets, every PE manager reporting differently – or it’s consolidated but closed, locked inside a vendor’s platform where the vendor controls the AI, not the family office. Consolidation without openness just moves the lock-in.

The leaders did something different: they solved the data problem before the AI problem, built one reconciled view across all asset classes, chose open architecture over closed platforms, and treated it as a leadership decision, not an IT project.

This matters more today than it did six months ago. Back then, the infrastructure simply didn’t exist. Three things have changed. The industry agreed on a standard: OpenAI, Google, Microsoft, AWS and Anthropic have all adopted the Model Context Protocol as the universal way to connect AI to data. Platforms opened up: Aleta, PitchBook, Morningstar and LSEG have all launched MCP integrations in the last six months, putting private market intelligence, public market data and portfolio analytics within reach of AI agents. And we moved from chatbots to do-bots.

From “what is my exposure?” to “rebalance, flag the tax implications, and draft the IC memo.” From query to command.

The family offices I see succeeding have stopped thinking about AI as a tool and started thinking about it as a stack. At the bottom: reconciled, machine-readable data across every custodian and asset class. One source of truth. In the middle: an open protocol layer that lets any AI model access that data securely. At the top: the agents themselves, handling liquidity briefings, what-if scenarios, PE document processing, IC memos. Many offices try to start at the top. The ones succeeding started at the bottom.
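The stack described above can be sketched in a few lines of code. This is a toy illustration only: the data, function names and reconciliation logic are hypothetical, not Aleta's actual schema or any real MCP integration, but it shows why the bottom layer has to come first — the tool an agent calls is only as good as the reconciled book underneath it.

```python
# Toy sketch of the three-layer "AI stack": reconciled data at the
# bottom, a tool interface in the middle, an agent command on top.
# All data and names here are hypothetical illustrations.

# Layer 1: machine-readable positions from multiple custodians.
custodian_feeds = {
    "custodian_a": [{"asset": "US Equity", "value": 4_000_000}],
    "custodian_b": [{"asset": "US Equity", "value": 1_000_000},
                    {"asset": "Private Equity", "value": 3_000_000}],
}

def reconcile(feeds):
    """Merge per-custodian positions into one view keyed by asset class
    -- the 'one source of truth' at the bottom of the stack."""
    book = {}
    for positions in feeds.values():
        for p in positions:
            book[p["asset"]] = book.get(p["asset"], 0) + p["value"]
    return book

# Layer 2: a tool function an open protocol layer (e.g. MCP) could
# expose to any model, instead of locking the data in one platform.
def exposure(book, asset_class):
    """Answer 'what is my exposure?' as a share of total portfolio value."""
    total = sum(book.values())
    return book.get(asset_class, 0) / total if total else 0.0

# Layer 3: the agent calls the tool rather than parsing spreadsheets.
book = reconcile(custodian_feeds)
print(f"US Equity exposure: {exposure(book, 'US Equity'):.1%}")
```

Starting "at the top" means writing the agent prompt before `reconcile` exists — which is exactly the failure mode described above.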

The families making progress are also not the ones with the best answers. They are the ones who started anyway, and who started with the unglamorous things:

1. Fix your data before you buy AI. 

2. Treat deployment as a leadership decision, not a tech-team side project.

3. Start with the boring, expensive processes. That is where AI pays off first, not in the impressive-looking use cases.

At Aleta, we sell the foundation AI needs to actually work – reconciled, structured data delivered through open architecture, so that families can connect their own agents to their own data and keep full control of it. Our principle is simple: be AI-ready, not AI-dependent.

The hallway conversations after the keynote were very rewarding. Offices are wrestling with the same questions: where to start, how to keep data secure, how to avoid ending up with five disconnected systems that each claim to have AI. There is no shortage of appetite. What people want is a clearer path forward.

If there was one message I hope people carried home from my keynote, it was this: AI is only as good as the data below it. Fix your data before you buy AI. Treat deployment as a leadership decision, not a tech-team side project. And start with the boring, expensive processes. That is where AI pays off first.