Strategy
When Principals Ask AI Instead Of Their Advisors

Here is a report of a panel that explored the intersection of wealth advisory and AI at the recent FWR fintech forum in Manhattan.
The panel was held at this publication’s Family Office Fintech Forum in New York City in April. It was moderated by Eduardo Arista of Holland & Knight; the panelists were Michael Horwitz-Heilbronner; Greg Kammerer, 61Holdings; Rocio Ortega, WE Family Offices; and Mark Wickersham, asseta.ai.
The new first call
The closing panel of the Family Office Fintech Forum tackled one of the day’s most provocative questions: what happens when AI becomes the first place families turn for advice? The group of panelists was deliberately diverse, chosen to represent every side of the principal–advisor dynamic.
Arista framed the stakes at the outset: the question is no longer whether principals are using large language models and generative AI to evaluate investments, assess legal positions, or pressure-test their advisors’ recommendations. They are. The question is what that means for advisory relationships, and whether the ecosystem is adapting fast enough.
Arista then asked each panelist to share how principals and families are already using AI as a first resort, grounding the discussion in practice rather than theory. Examples ranged from principals drafting “first-pass” diligence questions and investment memos, to pressure-testing legal and tax scenarios before bringing them to counsel, to using AI to benchmark an advisor’s recommendation for completeness and clarity.
Failure modes: hallucinations, privilege, and the unlearning problem
The discussion moved to the risks that emerge when families rely on AI without appropriate guardrails. The panel focused on three distinct failure modes. First, hallucinations and confident-sounding errors: generative AI tools can produce outputs that appear authoritative but are factually wrong, and principals without deep domain expertise may not detect the mistake.
Second, the potential destruction of attorney-client privilege: when a principal feeds sensitive legal, tax, or trust information into an AI tool, protections that would otherwise attach to communications with counsel may be irretrievably waived. Third, the “unlearning” problem: the difficulty advisors face in correcting misconceptions a principal has already internalized from an AI-generated response.
The unlearning problem prompted one of the panel’s sharpest observations: correcting a confident, AI-informed client requires a different skill set than traditional advisory work, one that combines technical authority with the ability to reframe a question the client believes has already been answered.
When AI and a trusted advisor conflict
The panel then turned to a scenario that is becoming increasingly common: a principal receives one answer from a large language model and a different answer from a trusted advisor. How should a family office navigate that conflict? The discussion surfaced the need for a practical escalation framework – a shared understanding of when AI output should be treated as a prompt for further inquiry and when it should be set aside in favor of professional judgment.
A simple version emerged: (1) verify the inputs (what data was used, what assumptions were made, and whether any sensitive or privileged information was introduced), (2) validate the output against primary sources and domain expertise, and (3) document the decision path, including what was accepted, what was rejected, and why.
A related thread explored the moment when convenience begins to turn into misplaced authority. As AI tools become more sophisticated and their outputs more polished, the risk increases that families will treat AI-generated analysis as definitive rather than preliminary. The panel cautioned that this is particularly dangerous in complex legal and tax matters, where a technically plausible answer may be incomplete or flatly wrong.
Guardrails and governance that work
The final substantive segment focused on what families and advisors can do now. The panel addressed the practical question of how to establish guardrails that are rigorous enough to mitigate risk but flexible enough to capture AI’s genuine strengths. Topics included policies for approved AI tools and approved use cases; data boundaries that keep sensitive information out of AI systems; human-in-the-loop requirements for consequential decisions; and auditability, so that when AI contributes to a decision, the inputs, rationale, and handoffs remain traceable.
The discussion also surfaced the cybersecurity and data governance dimensions of AI adoption. Family offices that have invested heavily in protecting client data through traditional information security measures may be inadvertently creating new exposure by allowing principals and staff to interact with AI tools that lack equivalent protections. The consistency of AI-related policies across teams (investment, legal, operations, and family governance) was identified as a governance gap that many offices have yet to address.
Key takeaways
The panel closed with each participant offering a key takeaway. The throughline was consistent: AI is not going away, and family offices that ignore it risk falling behind. But adoption must be deliberate, governed, and grounded in an understanding of what AI can and cannot do. The families that will navigate this transition most successfully are those that treat AI as a tool to augment, not replace, the judgment, relationships, and institutional knowledge that define effective advisory.
The session underscored a theme that ran throughout the day’s proceedings: technology is reshaping every dimension of family office operations, but the families and advisors who thrive will be those who maintain the primacy of human judgment, trusted relationships, and sound governance.
“AI does not eliminate the need for trusted advisors. It raises the stakes for choosing the right ones.”
– Eduardo “Ed” Arista, Esq., CPA