The Expanding Oversight And Audit Universe: Technology, AI, And Complete Coverage

One of the panels at a recent FWR fintech summit for family offices, held in New York, examined how the oversight and audit functions that family offices must contend with are being affected, and possibly revolutionized, by AI and related technologies.
The panel, The Expanding Oversight and Audit Universe: Technology, AI, and Complete Coverage, was part of the Family Wealth Report Family Office Fintech Forum 2026. It was moderated by Anna Garcia, founder and managing partner of Altari Ventures, who is also the author of this account. The panelists were Ralph Nanad, CEO and founder of 10clear, and Sabrina Palme, CEO and co-founder of Palqee (more biographical detail on the panelists appears at the end of this article).
My firm, Altari Ventures, invests in enterprise fintech startups modernizing institutional finance infrastructure. I'm particularly drawn to problems that absorb enormous human capital and carry outsized regulatory risk. Oversight and audit fall exactly into this category: not glamorous but essential. And right now, they are undergoing a profound, technology-driven transformation.
This is relevant whether you're running a highly-regulated financial services firm, a public company, or a family office looking to apply best-in-class tools and practices to maximize transparency for your stakeholders.
Here are the key takeaways from the panel:
The definition of audit is expanding – in every direction
Until recently, audit focused on financial and operational reviews and was constrained by data availability and human capacity. Now, more granular data is available than ever and complete monitoring is within reach – bringing deeper accountability for everything happening across the business. AI is simultaneously introducing its own oversight requirements, to ensure it doesn't hallucinate. Those Nanad referred to as "holding the pen" – signing off on representations – face a scope of responsibility that has never been broader.
Palme framed this as a governance readiness problem. Institutions spent 2024 experimenting with AI and 2025 committing to implementations with measurable ROI. Many have now reached the implementation phase only to find that their governance infrastructure wasn't architected for what they wanted to deploy. Organizations faring best treated AI governance as a co-equal priority alongside AI implementation from the start.
Nanad introduced a useful distinction between capital-A "Assurance" – third-party attestation that isn't going away – and lowercase "assurance": the everyday contract of confidence between an audit preparer and a user. AI disrupts that contract. Once it enters the workflow, the question becomes: do you still trust your own output? Accuracy is non-negotiable. The moment errors appear even in something small, trust erodes – and AI may be out the door.
What accuracy actually means
Significant grey areas exist across rules and regulations, from GAAP accounting to the interpretation of broader regulatory requirements and their translation into internal policy – so accuracy can mean materially different things at different organizations.
Nanad's advice was direct: ask vendors for evaluations. You'll encounter the full spectrum – from no measurement whatsoever to claims of 100 per cent accuracy. A vendor who tells you they're 96 per cent accurate gives you something actionable: build your governance processes around that figure and concentrate human review on the 4 per cent that carries genuine risk.
Palme took it further: in generative AI applied to financial services, accuracy is inherently context- and organization-specific. The right question isn't a vendor's self-assessed accuracy rate – it's whether the AI is behaving within the range of expected behavior acceptable for your own policies, risk appetite, and operating context. That framing brings AI oversight into alignment with the same validation logic already applied to human decisioning, which is precisely where regulators are heading.
Regulators are sharply focused on incorporating AI into their own processes as well; the SEC, for instance, has recently mandated evaluating AI use across its operations. The practical implication for those facing audits: your auditor will hold you accountable for the outputs you deliver, and the regulator will hold them accountable for the rigor applied in reviewing those outputs. The answer they'll be looking for is processes, controls, and governance. Palme noted that regulators are actively moving away from accepting random sampling as sufficient – the 1 per cent to 5 per cent evaluation coverage standard that human-led processes have relied on for years will no longer hold.
Speed, scope, and quality – all three are achievable. Palme described Palqee's "optimized human oversight" model: rather than validating every AI output (which defeats the ROI case) or sampling randomly (which leaves risk undetected), the system flags only interactions that fall outside existing policy frameworks for human review.
Coverage improves, edge cases get surfaced, and the control framework strengthens iteratively over time. Nanad's framing was simpler: the old discipline was "measure twice, cut once." With AI, you can measure a thousand times and cut once. Both panelists agreed that all three objectives are well within reach with today's technology.
What's next for 10clear and Palqee?
Nanad is focused on extending 10clear beyond GAAP financial statements to the full universe of corporate reporting – internal management reports, board presentations, financials – generated at the click of a button with compliance verification built in. Palme's attention is on a market shift already well underway: from measuring AI performance through technical metrics alone toward evaluating AI behavior against an organization's own policies, controls, and risk appetite. The companies helping institutions navigate that transition are building exactly what the market needs right now.
Closing thought
AI offers organizations a compelling path to significantly expanding oversight coverage, reducing compliance costs, and surfacing risks that sampling-constrained processes have historically missed.
The governance infrastructure to support this transformation is being built right now – and family offices have a timely opportunity to put best-in-class technology to work across their operations.
About the panelists
Anna Garcia (pictured below) is the founder and managing partner of Altari Ventures, a New York-based early-stage enterprise fintech fund. Prior to starting Altari Ventures, Anna was a B2B SaaS investor at Runway Venture Partners, a successful angel investor and a long-time Wall Street executive. Over the course of her 17-year banking career, Anna held senior investment banking and asset management roles at Merrill Lynch, Jefferies and JP Morgan. Anna has been actively engaged in the fintech ecosystem since 2013 and sees enormous, continuing opportunity in backing companies that modernize financial services infrastructure, the functioning of the capital markets, and institutional processes in financial services and other industries. Altari Ventures brings together and leverages all of Anna's professional experience, knowledge and networks to build and support a unique growth portfolio aligned with these views.
Ralph Nanad (pictured below) is the CEO and founder of 10clear, an AI-native financial statement suite for CFOs, accounting firms, and regulators. Ralph spent a decade helping finance functions navigate risks while enabling growth, and has built 10clear to bring AI-driven technology to financial reporting compliance.
Sabrina Palme (pictured below) is CEO and co-founder of Palqee, where she led the development of Palqee Prisma, an independent AI oversight layer used by financial institutions to strengthen assurance over investigation narratives, contact center decisions, and AI-supported processes by systematically evaluating policy adherence and decision quality. A data scientist by background, she combines deep expertise in AI governance, data protection, and information security with hands-on experience designing production-grade oversight for financial institutions, ensuring that both human-led and AI-supported processes operate consistently and in line with internal policies and risk standards.