Technology

Confronting Misconceptions: How To Use AI Responsibly

Tom Burroughes, Group Editor, December 1, 2023

Artificial intelligence has been, outside of geopolitics, arguably the dominant technological topic of 2023, even putting ESG investing into the shade to some extent. There remain worries about what AI means for business, work and society. We interview a firm operating in the space.

Controversies around AI – arguably the dominant non-geopolitical story of 2023 for wealth management – continue. As the corporate fist-fights at OpenAI have shown, the ways people think about AI, in terms of its opportunities and risks, still range from cheerful to gloomy.

This news service recently interviewed Ryan Pannell, chief executive and global chair of Kaiju Worldwide, an asset manager specializing in AI-powered investments and issuer of the New York-listed BTD Capital Fund. We explore what he thinks of popular conceptions about AI.

Family Wealth Report: You talk about the “control zone of AI” and how the autonomy of AI is misconceived, even exaggerated. Please elaborate on why you think people are getting AI wrong and whether such mistaken views might affect how wealth managers use or don’t use AI?
Pannell: For decades now, AI has been part of our shared social consciousness. It has been featured in books and movies, and when it is, it’s always the Antagonist. It can be accidentally so, as the child-like Joshua was in 1983’s WarGames, after a bored teenager who’d hacked it asked it to play what he thought was a game named “Global Thermonuclear War.” Or it can be more malevolent, like Skynet from Cameron’s Terminator, ever-present in our subconscious.

What our collective consciousness has taken from any of the myriad examples of “AI run amok” is twofold: 1) AI is incredibly powerful, and 2) AI is a threat. Given our shared mythology, it’s easy to see why we struggle with the decision to hand the reins (and our wallets) over to an autonomous system. The greed centers of the brain recognize the unquestionable power of AI, while the fear centers are still connected with the threat promised by our myths.

While wealth managers don’t think the AI we’re using in the space at present is going to result in any of the outcomes movies would have us believe are possible, those outcomes are still playing in their subconscious, and so there’s real fear around autonomous AI – because autonomy is where, in our mythology, it all goes wrong. Extend that further, and you get the following logic path: true AI means autonomy > autonomy = destruction > AI is dangerous and irresponsible. Which is, of course, not true. It can be dangerous; it can be used irresponsibly – but only if that control zone isn’t set.

What we are educating wealth managers to understand is that AI’s autonomy always (not generally, but always) exists within the boundaries of a hard landscape. An AI system built to find volatility arbitrage opportunities and exploit them within a series of risk parameters can’t exceed those parameters any more than it can decide to trade currency pairs. It is capable of finding efficiencies within those boundaries, and of rewriting its own code to take advantage of those efficiencies, but only in pursuit of the goals it has been given.
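
To make that boundary idea concrete, here is a minimal sketch of a “control zone” in code – purely illustrative, with hypothetical limits and field names rather than anything Kaiju actually uses. The model can optimize freely inside the zone, but an order outside it never reaches the market:

```python
# Illustrative "control zone" guardrail: whatever the model proposes, every
# order passes through fixed risk checks it cannot rewrite. Limits and field
# names are hypothetical examples, not an actual production configuration.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the boundaries themselves are immutable
class RiskLimits:
    max_position: int = 1_000                     # max contracts/shares per symbol
    max_notional: float = 250_000.0               # max dollars per order
    allowed_assets: tuple = ("equity_options",)   # e.g. no currency pairs, ever

def within_control_zone(order: dict, limits: RiskLimits) -> bool:
    """Return True only if the proposed order stays inside the hard landscape."""
    return (
        order["asset_class"] in limits.allowed_assets
        and abs(order["quantity"]) <= limits.max_position
        and abs(order["quantity"] * order["price"]) <= limits.max_notional
    )

# A proposal to trade FX is simply dropped, no matter how attractive the
# model thinks it is:
print(within_control_zone(
    {"asset_class": "fx", "quantity": 500, "price": 1.08}, RiskLimits()
))  # False
```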

On the theme of “black box vs responsible AI” and the need for open approaches, please can you give more detail on the problems with the “black box” approach? How important is it for wealth managers to be transparent about how AI fits into the value chain so that clients know what they’re paying wealth managers for?
The problems with black box systems are only problems in applications where explainability is required. If your goal is to “make the most money,” and you don’t really care how that’s done, black box generally outperforms all explainable systems. Why? Because AI doesn’t like constraints where raw performance is concerned. Want an AI driving system or flight control system to get from A to B the fastest? No problem. Want all the passengers to live and be insulated from all risk/danger? Performance will suffer, because it needs to work within what it sees as an arbitrary rule set.

When it comes to wealth management, transparency and explainability are both important – up to a point. At Kaiju, for example, we can explain very easily the framework within which our AI systems make decisions and describe the edges of the hard landscape. But when it comes to the “why” of the discrete decision, the AI cannot explain beyond “because I had reasonable certainty it would achieve my goals.”

It’s not unlike the way our brains come to certain aesthetic conclusions. Ask someone who has just seen a painting they like or a person they find attractive “why?” and they will struggle to articulate it in a linear, logical fashion – but that doesn’t mean that they are not certain.

Wealth managers need to be able to clearly articulate how they use AI, and where the edges of the hard landscape are with respect to its use, so that investors can make informed decisions about relative risk. They do not need to walk down to the last granular point in a decision tree which is not linear in the first place.

To what extent is AI likely to be primarily about efficiencies and hence improving profit margins at wealth managers rather than about generating new solutions and sources of revenue? What revenue-generating ideas do you think AI might drive in the next few years?
There’s a symbiosis. In other words, it’s a bit of both. AI alone is awful at generating new investment ideas, because the way it does that best is as a black box which, as we’ve discussed, probably isn’t ideal for most wealth managers. What it’s fantastic at is refinement.

So, as an example, let’s say we teach the machine to trade iron butterflies. We teach it how the options market functions, how liquidity works, how the options track their underlyings, and all the key criteria we are aware of that underpin a good iron butterfly setup. But we feed it all the additional market data as well, beyond what we use to run the strategy manually. We then ask it to improve the strategy. It’s bound by key rules (it’s an options strategy that is net short premium, it’s risk-defined, it needs to sell both sides of the ATM strike and buy wings further out, and it profits from a stagnant price and a reduction in volatility) but it has room to maneuver within them.

It then comes back with a preference to skew the iron butterfly to the put side, buy the wings at disparate strikes (one further away than the other), and buy additional long put units way down below the put wings. Why? Because those optimizations produce vastly superior outcomes to the strategy as we taught it. 

It also determines whether there are better predictors of positive outcomes than the criteria we’ve been using, and if so, it uses those instead and weights them accordingly. Still loosely an iron butterfly, still following all the rules – but a completely new, superior iteration of what we taught it. And this is repeatable in cycles, across almost all strategies.
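
As a rough illustration of the structures described above – with hypothetical strikes and premiums, not Kaiju’s actual parameters – the payoff at expiry of a classic iron butterfly versus the skewed, tail-hedged variant can be sketched like this:

```python
# Payoff-at-expiry sketch: classic iron butterfly vs. a skewed variant with
# disparate wings and an extra far-OTM long put. All strikes and premiums
# are made-up illustration values.

def payoff(legs, spot):
    """Sum expiry payoffs. Each leg: (kind, strike, qty, premium);
    qty > 0 = long, qty < 0 = short; premium is the per-unit option price."""
    total = 0.0
    for kind, strike, qty, premium in legs:
        intrinsic = max(spot - strike, 0.0) if kind == "call" else max(strike - spot, 0.0)
        total += qty * (intrinsic - premium)
    return total

# Classic iron butterfly around an ATM strike of 100 with symmetric 10-wide wings:
classic = [
    ("call", 100, -1, 4.0),   # short ATM call
    ("put",  100, -1, 4.0),   # short ATM put
    ("call", 110, +1, 1.0),   # long call wing
    ("put",   90, +1, 1.0),   # long put wing
]

# Skewed variant in the spirit of the description: put wing set further away
# than the call wing, plus an additional long put well below the wings.
skewed = [
    ("call", 100, -1, 4.0),
    ("put",  100, -1, 4.0),
    ("call", 108, +1, 1.2),   # nearer call wing
    ("put",   85, +1, 0.6),   # further put wing (disparate strikes)
    ("put",   70, +1, 0.2),   # extra long put unit way down below
]

for spot in (60, 70, 85, 100, 110, 120):
    print(f"spot {spot}: classic {payoff(classic, spot):+.2f}, skewed {payoff(skewed, spot):+.2f}")
```

In this made-up example, both structures collect a net credit and profit most if the underlying pins the short strike at expiry; the skewed version fares worse in a moderate decline but, thanks to the extra long put, picks up convexity in a deep sell-off.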

We have chatbots and the like: how in your view will the front-office experience for clients be affected by AI? Please give a few real-world use cases.
For us, not at all (we don’t use generative AI in front-office applications), but there are a number of easy-to-find use cases of generative AI implementation in financial advice and front-office scenarios, many of which I’m sure will be familiar.

There are a lot of fears about AI becoming self-aware and so on. How realistic are such fears?
Entirely unrealistic. Consider, first, that the system would need to be global in nature – and AI doesn’t function globally. By this I mean that the system in question would not only have to have complete access to literally everything, but [would need] a purpose for that level of connectivity – which we would have to give it. We build AI systems and allow them autonomy to perform certain tasks. “Trade capital markets.” “Fly this plane.” “Write my essay.” But no one is going to program an AI system to “do everything,” so there will never be the capacity for self-awareness, because each system has no idea of what’s “out there” beyond what it can see. The capital markets trading system doesn’t know that there are systems for driving cars, or for replacing Val Kilmer’s voice in Top Gun: Maverick.

They don’t even know there’s a world or universe out there beyond what they interact with, and their autonomy doesn’t allow them to change their own awareness in any way. Even if there were a globally-connected “AI of Everything,” would it be aware of itself? The “Self” isn’t something AI can even recognize; when ChatGPT answers questions about the self, it’s really just trying to feed you an answer it made up from composite parts it found out on the internet, which it thinks is the answer you want.

There’s just no link between A and B when it comes to sentience. AI is a hell of a mimic – and it may learn to mimic people effectively if instructed to do so for a specific purpose – but it’s not going to do that autonomously.


Silicon Valley rainmaker Marc Andreessen issued a sort of manifesto about the optimistic case for AI and tech in general. He was blasted for it by some who still seem to adhere to a more precautionary approach to tech and who see a number of downsides. What’s your take?
When it comes to overall outlook, Marc Andreessen is basically my spirit animal. I agree with all of his optimism, and he’s exactly on point with respect to what AI can and can’t do, and what it might and will not do. Where we diverge (well, maybe not diverge; he may hold the same opinions and just choose to keep them to himself) is that we’re close to AI being significantly leveraged for pretty awful purposes.

As I say, it’s an excellent mimic, and advances in generative imaging technologies enable AI to mimic basically anyone, very convincingly. Because it can mimic speech, it’s capable of influence and manipulation in the online sphere in which global dialogue exists. Add generative imaging advances, generative vocalization technology advances, and speech mimicry together, and AI can basically create a very convincing “you” that does whatever the programmer tells it, and then share that “you” in a manipulated or completely invented video worldwide.

At the highest level, very shortly it will be nearly impossible to determine what’s real and what’s fake, without personal tracking data and well-cemented alibi processes. This is very scary stuff, and it’s going to be used to manipulate thought in every active sphere. THIS is where we need regulation and legal controls the most, and while there is progress in that area, it’s not happening nearly fast enough.

I genuinely want to join Marc in widespread optimism, but I worked as a contractor in the defense and intelligence community for years, and I’ve seen too much to believe this won’t go horribly in some areas, concurrent with the beautiful vision of where AI can help humanity at large in the best way that Marc articulates.

There are tons of firms seeking investment, and HNW individuals will want to surf the AI wave. What’s your general idea of the best way to tap into the AI story?
For an investor trying to decide when to jump in (or how), the way forward is relatively clinical. It’s the answer to a simple question: What are you comfortable with? If the investor is disquieted by the entire concept of AI, then there’s no need to directly seek out mechanisms for exposure. All global companies, regardless of what they do, are employing or will shortly employ AI in some capacity within their business models – and, as such, the hesitant investor will gain access to, and benefit from, AI’s substantial value proposition, whether they like it or not.

For those who have more faith in the promise this technology might bring but aren’t quite ready to put their capital pool directly into its hands, there is the opportunity for thematic exposure. Companies like Amazon, Nvidia, Microsoft, Google, and Apple are all investing billions of dollars into trailblazing novel uses for AI, while simultaneously offering extremely conservative investment profiles.

They are huge, well-run (generally), and considered by global market participants as “too big to fail.” One step down in size, but several steps up in niche, are ETFs which offer thematic exposure directly to many of these companies – plus some smaller AI-only companies – in a dynamically-managed basket of opportunity. These ETFs will, of course, present a more aggressive volatility profile, given the inclusion of the smaller companies and the singular focus on AI, but they represent a reasonable mid-point between “I’m not comfortable with AI but I know it’s here to stay,” and “I am excited about AI, but I don’t trust it enough to make end-point decisions yet.”

Finally, you’ve got firms like ours who use AI directly to make end-point investment decisions. This is going to be for the investor who is more comfortable with the bleeding edge, or who understands the technology and its application by the investment manager well enough to be comfortable with its direct implementation.

Are there any specific AI use cases/firms and applications that you find exciting and powerful?
Except for deepfakes, I’m excited about the entire promise of AI. I’m probably most excited about the increasing reduction in training cycles, and advances in cloud computing and computing power. We get answers in days not weeks, retrain in hours to days, not weeks to months, and can deploy good models so much faster than we could even a year ago. And more, higher-quality models, which learn faster, just lead to really high-quality AI.

Please can you tell me about Kaiju – when it was founded, by whom, what problems it exists to solve, and future strategy?
Kaiju was originally founded by me, and after launch I was joined by my late partner and by our CTO, David Schooley, as fellow owners. David and I have worked together for almost 10 years, first as traders, and then as CEO/CTO respectively. Kaiju was always quantitative in focus, and about five years ago we switched to AI exclusively.

I’d used AI in cryptography to assist me in building better RNG (Random Number Generation) systems and knew it could optimize what we were doing with quantitative trading. We brought in Nicholas Subryan and Dr Aitor Muguruza to build out the program we have today and turn that dream into a reality.
