Technology

Confronting Misconceptions: How To Use AI Responsibly

Tom Burroughes, Group Editor, December 1, 2023

Artificial intelligence has, outside of geopolitics, arguably been the dominant technology topic of 2023, to some extent even putting ESG investing into the shade. There remain worries about what AI means for business, work and society. We interview a firm operating in the space.

Controversies around AI – arguably the dominant non-geopolitics story of 2023 for wealth management – continue. As we have seen from the corporate fist-fights at OpenAI, for example, the ways in which people think about AI, in terms of its opportunities and risks, continue to range from cheerful to gloomy. 

This news service recently interviewed Ryan Pannell, chief executive and global chair of Kaiju Worldwide, an asset manager specializing in AI-powered investments and issuer of the New York-listed BTD Capital Fund. We explore what he thinks of popular conceptions about AI.

Family Wealth Report: You talk about the “control zone of AI” and how the autonomy of AI is misconceived, even exaggerated. Please elaborate on why you think people are getting AI wrong and whether such mistaken views might affect how wealth managers use or don’t use AI?
Pannell: For decades now, AI has been part of our shared social consciousness. It has been featured in books and movies, and when it is, it is always the antagonist. It can be accidentally so, as the child-like Joshua was in 1983’s WarGames, after a bored teenager who’d hacked it asked it to play what he thought was a game named “Global Thermonuclear War.” Or it can be more malevolent, like Skynet from Cameron’s Terminator, ever-present in our subconscious.

What our collective consciousness has taken from any of the myriad examples of “AI run amok” is twofold: 1) AI is incredibly powerful, and 2) AI is a threat. Given our shared mythology, it's easy to see why we struggle with the decision to hand the reins (and our wallets) over to an autonomous system. The greed centers of the brain recognize the unquestionable power of AI, while the fear centers are still connected to the threat promised by our myths.

While wealth managers don’t think the AI we’re using in the space at present is going to result in any of the outcomes movies would have us believe are possible, those outcomes are still playing in their subconscious, and so there’s real fear around autonomous AI – because autonomy is where, in our mythology, it all goes wrong. Extend that further and you get the following logic path: true AI means autonomy; autonomy means destruction; therefore AI is dangerous and irresponsible. Which is, of course, not true. It can be dangerous; it can be used irresponsibly – but only if that control zone isn’t set.

What we are educating wealth managers to understand is that AI’s autonomy always (not generally, but always) exists within the boundaries of a hard landscape. An AI system built to find volatility arbitrage opportunities and exploit them within a series of risk parameters can’t exceed those parameters any more than it can decide to trade currency pairs. It is capable of finding efficiencies within those boundaries, and of rewriting its own code to take advantage of those efficiencies, but only in pursuit of the goals it has been given.
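
To make the “control zone” idea concrete, here is a minimal, hypothetical sketch (not Kaiju’s actual system) of how a trading model’s autonomy can be fenced in by hard-coded risk parameters it cannot rewrite: the model optimizes freely inside the limits, and anything outside them simply never executes.

```python
# Hypothetical sketch only: a hard "control zone" around an autonomous
# trading model. The names and limits below are illustrative, not Kaiju's.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskLimits:
    max_position: int        # largest position the system may hold
    max_notional: float      # largest dollar exposure per order
    allowed_assets: tuple    # the only instruments it may touch

LIMITS = RiskLimits(max_position=10_000,
                    max_notional=250_000.0,
                    allowed_assets=("equity_options",))

def within_control_zone(order_qty: int, price: float,
                        asset_class: str, limits: RiskLimits = LIMITS) -> bool:
    """Return True only if a proposed order stays inside the hard landscape.
    The model can optimize freely inside these bounds but cannot alter them."""
    return (asset_class in limits.allowed_assets
            and abs(order_qty) <= limits.max_position
            and abs(order_qty) * price <= limits.max_notional)

# A volatility-arbitrage model may find new efficiencies within the zone,
# but an attempt to trade currency pairs, say, is rejected before execution.
print(within_control_zone(500, 40.0, "equity_options"))  # True
print(within_control_zone(500, 40.0, "fx_pairs"))        # False
```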

On the theme of “black box vs responsible AI” and the need for open approaches, can you give more detail on the problems with the “black box” approach? How important is it for wealth managers to be transparent about how AI fits into the value chain, so that clients know what they’re paying wealth managers for?
The problems with black box systems are only problems in applications where explainability is required. If your goal is to “make the most money” and you don’t really care how that’s done, black box generally outperforms all explainable systems. Why? Because AI doesn’t like constraints where raw performance is concerned. Want an AI driving system or flight control system to get from A to B the fastest? No problem. Want all the passengers to live and be insulated from all risk/danger? Performance will suffer, because the system needs to work within what it sees as an arbitrary rule set.

When it comes to wealth management, transparency and explainability are both important – up to a point. At Kaiju, for example, we can explain very easily the framework within which our AI systems make decisions and describe the edges of the hard landscape. But when it comes to the “why” of the discrete decision, the AI cannot explain beyond “because I had reasonable certainty it would achieve my goals.”

It’s not unlike the way our brains come to certain aesthetic conclusions. Ask someone who has just seen a painting they like or a person they find attractive “why?” and they will struggle to articulate it in a linear, logical fashion – but that doesn’t mean that they are not certain.

Wealth managers need to be able to clearly articulate how they use AI, and where the edges of the hard landscape are with respect to its use, so that investors can make informed decisions about relative risk. They do not need to walk down to the last granular point in a decision tree which is not linear in the first place.

To what extent is AI likely to be primarily about efficiencies and hence improving profit margins at wealth managers rather than about generating new solutions and sources of revenue? What revenue-generating ideas do you think AI might drive in the next few years?
There’s a symbiosis. In other words, it’s a bit of both. AI alone is awful at generating new investment ideas, because the best way it does so is black box, which, as we’ve discussed, probably isn’t ideal for most wealth managers. What it’s fantastic at is refinement.

So, as an example, let’s say we teach the machine to trade iron butterflies. We teach it how the options market functions, how liquidity works, how the options track their underlyings, and all the key criteria we are aware of that underpin a good iron butterfly setup. But we feed it all the additional market data as well, beyond what we use to run the strategy manually. We then ask it to improve the strategy. It’s bound by key rules (it’s an options strategy which is net short premium, it’s risk-defined, it needs to sell both the call and the put at the ATM strike and buy wings further out, and it profits from stagnant price and a reduction in volatility) but it has room to maneuver within them.
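
For readers unfamiliar with the structure described above, the sketch below shows the payoff at expiry of a standard iron butterfly – short a call and a put at the body strike, long a call and a put one wing-width further out. The strikes, width and credit are illustrative only, not Kaiju’s parameters.

```python
# Illustrative sketch of a standard iron butterfly payoff at expiry.
# Strikes, wing width and credit are made up for the example.

def iron_butterfly_payoff(spot: float, body: float,
                          wing_width: float, credit: float) -> float:
    """Short one call and one put at the body strike, long one put at
    body - wing_width and one call at body + wing_width, for a net credit."""
    short_put  = -max(body - spot, 0.0)
    short_call = -max(spot - body, 0.0)
    long_put   =  max((body - wing_width) - spot, 0.0)
    long_call  =  max(spot - (body + wing_width), 0.0)
    return credit + short_put + short_call + long_put + long_call

# Body at 100, wings 5 points out, 3.00 net credit collected:
for spot in (90, 95, 100, 105, 110):
    print(spot, round(iron_butterfly_payoff(spot, 100.0, 5.0, 3.0), 2))
# Profit peaks if price stays pinned at the body (stagnant price);
# loss is capped at wing_width - credit on either side (risk-defined).
```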

It then comes back with a preference to skew the iron butterfly to the put side, buy the wings at disparate strikes (one further away than the other), and buy additional long put units way down below the put wings. Why? Because those optimizations produce vastly superior outcomes to the strategy as we taught it. 
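
A hypothetical sketch of that refined shape, generalising the payoff logic above to an explicit leg list: the short put is skewed just below the money, the put wing sits further away than the call wing, and extra long puts are added well below the put wing. Strikes and quantities are illustrative, not the fund’s actual output.

```python
# Hypothetical leg list for the machine-refined variant described above;
# strikes and quantities are illustrative, not the fund's actual output.

def payoff(spot: float, legs: list, credit: float) -> float:
    """legs: (quantity, 'call' or 'put', strike); positive quantity = long."""
    total = credit
    for qty, kind, strike in legs:
        intrinsic = max(spot - strike, 0.0) if kind == "call" else max(strike - spot, 0.0)
        total += qty * intrinsic
    return total

refined_legs = [
    (-1, "put",  99.0),   # short body put, skewed just below the money
    (-1, "call", 100.0),  # short body call at the money
    ( 1, "put",  93.0),   # put wing bought further away than the call wing
    ( 1, "call", 104.0),  # call wing
    ( 1, "put",  85.0),   # extra long put units well below the put wing
]

for spot in (80, 90, 100, 110):
    print(spot, round(payoff(spot, refined_legs, 3.0), 2))
```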

It also determines whether there are better predictors of positive outcomes than the criteria we’ve been using and, if so, uses those instead, weighting them accordingly. Still loosely an iron butterfly, still following all the rules – but a completely new, superior iteration of what we taught it. And this is repeatable in cycles, across almost all strategies.

We have chatbots and the like: how in your view will the front-office experience for clients be affected by AI? Please give a few real-world use cases.
For us, not at all (we don’t use generative AI in front-office applications), but there are a number of easy-to-find use cases of generative AI implementation in financial advice and front office scenarios, many of which I’m sure will be familiar.

There are a lot of fears about AI becoming self-aware and so on. How realistic are such fears?
Entirely unrealistic. Consider that, first, the system would need to be global in nature – and AI doesn’t function globally. By this I mean that the system in question would not only have to have complete access to literally everything, but [would need] a purpose for that level of connectivity – which we would have to give it. We build AI systems and allow them autonomy to perform certain tasks. “Trade capital markets.” “Fly this plane.” “Write my essay.” But no one is going to program an AI system to “do everything,” thus there will never be the capacity for self-awareness, because each system doesn’t have any idea of what’s “out there” beyond what it can see. The capital markets trading system doesn’t know that there are systems for driving cars, or for replacing Val Kilmer’s voice in Top Gun: Maverick.

They don’t even know there’s a world or universe out there beyond what they interact with, and their autonomy doesn’t allow them to change their own awareness in any way. Even if there were a globally connected “AI of Everything,” would it be aware of itself? The “self” isn’t something AI can even recognize; when ChatGPT answers questions about the self, it’s really just trying to feed you an answer it assembled from composite parts it found on the internet – an answer it thinks you want.

There’s just no link between A and B when it comes to sentience. AI is a hell of a mimic – and it may learn to mimic people effectively if instructed to do so for a specific purpose – but it’s not going to do that autonomously.
