
Addressing The Threat Of AI To Physical Security

Elizabeth Buckley and Tristan Flannery September 23, 2024


This article summarizes the discussions in a panel examining specific types of cybersecurity threats and the ways to handle and counter them.

The following article, linked to the Family Wealth Report Family Office Cybersecurity and AI Summit, is part of a series.

The editors of this news service are pleased to share this material; the usual editorial disclaimers apply. Email tom.burroughes@wealthbriefing.com if you wish to respond.

Introduction
The summit focused on the critical issue of cybersecurity within the sphere of family offices, with a special emphasis on the implications of artificial intelligence. The event gathered industry experts to discuss the integration of AI technologies in enhancing security protocols against the backdrop of increasing digital threats.

Overview of panel discussion
The panel provided insights into the evolving threat landscape posed by artificial intelligence, with a particular focus on deepfakes and their impact on physical security and executive protection. Scenarios addressed ranged from time-sensitive threats involving children to extortion and manipulation.

The panelists were Elizabeth Buckley and Tristan Flannery.
Buckley is an expert on offensive cybersecurity, emulated attacks, and security-related cyber strategies. Flannery is an expert on physical risk management, protective operations, and crisis management.

Key points addressed by Buckley:

-- Ask "why?" or "who?" one more time than you feel comfortable; people with a good-faith reason for asking for something will have patience and be able to provide a reasonable answer.
-- The fundamentals of security remain: protect your business logic and operate on the principle of least privilege.
-- Educate yourself on detection models, because solutions will emerge as the challenges evolve: benchmarking them and understanding how they work is essential to choosing the proper solutions for your office as they are launched.

Key points addressed by Flannery:
-- The importance of integrating AI detection tools into existing security protocols to enhance response times and accuracy; 
-- Training needs for executive protection teams to recognize and effectively respond to AI-generated threats; and 
-- Strategies for fostering collaboration between AI technology specialists and executive security teams to ensure seamless security management.

Discussion highlights
The discussion offered insights into the intersection of artificial intelligence and physical security, with a strong emphasis on practical solutions and preventive strategies to safeguard assets and individuals.

1. How do fakes work?
a. Things that can be faked:
i. Images; 
ii. Videos; 
iii. Voice; and 
iv. Text.

b. Types:
i. Plagiarism; 
ii. Computer-generated text; 
iii. Cheapfakes; and 
iv. Deepfakes – a false sound, image, or video designed to evade detection by the naked eye or ear while maintaining visual or audio integrity within the metadata.

c. Things that can be deepfaked:
i. Images; 
ii. Video; 
iii. Voice; and 
iv. Platforms: sora.ai, argon, DALL-E, Midjourney, Stable Diffusion, D-ID, Runway, etc.

2. How do you make a deepfake?
a. Video: creation through text-to-image generation and precise editing tools within the software; 
b. Speech: as little as three minutes for a good deepfake, with most platforms claiming strong cloning capabilities at 20 minutes; and 
c. All of these tools are open source and cheap.

3. How do you detect a deepfake?
a. Circumstantial
i. Your grandfather calls you for money...and he has been dead for a decade.

b. Modeling
i. Data input (YAML, imagery upload, whatever the medium); 
ii. Data processing: unzipping, structuring, bucketing; and
iii. Data modules. 
– Naive modelling; 
– Spatial modelling; and 
– Frequency modelling.
iv. Current detection software

Benchmarking is being done under NeurIPS and others, and GitLab brings together many strong integrated deepfake detection models in one place.
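The modeling stages above (data input, data processing, detection modules) can be sketched in miniature. The Python below is illustrative only: the function names, the variance and histogram thresholds, and the toy heuristics are assumptions standing in for trained models, not how any production detector works.

```python
# Toy sketch of the three-stage detection pipeline: input -> processing -> modules.
# Thresholds and heuristics are hypothetical placeholders for trained models.
import statistics

def load_samples(raw):
    # Stage 1, data input: accept the medium (here: grayscale pixel values 0-255).
    return [max(0, min(255, int(v))) for v in raw]

def preprocess(pixels, bucket_size=32):
    # Stage 2, data processing: structure the data into buckets (a crude histogram).
    buckets = [0] * (256 // bucket_size)
    for p in pixels:
        buckets[p // bucket_size] += 1
    return buckets

def naive_module(pixels):
    # Naive modelling stand-in: flag implausibly low pixel variance,
    # a crude proxy for the over-smooth texture of some generated imagery.
    return statistics.pstdev(pixels) < 10.0

def spatial_module(buckets):
    # Spatial modelling stand-in: flag histograms where one bucket dominates
    # (the 90% threshold is an illustrative assumption).
    return max(buckets) > 0.9 * sum(buckets)

def score(raw):
    pixels = load_samples(raw)
    buckets = preprocess(pixels)
    flags = [naive_module(pixels), spatial_module(buckets)]
    return sum(flags) / len(flags)  # fraction of modules that flagged the input

# A flat, near-constant input trips both toy modules:
print(score([128] * 100))  # 1.0
```

Real detectors replace the toy modules with trained spatial and frequency-domain models; the pipeline shape, not the heuristics, is what carries over.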

Recommendations
The panel concluded with a consensus on the need for ongoing education and adaptation of security measures to keep pace with AI advances. Going back to basics, rather than creating additional complexity, will likely reduce vulnerabilities. Recommendations were made for continuous training, investment in technology, and a proactive approach to security planning, including the use of secure and duress words.
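The "secure and duress words" recommendation can be supported with simple tooling. The sketch below is hypothetical (the word values and the helper name are invented for illustration): it checks a caller-supplied word against pre-arranged secure and duress words, treating a duress match as a silent alert rather than a rejection.

```python
# Hypothetical secure/duress word check; the words here are examples only.
import hmac

SECURE_WORD = "bluebird"  # pre-arranged: confirms the caller's identity
DURESS_WORD = "redwood"   # pre-arranged: signals the speaker is under coercion

def check_word(spoken: str) -> str:
    # hmac.compare_digest avoids timing side channels on the comparison.
    word = spoken.strip().lower().encode()
    if hmac.compare_digest(word, SECURE_WORD.encode()):
        return "verified"
    if hmac.compare_digest(word, DURESS_WORD.encode()):
        return "duress"  # proceed normally on the call, but alert the security team
    return "unverified"

print(check_word("Bluebird"))  # verified
```

The design point is that a duress word must behave identically to the secure word from the caller's side; only the security team sees the difference.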

For further information or inquiries about future events, please contact tristan@presageglobal.com or elizabeth.buckley@praetorian.com.

About the panelists

Elizabeth Buckley is a security consultant with over 15 years of experience in offensive targeting and operations, having worked extensively in both the intelligence community and the private sector. Her career includes positions at the United Nations, in federal law enforcement, and at various commercial enterprises. Most notably, Elizabeth's expertise was honed at the CIA, where she specialized in technical operations, focusing on high-value targets and long-term targeting strategies.

Tristan Flannery, partner, risk management, family offices, at Orbital Risk, steers a global risk management firm serving a diverse clientele that includes Fortune-ranked companies, family offices, and government entities. The team's expertise spans corporate and national security dimensions.
