Podcast Summary: Business Daily – "The Deepfake CEOs"
Host: Ed Butler, BBC World Service
Episode Date: February 23, 2026
Episode Overview
This episode explores the alarming rise of deepfake technology targeting corporate executives, particularly CEOs and CFOs. Deepfakes—AI-generated, highly convincing fake videos or audio—have become tools for sophisticated fraud schemes costing companies millions. Host Ed Butler investigates real-world cases in the UK, India, and globally, speaking to experts, affected executives, and cybersecurity professionals about the technology's rapid evolution, the difficulty of detection, and the urgent need for better regulation and collective defense.
Key Discussion Points and Insights
1. Real-World Cases of Deepfake Attacks
- Arup Fraud Case (Hong Kong, Early 2024) ([01:58])
- An employee, pseudonymously named “Joanne,” received a video call involving fake company executives (including the CFO) created using deepfake technology.
- She was instructed to transfer $25 million to five bank accounts.
- The deception was only uncovered after the transaction—highlighting how convincing and dangerous deepfakes have become.
- Stephanie Hare (Researcher): “You would never want simply to jump on a video call with someone and transfer $25 million. There would have to be a series of steps to unlock, to protect from this type of fraud. So that's the brave new world that we're in now.” ([03:26])
- Bombay Stock Exchange CEO Deepfake (India) ([03:53])
- A deepfaked video of CEO Sundararaman Ramamurthy circulated, falsely "recommending" investments.
- Immediate action was taken to report and remove the video, but the incident underscored reputational and financial risks.
- Sundararaman Ramamurthy: "My image is used as if I am giving some advice on some stocks…we immediately, when we see this, we lodge a complaint...We don't want to have any impact." ([04:35])
- The intention was to con people into joining a fraudulent WhatsApp group for supposed investment tips.
- LastPass CEO Attempted Deepfake Fraud ([05:35])
- CEO Karim Toubba describes a 2024 incident in which a deepfake audio message impersonating him attempted to trick an employee.
- The attempt was detected because it violated company security protocols: the message arrived over an unauthorized communication platform on a personal phone.
- Karim Toubba: "Forced urgency. Right. I mean, it may be a bot or it may be a deepfake, but it's targeted towards a human." ([06:35])
- LastPass requires employees to immediately notify security on suspicion, which prevented any breach.
2. The Scale and Acceleration of the Threat
- Exponential Increase in Deepfake Incidents
- Karim Toubba: "Over the last two years, we've seen almost a 3,000% increase in the number of deepfakes that have been utilized." ([07:20])
- Both the sophistication and accessibility of deepfake tools are growing, making attacks easier and more frequent.
- Difficulty in Detection and Countermeasures
- Matt Lovell (CloudGuard CEO): “Deepfakes are becoming very, very easy to do...in order to generate video and audio quality of extremely accurate specifications, it takes minutes.” ([12:17])
- Entry costs for deepfake attacks range from $500–$1,000 for basic operations to $5,000–$10,000 for more sophisticated ones. ([14:25])
- Detection software is often outpaced by advances in AI forgery tools.
- Attack vectors now span beyond corporate environments to less secure apps like WhatsApp and social platforms.
3. Cybersecurity Response and Best Practices
- Importance of Multi-Layered Verification
- Do not rely on single communication channels for high-value transactions.
- Ensure staff are trained to recognize and report suspicious messages, especially those that use urgency as a manipulation technique.
- Immediate internal reporting and incident response can avert losses.
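The verification practices above can be sketched as a simple policy check. This is a minimal illustrative sketch only: the function names, urgency markers, and dollar threshold are assumptions for the example, not a protocol described in the episode.

```python
from dataclasses import dataclass, field

# Assumed urgency phrases; real systems would use a maintained, broader list.
URGENCY_MARKERS = {"urgent", "immediately", "right now", "confidential"}

@dataclass
class TransferRequest:
    amount_usd: float
    channels_confirmed: set = field(default_factory=set)  # e.g. {"video_call", "callback"}
    message: str = ""

def review(request: TransferRequest, high_value_threshold: float = 10_000) -> list:
    """Return the reasons a request must be escalated (empty list = proceed)."""
    issues = []
    # Rule 1: high-value transfers need confirmation on a second, independent channel.
    if request.amount_usd >= high_value_threshold and len(request.channels_confirmed) < 2:
        issues.append("high-value transfer needs a second, independent channel confirmation")
    # Rule 2: urgency language is a manipulation signal; verify out of band.
    text = request.message.lower()
    if any(marker in text for marker in URGENCY_MARKERS):
        issues.append("urgency language detected; verify out of band and notify security")
    return issues
```

Under these assumed rules, an Arup-style request (a $25 million transfer confirmed only on a single video call, with urgent wording) would be escalated on both counts rather than executed.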
- Industry Reluctance and Need for Collective Defense
- Many major companies (Ferrari, WPP, Arup) are reluctant to admit being targeted, even when they successfully repel attacks.
- Karim Toubba: "I do [think companies should be more public], because I think there's a lot to learn from what's happening collectively. So I do think it's helpful. And we tend to talk about this quite publicly…It also helps other companies out to really get into the details." ([10:26])
4. Broader Implications: Regulation and Societal Impact
- Urgent Need for Legislation and Accountability
- Stephanie Hare: “Anytime that you're looking at AI, you have to remember that the people who invented this technology released it into the world knowing that these things would happen... You need to be able to press criminal charges so that you are... issuing fines, but hopefully sending people to jail. This is crime, this is fraud, and it's making people's lives a misery.” ([08:10])
- Laws are lagging behind the threat; regulatory frameworks are required to hold perpetrators accountable.
- Changing Information Landscape
- Increased skepticism: people will soon need to double- or triple-check all forms of digital "evidence."
- Stephanie Hare: “It almost makes me wonder if we will go back to a different type of verification. So for a long time we've had it where all of us feel that we can use our own eyes and ears to decide if we believe something or not. But when that becomes not viable... you will need trusted authorities who are able to be there on the ground...” ([19:06])
- Opportunities in Cybersecurity
- Evolving threat landscape makes cybersecurity a growth career. More professionals are needed to address the escalating risks.
- Stephanie Hare: “It is not going to take your job if you are working in cybersecurity. Right. We have a shortage of cybersecurity professionals. We need more people to get into this. And this story illustrates exactly why.” ([17:20])
Memorable Quotes
- Sundararaman Ramamurthy (Bombay Stock Exchange CEO): "It is not about what I feel, what I do. It is about nobody should incur a loss by believing something which is untrue." ([05:12])
- Karim Toubba (LastPass CEO): "Anytime you can create urgency from a human perspective, it increases the probability that the human on the other end will actually respond." ([06:35])
- Stephanie Hare (AI Decoded): "The people who invented this technology released it into the world knowing that these things would happen...this is crime, this is fraud, and it's making people's lives a misery." ([08:10])
- Matt Lovell (CloudGuard): "Attack vectors are accelerating faster than we can expect. Accelerate defense, automation, and protection against bots." ([16:32])
Demonstration: How Easily Deepfakes Can Be Created
- Reporter Ed Butler visits CloudGuard to witness a live demonstration.
- Mina, an analyst, creates a 30-second deepfake of Ed in just 30–40 minutes using freely available tools.
- Mina: “Not very hard. A lot of the tools…are freely available...” ([13:39])
- For more polished results, slightly more time and minimal expenditure are required.
- Costs for sophisticated attacks (with fewer “tells”): $5,000–$10,000. ([14:25])
Emotional and Psychological Impact on Corporate Leaders
- Executives often feel shocked and vulnerable when targeted, even if no financial loss occurs.
- Matt Lovell: “People are surprised. Are people moving fast enough to respond to the speed the threat is developing? Absolutely not surprised.” ([16:55])
Timestamps for Important Segments
- Introduction and Main Theme: [01:16]
- Arup Deepfake Fraud Case: [01:58]–[03:45]
- India/BSE CEO Deepfake Case: [03:45]–[05:20]
- LastPass CEO on Preventing Deepfake Attacks: [05:35]–[07:55]
- Growth in Deepfake Threats: [07:20]–[08:10]
- Stephanie Hare on Need for Accountability: [08:10]–[09:44]
- The Reluctance to Publicly Disclose Attacks: [09:44]–[10:48]
- Deepfake Creation Demo with CloudGuard: [12:17]–[14:25]
- How Deepfakes Are Detected (Current Limitations): [14:47]–[16:55]
- Expert Panel on Future Trends and Verification: [17:20]–[19:35]
Conclusion
The arms race between those creating and those defending against deepfakes is escalating rapidly. While technology provides tools for both attackers and defenders, the key takeaways stress the need for robust internal protocols, collective industry openness, regulatory intervention, and public awareness to combat this growing threat. As deepfakes become democratized and ever more convincing, trust, verification, and vigilance will become paramount for both individuals and organizations.
