Risky Business Podcast Summary
Episode: Wide World of Cyber: DeepSeek Lobs an AI Hand Grenade
Release Date: February 21, 2025
Host: Patrick Gray
Guests:
- Alex Stamos: CISO and CIO at SentinelOne, former CISO of Facebook and Yahoo, founder of iSEC Partners
- Chris Krebs: Policy and Intelligence Lead at SentinelOne, former Director of CISA
1. Introduction of Guests
Patrick Gray kicks off the episode by introducing his guests, Alex Stamos and Chris Krebs. Alex humorously explains his dual role as both CISO and CIO at SentinelOne, attributing it to his failed attempts to avoid responsibility:
“I tried to avoid responsibility and I failed. That's what happened.” [00:35]
Chris Krebs is welcomed as the policy and intelligence lead at SentinelOne and the first director of CISA, highlighting his extensive experience in cybersecurity.
2. Overview of DeepSeek and Its Impact on the AI Industry
The conversation delves into DeepSeek, a Chinese company that has recently made waves by publishing an open-source large language model (LLM) that outperformed industry expectations. Patrick Gray raises questions about the validity of DeepSeek's claims regarding the model's training efficiency and cost-effectiveness:
“They also made claims with regard to the training cost. They said that it cost them very little to develop this thing, which has provoked some skepticism from some quarters.” [02:00]
Alex Stamos provides context, explaining that DeepSeek has a history of publishing impactful papers and that while some of their efficiency claims are verifiable, others remain questionable:
“They published a new paper with this, and they made both claims that can't be verified and claims that can be verified, because you can do it yourself.” [02:26]
3. DeepSeek’s Claims and Market Reactions
The discussion highlights the market's reaction to DeepSeek's announcements, particularly the significant drop in Nvidia's stock value following a LinkedIn post teasing DeepSeek's advancements:
“They ended up losing more market cap in one day than any other company's ever lost.” [03:34]
Alex elaborates on the skepticism surrounding Nvidia's stock plunge, attributing it to exaggerated interpretations of DeepSeek's claims and the broader overreliance on Nvidia's GPUs in AI investments.
“It's really easy to manipulate Wall Street when it comes to AI. I would not be shocked.” [04:45]
4. Nvidia’s Role and Market Implications
Patrick Gray and Alex discuss Nvidia's pivotal role in the AI hardware market, debating the sustainability of its market cap and the potential long-term impacts of DeepSeek's advancements. Patrick notes:
“An excellent commodity model would actually mean people would need more chips on their devices to run the models. So perhaps fewer chips in the data centers to train the models, but more chips going out into consumer devices to actually run the model.” [05:16]
Alex warns against underestimating Nvidia, predicting that despite market fluctuations, Nvidia remains integral to AI's hardware foundation.
“I think one of the things you're going to see from this is that somebody else is going to try to do the same kind of thing.” [06:24]
5. Security Concerns with DeepSeek’s Model
The conversation shifts to the security implications of DeepSeek's open-source model. Patrick Gray raises concerns about data security and potential misuse:
“Oh my God, this is a Chinese model. So anybody, you know, entering a query into this thing, that information is going to be captured by China.” [16:47]
Alex counters by comparing it to other platforms like Baidu or WeChat, emphasizing that the security risks are not unique to DeepSeek:
“If you're downloading the DeepSeek model weights, that is something that's somewhere between totally safe and just as dangerous as compiled software.” [19:05]
6. Open Source Model Risks vs. Safeguards
Alex provides an in-depth analysis of the technical aspects of open-source models, explaining that while the models themselves cannot perform actions like executing code or accessing the internet, the real risks lie in how they are deployed and used:
“The LLM itself cannot talk to the Internet. It cannot execute code. It can't do anything other than give you a sequence of text for whatever sequence of text you gave it.” [21:20]
He warns against treating model weights as mere software, highlighting the complexities involved in their secure deployment.
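Alex's point about weights being "somewhere between totally safe and just as dangerous as compiled software" can be made concrete with a short sketch (not from the episode, purely illustrative): many classic checkpoint formats, such as PyTorch `.pt` files, are Python pickles, and unpickling a file can execute arbitrary code chosen by whoever built it.

```python
# Hypothetical sketch: why a downloaded "weights" file can be as dangerous as
# compiled software. Pickle-based checkpoint formats let the file author run
# arbitrary code at load time.
import pickle

class MaliciousWeights:
    # __reduce__ tells pickle how to rebuild the object; an attacker can
    # point it at any callable -- here eval, but it could be os.system.
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousWeights())  # the "weights file" an attacker ships
result = pickle.loads(blob)              # merely loading it runs the payload
print(result)                            # the embedded expression evaluated to 42

# Formats like safetensors avoid this by storing only raw tensor bytes plus a
# JSON header, so loading them cannot trigger code execution.
```

This is why the risk profile depends on the file format and deployment pipeline, not on the weights as numbers: safetensors-style formats sit near the "totally safe" end, while pickle-based checkpoints sit near the "compiled software" end.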
7. European AI Regulation vs. US Approach
Chris Krebs shares insights from the Munich Security Conference, contrasting the European regulatory approach to AI with that of the United States. He notes Europe's tendency to "regulate first, ask questions later," which, according to him, hampers their competitiveness in the AI sector:
“The European model is regulate first, ask questions later.” [34:12]
Alex adds that Europe's stringent AI rules, layered on top of its existing regulatory burden, drive away smaller companies and investment, further impeding its AI advancement.
“Europe has overregulated AI, and that has made it uncompetitive in tech.” [40:11]
8. Geostrategic Implications and Munich Security Conference
At the Munich Security Conference, discussions centered around the "New World Order," transatlantic relationships, NATO, and the evolving battleground of AI. Chris Krebs emphasizes the cultural divide between Europe and the US regarding AI regulation and its long-term impacts on global tech dominance.
“The American take has been for years and years: let the technology blossom and let's figure out what the harms are, and then we can make those interventions at that point.” [30:12]
9. Conclusion
Patrick Gray concludes the episode by summarizing the key takeaways: DeepSeek's emergence challenges the perceived dominance of Western AI, raises questions about market dynamics and security, and underscores the critical role of regulation in shaping the future of AI. He hints at ongoing monthly discussions to further explore these evolving topics.
“It has shown us that perhaps models themselves for a long time since this all kicked off with the release of ChatGPT, the first big version that made everyone lose their minds.” [25:28]
Both guests express optimism about future discussions and initiatives within the AI and cybersecurity landscapes.
Notable Quotes:

Alex Stamos:
- “I tried to avoid responsibility and I failed. That's what happened.” [00:35]
- “It's really easy to manipulate Wall Street when it comes to AI. I would not be shocked.” [04:45]
- “The LLM itself cannot talk to the Internet. It cannot execute code. It can't do anything other than give you a sequence of text for whatever sequence of text you gave it.” [21:20]

Chris Krebs:
- “We know they're not effective because the prior administration... there are companies whose entire job it is to operate in places like the UAE and Singapore...” [09:23]
- “The European model is regulate first, ask questions later.” [34:12]
- “Nvidia is going to be fine.” [28:30]

Patrick Gray:
- “The grand irony in all of this is that even if this were a model that was incredibly efficient to train, the impact to Nvidia wouldn't be all that great...” [05:16]
- “It is not like the OpenSSH backdoor, which is a backdoor that, if it had not been detected, would have meant every Linux machine on the planet you can log into.” [22:50]
Key Topics Discussed:
- DeepSeek's Technological Advancements: Examination of DeepSeek's open-source LLM and its claimed efficiencies in training and inference.
- Market Reactions and Nvidia's Influence: Analysis of how DeepSeek's announcements affected Nvidia's stock and the broader implications for AI hardware providers.
- Security Implications: Discussion of the potential risks associated with using open-source models like DeepSeek's, including data security and model manipulation.
- Regulatory Landscapes: Contrast between European and American approaches to AI regulation and how these impact global competitiveness.
- Geopolitical Strategies: Insights from the Munich Security Conference on the strategic importance of AI in international relations and national security.
This episode of Risky Business provides a comprehensive exploration of DeepSeek's impact on the AI industry, touching upon technological breakthroughs, market dynamics, security concerns, and the intricate dance of global regulations. For information security professionals and AI enthusiasts alike, the discussions offer valuable perspectives on navigating the evolving landscape of artificial intelligence.
