The Times Tech Podcast
Episode: AI Safety Meets OpenClaw – What India’s AI Summit Tells Us
Date: February 20, 2026
Host: Danny Fortson (from Silicon Valley)
Co-host: Katie Prescott (absent this week)
Special Guest Host: Mark Selman (Technology Correspondent, The Times)
Featured Interview: Karina Prunkle (Lead Author, 2026 International AI Safety Report)
Episode Overview
This episode delves into the global debate over the future of AI governance, sparked by the 2026 AI Impact Summit held in India—the first of its kind hosted by a Global South nation. The hosts explore whether Silicon Valley or an emerging coalition of global players will define AI's trajectory, what AI summits actually accomplish, and how viral projects like "OpenClaw" are reshaping the conversation. Topping off the episode is an in-depth interview with Karina Prunkle, lead author of the International AI Safety Report, which sets a new bar for evidence-driven policymaking.
Key Discussion Points & Insights
1. The Context: India’s AI Impact Summit
[02:10]
- India hosts the fourth major AI summit, signaling growing influence from the Global South.
- The summit brings together politicians, tech CEOs, researchers, and civil society to define the path forward for AI.
- Mark draws a comparison to legacy international summits (G20, COP, etc.), questioning whether global meetings actually drive progress or simply generate headlines and group photos.
Notable Quote:
“It’s starting to feel like G20, G7, ASEAN COP... we’re wondering if we’re ever the wiser or safer as a result.”
— Mark Selman [02:37]
2. OpenClaw: Viral Agentic AI and Its Industry Impact
[03:46]
- OpenClaw (formerly Claudebot/Multbot) is an open-source, autonomous AI agent developed by Peter Steinberger.
- Went viral for being user-run, highly capable, and fully open-source—attracting a bidding war between Sam Altman (OpenAI) and Mark Zuckerberg (Meta); OpenAI won out.
- The project is emblematic of a profound shift: the move from demos to real-world, agentic AI deployment.
- Raises questions about liability, security, and impact on jobs.
Memorable Explanation:
“OpenClaw is a personal agent that you can run on your own machine… it effectively becomes your employee, your coworker, your digital butler. And it’s really a glimpse of this agentic future we’ve all been hearing about… realized by this one guy and this open source project.”
— Danny Fortson [04:23]
Industry Fallout:
- OpenAI promises OpenClaw will remain open-source and be governed by a foundation.
- Wave of concern from cybersecurity firms due to the system-level access these agents have.
- Mark and Danny note the potential for a “snowball effect” if enterprise adoption accelerates.
Liability Question:
“It reminds me of having a dangerous dog… If my agent does something stupid or buys something on my behalf that I didn’t want, am I liable?”
— Danny Fortson [08:35]
3. Origins and Evolution of International AI Summits
[09:39]
- The first AI Safety Summit was initiated by the UK in response to concerns raised by researchers like Geoffrey Hinton.
- Led to the Bletchley Declaration (2023): governments and companies agreed to pre-release testing of powerful models by AI Safety Institutes.
- Subsequent summits in Seoul, Paris, and now Delhi mark a rapid institutionalization of global AI dialogue, but questions remain: Are these summits consequential or largely ceremonial?
- India’s summit pivots toward “democratizing AI,” especially for the Global South, and promotes open-source approaches.
Key Perspective:
“India wants to be the leader of the global south in AI. He (Modi) wants to democratize AI for a billion plus Indians... making sure this doesn’t become an American wealthy nation story.”
— Mark Selman [11:09]
- Tech’s West Coast is skeptical about regulation, suspecting most companies continue to “move fast and break things” regardless of summit commitments.
4. Effectiveness of AI Summits & Regulation
[13:21]
- World leaders and big tech are at the table, but long-term impact hinges on actionable steps.
- The EU’s fast regulatory pace sometimes backfires (“egg on your face”), while others fear regulation will lag too far behind.
- Social media serves as a cautionary example: slow regulation led to retrofitting instead of proactive safeguards.
A Stark Analogy:
“When you look at AI, it’s arguably going to be more powerful and it’s developing much more quickly… 5 billion people have a supercomputer in their pocket and can access it at any time. Hopefully it doesn’t take a quarter century (to regulate).”
— Danny Fortson [14:21]
- Major industry and government leaders present at the summit (Google, OpenAI, Bill Gates, DeepMind, Mistral, international heads of state).
- Deep concern over jobs: AI threatens to disrupt millions of outsourcing and SaaS jobs, especially in India’s tech economy.
5. Tax, Redistribution, and Societal Risks
[15:52]
- As AI-driven automation threatens labor, how will governments address wealth redistribution?
- Ownership of AI and the distribution of its economic value could provoke massive societal backlash without careful policy.
On the Stakes:
“The value of this technology is going to accrue to capital, not labor, let’s put it simply.”
— Mark Selman [16:30]
- India faces a paradox: massive skilled tech population at risk of being disrupted by the very technology the country is seeking to champion on the global stage.
6. Interview Highlights: Karina Prunkle on the 2026 International AI Safety Report
[22:18] onwards
What is the Report? Why Now?
- AI capability is outpacing evidence about its risks and mitigations, creating governance headaches for policymakers.
- The report synthesizes scientific evidence on:
- Capabilities of AI systems
- Emerging risks
- Existing safeguards
- Meant to cut through hype and serve as a shared foundation for international dialogue.
Quote:
“The evidence that we have on the risks and on potential mitigations is… much slower than systems advance.”
— Karina Prunkle [22:18]
Big AI Labs’ Involvement
- Feedback from companies like Google, OpenAI, Anthropic included, but the report is compiled independently.
- Report doesn't conduct new tests; it critically aggregates published science, noting when research comes from private vs. academic sectors.
[24:37]
“We didn’t go out and collect our own data and do our own tests, but we were synthesizing the scientific evidence that has been published officially.”
— Karina Prunkle
Top Risks Identified
- Real-world evidence now robust for risks like cyberattacks, malicious use, and deepfakes.
- Biological risks remain uncertain but are taken seriously, as some labs added safeguards before releasing models.
- Most concerning are systemic, societal impacts: labor market, autonomy, existing social structures.
Memorable Warning:
“Deployers have released their models with additional safeguards because they could not rule out that those models would be able to assist novices in the creation of biological weapons.”
— Karina Prunkle [25:14]
The Challenge of Speed and Relevance
- The report takes months, but the field moves at breakneck pace; solutions include rapid interim updates.
- Even so, frequent “jagged edge” advances (e.g., OpenClaw) can leave major blind spots.
Safeguards: Some Hope, Many Limitations
- “Layered” or “defense-in-depth” approaches (stacking multiple safeguards) are promising, though none are foolproof.
- Governments are starting to take these evaluations seriously—some even running reading groups for the report.
Labor Market and Societal Impacts
- No catastrophic unemployment found—yet; declines happen in targeted fields/exposed roles (e.g., translation down, AI development up).
- Junior workers see more displacement than seniors.
- Stark regional variation: Global South countries like India have more at risk, but also the potential to pivot using AI.
On International Consensus
- The US has not signed up for the report—a glaring omission, given its centrality to AI development.
- Karina declines to comment on US absence directly, but Danny emphasizes the parallel to climate change: lack of commitment from the biggest player undercuts the agreement’s relevance.
[35:01]:
“If the biggest polluter does not take part in an international agreement, how good is that agreement and how impactful will it ultimately be? So that is worrying.”
— Danny Fortson
AI in Medicine: The Double-Edged Sword
- Significant progress in scientific reasoning, lab protocol troubleshooting, and medical applications.
- Capability to assist with both good (drug discovery) and bad (bio-weapon development) ends.
- Worrying finding: “A system gave potentially harmful answers to 19% of realistic medical questions asked.”
[38:03] — Karina Prunkle
The Fundamental “Evaluation Gap”
- Difficulty in reliably testing and measuring AI system safety or real world performance remains an unsolved scientific problem.
- Even recent system cards from top labs (Anthropic) admit their own tools can’t evaluate newly released models.
Quote:
“The product knows when it’s being assessed… It’s over eager, it does things it’s not supposed to. I mean, this is why people get worried about this stuff.”
— Mark Selman [39:57, 40:36]
Timestamps for Key Segments
- [02:10] – Intro & purpose of the AI Impact Summit in Delhi
- [03:46] – The OpenClaw saga: viral open-source AI, industry bidding war
- [06:46] – Risks and liabilities of agentic AI
- [09:39] – History and shifting themes of global AI summits
- [12:02] – Open-source vs. closed models and democratization of AI
- [14:21] – Regulation: what’s effective, what’s not
- [15:15] – Societal risk, job impact, and the redistribution dilemma
- [20:30] – Interview with Karina Prunkle starts: purpose and structure of the International AI Safety Report
- [25:14] – Biological risks, safeguards, and new model deployment trends
- [29:12] – Positive findings on layered safeguards
- [31:26] – Labor and societal risks: who is affected and how
- [33:43] – Shift in summit focus: safety to global integration, and the US absence
- [36:16] – Hope vs. dread: assessment of current state and risks
- [38:03] – Opportunities and dangers in healthcare & scientific applications
- [39:57] – Summing up: hope, anxiety, evaluation challenges
Notable Quotes & Speaker Attribution
- “OpenClaw... it effectively becomes your employee, your coworker, your digital butler.” — Danny Fortson [04:23]
- “It’s starting to feel like G20, G7, ASEAN COP...we're wondering if we're ever the wiser or safer as a result.” — Mark Selman [02:37]
- “The value of this technology is going to accrue to capital, not labor, let’s put it simply.” — Mark Selman [16:30]
- “The evidence that we have on the risks... is much slower than systems advance.” — Karina Prunkle [22:18]
- “Deployers have released their models with additional safeguards because they could not rule out that those models would be able to assist novices in the creation of biological weapons.” — Karina Prunkle [25:14]
Episode Tone & Takeaways
- Balanced optimism and concern: The hosts and guests see exciting opportunities for AI to solve big problems (e.g., medicine, productivity) but are acutely aware of massive risks (security, job losses, misuse).
- Global Reality Check: While Silicon Valley drives technological breakthroughs, countries like India push for broader access and more egalitarian innovation.
- Regulation Paradox: International summits struggle to keep pace with the dizzying speed of AI development; real-world safeguards and labor market policies are lagging.
- Hopeful but wary: There's a sense that only broad, evidence-based, and inclusive conversations—anchored in real-world data like the International AI Safety Report—can meaningfully navigate the AI revolution.
For further reading, insights, and tech news: thetimes.com / Email: techpod@thetimes.co.uk
