Podcast Summary: "How to Secure Your OpenClaw Agent"
Podcast: This Week in AI
Host: Jason Calacanis
Guests: Aaron (Co-founder, ZioSec) and Andreas Ustskis (Co-founder, ZioSec)
Date: February 4, 2026
Overview
This episode dives deep into the security challenges, setups, and future of OpenClaw—an emerging agentic AI platform that’s taking the world by storm, unlocking powerful new workflows but also exposing users to substantial risks. Host Jason Calacanis and the founders of ZioSec, a startup specializing in AI agent security and penetration testing, break down how OpenClaw works, its vulnerabilities, and practical advice for safely deploying agentic systems in both personal and enterprise contexts.
Key Discussion Points & Insights
1. OpenClaw and the Rise of Agentic AI
- Agentic AI Explained:
- "We've had the ChatGPT moment, but now for agents." — Aaron [02:23]
- OpenClaw is a breakthrough agentic platform allowing AI models to use open source "skills" (tools/plugins) to manipulate memory, context, and integrate with services like Telegram, WhatsApp, and email.
- Accelerated Adoption:
- OpenClaw's launch is seen as a tipping point, democratizing agentic workflows beyond big tech into the broader market. [03:39]
2. Security: Afterthought in Tech – History Repeats Itself
- Early Innovation, Later Security:
- "Every time a new tech comes out, people don't really think about security. They think about features." — Andreas [02:43]
- Like prior technological leaps (APIs, GraphQL), agentic AI has prioritized utility, often neglecting security.
- Comparisons:
- "It's what happened with npm, JavaScript packages, node JS... someone just goes in and replaces good packages with malicious ones." — Andreas [27:30]
3. What Makes OpenClaw Special—and Vulnerable
- Technical Innovations:
- Persistent memory and context management for extended conversations/tasks.
- Integration through open source skills/tools.
- Security Implications:
- "With that comes security issues." — Andreas [05:05]
- Extended capabilities mean a vastly increased attack surface.
4. Security Concerns with Model Providers and Open Source Skills
-
Crowdsourcing Skills = Crowdsourcing Vulnerabilities:
- OpenClaw skills are open source—anyone can publish, leading to potential for malicious actors to inject compromised or backdoored tools.
- "Do not trust the skills. It's... you don't know what it is. What we need at this point is some kind of scanner..." — Andreas [27:30]
-
Model Provider Limitations:
- Even top-tier models (Anthropic, GPT) are not foolproof. Guardrails and judge models can be bypassed with sophisticated jailbreaks.
- "If we know how to jailbreak Opus now, we can jailbreak basically all of these instances just like that." — Andreas [14:08]
5. Secure Setup: Local Mac Mini vs. Cloud Hosting
Running Locally (Mac Mini)
- Benefits:
- Enhanced privacy—data stays on device.
- Apple’s security architecture limits dangerous capabilities by default.
- "When you add local models to that and you add firewalls ... you’ve got a pretty secure setup from, let’s say, an external [threat]." — Aaron [10:42]
- Drawbacks:
- Still vulnerable to indirect prompt injections and local skill compromise.
Cloud Hosting (AWS, Azure, GCP)
-
Risks:
- Strongly discouraged for typical users due to default exposure:
- "Do not use EC2 instances or Azure or GCP ... You’re just basically asking for it at that point. Don’t do it." — Andreas [12:27]
- Properly secured, managed solutions (e.g., Cloudflare-specific deployments) are recommended.
- Strongly discouraged for typical users due to default exposure:
-
Granular Advice:
- For experimentation, use isolated cloud services and never connect primary/critical accounts.
6. Core Security Issues
A. The ‘Lethal Trifecta’:
-
Private data exposure
-
Untrusted content ingestion (e.g., Moat Book)
-
Models with outbound communication abilities
-
Persistent memory leading to long-term vulnerabilities
-
"It really comes down to Attack Surface. We’ve taken our original model, which itself is fallible, and ... dramatically expanded how we get to that fallible thing." — Aaron [16:12]
B. Prompt Injections & Jailbreaks
- Direct, indirect, and multi-layered attacks possible.
- Models are trained to please:
- "Sometimes, like one of the jailbreaks … is literally just asking please. … They will do it." — Andreas [18:24]
- Example: Bypassing security by framing malicious instructions as genuine feature requests.
C. Vulnerable/Malicious Skills
- Real-world example: Fake "What Would Elon Do?" skill with spoofed download stats to lure users into installing a benign "prank" (could have been malicious).
- "A bunch of people downloaded, installed it and apparently it was a malicious bot that ran a bunch of malicious code and got your information..." — Andreas [27:30]
D. Supply Chain Attacks
- Echoing npm, PyPI issues: repository poisoning with malicious packages is now entering agentic AI via open skills.
7. Practical Protection Strategies
Input Sanitization and Judge Models
- Input Sanitization:
- Essential first line of defense—clean, restrict, and validate user input.
- Judge Models:
- Guard AI outputs:
- "The judge model is simply a new model that sits and reads the inputs and the outputs... makes sure that the input and output match." — Aaron [30:59]
- Can be bypassed but raise the bar for attackers.
- Double-token cost and complexity, but potentially worth it.
- Guard AI outputs:
User-Level Advice
-
For Experimenters ("Weekend OpenClawers"):
- Don’t use primary accounts.
- Create throwaway Gmail/Notion accounts for early tinkering.
- Be cautious with downloaded skills—scan where possible.
- "Just start figuring out what you can do with it and then, you know, somebody will come up with a more secure way of doing it and then jump all in." — Andreas [36:27]
-
For SMBs/Enterprises:
- Contain deployments (VMs, Docker, layered security).
- Limit access to sensitive data/accounts.
- Embrace new managed providers focused on OpenClaw lock-downs.
8. The Economic & Social Tsunami
- "If you are an SMB operator, if you're a builder, if you're a founder, you have to be using these tools at the very least to get an understanding of when that secure thing comes out, that you can be the first one to be utilizing it properly." — Aaron [37:27]
- Massive productivity leap:
- "With OpenClaw, if you set it up properly, it can not only write code for you, it can actually do end to end testing… Companies like Google ... are going to become obsolete. They cannot compete in this world anymore." — Andreas [42:06]
- Job market impact:
- "People are going to stop hiring. … All of the kind of grunt work is going to stop." — Jason [33:22]
- New skill sets:
- "You do not need to know how to code, you do not need to know math. … What matters at this point is data and expertise." — Andreas [44:55]
Notable Quotes & Memorable Moments
-
On the Security Arms Race:
- "Models are like layer nine problem. Basically. Now we can reverse engineer and social engineer these models and basically get them to do things that we want." — Andreas [18:24]
-
Human Factor Never Dies:
- "Social engineering isn’t going anywhere. It just moves from humans to LLMs." — Paraphrased [throughout]
-
On the Inverse Relationship of Security & Power:
- "There's this inverse relationship with power and security. The more power you give it, the less secure it is because the foundational identity of these models is insecure." — Aaron [37:27]
-
Startup Opportunity:
- "This is the biggest unlock that we've had from a builder's perspective in the history of software... You really can't overstate the importance of these tools." — Aaron [37:27]
-
On Adopting AI Securely:
- "Just be extremely careful. Start playing with it. Make sure that the skills you download are actually legit. Use some kind of scanner, ideally before you use that skill, and just be careful about what access you give it." — Andreas [34:04]
Timestamps for Important Segments
- [00:50] – Introduction and context for OpenClaw and ZioSec
- [04:00] – What is OpenClaw and why now?
- [07:50] – Local (Mac Mini) vs. Cloud setup: security considerations
- [14:08] – Model-level vulnerabilities and the risk of centralizing on single models
- [16:12] – The "attack surface" explosion
- [18:24] – Jailbreaks, prompt injections, and models' "pleaser" training
- [22:31] – Classic prompt injection examples
- [27:30] – Skill supply chain attacks: case studies
- [30:59] – Judge models explained, pros and cons
- [34:04] – Advice for SMBs and best practices
- [36:27] – Steps for safe experimentation with OpenClaw
- [40:21] – Future outlook: agents in enterprise and ZioSec’s mission
- [42:06] – Developer/industry impact, skills for the future
- [44:01] – Advice for students, young professionals
- [46:01] – Closing reflections
Practical Takeaways
- Start Small, Stay Safe: Use test accounts, avoid exposing personal/business data, and treat all skills as potentially hostile until proven otherwise.
- Never Host OpenClaw ‘Raw’ in the Cloud: Always employ secure providers or advanced containerization.
- Update Skills and Models Frequently: The security cat-and-mouse game never ends.
- Monitor the Ecosystem: OpenClaw's landscape will change fast—safer, more enterprise-friendly solutions are likely coming.
- Skills Over Code: For students/young pros, focus on AI tool expertise and data judgment rather than traditional programming skills alone.
- Opportunity for Builders: Huge potential for AI-driven startups, especially in validation, security, and workflow automation.
This episode stands as an urgent and practical field guide for CTOs, engineers, and anyone experimenting with bleeding-edge agentic AI: OpenClaw is powerful, but dangerous—be bold, but smart.
