
Loading summary
A
Cybersecurity Today we'd like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them@meter.com CST this is cybersecurity Today. I'm your host Jim Love. I was going to open today with a couple of familiar stories. Anthropic has rolled out Claude Cowork. I've spent some testing it, and Google has launched what it calls personality. Different branding, different companies, but the same underlying shift. Both are AI agents. They don't just answer questions, they take actions. On a local computer, they act in the name of the user, with the user's authority. They have access to as much data as the user will let them, and from a security standpoint, that should give us all pause. Once a software can read all your files, operate your browser, send messages, call APIs, and run tasks for you as you the risk model changes completely. And this isn't about bad answers or hallucinations, it's about delegated control. Personal intelligence from Google is designed to be the ultimate personalized assistant, having access to essentially all your data, email, documents, search history, behavior, everything that makes you you. You can restrict it, but realistically, we've been trading privacy for convenience and service for years. This just adds another level to the trade. And then there's Claude Cowork from Anthropic. Compelling for a different reason. It can autonomously do work without supervision, acting with whatever authority you give it. Once you let it loose, it doesn't just advise, it operates. And that would have been my stories for the week until another story exploded into the spotlight almost overnight. It is rare to see a brand new open source project explode into a global developer phenomena in a matter of weeks. That usually takes years. In this case it literally took weeks. The project was first launched under the name claudebot, now renamed Multbot, and it's become one of the fastest rising AI agent tools in recent memory. Not a chatbot, not a plugin. A fully local AI agent that runs on your own computer and can be controlled remotely by messaging. It was created by developer Peter Steinberger and released as an open source system that turns a large language model into an always on assistant with execution powers. You install it locally on a Mac or Windows or Linux machine, then you connect it to chat platforms like Telegram, WhatsApp, Slack, or Discord, and from there you send it instructions and it performs real tasks on your system. And this is a key difference. Most AI tools answer Multbot performs actions, lots of actions. Once installed, users can authorize it to write and execute code, manipulate files, operate the browser through an extension, read and summarize email, draft replies, monitor feeds, compare directories, upload data, and run scheduled backup ground jobs. You can message IT instructions while you're away from your machine, and it will carry them out locally. It can also chain together different AI models, using a powerful model for complex reasoning or cheaper local models for routine background jobs. It supports multiple AI engines, and it can switch depending on the task type. It also maintains persistent memory about user preferences and workflows. That memory is written locally in markdown or configuration files, so it can be inspected and edited. Behavior can be tuned through personality and policy files, including a configurable soul file that defines tone, initiative level and boundaries. Now around the core project, there's a fast growing skills ecosystem that's formed, and it's adding integrations for productivity tools, search, social platforms, and automation systems. The growth metrics are what makes seasoned observers stop and look twice with roughly two months of public release, the project climbed towards the 100,000 GitHub stars range. It's actually 108,000 GitHub stars last I checked in open source terms, that is blowing it out of the park. For perspective, the Linux kernel, after more than 30 years, has just over 200,000 stars. I stand to be corrected if I'm wrong, but I can only find an example of one software. I think it was AutoGPT that got close to that during that time period. In terms of activity, the commit activity shows the same momentum. The repository logged more than 8,000 commits in that repository early window. Now that level of change usually means either you've got a long private development before the open release, or you've got an incredibly intense contributor activity afterward. In both cases, it signals an energy, an experimentation, but also a code base evolving rapidly in real time. Now, whenever software goes that fast, attracts that much attention that quickly, there is always a correction phase where reality catches up with velocity and excitement. And that moment came equally quickly. The first disruption was more or less marketing. Claudebot's name and mascot closely echoed Claude branding from Anthropic and Anthropic asked the developer, and I give them credit, they did it nicely. They didn't go to big legal threats, they just asked him nicely to change the name and he did so. The project quickly rebranded to Moltbot. Now claudebot was a callback to Claude from Anthropic, but it was also a reference to a lobster that was their mascot. So Moltbot is a reference to a lobster shedding its shell. Now, in big companies, rapid renames rarely go smoothly. With one developer. Probably not a huge budget and not a lot of time, it was bound to have some bumps during the transition. The project's social accounts were briefly hijacked, Impersonation profiles appeared, and lookalike repositories and domains began circulating. At the same time, a crypto promoter launched a copycat token using the claudebot name. The tech and crypto press reported that token rose to a multi million dollar market value before it collapsed, taking the investors with it. The developer had to publicly come out, make a statement that says he had no association with this and he never would have anything like that associated with the project. So that's the marketing aspect of this implosion. But for cybersecurity professionals, this is where we need to shift tone a little, because under the covers, something else is happening. As promising as this technology is, it has some serious security concerns. And some of them are not simple bugs that can be patched. These are features, not bugs. This is how this class of AI agents works. Starting with the control model. Multbot is action driven software. It can execute real complex operations, but it's not controlled with deterministic program logic. It's controlled by large language models. And these models are inherently vulnerable to hallucination, confusion, prompt injection and instruction manipulation. And this isn't a coding flaw. This is how large language model reasoning works today. If an attacker can influence the inputs, they can influence the action. That's one piece. Second piece. Researchers scanning the Internet found many exposed Multbot control panels, in some cases with weak or missing authentication. And because the system stores configuration, memory and integration locally, often in readable markdown or JSON, a breach can reveal credentials, tokens, workflows, connected services, and more. This design choice improves transparency and developer control, but it increases the blast radius if it's ever compromised. And unlike many software breaches that could stay confined to data exposure. And that's bad enough. An agent system can create real world consequences because it can act. Sending messages, calling APIs, executing jobs. Misuse can translate into outward behavior. And that includes such things as burning through paid API tokens, triggering automated workflows, or operating inside email and social accounts. In other words, the damage model is not just leakage, which as I pointed out, is bad enough. It's activity and action. Now, just on the cost side of it, I was watching one YouTube video of someone who was experimenting with this and you had to see the look on his face when he saw his bill for tokens. Because token consumption for this can easily go into and we talked about this. It can pick the right model for the right cost. It can also pick the wrong model. It can also chew up a lot of tokens. He was running tens of millions of tokens per day and getting some unexpectedly high API bills when autonomy meets metered AI services Billing is also a security and governance issue, not just an operational one. Now I led with this story because it went mega viral and its growing pains are off the chart. But this is only the beginning. We talked about this in the intro. Google is rolling out personal intelligence driven assistance tied to user data. Anthropic has introduced cowork style agent systems. OpenAI has similar agent tool sets. We have entered the era of actionable AI agents, systems that operate not just respond. We've done that before. We solved some big problems with large language models, control security and privacy. So buckle up and expect rapid progress, maybe some real benefits. I'm not anti technology, as you probably know, but also expect a messy stretch ahead of us while protections catch up with activity. And although Maltbot overshadowed the other announcements, don't discount Google's personal intelligence for Anthropic's Claude Cowork. I've only had limited time to work with Cowork, but it was enough for me to stop the pilot and say I really needed to build a more reinforced sandbox and think very carefully about how I test it out properly when software starts acting with my authority on my machine. That's not something I'm going to experiment with casually, and I'm convinced that the onslaught of personal agents is coming whether I like it or not. I'm not interested in exaggerating the security risks, but it's hard to ignore the fact that we still have some basic unresolved issues at the foundation of generative AI. Prompt engineering alone is proof that architecturally these systems are not as secure as we want in the traditional sense. At the same time, we're moving ahead anyway in health, defense, even in security itself, and we don't get to opt out. The world won't wait for us. I wish it would, but if wishes worked, people would have stopped reusing passwords years ago. What Moltbot really shows us is how quickly we're being pushed into an agent driven future. This isn't going to be a slow transition, it's coming at us fast. And rather than just admiring the problem, I'll be looking to bring in guests who can help us think about how to cope with this shift in practical ways. Beyond that, all I can really say is buckle up. 2026 is going to be very interesting and that's our show. This weekend we have a research show. David Shipley from Beauceron Security will be coming in and we'll be presenting some research to you on phishing. It's going to be a great show. Hope you can catch it. Finally, we'd like to thank Meter for their support in bringing you this podcast. Meter delivers full stack networking infrastructure, wired, wireless and cellular to leading enterprises and working with their partners, Meter designs, deploys and manages everything research required to get performant, reliable and secure connectivity into a space. They design the hardware, the software, the firmware, managed deployments and support. It's a single integrated solution that scales from branch offices to warehouses, to large campuses and to data centers. Book a demo@meter.com CST that's M E T E R.com/CST. I'm your host Jim Love. Thanks for listening.
Episode: The Rise of Actionable AI Agents: Navigating the Security Landscape
Host: Jim Love
Date: January 30, 2026
This episode dives into the rapid emergence of "actionable" AI agents—tools that don't just answer questions but directly perform actions, often with substantial control over users’ systems. Jim Love discusses recent product launches by Anthropic (Claude Cowork) and Google (Personal Intelligence), focusing especially on the overnight rise—and security ramifications—of the open-source agent Moltbot (formerly Claudebot). He warns of new risk landscapes introduced by these agents, explores the explosive growth (and the ensuing chaos) of Moltbot, and highlights fundamental security challenges not easily resolved as we hurtle towards an agent-driven AI future.
“Once a software can read all your files, operate your browser, send messages, call APIs, and run tasks for you as you, the risk model changes completely.”
— Jim Love (01:05)
“Most AI tools answer—Moltbot performs actions, lots of actions.”
— Jim Love (05:01)
“Anthropic asked the developer, and I give them credit, they did it nicely... The project quickly rebranded to Moltbot.”
— Jim Love (08:32)
“The damage model is not just leakage, which as I pointed out, is bad enough. It's activity and action.”
— Jim Love (13:50)
“What Moltbot really shows us is how quickly we're being pushed into an agent driven future. This isn't going to be a slow transition, it's coming at us fast.”
— Jim Love (18:18)
“When software starts acting with my authority on my machine. That's not something I'm going to experiment with casually.”
— Jim Love (19:05)
On the New Security Model:
“This isn’t about bad answers or hallucinations, it’s about delegated control.”
— Jim Love (02:18)
On the Rushed Development:
“That level of change usually means either you’ve got a long private development before the open release, or you’ve got an incredibly intense contributor activity afterward. In both cases, it signals energy, experimentation, but also a codebase evolving rapidly in real time.”
— Jim Love (07:34)
On the Unfinished State of Security:
“Prompt engineering alone is proof that architecturally these systems are not as secure as we want in the traditional sense.”
— Jim Love (20:20)
On the Inevitable Future:
“The world won’t wait for us. I wish it would, but if wishes worked, people would have stopped reusing passwords years ago.”
— Jim Love (20:54)
Jim Love's delivery blends urgency with measured skepticism, combining clear technical explanations with candid warnings. His tone is pragmatic—neither anti-technology nor alarmist, but deeply aware of the security challenges that accompany technological leaps.
Jim Love concludes:
The rise of actionable AI agents is opening a new security frontier, one with broad and rapidly changing attack surfaces. As adoption surges, robust dialogue and creative risk management are vital. In coming episodes, Love aims to bring in specialists to discuss practical responses and coping strategies for organizations and security pros navigating this new reality.
“Rather than just admiring the problem, I'll be looking to bring in guests who can help us think about how to cope with this shift in practical ways. Beyond that, all I can really say is buckle up. 2026 is going to be very interesting.”
— Jim Love (21:42)
End of Summary