Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com/cst.

AI agents are getting plugged into everything, and attackers are hiding malware in the skills that power them. New AI models are finding hundreds of serious software flaws. Cryptocurrency crime is increasing and becoming violent, online radicalization is pulling in minors, and states are debating whether data center growth should be paused. This is Cybersecurity Today and I'm your host, David Shipley. Let's get started.

OpenClaw, an open source agentic AI assistant formerly known as Moltbot and Clawdbot, has announced a partnership with Google-owned VirusTotal. The goal is to scan skills being uploaded to ClawHub, OpenClaw's skill marketplace, and to detect malicious packages before they spread through the ecosystem. If you haven't heard of OpenClaw, check out Friday's episode, where Jim covers this smoking hot security mess of an AI trend.

OpenClaw now says that all skills published to ClawHub are being scanned using VirusTotal's threat intelligence, including a newer capability called Code Insight. According to reporting by The Hacker News, the workflow is fairly straightforward. Each uploaded skill is assigned a unique SHA-256 hash and is then checked against VirusTotal's database. If the skill isn't already known, the full bundle is uploaded for deeper malware analysis. From there, skills that receive a benign verdict are automatically approved, skills marked as suspicious are flagged with warnings, and anything identified as malicious is blocked entirely. OpenClaw also says active skills will be rescanned daily, an important step, because something that looks clean today won't always stay that way tomorrow.

Now, OpenClaw's maintainers are also clear about the limits here. They've cautioned that VirusTotal scanning is not a silver bullet, and that cleverly concealed prompt injection payloads may still slip through. That matters, because what we're really talking about here is the security challenge that goes with all of these agent ecosystems.
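For readers who want to see the shape of that moderation pipeline (hash the bundle, look it up, escalate unknowns for deeper analysis, then route on the verdict), here is a minimal sketch. The `known_verdicts` map and `deep_scan` callback are stand-ins for the real VirusTotal hash lookup and Code Insight analysis, which this illustration does not implement:

```python
import hashlib
from enum import Enum
from typing import Callable, Dict


class Verdict(Enum):
    BENIGN = "auto-approve"          # publish the skill automatically
    SUSPICIOUS = "flag with warning" # publish, but warn installers
    MALICIOUS = "block"              # reject the skill entirely


def skill_digest(bundle: bytes) -> str:
    """Assign an uploaded skill bundle its unique SHA-256 identifier."""
    return hashlib.sha256(bundle).hexdigest()


def triage(bundle: bytes,
           known_verdicts: Dict[str, Verdict],
           deep_scan: Callable[[bytes], Verdict]) -> Verdict:
    """Check the hash against known verdicts first; only unknown
    bundles are sent for the slower, deeper malware analysis."""
    digest = skill_digest(bundle)
    if digest in known_verdicts:
        return known_verdicts[digest]
    return deep_scan(bundle)
```

A daily rescan would simply re-run `triage` over every active skill with a refreshed verdict map, which is the point of the rescanning step: a clean verdict today may not hold tomorrow.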
Unlike traditional software, agentic AI systems don't just execute fixed code. They interpret natural language, make decisions, and take actions across systems on behalf of their users. As OpenClaw itself puts it, these tools blur the boundary between user intent and machine execution, and they can and will be manipulated through language itself.

Researchers have already found hundreds of malicious skills circulating on ClawHub. Many of them masquerade as legitimate tools but contain functionality designed to exfiltrate data, inject backdoors or install stealer malware. Cisco warned last week that AI agents with system access can become covert data leak channels and bypass traditional monitoring, proxies and data loss prevention. In other words, the prompt becomes the instruction, and traditional security tooling probably won't catch it.

The reporting also highlights a broader enterprise concern: shadow AI. OpenClaw and similar tools are increasingly being installed directly on employee endpoints, often without formal IT or security approval. And for the privacy folks out there, we know you're not being consulted either. Because these agents can be granted deep access, credentials, messaging integrations, file systems, they can enable data movement and network connectivity outside standard controls. One researcher put it simply: these tools will show up in your organization whether you approve them or not. The question is whether you'll know about it in time.

Among the issues raised in recent analyses: plain text credential storage, insecure coding patterns, indirect prompt injection attacks, exposed gateway interfaces, and tens of thousands of internet-accessible instances. China's Ministry of Industry and Information Technology has even issued an alert about misconfigured deployments, urging organizations to secure exposed instances.
The takeaway here this morning is fairly direct: agent marketplaces are becoming the new browser extension ecosystem, except with even higher stakes and even worse security models. When you install a malicious skill, you're not just compromising one app; you may be compromising every system your agent has your credentials to access. OpenClaw's VirusTotal partnership is a meaningful step, but this space is moving fast and security maturity is massively lacking. Do not install OpenClaw on production machines unless you really, really want to be pwned.

Okay, let's transition from AI security dystopia to an AI security win. Our second story is another signal that artificial intelligence is becoming a serious force for good in vulnerability research, not just in theory, but in practice. AI company Anthropic has announced that its latest large language model, Claude Opus 4.6, has identified more than 500 previously unknown high severity security flaws across major open source libraries. The vulnerabilities were found in widely used projects including GoScript, OpenSC and CGIF.

Claude Opus 4.6 was launched last week, and Anthropic says the model includes significantly improved coding capabilities, particularly around code review, debugging and reasoning about software logic. What's notable here is the company's claim that the model is notably better at discovering high severity vulnerabilities without needing specialized prompting, custom tooling or complex scaffolding. Anthropic describes the model as reading and reasoning about code the way a human security researcher might: looking at past fixes, identifying patterns that tend to cause problems, and understanding logic deeply enough to predict exactly what kind of input might break it.

Before release, Anthropic's Frontier Red Team tested the model in a virtualized environment, providing tools like fuzzers and debuggers, but without instructions on how to use them.
The goal was to evaluate how well the model could perform vulnerability discovery out of the box. Anthropic also emphasized that every flaw was validated to ensure it wasn't hallucinated, and that the issues were prioritized based on severity, particularly the memory corruption vulnerabilities. The company is positioning AI models like Claude as tools that can help defenders level the playing field. But it also acknowledges the other side of the equation: as AI becomes more capable, barriers to more autonomous cyber workflows are coming down quickly. Anthropic's closing reminder here is a familiar one, but increasingly urgent: security fundamentals still matter, even more so than ever before, especially promptly patching known vulnerabilities, because the tools available to find and exploit these flaws are evolving fast. And on a positive note, we've been starting to feel a little guilty about beating on Fortinet for all of the many, many zero days over the last few months. Maybe Fortinet should give Anthropic a call.

Our third story today is a stark reminder that cybersecurity doesn't always stay online, and it picks up a disturbing trend that Cybersecurity Today covered numerous times in 2025. Authorities in Scottsdale, Arizona, say two California teenagers have been arrested following a violent, targeted home invasion, one investigators believe was motivated by an attempt to steal as much as $66 million in cryptocurrency, according to court documents. The suspects, identified as Jackson Sullivan and Skyler Lapel, allegedly posed as package delivery drivers to gain entry to a home. Once inside, police say, the homeowners were restrained with duct tape and assaulted as the suspects demanded access to cryptocurrency holdings. One of the victims reportedly denied having the funds, which investigators said led to further violence. An adult son inside the home was able to call police from another room, and officers responded quickly.
The teenagers fled the scene but were arrested shortly afterwards when police located them in a blue Subaru nearby. Investigators believe the two suspects may not have been acting entirely on their own. Court records suggest they were extorted or directed by individuals known only as "Red" and "Eight" and were allegedly sent from California with money to purchase disguises and supplies. Both teens are facing multiple felony charges, including burglary, aggravated assault and kidnapping. Police also reported they were in possession of a 3D printed gun, though its functionality remains unclear.

The case reflects a growing trend as cryptocurrency becomes more mainstream. Individuals believed to hold significant assets are increasingly being targeted not just digitally, but physically. And it's a reminder that for high value financial targets, the threat model often extends beyond phishing emails and malware into real world coercion, surveillance and violence. This story follows horrific kidnappings and torture around the world in 2025 related to cryptocurrency, including attacks in New York and Paris. For organizations and individuals alike, this is where cybersecurity, personal security and financial security are converging fast.

Our fourth story today comes from New Brunswick, where the RCMP have confirmed the province's first ever terrorism-related peace bond involving a youth. According to CBC News, a minor was arrested towards the end of 2025 by the RCMP's National Security Enforcement Section under section 83.19 of the Criminal Code of Canada, which relates to facilitating terrorist activity. The individual has not been formally charged. Instead, they've been released under a one-year terrorism peace bond, a legal tool that allows a judge to impose strict conditions when investigators believe a terrorist offense may be committed. The RCMP describe it as a way to enable robust monitoring and de-escalation measures before violence occurs.
A peace bond can include restrictions that limit certain freedoms, such as prohibiting specific online activity, limiting travel or restricting contact with certain individuals. In this case, the RCMP have not disclosed where the arrest occurred, what ideology or group may have been involved, or what conditions the youth is now under. The lack of detail has raised questions from experts about transparency and public awareness. CBC also notes a contrast with a separate case in Quebec, where a teenager has been charged with terrorism offenses after allegedly promoting the ideology of Atomwaffen Division, a neo-Nazi group listed as a terrorist entity by Canada in 2021.

The larger issue here is one that sits directly at the intersection of cybersecurity and national security. Extremist recruitment and radicalization increasingly happens online long before things get physical. Groups like the Comm are turning kids willingly or unwillingly into cybercriminals, real world thugs and in some cases terrorists, and law enforcement, courts and communities are now grappling with how to intervene early, especially when the individuals involved are minors.

And finally today, a story that sits at the intersection of cybersecurity, infrastructure, energy policy and the AI boom. New York lawmakers have introduced a bill that would impose a three-year moratorium on new data center development in the state. The proposal makes New York at least the sixth state in the United States in recent weeks to consider pausing data center construction, citing concerns ranging from climate impact to rising electricity prices and grid strain. New York already has more than 130 data centers, and utilities report roughly 10 gigawatts of new demand, much of it driven by data center growth, waiting to connect to the grid. Supporters of the bill argue the pause would give regulators time to assess the environmental impacts, consumer cost exposure and long-term sustainability of rapid data center expansion.
Similar legislation is emerging across both red and blue states, including Georgia, Virginia, Vermont, Maryland and Oklahoma, a sign that resistance to large-scale infrastructure for AI and cloud computing is becoming bipartisan. The data center industry says it's responding with stronger community engagement and commitments to pay for energy use. But the larger trend here is clear: as digital services scale, the physical footprint behind them is becoming harder for governments and communities to ignore. And that sets many organizations up for a cloud service cost spiral: as data center capacity becomes constrained, this may show up as rocketing costs for hot site backups and disaster recovery.

This was Cybersecurity Today for Monday, February 9, 2026. I've been your host, David Shipley. Jim Love will be back on the news desk on Wednesday. Thanks for listening, and a special thank you for making Cybersecurity Today the sixth most popular tech news show in Canada, the eighth in the United States and the 10th in the United Kingdom. If you like the show, please consider sharing it with others. We'd love to reach even more people in 2026, and we continue to need your help. Consider subscribing, leaving us a rating or review on your favorite podcast platform.
And that's our show. We'd like to thank Meter for their support in bringing you the podcast. Meter delivers full stack networking infrastructure, wired, wireless and cellular, to leading enterprises. Working with their partners, Meter designs, deploys and manages everything required to get performant, reliable and secure connectivity in a space. They design the hardware and firmware, build the software, manage deployments and even run support. It's a single, integrated solution that scales from branch offices, warehouses and large campuses all the way to data centers. Book a demo at meter.com/cst. That's M-E-T-E-R dot com slash CST.
Host: David Shipley (filling in for Jim Love)
Date: February 9, 2026
This episode delivers a comprehensive update on the swiftly evolving landscape of cybersecurity threats, with a sharp focus on emerging dangers from artificial intelligence (AI), novel approaches in cyber defense, escalating threats to cryptocurrency holders, online radicalization trends among minors, and regulatory responses to the rapid expansion of data centers. The host, David Shipley, covers real-world incidents and policy debates that underscore how the intersection of digital innovation and security is becoming ever more complex and high-stakes.
[00:19–07:30]
OpenClaw and VirusTotal Partnership:
All skills published to ClawHub are now scanned with VirusTotal threat intelligence, including the Code Insight capability. Each skill is assigned a SHA-256 hash and checked against VirusTotal's database; unknown bundles are uploaded for deeper analysis. Benign skills are auto-approved, suspicious ones flagged, and malicious ones blocked, with active skills rescanned daily.
Limits and Challenges:
VirusTotal cannot detect all threats, especially "cleverly concealed prompt injection payloads".
"VirusTotal scanning is not a silver bullet, and that cleverly concealed prompt injection payloads may still slip through."
Agentic AI systems don't just run code–they interpret language and make decisions, blurring user intent and execution. This opens up new attack vectors:
Major concern over "Shadow AI":
OpenClaw and similar tools are being installed directly on employee endpoints, often without formal IT or security approval, and can be granted deep access to credentials, messaging integrations and file systems.
Quote, Host ([04:35]):
"These tools will show up in your organization whether you approve them or not. The question is whether you'll know about it in time."
Reported issues in recent analyses:
Plain text credential storage, insecure coding patterns, indirect prompt injection attacks, exposed gateway interfaces, and tens of thousands of internet-accessible instances.
Takeaway:
“Agent marketplaces are becoming the new browser extension ecosystem, except with even higher stakes and even worse security models. When you install a malicious skill, you're not just compromising one app, you may be compromising every system that your agent has your credentials to access." ([06:45])
Advice:
Do not install OpenClaw on production machines.
[07:30–09:37]
Anthropic’s Claude Opus 4.6:
The model identified more than 500 previously unknown high severity flaws across major open source libraries, including GoScript, OpenSC and CGIF, without specialized prompting, custom tooling or complex scaffolding.
Validation and Red-teaming:
Anthropic's Frontier Red Team tested the model in a virtualized environment with tools like fuzzers and debuggers but no instructions on how to use them; every flaw was validated to rule out hallucination and prioritized by severity.
Implications:
Quote, Host ([09:25]):
"As AI becomes more capable, barriers to more autonomous cyber workflows are coming down quickly."
Bottom Line:
Security fundamentals, especially promptly patching known vulnerabilities, matter more than ever as the tools to find and exploit flaws evolve.
[09:38–11:15]
Violent Crypto Heist in Scottsdale, AZ:
Two California teenagers allegedly posed as delivery drivers, restrained and assaulted homeowners, and demanded access to as much as $66 million in cryptocurrency. Both face felony charges including burglary, aggravated assault and kidnapping; court records suggest they were directed by individuals known only as "Red" and "Eight".
Key Insight ([10:48]):
"Individuals believed to hold significant assets are increasingly being targeted not just digitally, but physically."
Trend:
The attack follows kidnappings and torture tied to cryptocurrency around the world in 2025, including in New York and Paris, as holders of significant assets face real world coercion, surveillance and violence.
[11:15–12:50]
First Terrorism Peace Bond for a Youth in New Brunswick, Canada:
A minor was arrested under section 83.19 of the Criminal Code (facilitating terrorist activity) and, without being charged, released under a one-year terrorism peace bond allowing robust monitoring and de-escalation conditions.
Host’s Framing ([12:15]):
"Extremist recruitment and radicalization increasingly happens online long before things get physical. Groups ... are turning kids willingly or unwillingly into cybercriminals, real world thugs and in some cases terrorists."
Discussion:
The RCMP's lack of detail has raised transparency questions from experts; CBC contrasts the case with a Quebec teenager charged with terrorism offenses for allegedly promoting the Atomwaffen Division.
[12:50–14:20]
New York State Proposes Data Center Development Moratorium:
A bill would pause new data center development for three years, making New York at least the sixth state to consider such a pause; the state already has more than 130 data centers and roughly 10 gigawatts of new demand waiting to connect to the grid.
Host on Trend ([13:42]):
"As digital services scale, the physical footprint behind them is becoming harder for governments and communities to ignore..."
Potential Side Effects:
Constrained data center capacity could drive a cloud service cost spiral, including rocketing costs for hot site backups and disaster recovery.
On AI security limits:
“VirusTotal scanning is not a silver bullet, and that cleverly concealed prompt injection payloads may still slip through.”
— David Shipley ([02:15])
On Shadow AI proliferation:
“These tools will show up in your organization whether you approve them or not. The question is whether you'll know about it in time.”
— David Shipley ([04:35])
On agent threat landscape:
“Agent marketplaces are becoming the new browser extension ecosystem, except with even higher stakes and even worse security models.”
— David Shipley ([06:45])
On physical dangers of cyber wealth:
"Individuals believed to hold significant assets are increasingly being targeted not just digitally, but physically."
— David Shipley ([10:48])
On the convergence of digital, physical, and personal risks:
“…this is where cybersecurity, personal security and financial security are converging fast.”
— David Shipley ([11:05])
On online radicalization:
"Extremist recruitment and radicalization increasingly happens online long before things get physical…”
— David Shipley ([12:15])
On data center moratoriums:
"As digital services scale, the physical footprint behind them is becoming harder for governments and communities to ignore..."
— David Shipley ([13:42])
This episode vividly illustrates both the pace and stakes of cybersecurity change in 2026. From the subtle ways AI agents are introducing new, hard-to-detect vulnerabilities into businesses, to AI-powered breakthroughs helping close gaps in open-source security, to the ever more tangible threats facing cryptocurrency holders and minors vulnerable to online radicalization—the conversation brims with urgency, actionable insights, and a clear-eyed look at the challenges ahead. Listeners are left with a strong sense that cybersecurity, technology, and policy are interwoven like never before, and complacency is not an option.