Transcript
A (0:00)
Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at meter.com/cst
B (0:18)
AI tools used to automate hacking of hundreds of Fortigate firewalls. New Claude Code application security tool scares Wall Street. FBI says Salt Typhoon is still a big problem. Amazon AI deletes production code. And youth radicalization and the role of Big Tech. This is Cybersecurity Today and I'm your host, David Shipley. Let's get started. Our first story today is a stark reminder of how artificial intelligence is making the global cybersecurity dumpster fire even hotter. Over a five-week period earlier this year, a Russian-speaking hacker managed to breach more than 600 Fortinet Fortigate firewalls across 55 countries, including Canada. What makes this particularly alarming is the role of AI in supercharging the campaign. The attacker didn't use zero-day exploits or a highly advanced technique. Not that zero days have been in short supply for Fortinet devices lately, as we've been reporting on one too many Forti-bugs already this year. Instead, the attacker exploited weak passwords and unprotected management interfaces, systems that didn't have multi-factor authentication enabled. Once access was gained, the attacker deployed AI-powered tools to automate tasks that would have taken a human hacker days or weeks to complete manually. These tools, written by generative AI in Python and Go, were used to perform reconnaissance, analyze network topologies, and identify vulnerabilities. AI has effectively lowered the bar for cybercriminals, allowing even less skilled attackers to launch sophisticated attacks on a global scale. This is no longer a hypothetical scenario, or something reserved for nation states. Now we have common criminals doing these things. So what can your organization do to protect itself? First and foremost, secure your edge device management interfaces. These remain one of the most common entry points for attackers this year. Make sure they're not exposed to the Internet unless absolutely necessary.
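To make that first piece of advice concrete, here's a minimal Python sketch of the kind of external check you could run against your own firewall's public address to see whether common management ports answer from the outside. The port list and target address are illustrative placeholders, not anything from the story, and you should only ever scan equipment you own or are authorized to test.

```python
import socket

# Common firewall management ports; adjust for your environment.
MGMT_PORTS = [443, 8443, 22, 80]

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: the port is not answering us.
        return False

def exposed_mgmt_ports(host: str) -> list[int]:
    """List the management ports on `host` that accept connections."""
    return [p for p in MGMT_PORTS if is_port_open(host, p)]

if __name__ == "__main__":
    # Placeholder: replace with the WAN address of a device you are authorized to test.
    target = "192.0.2.1"  # RFC 5737 documentation address, unroutable
    open_ports = exposed_mgmt_ports(target)
    if open_ports:
        print(f"WARNING: management ports reachable from outside: {open_ports}")
    else:
        print("No common management ports reachable from here.")
```

Run it from a network outside your perimeter; anything it reports open is a candidate for an ACL, a management VLAN, or simply disabling WAN-side administrative access.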
Second, implement robust multi-factor authentication across all critical systems. And finally, a reminder: continue to enforce strong password policies, because weak passwords are no match for attackers armed with today's AI tools. This attack is a wake-up call for businesses and governments alike. As AI continues to evolve, so too will the threats we all face. Staying ahead of cybercriminals now requires not only vigilance but also a proactive approach to security, one that assumes attackers will have the most advanced tools available, and for cheap. For our next story, we turn to a recent report about Amazon's Kiro AI coding tool and an incident that sparked both concern and a few laughs in the tech community. Amazon recently faced service disruptions in December, reportedly caused by engineers using its Kiro AI coding assistant. Kiro, which launched in 2025, is designed to tame the complexity of coding by autonomously handling tasks with minimal human oversight. However, during the incident, Kiro allegedly decided to delete and then recreate an entire production environment, leading to a 13-hour disruption that affected AWS Cost Explorer services in one of two regions in mainland China. Amazon has since clarified that the incident was not a case of the AI tool going rogue, stating it was caused by a misconfigured role. They explained that the same issue could have occurred with any development tool, or even manual actions by a human engineer. However, the event has reignited concerns about the use of AI agents in critical environments. And of course, the story has drawn some hilarious comparisons. Fans of the TV series Silicon Valley might remember an episode titled Artificial Lack of Intelligence, where the AI named Son of Anton hilariously deleted code in production. It seems that life may have just imitated art in this case. I mean, Amazon's defense here sounds a lot like what Gilfoyle, the Canadian sysadmin on Silicon Valley, once said.
Quote, it's possible Son of Anton decided the most efficient way to get rid of all the bugs was to get rid of all the software. Artificial neural nets are sort of a black box, so we'll never know for sure. End quote. One user on Reddit summed it up perfectly, saying, quote, oh, so when it works it's agentic, but when it fails, it's actually user error, end quote. But let's take a step back and add some clear, practical advice here. Yes, AI tools can be helpful for coding, but handing an AI agent the keys to prod is asking for trouble. Even human developers shouldn't be pushing code directly into prod without proper checks and safeguards. The tech industry's traditional move-fast-and-break-things mentality over the past 30 years has contributed to the dumpster fire we now see, with vulnerability-riddled code in many critical systems. As special forces troops around the world are fond of saying, slow is smooth and smooth is fast. When it comes to deploying AI-powered tools, we need to adopt this mindset. Rushing to implement powerful autonomous tools without the necessary guardrails is a recipe for disaster. Proper testing, layered review processes, the right credentials and careful implementation remain essential to avoid incidents just like this one. And still, don't let AI agents push directly to prod. Our third story today comes from The Hacker News and SiliconANGLE, and it's about a new AI-powered tool that could reshape how developers think about securing their code. Anthropic, one of the leading companies in the AI space, has launched a new feature for its Claude Code tool called Claude Code Security. The tool is designed to help developers identify and fix vulnerabilities in software code bases. What makes Claude Code Security stand out is its approach. Rather than simply relying on static rules or known patterns to flag issues, it uses generative AI to analyze code in a process much like what a human security researcher might do.
Here's what that means in practical terms: the AI maps out how different parts of the code interact, traces data flows, and identifies weak points or vulnerabilities. And it doesn't stop there. The tool also provides severity ratings for each issue, assigns confidence scores to its findings, and allows developers to prioritize which fixes to address first. Importantly, it's designed as a human-in-the-loop process, meaning the AI won't make changes to the code on its own. It provides recommendations, but developers still make the final decisions. That's good. The concept of automated code analysis isn't new. Static application security testing has been around for a while. However, many of those tools have historically faced some well-documented criticisms and limitations. They tend to generate a lot of noise, churning out false positives, and require a lot of manual triage by developers. They're also not great at identifying logical errors that may only manifest at runtime. It's clear that AI has the potential here to address some of the shortcomings of traditional static analysis tools by providing more context-aware analysis. But it's not yet clear whether it can fully deliver on all of that promise. I mean, the tool is only out in a limited test right now. Tools like Claude Code Security are also only as good as the data they're trained on. If the underlying AI code models haven't been exposed to diverse or complex code bases, there's a risk they could miss critical vulnerabilities or produce inaccurate results. It's also worth noting that the financial markets took notice of this announcement in a big way last week, with cybersecurity stocks getting pummeled the same way specialized legal software firm stocks were manhandled earlier this month. Shares of CrowdStrike and Cloudflare, two major players in the cybersecurity industry, fell by over 8% following the launch of Claude Code Security.
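To illustrate the triage idea in a few lines of Python, here's a sketch of how findings carrying severity ratings and confidence scores, like the ones the tool reportedly emits, might be ordered so developers see the highest-impact, highest-confidence issues first. The `Finding` structure and field names here are my own invention for illustration, not Anthropic's actual output format.

```python
from dataclasses import dataclass

# Rank severities so that "critical" sorts ahead of everything else.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    title: str
    severity: str      # one of SEVERITY_RANK's keys
    confidence: float  # 0.0 to 1.0: how sure the analyzer is

def triage(findings: list[Finding]) -> list[Finding]:
    """Order findings: most severe first, ties broken by higher confidence."""
    return sorted(findings, key=lambda f: (SEVERITY_RANK[f.severity], -f.confidence))

findings = [
    Finding("Hardcoded credential", "high", 0.95),
    Finding("Possible SQL injection", "critical", 0.70),
    Finding("Verbose error message", "low", 0.99),
    Finding("Unvalidated redirect", "high", 0.60),
]

for f in triage(findings):
    print(f"{f.severity:>8}  {f.confidence:.2f}  {f.title}")
```

The point of the two-part sort key is that severity dominates: a medium-confidence critical still outranks a near-certain low, which matches how a human security researcher would spend their review time.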
And let's not forget the elephant in the room about this whole story. These tools are designed for new code. They're not going to help with the mountains of legacy code that continue to underpin much of the Internet. Addressing vulnerabilities in decades-old systems is going to take a lot more than the latest flashy AI tool. I'm no financial expert and I'm not here to give you stock market advice. I am a cybersecurity expert, and thinking this tool is going to transform all aspects of cybersecurity, from MDR to actual Cloudflare services, anytime soon is one of the worst kinds of knee-jerk, silver-bullet overreactions that, well, I guess could only make sense on Wall Street. It's a tool, and like any tool, its effectiveness will depend on how it's used and the quality of the people using it. And now let's move on to a warning from the FBI about a persistent, successful and highly organized cyber threat group with a long track record of damage. The FBI has issued a new warning about the Chinese cyber espionage group known as Salt Typhoon. This group is infamous for its role in the massive compromise of US and global telecommunications infrastructure back in 2024. Now it seems their operations are far from over, and they remain a serious threat to both public and private organizations in more than 80 countries. Michael Machtinger, the FBI's deputy assistant for cyber intelligence, recently spoke at the Cyber Talks forum in Washington, D.C., where he laid out the group's tactics. What's striking is that Salt Typhoon isn't relying on advanced, cutting-edge zero-day exploits; they're targeting the same weak spots we've been talking about for years: unpatched systems, old code, and weak or reused passwords. In other words, they're not reinventing the wheel here. They don't have to. Basic vulnerabilities are still giving them enough access to critical systems, and they're leveraging that access to devastating effect.
Machtinger highlighted that the group also relies heavily on phishing campaigns to trick their victims into handing over credentials, a gentle reminder that cybersecurity awareness and phishing simulations are still valuable. Once inside a system, they move laterally, gather intelligence and maintain a persistent presence, making them incredibly difficult to detect and remove. Now here's where things get really concerning. The warning from the FBI comes at a time when the U.S. Federal Communications Commission has decided to loosen cybersecurity requirements for American telecommunications companies. That's right. At a time when telecommunications infrastructure is still very much under attack, regulatory oversight was scaled back. It's a move that many experts, myself included, argue is the wrong direction to take. Here in Canada, we face similar challenges. While there's been talk about implementing stronger critical infrastructure cybersecurity laws, progress has been frustratingly slow. Meanwhile, groups like Salt Typhoon continue to exploit the gaps. So what can your organization do in all of this? Well, first, prioritize the basics. Patch vulnerabilities as soon as updates are available, implement multi-factor authentication and enforce strong password policies. It's easy to get swept up in all the talk of advanced threats, but as the FBI points out, most breaches continue to start with something simple, and something often preventable. Now, this is all great advice if Salt Typhoon or another threat actor is targeting you directly, in the infrastructure and the environment you control. But if they're pwning your telecommunications providers, there's not much you can do about that. That's why we should all be supporting regulations for critical infrastructure and demanding secure-by-default equipment from network providers.
In particular, China has had a field day because of hilariously bad networking infrastructure security that lacks things like robust authentication and MFA. Our final story is a deeply troubling one, highlighting the rise of online radicalization among youth and the challenges posed by technology in identifying and stopping safety threats before it's too late. We start here in my home province of New Brunswick, where the Royal Canadian Mounted Police have issued a second terrorism peace bond to a youth linked to an extremist group known as 764. This group, which the Canadian government designated a terrorist organization in December 2025, is part of a disturbing trend of online extremist groups that specifically target vulnerable young people. According to reports, the youth in question was associated with the group's activities, which include coercing others into self-harm, threatening schools, and creating propaganda to boost the group's visibility. While the RCMP haven't provided many details about this case or the previous one from earlier this month, they've confirmed the two cases are unrelated. However, both point to a growing and concerning issue: how online platforms and digital spaces are being weaponized to recruit, radicalize and manipulate youth. Experts have been sounding the alarm about this trend for years. Groups like the 764 network are highly effective at targeting young people who may already be feeling isolated or struggling with mental health issues. These groups use social media and gaming platforms to draw in their victims, and in many cases they use coercive tactics like blackmail and intimidation to push them into dangerous actions. And social media firms aren't the only ones in the spotlight in Canada when it comes to violence. This brings us to the second part of this story, which comes from follow-up reporting on the horrible school shooting tragedy in Tumbler Ridge, British Columbia.
Seven victims, including five students and a teacher's aide at the school, as well as family members, were gunned down by an 18-year-old shooter who also took their own life. The Wall Street Journal said Friday that the perpetrator had an account with ChatGPT suspended in June 2025. Employees with the AI maker had sought to notify authorities but were rebuffed by the company, according to the reports. Canada's The Globe and Mail reported on Sunday that OpenAI did not initially disclose this when they met with British Columbia government officials the day after the shooting. B.C. Premier David Eby and Canada's federal minister for AI, Evan Solomon, condemned OpenAI's handling of the matter in separate statements. Quote, reports that allege OpenAI had related intelligence before the shooting in Tumbler Ridge took place are profoundly disturbing for the victims' families and all British Columbians, end quote, Mr. Eby said in a statement. Solomon said he was, quote, deeply disturbed, end quote, by reports that concerning online activity from the suspect was not reported to law enforcement in a timely manner. Solomon said he's in contact with OpenAI and other companies about safety procedures. Quote, all options are on the table to ensure public safety and the protection of our children, end quote. The situation in New Brunswick and the tragedy in British Columbia are connected by a common thread: the role of technology in both enabling and preventing harm. On one hand, the Internet provided a platform for extremist groups like 764 to reach vulnerable people, and AI tools can contribute to tragedies when used by individuals suffering from mental illness. This is not the first case where OpenAI is facing legal trouble related to the use of its software. OpenAI is currently being sued by the estate of an elderly woman who was murdered by her son in Connecticut.
As these stories show, the systems we have in place right now to alert us when things are going wrong in the technology space, and to prevent tragedies, clearly aren't enough. Whether it's government, law enforcement or tech companies, we need stronger frameworks for cooperation, accountability and intervention. Otherwise, we risk leaving young people vulnerable to those who would exploit them, and failing to act in time when the warning signs are right in front of us. Here in Canada, we also need to address the gaps in our laws, and as these cases show, the consequences of inaction can be devastating. If you're a parent, teacher, tech expert or community leader, now is the time to have open and honest conversations with young people about the risks of online spaces and tools like AI. Together, we can help protect our youth and other vulnerable individuals, and push for the changes needed to hold those who misuse technology accountable. That's Cybersecurity Today for Monday, February 23, 2026. I've been your host, David Shipley. Jim Love will be back on Wednesday.
