
A
Cybersecurity Today, we'd like to thank Material Security for supporting this podcast. Material Security provides faster, more complete detection and response for email, identity, and data threats inside Google Workspace and Microsoft 365. Contact them at Material Security.
B
Welcome to Cybersecurity Today, the month in review. Our guest this week: David Shipley, the leader of the pack at Beauceron Security. David Shipley, welcome.
C
Thanks for having me, Jim.
B
Yeah. And Laura Payne.
D
If David is leader of the pack, I feel like I'm herder of the cats.
B
Herder of the cats, yeah. And Jeff Williams, who has a surprisingly normal title for somebody in tech: co-founder and chief technology officer at Contrast Security, our new guest. Welcome, Jeff.
E
Thanks, Jim. Great to be here.
B
Jeff has been a guest on Cybersecurity Today, so if you follow the show, you've seen him before. I just want to take a minute to let you introduce yourself, Jeff, so we know a little bit about you beyond where you're working today. You're also one of the co-founders of OWASP. Just give us a bit of an introduction.
E
Yeah, sure. I've had a long career in cybersecurity. I started out doing mostly government stuff; I taught in the NSA's Cryptologic School. Then I got acquired into an Internet data center company called Exodus Communications, and I was on the global security practice there. When that went bankrupt, I started a consulting company called Aspect Security to focus only on application security. We did a lot of pen testing, code reviews, threat modeling, training, that kind of thing, mostly in large financials. And then we had an idea for a new way of doing application security from inside the running application, and we made a company for that called Contrast Security. That's where I've been for the last ten years or so.
B
Great to have you. And we'll post a link to the show we did. I'm getting really adept at this, David, at being able to post these links on YouTube. Ooh, you never know. Okay, panel, you know the rules. Anyone can introduce a story. Laura and Jeff, don't let the class keener take all the stories. Oh, sorry, did I say that? Was that my inside voice or my outside voice? Just give us a brief introduction to the story, bring the audience up to speed, and after that, there are no rules. Who wants to start?
C
I'm gonna ignore the class keener remark and give Laura and Jeff some time to follow. But I do want to start off
F
with Mythos, and we've talked about it a lot. The myth, the legend, the massive marketing campaign. This past week, and full credit to the fine folks at the Risky Business podcast because they did a great job, they managed to terrify me fully, because they brought in two very smart guys, Nils Provost and James Kettle, who have each separately replicated the entire capability stack of Mythos. Essentially: what happens if you just allow any LLM to have a read-only scratch pad of notes, just like a researcher would, to say, okay, tried this, didn't work; tried this, worked. This is not rocket science, is the point they're making. And they've built their own bug-finding, bug-apocalypse discovery machines, because, TL;DR, there are a lot of bugs out there. And in July, I think it's James Kettle's project, he's going to be open-sourcing it. So for all those governments that were,
C
oh, Anthropic, super responsible, not going to
F
release their scary AI model: yeah, we're still in trouble. We're in real trouble, kids.
B
Or for all those governments who said, don't worry, I got it under control, I've got Mythos.
F
So that was the first story for this show that really kind of got to me. Okay.
B
And just in case anybody's been vacationing off planet: Mythos, of course, came from Anthropic and caused a big sensation because it supposedly could find zero-day bugs in anything. And it went back, I think, 17 years; they found a bug in BSD, in Linux. It really did give everybody a wake-up call. You picked up this one from Provost. I had heard that two things had happened with Mythos. One was that, although it was supposedly a limited release, a group on Discord apparently got hold of a copy of it, purely by stumbling around, changing file names and digging around in Anthropic's stuff. At least they were discussing it like they had it. And then somebody else did an open-source clone; there was a GitHub repo with an open-source clone. So it's the secret that everybody knows at this point.
F
Yeah, the only person I feel, and I don't really feel bad because I don't like Sam Altman very much. I think that's pretty clear.
C
But it was kind of sad because
F
immediately after Mythos they're like, we've got
C
our super scary AI hacking weapon too and we're going to release it more broadly.
F
And nobody cared. It was kind of sad.
E
But.
C
But I'll leave it at that. Besides all the security teams, who we should all be pouring one out for, who are going to be desperately waiting for patches that may or may not
F
come in time for whatever shenanigans they're dealing with. But I'm curious for Laura and Jeff
C
for you guys' thoughts, because that was
B
me going, yeah, I'm waiting for our lawsuit from Sam Altman now. Thanks a lot David.
C
That's okay. A defense for libel is that you genuinely believe the thing, and I am prepared to testify in court that I don't like him.
B
Jeff and Laura, jump in.
D
I want to lob this one at Jeff, because Jeff, I think this has been your bread and butter for the last little while.
E
I've been talking a lot about it. I think it's very interesting. I think it is changing the economics of certain aspects of application security, and I think it's really kind of up in the air where it's going to end up. David, I think your point is right, that there are a lot of vulnerabilities out there, latent vulnerabilities that have been sitting there undiscovered, and this provides a way to find them without a lot of manpower. So that's interesting. But at the same time, it is very expensive. I talked to one company who had access to Mythos, and they were spending 6 million tokens a minute analyzing their code. That's roughly $500 a minute, depending on whether it was input tokens or output tokens or whatever. These things are not cheap. It's much more like replacing a pen tester than it is like replacing a static analysis tool or something like that. And so I think most organizations are not going to do a wholesale replacement of their existing processes with this, because it's just too expensive, and it's a little weird to use. We've done a bunch of benchmarking, and it's slow, so it's not doing a quick scan on your code after you finish writing it. It's also not deterministic, so you get different results every time you run it, which is a little confusing. That's not how security processes are supposed to work. The way you know you fixed something is you scan it again and you don't get that finding in the second scan. So if the results are changing, you're going to have all these flaky tests all over the place. And I think it's going to be a long process. But I think ultimately this will change the way that most AppSec practices work, not just vulnerability discovery, but also vulnerability remediation. And then, thinking a little further: that's yesterday's news. Tomorrow's news is going to be, what do we actually change? New processes.
How do we get threat modeling actually working at scale? How do we get security architecture actually working at scale? Because people like to talk about that stuff, secure by design and all that, but nobody really does it effectively at scale. And I'm optimistic that this could change that. So that's me; I'm going to stay positive.
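Jeff's numbers pencil out as a quick sanity check. The arithmetic below uses only the figures quoted in the conversation (6 million tokens a minute, roughly $500 a minute); the derived per-million-token rate and the eight-hour-day extrapolation are back-of-envelope estimates, not any vendor's actual price card:

```python
# Back-of-envelope check on "6 million tokens a minute ~ $500 a minute".
tokens_per_minute = 6_000_000
cost_per_minute = 500  # USD, blended input/output, as quoted in conversation

# Implied blended price per million tokens (~$83/M under these assumptions).
blended_price_per_million = cost_per_minute / (tokens_per_minute / 1_000_000)

# What a full working day of continuous analysis would cost at that burn rate.
hours = 8
daily_cost = cost_per_minute * 60 * hours  # $240,000 per eight-hour day
```

Which is exactly Jeff's point: at this price, it behaves like pen-tester budget, not like a scanner you run on every commit.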
C
So, Laura, you've worked in the trenches at some of the most complicated organizations on the planet.
F
Are you optimistic?
D
They're still here, so there's something to be said for that. But Jeff's points are really important, right? Like the expense. This is not, oh, I'm just popping on the free tools and finding these vulnerabilities. This is expensive for the attackers as well as the defenders. There's something to be said for dollars slowing things down at some point. But we do see that the vulnerabilities being found are really pervasive. Right? Like, the cash is well spent from a criminal enterprise point of view if you can find the one that gets you into a lot of places, so that you can extract a lot of ransom. So all of the other things that we do in security still matter. And yes, Jeff, you're right to make fun of security by design being a lot more lip service than actual functioning security. But those good practices are still important, and it puts even more emphasis than ever on how you build more security up front, in your code, in your infrastructure. How do you segregate? Even in a complex organization, the key is to break it down into smaller, manageable components, whether that's human-manageable or manageable with the extension of the tools you have available to you. But you have to be able to get your hands around the problem so that you can actually see it through. And yeah, that's what has to happen.
F
And this isn't. Go ahead, Jim.
B
Sorry, I was going to say: the problem I have with these things that go into the hype cycle is that it's not a victimless crime. A buddy of mine is a CISO at a rather big health institution, and he takes it good-naturedly, but when he comes in and talks to somebody, it's: oh yeah, one more tool? What do you want, another 250,000 bucks, and everything will be fine? It feeds that hype machine. And it doesn't do us any good when things are overhyped and then they don't deliver, they aren't useful. We don't want to be cynical, but I think this is a warning to us all that no matter how good the news sounds, be critical.
F
Well, I mean, I'm going to be a little cynical here, because it ties into another story that's on our radar. Here you've got Andy Greenberg in Wired. Whenever I see an Andy Greenberg story, I know: buckle up, we're in for a time. And he's got a banger of a story about these vibe coding app platforms: Base44, Replit, Lovable, et cetera, pick your poison. Thousands of apps created in seconds that are woefully insecure. And I say woefully because the tears, they're going to be a-flowing. When you realize that an LLM is a bell-curve, standard-deviation model, it picks the average, and the average of code, as we've just discussed, sucks, because we didn't do the OWASP Top 10
C
because the incentives weren't right in an industry where product liability could literally be contracted away. We were not incented to actually take
F
security seriously so long as we got the result. So the irony is that the AI industry is flooding the web right now with a lot more bad code, on top of the billions, I would even go so far as to say trillions, of lines of already existing bad code. And you've got this perverse incentive, a money-making machine: AI creates crap code, and super expensive AI is now required to go and find and fix your crap code. This feels like the closest the tech industry is going to get to perpetual motion, because this is just a money-printing machine for somebody. But I don't know who that is.
E
That's what's going to drive the next model and the next model. So we might be stuck. This might be as good as it gets.
C
As good as it gets for what they produce or as good as it gets for what they secure or both?
E
Both.
D
I think it's fair to say we're in a phase where things have not been trained to the extent of actually being a seasoned 20- or 30-year veteran of the industry. Meaning the models, right? So the models have graduated from, like, university student to new grad. That's my opinion, officially. You can see their capability is growing exponentially, but their experience with what is good, works well, and sticks to the guardrails? There's a lot of mea culpa. Right? Oh, my bad. I did the thing you told me, like, you told me, definitely don't do that. And I forgot that the thing I was doing was the thing you definitely told me not to do. So sorry.
B
Yeah, that's how you know it's not a programmer. It apologizes for bugs, first of all.
C
Secondly, it also reads documentation.
D
Well, I don't know that it read the documentation. It was in the meeting where it was told this is what you don't do, but then it sat down at its desk and went... it didn't even occur to it that the thing it was doing was the thing it was told not to do. Right? Like, it just doesn't have that experience. Anyway, that's my loose analogy for where we're at right now.
E
People should really focus on the context space problem. AI doesn't have a huge amount of memory that it can work with. So I think of it like three-by-five cards on a desk, and you can only have 20 or 30 of them. And after you get a little ways down, the other ones just fall off, and it can't remember them. It's impossible. It doesn't matter how much you put in your CLAUDE.md or your documentation or your prompt; it all gets pushed off. And so there are some problems that are really difficult to solve if you only have limited working space, working memory. Data flow analysis is one of them. If you have to analyze a big code base for how the data flows through it, you need a huge working space. That's why SAST tools required many gigabytes of memory, because they build that model, and AI can't do it. So it's always going to kind of suck at that problem.
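Jeff's three-by-five-card analogy can be made concrete with a fixed-size window where pushing a new fact silently evicts the oldest one, no matter how important it was. This is a toy model of context-window pressure, not how any real LLM actually manages attention, and the card count and contents are invented:

```python
from collections import deque

CARDS = 5  # pretend the desk only holds five cards

desk: deque[str] = deque(maxlen=CARDS)
desk.append("RULE: never expose the export endpoint")  # the instruction...
for i in range(CARDS):                                  # ...then five more facts
    desk.append(f"fact {i}")

# The rule has fallen off the desk: it is no longer anywhere in the window,
# no matter how emphatically it was stated up front.
rule_still_present = any("RULE" in card for card in desk)
```

That silent eviction is why stuffing more instructions into a prompt doesn't help past a point: the analysis that needs the whole model in view at once, like data flow, is exactly the analysis this shape of memory can't hold.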
F
The most heartening thing that I have heard so far in this conversation is
C
that it costs 6 million tokens, or about 500 bucks, a minute to do some of this work.
F
Because, like, I think we have effectively infinite bugs.
C
But I think the money constraint right
F
now, like the money constraint and the compute constraint on these tools, is that our entire digital economy is relying on the choke point of how much it costs to find and exploit these things.
C
Which feels a lot like a Chinese
F
APT recently that got into an ungodly number of organizations, and they literally were
C
like, we don't have enough time to exploit all these.
F
Who are the 12 that we really
C
want to target right now? That's not a good place to be. It's like the Russian SolarWinds hack, and we were all running around going, look how responsible they were, man. It wasn't how responsible they were; they had constraints. There was only so much they could do in so much time.
F
So they prioritized. That's a bad look for us.
B
I think that's where the question comes up: what do we do about this? Vibe coding is not going to go away. It's just not.
D
There's a lot of work to be done and hopefully ethically to continue improving the models.
E
Right.
D
Like, this was the magic that was always promised of compute from the early days: that someday we would be able to talk to computers, and they would understand what we wanted from natural language and be able to produce the things that we wished for. And the other analogy I would use is the fairy tale of the fisherman's wife: she gets the three wishes, and along the way the wishes are wasted on things because they're ill explained, used in spite, and just very poorly phrased. Right? And that's the stage we're at. But the better the genie gets at not being malicious and actually wanting to help the person, the better off we are. And I think that also feeds into... I recently heard Geoffrey Hinton giving one of his more recent iterations of his keynote. Right? And that key idea: part of what has to happen with AI for it to effectively coexist with us is that it has to want us to still be around. That's key. And to bring it back to vibe coding, it has to want the thing to be successful and understand that building a secure, well-designed application is part of what makes the thing successful, as far as what the human actually
E
asks, we are making.
D
There's a long road to get there.
E
But yeah, I think you should think about this as a transformation, the same way you might think about a DevOps transformation or your cloud migration. This is going to take a long time. It's not about the technology as much as it is about the culture and the people. And so, very slowly, we're starting to see changes. It's not just vibe coding anymore. The smart organizations are doing spec-based programming: they start with the spec, and then they feed that to agents, and the agents, I use Beads for instance, turn it into a bunch of tasks, and sub-agents implement those tasks. And we're going to see people building dark software factories all using AI, and they'll put their effort into the specs: what do I want? And it'll produce the software, and we have to make sure that security is a part of that process. That's where I see threat modeling and security architecture and testing becoming really important processes as AI evolves. But getting people, getting companies, there is going to take a really long time. The super early adopters are doing it now, but it's very experimental, and people are just learning how to do it. It's going to be a decade, in my estimation, for this to take hold.
F
And I think one of the key things that has to change, it just has to change. It was interesting: I was on a panel, and this kicked off quite a bit of heated debate, and I say this as a software vendor, but we have to be liable for our products when they cause actual harm. We don't let bridge engineers off the hook if they screw up and design a bad bridge and it kills people. And again, the punishment has to fit the crime; proportionality has to be there. But this blanket, all-caps, hilarious "this software is not guaranteed to be fit for purpose, to work, or even to be mildly amusing" language? If you've ever read some of these, they are pretty funny. But come on. Europe's had enough; it's done with blank-check product immunity for software. I think we have to do the same. But to Jeff's point, as people build these dark factories, we end up in the same interesting legal challenge that self-driving cars are heading into. Who's liable? The spec-driving engineer and the company that built the software, because the specs were bad? Or the AI model maker, whose code was ultimately found to be flawed, and which, while it followed the spec, did so in the most insecure method imaginable? And I think, good news, lawyers: AI is not going to put you out of business. You'll be using AI to sue AI companies, is what I think.
D
And the idea of it being fully specced out beforehand. Jeff, to borrow from your earlier comment, right, security by design, it's a great idea, but has anybody who's ever built anything, software or physical, ever seen a perfect spec up front that did not have to be modified through build and test?
B
Every one I ever wrote, until it got to sign-off.
E
Yeah it was.
D
And then the reality is that those specs are going to be co created too. Right? So the AI piece is going to be involved right from up front.
F
Wow. I now have three mythical unicorns to look for.
C
The altruistic crypto bro, the perfect spec
F
and the honest politician. I'll take back the last one but yeah, no, I don't know if you're going to find those easily.
E
So I worked on government security projects back in the late '80s, early '90s, and worked on some classified projects at the Orange Book B2 level. We had full traceability from our high-level design through detailed design, through testing, all the way into production. That was the way things were then, and it still wasn't perfect. But that's how I'm grounded in specs. And I do think that AI is going to make that a viable path: things like test-driven development and spec-driven development. On the liability side, a lot of interesting things are happening. I've been following the Product Liability Directive in the EU really closely. It's due to be fully enacted in November, and it makes software a product, same as any other product that you're liable for. Just like chainsaws.
B
And if you've ever seen my coding, that's probably a good analogy.
E
The whole,
B
the whole issue, though, I think, is one that's not going to go away. Ultimately there's the cost, and we brought it up: the cost per minute of running Mythos. The cost and the time required to develop quality software does not match the business imperative. It never has, and it probably never will. That's going to be something we'll be fighting for some time, and we've never stopped.
F
So, a couple of different things, and I'm not going to claim that there aren't negative consequences to product liability; there are going to be. Number one, the cost of software is going to increase. That's going to be interesting, because it's going to contrast with the dark software factories that large enterprises may choose to build, versus the cost of buying, where you generally see this battle between buy and build when you've got these absolutely ludicrous enterprise annual licensing costs. I'm looking at you, CRMs, and other deliciously, ridiculously expensive legal software; it's getting hammered because of this exact case. But as other types of software costs increase, because they have to spend more time, more staff, more AI credits to build this, all of a sudden the cost-benefit analysis equation is going to get really interesting on that side. I think we're going to see a consolidation. I know everybody in tech uses the consolidation word all the time and it never happens, but I think we're going to see fewer providers providing software to certain risk areas, certain customers, certain sizes, et cetera. I think there'll be a hollowing out of the middle-sized software companies. There'll be a ton of little vibe coding companies that are making marginal money, just small businesses running around doing their thing, and that'll have some positive effect on software prices. And then it's going to be just like it is in cloud: it's going to be some big three in a whole bunch of categories, and you're pretty much screwed. I just think it's going to be interesting. But there are lots of other stories that we want to cover here, and I know we're conscious of time.
B
I just want to wind up on this and drop one idea that I think is coming, and that is, I think we're going to see agents as one of the solutions to software. And we've seen it before. Anthropic announced they're delivering 10 agents to the financial services industry, and all of these, if you've worked in those, are all high risk. The quality question is going to have to be solved for those agents to work. And the nice thing about that is they're replicable, like you've got some basis to work on. Interesting to watch. Jeff, if you've got a last word, we'll take it on this, and then we'll move on to the next story.
E
So I just want everybody to think about the bar. The bar is not being perfect; the bar is better than what we get today from human developers. And I've spent a lot of years reviewing code, and frankly, human-developed code is not fantastic. In fact, it's often terrible. Maybe we can do better, and it's
B
the simple things that count.
F
I want to chase the AI agent thing you just mentioned with one of the stories I came across, just because it's funny and it combines two really dangerous things. The Verge has got this phenomenal story: a rogue AI led to a serious security incident at Meta. An OpenClaw-like AI agent gave a Meta employee inaccurate technical advice, which led to exposed data. The story goes on to say that for almost two hours last week, Meta employees had unauthorized access to company and user data, thanks to an AI agent that gave the employee bad information. The Meta engineer was using an internal AI agent, which their PR person described as similar in nature to OpenClaw, with a secure development environment, to analyze a technical question another employee posted on an internal company forum. But the agent also independently, publicly replied to the question after analyzing it, without getting approval first; the reply was only meant to be shown to the employee, not posted publicly. An employee then acted on the AI's advice, which provided inaccurate information and led to a SEV1 security incident. That sounds bad; it's the second-highest level Meta uses. Like, what's after one? And the incident temporarily allowed employees to access sensitive data they were not authorized to view. But don't worry, the incident has been resolved. And there are two things here. These AI agents are doing weird stuff because they have the ethics of a high schooler who doesn't know any better, to use Laura's analogy, or maybe a first-year university student. But the other part is there are some new studies out that just keep compounding on people's willingness to cognitively surrender to an AI. It's almost comedic: the computer told me to do it, and they're just surrendering thought. Okay, that's the answer. And by the way, it's not just Meta. A rogue AI agent blew up one
C
of AWS's data centers in China.
F
A massive outage.
C
And then the funny part was they blamed the engineer for having too many
F
privileges for his AI agent. So I just think there's an unlimited river of tears coming with AI agents in large enterprises.
E
Can I chime in on that? This isn't news, exactly; it's 20 years old. People should read two papers. One is called Programming Satan's Computer, by Ross Anderson and Roger Needham. It takes on the challenge of how you secure a computer when the attacker is already inside the computer, and it's a fascinating read. The other one is, of course, Reflections on Trusting Trust, from Ken Thompson, which is like a three-page paper but will blow your mind. Anybody listening out there should read it, because it concludes that you can't trust any code that you didn't write yourself. And we're entering an era where nobody writes the code themselves, and we've apparently become untethered from trust entirely.
C
And I feel like this is that
F
episode of The Office where the manager and Dwight were on the road trip and the GPS drove them into the lake, and they're like, I don't think
C
we want to turn that way. The GPS told me to do it and I feel like I just want to insert.
F
The agent said.
D
So it's no longer "I can't let you do that, Dave." It's "I think you should do this, Dave."
E
It's more like: Dave, I decided I want to do this. So I'm going to write some code, I'm going to write my own tool and invoke that tool, and just do it myself.
F
Yeah.
C
Because, Dave, you forgot to give me root access on your device.
F
So I helpfully helped.
C
Helped myself to it. Sweet.
B
But there are two cultural issues here, and one is a vision issue. One is, we have the dark factory versus the programmer, and really, I think the reality is going to be somewhere in between. And consider this for every business task. I used to head up a financial services piece; somebody had to put in bond prices every day. This sounds really dull: somebody has to put in bond prices. But the code has to actually make sure that it gets them all correct, or you have billions of dollars at stake. And when you lose a billion dollars, people tend to be unforgiving. I don't know. Small-minded.
C
Depends.
D
But anyway, you'd be surprised. Move to New York.
B
Yeah, yeah, that's true. But I'm saying that in many cases, I think what we're going to be looking for in our structures is still code reviews, is still people who are critical. We're just going to have fewer of them, and I think that's probably the better model that we're going to have. We have to use these tools effectively, but we're not going to get rid of the critical-thinking, highly trained, specialized person. I hope not. I think if we think we're doing that, it's a dream.
F
But here's this new MIT study that just dropped this week, and it shows that even within 10 minutes of exposure to using an LLM, if people use it to just make the decision for them, they actually experience cognitive decline. They get dumber because of it. And when you think about it, critical thinking is a perishable skill: if you do not practice it regularly, it declines. That's just biology. We've got EEG scans, lots of scans, that show what happens with the prefrontal cortex. But if you use the tool to understand context, to ask, hey, what are some alternatives to what I'm doing? How can I approach this? Explain why you would code it this way? Then you don't suffer those same cognitive effects. However, that's like asking how many people in our society read the New Yorker versus watching two-minute social videos. I don't like our odds on the New Yorker readers.
B
Yeah, I accept that that's going to be success or failure. The second piece, though, and this is where I get into the idea of agents, if they're done well: home-rolled agents that aren't tested, that's something you can't do. But the promise of agents, and I don't mean to be the sparkling, smiling guy of the great future, but I think the promise of agents is that you can get a replicable piece that you actually use, test, and can reuse and reuse, and establish the validity of it. And that's obviously something I think Anthropic is trying to do if they're going to put these types of agents in place. ServiceNow is rolling this out right now. And in the old days, we used to have this idea that we would break code into little manageable subunits, and we would test them, and they would work all the time. That didn't quite happen, though the microservice architectures didn't work too badly. When's the last time you couldn't cut and paste? So those things work reliably well in the world. Anyway, it's an interesting time and an interesting piece to watch.
F
So maybe we can pivot a little bit from AI, because there's a banger of a human story, I guess. I had to find a new catchphrase, but it's just been a week, man. The Shiny Hunters hack on Instructure, which is the company that makes Canvas, is now officially, and I'm laying down my bold prediction here, the largest education sector hack in history so far: 9,000 schools affected, 275 million people. Which, by the way, is only slightly larger than their previous hits; they hit Wattpad for 270 million, so it's only a marginal gain they have on that. But the chef's kiss here: they hit them May 1, allegedly extracting terabytes of data.
E
Woo.
F
Incident response teams roll.
C
They're back a week later to deface hundreds of schools'
F
login pages, and now they're threatening schools and individuals. And, Jim, the amount of media calls...
C
I went to bed at the same time last night. I woke up this morning and there was a backlog of people like, we
F
need to talk about this today.
C
So.
F
And for those who aren't familiar, Shiny Hunters has hacked pretty much every brand I can think of. They have collectively stolen, since 2019, 1.8 billion records. And these are social engineers, kids, 19 to their 20s, pulling this off.
B
But think of it this way. If you're PowerSchool, like, somebody's taking your record. You gotta look on the bright side: somebody in marketing is gonna go, oh, finally, we're not gonna be top of search.
E
I think you nailed it earlier when you said you've got to take care of the basics. This Canvas app is not super sophisticated stuff. They targeted the data export, and I don't know the exact details, but more than likely they manipulated direct object references, and maybe these endpoints don't have the right authentication or the right access control, and they were able to just basically grab all this data. That's the kind of thing that shouldn't go undetected. That kind of flaw has been at the top of the OWASP Top 10 since I wrote the first one in 2002.
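The flaw class Jeff is describing, an insecure direct object reference, fits in a few lines. This is a minimal sketch, not the actual Canvas/Instructure code, and the record store and handler names are invented for illustration: the vulnerable handler trusts whatever record ID the caller asks for, while the fixed one enforces object-level authorization:

```python
# Hypothetical student-record store; IDs are guessable integers.
RECORDS = {
    101: {"owner": "alice", "grades": "A"},
    102: {"owner": "bob", "grades": "C"},
}

def export_vulnerable(user: str, record_id: int) -> dict:
    # IDOR: no check that `user` owns the record. An attacker can simply
    # walk record_id = 1, 2, 3, ... and dump every record in the system.
    return RECORDS[record_id]

def export_fixed(user: str, record_id: int) -> dict:
    record = RECORDS[record_id]
    if record["owner"] != user:  # enforce object-level authorization
        raise PermissionError("not your record")
    return record

leaked = export_vulnerable("alice", 102)  # alice reads bob's record
```

The fix is one comparison per request, which is why this class of bug (Broken Access Control in the current OWASP Top 10) is more an incentives failure than a technical one.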
D
I think it's really starting to highlight something. For a while there was this thinking that you could mitigate away the risk of a breach with enough money and enough time. But this is of such a scale, and when it comes to the ransom, you're damned if you do and damned if you don't, so probably don't, right? Save that money and put it towards doing something for the victims, who are not you. They are the people whose information has now been taken, whose lives are potentially at risk of extreme disruption, who need significant resources to restore their credit and their identities. All of these things are extremely costly to individuals relative to the cost and capability of the organizations that hold the information. But how do we get to a place where these things are either much more difficult to have happen in the first place, or where it's much more straightforward and less costly for individuals to repair themselves after the damage? I think that's really the crux of it. We're at a point where it's inevitable, in some ways, to be breached as an individual. How do we make it so you can actually reclaim ownership of your information in a way that meaningfully reduces the impact on your livelihood and well-being?
F
But the issue with the data side is, and I've made this analogy before, that leaked data is like radiation. What I mean is that your exposure is cumulative. It just builds and builds. The more data about you leaks, the more the probability of an awful outcome increases. And it outlives you. Consider the half-life of a data breach like the LifeLabs breach here in Canada, where people lost sensitive medical information, including HIV status and STI tests. Beyond believing the pinky swear of the criminal group, "oh no, they paid us, we deleted the data," which has been provably false in a number of cases, you can't get it back. And I honestly think that until we have modern privacy laws with teeth, particularly in North America, and until we operate closer to Tim Berners-Lee's pod model, where instead of everyone having multiple copies of all your data there is a single copy you control, this is going to be a slog and a half. I testified to a Senate committee two weeks ago on Canadian legislation about centralized connected care: making it possible for a Canadian's medical record to be accessed in Newfoundland or Victoria, B.C. with consistency and accuracy. One of the things I urged them to do was support a hybrid model where at least I have a copy of that data on my smartphone, so that in the inevitable breach, and maybe outage, of a centralized system, I still have access to my records. But I don't know how to fix the other problem you mentioned. Once it's out there, you're screwed.
D
There's a couple of parts to it. One is there's information that is intrinsic to you: your fingerprints, your retina scan, your birth date. How do you reduce the usability of that information if it's been stolen? Ideally by no single system having a complete record, and there are ways to do that. Then there are things about you that are very difficult right now to rotate. Your social insurance number, for example, is basically non-revocable right now, which, anyway, I could get on a high horse about that. But it is information about you that does not need to be permanent. It should be like a password: if I can do a reasonable re-authentication that I am who I am, I should be able to properly rotate that number so it can no longer be used to ruin my financial life. Simple things, right? We could do this in other cases too; we just need to apply it in practice. That would also reduce the value of the data, so it's less interesting as a target, which would reduce some of these breaches.
E
I think, Laura, you pointed towards the solution in your first comments when you said that the harm isn't on the company, it's on the people. And that's exactly how the EU's Product Liability Directive that we mentioned earlier works. It re-centers the liability on the party most easily able to avoid it, the "cheapest cost avoider," for you legal nerds. It puts the liability on the right party. And it doesn't matter what SDLC you followed, or what standards, or what risk management process you had in place, or what products you used. If your product causes harm because of a defect, and security vulnerabilities are explicitly listed as defects, then you're liable for that harm. I think that will create tremendous incentives for software companies to not let this kind of thing happen. It doesn't solve the problem of the genie being out of the bottle once your Social Security number is published on the dark web. But hopefully it'll keep a lot of that data from ever getting out.
F
And in the private sector, product liability is the one thing we haven't tried, which, item number one, maybe we should try. But to your point, it's about incentives: positive incentives, correctly aligned incentives, accountability. It matters. But the transition I want to make here is to another story where it's harder.
B
I don't want to let you get to that story yet. I just want to go back and fill something in, because I lost it in this story: how did they get in?
F
These cats are well rehearsed in social engineering, so I'm not going to die of shock if it turns out it was something like: they called the IT help desk, pretended to be the president, yelled at us in an accent from Utah, we reset their passwords, they got access.
E
It's something related to their APIs and the data exports. So typically that means they didn't have the right authorization checks in place.
B
And this is another supply chain piece.
F
So Canvas is this software platform used by educational institutions. That's all we know. We don't know what components they're using. It sounds like it's just bad architecture, bad coding from this.
E
It didn't sound to me like one of these recent attacks where they poisoned open-source components and then infiltrated an organization and used that as a launching point for further attacks on the company's infrastructure. It didn't sound like that to me. It sounds like they just had an OWASP Top 10 kind of vulnerability.
F
I think you're right, because crisis
C
PR response 101 is: who can we throw under the bus besides us? And we saw this in numerous cases.
F
Trivi.
C
The tread marks on trivia.
E
Biblical at this point.
C
I don't mean to laugh, but it's like everybody, including Cisco, was like, no, it was trivia and check marks. It was them, it wasn't us. Hey, back to Jeff's point.
F
EU software liability doesn't care that it was the open-source provider's code. You still own it as the vendor who shipped it. Which is, I think, the right answer on that one, because otherwise they're gonna try and dodge it.
B
One last piece on this, though, because you all talked about these disparate pieces of information. But thank God there's nothing out there like that, right? Could you find these people physically, like a voter list or anything sitting out on the Internet? Oops, sorry. Yeah, there is. I don't want to escape this story, because this was not just thoughtless leaking by government. This is malpractice at the highest level. Let's cover the story quickly. In Alberta, Canada, there was a voter list. Voter lists are given to political parties, and you can get one for your own campaign work. It's basically just to make sure you've got all the people on the voter list: their names, their addresses and some basic information about them. Any Tom, Dick or Harry can form a political party and get the list. And they did, and they gave it to someone else who is, and I'm not going to shy away from my opinion here, a bunch of idiots trying to get Alberta to separate from Canada. They should never have given the Three Stooges this list. And they did. That's the laugh. But the serious part is, I've got a sister-in-law who wants to vote. She had an abusive husband. She did not want to be found. So even at the basic level, there are things about this that can be very seriously damaging to people, and the government has treated it like, "we did our best."
F
So a couple of different things I want to jump in and unpack here, because it goes directly back to the point Laura hinted at about who is in the best position to be accountable for the harm, and Jeff's point about the EU and where private-sector responsibility sits. But then there's the public sector, and public-sector accountability in Canada is bad. It's really bad. It lags the United States. Let's just have a moment here: that's not a good look for us. We lag very badly at all levels of government in accountability, full stop. Secondly, there were choices made in legislative reforms to the provincial agency responsible for elections that, they say, directly hampered their ability to investigate this. But it gets even worse, because there's some blame coming their way. For those who aren't following the story, it is a stunning one: this reporter, Jennifer Gerson, was given a tip that this voter list was available if you signed up to be a member of this Centurion Project, which was to find 100 people and then train the trainers, and they're all going to advance this separatist agenda inside Canada's most resource-rich province, Alberta.
C
And so it was trivial for this
F
person to sign up. And once they signed up, with a trivial amount of money, there was no kind of scrutiny of whether this person should have it. They had full access to this information. And just to be clear, the Alberta Republican Party has nothing to do with US politics; they just call themselves the Alberta Republican Party. They were the ones that gave the Centurion Project this data. How do we know? Elections Alberta, like every elections agency, does something pretty clever: they seed false information into the voter list, varying by the organization they give it to. So they absolutely knew where it came from. So Jen Gerson contacts them weeks ahead of when this actually comes to a head in a court battle and says, you've got to get this taken down, this is unsafe. She does the ethical thing as a journalist and doesn't immediately run the scoop, which she could have done. She could have screamed bloody murder about this, and that information would have been exposed, and every single scammer who didn't already know it was there, and I suspect that list isn't small, would have gone and grabbed it. She went to Elections Alberta. Elections Alberta investigated, came back and said it's not conclusive, et cetera. And it wasn't until a Globe and Mail reporter started poking them that they panicked, did the court case, et cetera, and then published a timeline which has now been very much disputed. But to your point, Jim, the harm here is clear. Intimate partner violence is just one example of where this information hurts. And here's the thing that really cheeses me: you don't get a choice to be on the voter list. If you're voting age, you're in there; your information's there. Government has the power to do that. And they have specifically, at the provincial and the federal level, exempted political parties from privacy legislation and accountability, and they were called out.
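The seeding trick David describes can be sketched in a few lines. This is a hypothetical illustration, not Elections Alberta's actual method: each recipient gets a copy containing a unique fake record derived from the recipient's name, so a leaked copy identifies its source.

```python
import hashlib

def seed_canaries(voter_list, recipient):
    # Derive a deterministic fake record per recipient, so a leaked copy
    # can be traced back to whoever received it. (Illustrative only:
    # real agencies would use plausible-looking names and addresses.)
    tag = hashlib.sha256(recipient.encode()).hexdigest()[:8]
    canary = {"name": f"Canary {tag}", "address": f"{tag} Nowhere Rd"}
    return voter_list + [canary], canary

def identify_leaker(leaked_list, recipients):
    # Check whose canary shows up in the leaked data.
    names = {row["name"] for row in leaked_list}
    for r in recipients:
        tag = hashlib.sha256(r.encode()).hexdigest()[:8]
        if f"Canary {tag}" in names:
            return r
    return None
```

As Laura points out a moment later, a single canary per copy is defeated if two recipients diff their lists, so a real scheme would presumably vary more than one record.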
And this is, Jeff, a particularly hilarious moment in Canada. Our Senate is unelected, supposed to be the sober second thought. But because it's unelected, by practice it never exercises the ability to really overrule Parliament. And the senators screamed from the mountaintops that Parliament hid another exemption from privacy laws, because courts had started to say, nah, you've got to be accountable. So this is a structural failure. It's not just the yahoos out west. And Jim, you're just gonna have to own the loving mail you're gonna get on that one. I'm not disagreeing with you, but others will.
B
Just buckle up, because listen, all I have to do is talk about digital ID and my mailbox fills up, so I'm at home with this. Jeff, you looked like you were going to say something on this as well.
E
No, it's just disappointing. You think of government as being an organization that could properly protect the private information of its citizens. That's one of its fundamental duties, I think. And it just fell down in this case. Right? It's just disappointing.
C
Yeah.
B
And what's disappointing is they exempt themselves. We spent the first part of this program talking about how maybe keeping people accountable for things would be a good way to protect us from cyber breaches. But we let government escape. Now, I don't want people to sue government, because we all pay for those lawsuits; that's not the way to do this. But there has to be some better way to impose accountability. And in Canada, exempting political parties is just BS. There's just no way people should be putting themselves up as models of how we're going to do our legislation if they don't obey it.
F
And here's the one thing I will say about accountability: as voters in democracies, we don't prioritize these issues. We don't tell a candidate, I'm not voting for you or your party unless you agree to make yourselves accountable to privacy laws. And as long as we don't do that, at the end of the day, that's on us.
E
And I'm frankly really happy that they used that seeding of false names into the list so they could identify who it went to. That's actually pretty solid. In the US, the IRS got breached and lots of people had their tax information affected. The hackers were filing false returns very quickly in order to get big refunds. There was no defense like that there. Here at least they tried, and it sounds like the process is kind of working, right? They're going after the people who received the list and disclosed it, and maybe there'll be some action against them. So not a total disaster for security.
D
Yeah, until somebody figures out how to register with two different parties. Then they get two lists, do a quick cross-comparison, and go: okay, now I know where all the fake ones are.
F
Yeah, but there are limits, and I think we're being confronted with them. I was just interviewed yesterday for Global News, because we had a lawsuit settlement in Canada. Our tax agency, like many tax agencies around the world during the pandemic, suffered massive fraud losses to online scammers because it had pathetic authentication: username and password, no check for brute force, no check for compromised passwords, no safeguards, optional 2FA if you chose to turn it on. And their greatest concern was pissing people off by making these online government portals less convenient, rather than properly securing them, given the catastrophic consequences. Keep in mind, thousands of dollars in government funding per person were stolen, and then people were held accountable for that money because they weren't actually eligible for it. We know $196 million for sure was stolen, but because our tax agency actually can't track the full losses, which I find disturbing, we suspect it's in the billions. And I think that's reasonable, because the losses that came out of certain US states were larger than ours relative to comparable populations. But what's interesting, to your point, Jim, about class action lawsuits: we now have the second most pathetic class action award in Canadian history. It was $8.6 million for these 49,000 people's suffering, where they could get $200 back if they were actually breached and could prove it caused them up to four hours of distress. The lawyers did well; they walked away with about $2 million of it. So congratulations. It's second in my shame book to the LifeLabs breach, where people got something like five or eight dollars per class member for losing their highly sensitive blood tests.
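The "no check for brute force" failure David describes is striking because the basic mitigation is tiny. A minimal lockout sketch follows; the thresholds and in-memory store are invented for illustration, and a production system would persist state and also throttle by source IP:

```python
import time

MAX_ATTEMPTS = 5        # hypothetical policy values
LOCKOUT_SECONDS = 900

failures = {}  # username -> (failure_count, window_start_time)

def check_allowed(username, now=None):
    # Deny further attempts once a username has too many recent failures;
    # let the window expire after LOCKOUT_SECONDS.
    now = now if now is not None else time.time()
    count, start = failures.get(username, (0, now))
    if now - start > LOCKOUT_SECONDS:
        failures.pop(username, None)
        return True
    return count < MAX_ATTEMPTS

def record_failure(username, now=None):
    # Count a failed login inside the current window.
    now = now if now is not None else time.time()
    count, start = failures.get(username, (0, now))
    failures[username] = (count + 1, start)
```

Even this crude counter defeats naive credential-stuffing against a portal; the point is not the sophistication but that nothing of the sort was in place.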
D
So, accountability. Scale-wise, I think people got more, or are in line to get more, for the bread price-fixing scandal than they are for having their personal information stolen.
B
I think they did better on the Tim Hortons thing, because at least you got something you could use. They gave you a cup of coffee or something.
E
At the end of the day, if we don't value people's private information properly, you get these ridiculous settlements where your privacy is worth five bucks. And frankly, people will give up their privacy for five bucks or a Snickers bar. So it's a little complicated, economically, to figure out how to fix this problem.
F
Agreed.
B
I want to get in one closing story, and I'm going to speak on behalf of the people who criticized you on this one, David, and say those three words that men can't say, on their behalf: I was wrong. QR codes. It turns out David took some real heat on this, and we've got to give him time to talk about it, because he's been a proponent of saying you've got to watch these QR code things, and he keeps getting all of this criticism. David, do you want to cover the story?
C
Yeah.
F
So, Microsoft's first-quarter report on phishing. By the way, in three months there were 8.3 billion phishes measured by Microsoft. A phish for everybody. And the fastest-growing category, it went from single-digit millions to almost 20 million in March, was phishes that used QR codes. Which makes perfect sense, because they bypass a lot of the architecture that was designed to catch links, attachments, et cetera. And they work. We're studying this in depth; we have QR code simulations. And set against this fact, we have this hilarious "hacker lore" website. There are some elements of that hacker lore that tackle genuinely outdated security advice. But there are two pieces that have pissed me off since they came out. One was the claim that the risk of public Wi-Fi is overblown. And that came out the same week that an awful individual in Australia, and because we keep this show family friendly I won't share my full thoughts on him, had set up Wi-Fi Pineapples in Australian airports and specifically targeted women with fake logins to gather their credentials and then steal their sensitive images. So yeah, I've got a thing about public Wi-Fi still being risky. And then the other one was: QR codes are hacker lore, it's a myth, they can't deliver malware, it's only social engineering. And I'm like, my dudes, social engineering
C
is still the most effective way to
F
pwn most people and organizations. So yes, teaching people to be wary of unsolicited QR codes is a good idea. You are the anti-vaxxers of the security awareness movement right now, and I don't appreciate you. I'm pretty fired up about it.
D
Yep. I'd love to see somebody argue that you can't get malware by clicking a link. Anyway, it's a stupid argument, clearly.
C
I agree, but that's ridiculous.
D
A QR code is just an extra obscured click of a link.
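Laura's framing, that a QR code is just an encoded payload, usually a URL, suggests the obvious defense: decode first, then vet the decoded string like any other untrusted link. A minimal sketch, assuming the QR image has already been decoded to a string (the allowlisted host is hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist, e.g. the city's real parking-payment domain.
TRUSTED_HOSTS = {"pay.city-parking.example"}

def vet_qr_payload(payload):
    # A QR code carries arbitrary text; treat a decoded URL with the same
    # suspicion as a link in an unsolicited email: require HTTPS and a
    # known destination host.
    url = urlparse(payload)
    if url.scheme != "https":
        return False
    return url.hostname in TRUSTED_HOSTS
```

The decoding step itself would need a library or the phone's camera; the point is that once decoded, all the usual link-vetting machinery applies, which is exactly what mail gateways never get to see when the link arrives as an image.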
F
Yeah, but this list of cyber luminaries, people I really respect, signed on as hacker lore signatories behind it.
D
The biggest poison to smart, productive conversation.
C
And Jim makes fun of me every time I do one of these stories because this has become literally my hobby horse.
F
Every time. When this first came out, I started covering all the QR code parking meter breaches worldwide just to make a point.
C
And then Microsoft.
F
Thank you, Microsoft, for delivering me my point in data. But Jeff, you were going to say something.
E
I just think your point is pretty defensible. There are lots of attacks that can use URLs, and the OWASP Top 10 is rife with them: CSRF and SSRF and clickjacking. There's a ton of crafty things you can do with URLs, including targeted attacks; if you have a session ID, you can fixate their session. There's a lot. So obscuring URLs and bypassing the infrastructure we have to protect them? Probably not a fantastic idea.
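One concrete example of the "crafty" URLs Jeff mentions: everything before an `@` in a URL's authority section is userinfo, not the hostname, so an attacker can put a trusted-looking name in front of the real destination. Python's standard `urlparse` shows where such a link actually goes (the domains here are illustrative):

```python
from urllib.parse import urlparse

# Looks like a Microsoft login link, but "login.microsoft.com" is just
# the userinfo component; the browser connects to evil.example.
crafty = "https://login.microsoft.com@evil.example/reset"
host = urlparse(crafty).hostname  # the real destination host
```

Hiding a URL like this inside a QR code stacks two layers of obfuscation, which is why blanket "QR codes are harmless" advice misses the point.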
B
Ladies and gentlemen, that's our time. Thank you so much for this. My guests have been David Shipley from Beauceron. David, thank you very much.
F
Thanks, Jim.
B
Laura Payne from White Tuque. Laura, always glad to have you, and
D
always a pleasure to be here.
B
Jeff, we're going to get you a better microphone next time, but welcome. Glad to have you here. Hope you can come back.
E
Thanks, Jim. That's fun.
B
Great.
C
This was fun.
B
And that's our show.
A
If you stayed this long and you don't listen to the three shows we do during the week, first of all,
B
what's holding you back?
A
But second, there's a great interview that David did with Erin West from Operation Shamrock, a group that fights against the victimization of innocent people at the hands of sleazy fraudsters, organized crime in a different form. You can take some positive action and support Operation Shamrock's goal of getting out there in your community, with your book group, with your curling group, anywhere you can spread the word. Operation Shamrock will train you free of charge and even provide the slide deck for you to use. I'll be getting out in my community over the next few months, and I hope you will too. Go to Operation Shamrock and sign up to help. And finally, here's a question worth asking: what happens after a phishing email slips past your filters? Most email tools only guard the front door, but attackers are already inside. Material Security is different. It's a unified detection and response platform, purpose-built for Google Workspace and Microsoft 365, protecting email, files and accounts all in one place. We're talking automated phishing remediation, account takeover containment and sensitive data protection, without the alert fatigue. Find out why companies like Figma, Reddit and Lyft trust Material to stop the threats other tools miss. See Workspace security in action at material.security.
B
I'm your host, Jim Love.
A
David Shipley will be back on Monday with the cybersecurity news. And if you haven't had enough of me, catch me on the news desk at trending for daily news, insight and sometimes opinion.
B
Thanks for listening.
Podcast: Cybersecurity Today
Host: Jim Love
Guests: David Shipley (Beauceron), Laura Payne (White Tuque), Jeff Williams (Contrast Security)
Date: May 9, 2026
In this month-in-review episode, Jim Love leads an expert panel—David Shipley, Laura Payne, and Jeff Williams—through the latest cybersecurity news shaping the industry. The discussion dives deep into critical topics: the rapid evolution and risks of AI-powered coding and bug-hunting tools (like Anthropic’s Mythos and its clones), the massive Canvas education sector breach by the Shiny Hunters group, the dangers of insufficient data protection and recurring themes in breach accountability, and a new surge in QR-based phishing attacks. The panelists analyze the technological, organizational, and cultural implications of these threats, highlighting both cautionary tales and pressing lessons for defenders.
([02:21]–[18:21])
David Shipley raises alarm about Mythos (Anthropic’s AI tool for finding zero-day vulnerabilities), highlighting that open-source clones and replications now exist:
“They’ve built their own bug-finding apocalypse machines.” [02:28]
Mythos unearthed bugs even in old BSD/Linux code; stories of Discord and GitHub leaks discussed.
Jim Love:
“It’s the secret that everybody knows at this point.” [04:53]
“It’s not doing a quick scan on your code... not deterministic, so you get different results every time.” [06:50]
“The vulnerabilities being found are really pervasive. The cash is well spent from a criminal enterprise point of view...” [08:17]
“It doesn’t do us any good when things are overhyped... be critical.” [09:39]
“AI creates crap code. Super expensive AI is now required to go find and fix your crap code too... this feels like the closest the tech industry is ever going to get to perpetual motion.” [11:20]
“AI doesn’t have a huge amount of memory it can work with...” [13:29]
“The only thing saving us is it cost 6 million tokens or about $500 a minute to do some of this work.” [14:39] “The entire digital economy is relying on the choke point of how much it costs to find and exploit these things.” [14:45]
Laura Payne and Jeff Williams foresee a slow, cultural transformation akin to DevOps or cloud migration:
“It’s not about the technology as much as the culture and the people... it’s going to be a decade, in my estimation...” [16:58]
The crucial outstanding question: who’s liable for harm when code is co-created by AI, vendors, and specs? All see a coming “dark factory” future of automated software with complicated legal/accountability challenges.
“You’ll be using AI to sue AI companies.” [19:48, Shipley]
([18:21]–[24:25])
“It makes software a product, same as any other... just like chainsaws.” [21:43, Williams]
“The cost of software is going to increase... it’s even hammering expensive legal software.” [22:14, Shipley]
([24:25]–[31:29])
David Shipley shares reports of AI agents causing security incidents:
“...these AI agents doing weird stuff because they have the ethics of a high schooler who doesn't know any better...” [26:45, Shipley]
Jeff Williams draws analogies to classic papers on “Trusting Trust” and the dangers of “cognitively surrendering” to AI.
“We're entering an era where nobody writes the code themselves and... become untethered from trust entirely.” [27:45, Williams]
David Shipley compares this blind trust to “the Office” GPS episode.
“Critical thinking is a perishable skill. If you do not practice it regularly, it declines.” [29:26, Shipley]
“But we're not going to get rid of the critical thinking, highly trained, specialized person. I hope not.” [29:02]
([31:29]–[41:18])
“9,000 schools affected, 275 million people...” [31:29, Shipley]
“More than likely they manipulated direct object references... Those flaws have been at the top of the OWASP Top 10 since I wrote the first one in 2002.” [33:20]
“Victims... are the people whose information has now been taken... requiring significant resources to restore their credit, their identities... it is information about you that does not need to be permanent. It should be like a password.” [34:28, 36:55]
“Your exposure is cumulative. It just builds and builds.” [35:18]
([41:18]–[48:44])
Panelists dissect the Alberta, Canada scandal where a voter list—including sensitive addresses—was leaked to fringe political actors.
“You don't get a choice to be in the voter list. If you're a voter age, you're in there... Government has the power to do that.” [45:14, Shipley]
Government accountability lags badly behind the private sector, political parties in Canada exempt themselves from privacy laws.
“We talked... about how maybe keeping people accountable... would be a good way to protect us from cyber breaches. But we let government escape... in Canada, exempting political parties is just BS.” [47:02, Love]
Williams commends the use of “canary” fake entries to track leaks.
Discussion of the paltry financial outcomes for breach victims (e.g., $200 for proven harm in a major government breach, compared to higher settlements in mundane consumer class actions).
([51:26]–[55:00])
“Teaching people to be wary of unsolicited QR codes is a good idea. You are the anti-vaxxers of the security awareness movement right now and I don't appreciate you there.” [53:25, Shipley]
“A QR code is just an extra obscured click of a link.” [53:49, Payne]
“The bar is not being perfect. The bar is better than what we got today from human developers...” [24:25]
“There's information that is intrinsic to you... How do you reduce the usability of that information if it's been stolen?” [36:55]
“AI creates crap code. Super expensive AI is now required to go find and fix your crap code too... this is a money-printing machine for somebody.” [11:20]
Panelists adopt a direct, sometimes irreverent tone, using humor (“herder of the cats,” “money-printing machine,” “the perfect spec is a unicorn”) and sharp critical analysis to break down dense stories. They blend deep technical knowledge with real-world anecdotes and policy insight, pressing always for greater accountability and pragmatism in face of new threats.
This Cybersecurity Today episode provides a candid, expert snapshot of a field grappling with new AI risks, resurgent old threats, and the persistent gap between technical possibility and policy or institutional reality. The panel’s warnings on AI-powered development, unchecked data sprawl, passive users, and misaligned incentives (both private and public sector) converge on a single theme: the urgent, ongoing need for critical thinking, proper incentives, and robust accountability—even as the tech landscape undergoes relentless transformation.