![Anthropic’s Project Glasswing is an Infosec Turning Point – 2026-04-13 — Talkin' Bout [Infosec] News cover](/_next/image?url=https%3A%2F%2Fimg.transistorcdn.com%2FbQ6ihHgKUwpxCGjX00l87GVEFQcGldHoAsnv_w-uuFI%2Frs%3Afill%3A0%3A0%3A1%2Fw%3A1400%2Fh%3A1400%2Fq%3A60%2Fmb%3A500000%2FaHR0cHM6Ly9pbWct%2FdXBsb2FkLXByb2R1%2FY3Rpb24udHJhbnNp%2Fc3Rvci5mbS9kNjRh%2FMmM5NjViNjY0ODYz%2FNDVkMWYyMWNlNjdl%2FYjI5OC5qcGc.jpg&w=1920&q=75)
Wade
I know.
John Strand
You look like you're in, like, a real studio. Where's the. Is. Is this just a really elaborate closet or what?
Corey
No, I'm actually in the Baltimore office.
Bronwyn
Wow, nice.
Wade
Surprised you don't recognize the background, Corey.
Corey
Gosh, is it the camera that they use? This one? Let me see if I can get this to go.
John Strand
You act like I have a memory. I'm a goldfish, dude. I have zero short-term memory. Or long-term memory.
Alex
What?
Wade
I have persistence.
John Strand
No, I'm just a bunch of markdown files thrown into a directory.
Wade
Okay, that'd be a great username.
Alex
Described every AI.
John Strand
Three markdown files in a trench coat. That's an LLM now.
Doc
Oh, Ralph.
Alex
Yes?
Corey
Do you want to come up and co-teach the satellite hacking class with me in Deadwood?
Alex
Yeah, that was already the plan, John.
Corey
All right, cool. I just wanted to make sure that you were on because I want to put you down as a CO teacher.
Alex
Yeah, that was already the plan.
Doc
Yeah.
Corey
All right, sounds good.
John Strand
All right, none of you have access to Claude Mythos, so get off the podcast. Yeah, you have to.
Corey
Well, whoa, slow your roll there, bucko.
John Strand
Listen, guys, guys, okay, listen. I have access to it.
Bronwyn
Corey, listen.
John Strand
I have access to it. I found a ton of stuff. I can't share what it is. It's really cool. Please buy my product.
Corey
There you go.
Alex
Well, it's not that hard. All you do is ask them and say you're a researcher.
John Strand
Dude, I prompt injected anthropic. It worked perfectly.
Alex
I mean, I've been finding 0-days. I just can't talk about them yet.
John Strand
Dude, I cannot talk about them either. Also, all of the 0-days are of legal drinking age.
Corey
I think. I think what we need to do is we need to start an AI competition that finds zero days in, like, Windows 95.
Alex
Yeah, yeah, just like the whole. I love it.
Corey
I love it. This is crush it. And then at the end of it, we can find out how many of those work in Windows 11.
Doc
All of them.
Bronwyn
I'm afraid it would be a scary percentage.
John Strand
Yeah, someone needs to make an LLM that has a knowledge cutoff of, like, June 1, 2001 or something. A Y2K bot. It's like: as of my knowledge cutoff, I don't know what an iPhone is, but I think you could call someone on the landline and that would work.
Bronwyn
I had to give up my landline. What?
Corey
It's tough.
Bronwyn
Why? I live in the mountains, and in town there's unfortunately an active homeless population and a bunch of tweakers, and the tweakers keep stealing the copper lines that carry phone service, and so
John Strand
Copper does taste the best.
Bronwyn
I get it.
John Strand
Yeah.
Bronwyn
I got tired of paying better than a hundred dollars a month for service that I wasn't getting, and the phone company, for some reason, had trouble getting copper wire shipped over. Anyway, so I finally gave it up.
John Strand
I think it was because of all the prank calls you were making, Bronwyn. I think that's what it was.
Ralph
I would love to suggest certain surveillance cameras are full of copper.
Corey
Yeah. So, Alex, I got a question for you. Actually, I got a question for everybody: what cameras should I replace my Ring cameras with?
Wade
UniFi, dude, UniFi. We were talking about me getting a UniFi doorbell, but if I say this
John Strand
in the next week, the second we end this podcast, it's gonna be, like, UniFi breached in the world's largest breach. All right, let's roll the intro. Let's do this. We got enough, we got critical mass. Let's go. Hello, and welcome to Black Hills Information Security's Talkin' Bout [Infosec] News. It's April 13th. Sir, this is a Wendy's. 2006? We don't have cell phones, we don't have AI, we don't have anything. That's not true. It is. It's Mythos week. It's a myth. Everyone's freaking out.
Wade
It hasn't even been a week yet. It hasn't even been a week yet. It's still.
John Strand
I know, but this is the week where we talk about Mythos.
Corey
I don't know. Was it released by the time last week's show started? I can't remember.
Doc
No, we missed it.
Corey
Dude, it feels like it's been a year already.
John Strand
I've had, like... John has fielded so many questions from CISOs. All right, let me introduce everyone, since I kind of skipped that. We've got me, Corey Ham; I run the continuous pen testing team here at Black Hills InfoSec. We've got Bronwyn, who you'd be incredibly lucky to get a prank phone call from; that would be a lifetime achievement. We've got Wade, who has two passwords. We have Alex Bellove, who we haven't seen in a while, and his background kind of matches the Zoom background, so kudos.
Ralph
Nice.
Corey
Like, it's a good, calming vibe.
John Strand
We've got Doc, who spoke the fundamental truths of cybersecurity, and I missed it, so I don't actually know anything and you shouldn't listen to me. And we've got John Strand, who has flown across the country to be on the podcast, from what we understand.
Corey
Yep, exactly.
John Strand
And then lastly, but not least, we got Ralph Gator hunting. He would be the first one to get access to Mythos and then accidentally use it to do something it was never designed to do
Alex
Can neither confirm nor deny. That's already happened.
Wade
Yeah.
John Strand
So Mythos. What is it? Basically, from my understanding... okay, and I do want to take this with a standard dose of skepticism.
Alex
No skepticism. This is the most amazing thing that's ever happened to anyone, ever. Period.
John Strand
Yes. Okay. So basically, last week, right after we ended the show (because the second we end the show, crazy news always happens), Anthropic, who everyone should know as the people who make Claude, published this blog post called Project Glasswing: Securing Critical Software for the AI Era. In that blog they talk about a new model they built, a successor to Opus and Sonnet, that they're calling Mythos. And it's basically what you'd expect from a heavy marketing blog post. The claim they're making is essentially that they have an AI model that can find a zero day in more or less any software. In the blog they talk specifically about a, what is it, 16-year-old bug in OpenBSD that it found, and that it found zero days in Windows and in Cisco products and in CrowdStrike or, I don't know, whatever their partners are,
Alex
Everything they threw at it.
John Strand
Yeah. So essentially this made everyone freak out. I'm sure John Strand got, like, 16 panicked phone calls from CISOs. The reason everyone's freaking out is that one of the things we assumed was true in cybersecurity, that some software is secure, Anthropic is basically saying isn't true. No software is secure. Like, John, what kinds of panicked phone calls did you get? Is it, we need this, or is it, how do I turn this off? What kind of reactions are you getting from people?
Corey
So one of the reactions that I received was, so pen testing's dead, right? That was kind of fun. I said, not yet.
Alex
Yeah, we got another year.
Corey
To be honest, a lot of the conversations have been really good. And what I mean by that is, a number of the different companies that we've been working with at BHIS are very much the sharp end of the pointy stick. Right? They have good patching, they've been keeping up on everything, and there aren't a lot of vulnerabilities sliding under the wire that they're ignoring, like, oh, we're going to ignore everything below 7.8 and not fix it, and that's fine. Where I'm starting to see panic is the organizations that are like, we aren't going to patch a damn thing unless there's a public exploit for it. They're panicking, and good, they should be. And my take on this is: if that is your approach, we don't patch anything until there's an exploit (and there are some vendors where that's their whole modus operandi, the way they talk about things: we're going to do automated pen testing so you only have to fix the things that exploits exist for), well, can we just now assume that the exploits exist? I think that is a safe assumption. And so the conversations that are coming out of this (and I'm sorry, this is a longer answer than what you were asking, Corey) are about what happens when this type of technology is in the hands of anybody that has any frontier model. And I think that is a true statement. The CEO of Anthropic basically said that, no, they're not going to keep it under wraps; this is coming whether you like it or not. And I agree with him on that. But we're going to end up in another situation where the vulnerabilities outpace organizations' ability to patch them. But thankfully, the entire industry, when we started using AI for defense, basically didn't use AI for defense as a way of reducing costs.
They used it as a way to further enable their analysts to be more effective in what they were doing, and they hired up to make sure that they were ready for a situation like this, rather than downstaffing. No, wait, it was the exact opposite of that. Yeah, so we're screwed. And I do believe that there is definitely some hype associated with it, but there is absolutely something here that should scare the shit out of you if you're a CISO. Once again, if you think that you can ignore CVEs with CVSS scores below a certain value, then you're in for a rude awakening, because the ability to actually work on those exploits is no longer limited to a very elite few. Prometheus has brought the fire to a number of people, or at least you can see it coming down the side of the mountain.
John Strand
Yeah.
Corey
So it's a Jane's Addiction reference.
John Strand
There are a couple of interesting side points to this. One important side point is that the Twitterverse is very upset with Claude right now for separate reasons. High-profile individuals like Dave Kennedy have basically gone on record and said, I think Claude is getting worse; I'm getting worse results.
Corey
Yeah, they're moving over to, they're moving back over to Xcode or.
John Strand
Yeah, they're moving to Codex, specifically Codex. And basically, the kind of writing on the wall here is: citation needed. There are a lot of claims in this blog post, and a lot of people are speculating that this is just generating hype for Anthropic, who might be falling out of the public perception as having the best AI model. And they're like, oh no, release a blog about us finding zero days.
Corey
I don't know, I'm gonna push back on that, and on the people saying citation needed. Look who they're partnered with. Can we bring up the article? I know it's like JPMorgan Chase, the Linux Foundation.
John Strand
There's Microsoft, Microsoft, Palo Alto Networks.
Corey
Palo Alto Networks
Alex
There were specific callouts about these vulnerabilities from the organizations that had them reported. Right? They weren't just saying, hey, we found some stuff, I don't know what it is. The organizations were literally acknowledging those vulnerabilities.
Corey
So, history doesn't repeat, but it rhymes. This is very similar to the Dan Kaminsky thing. When Dan Kaminsky came up with the vulnerability for DNS that allowed him to exploit any DNS server, except for djbdns servers, which actually randomized both the query IDs and the source ports, there was a large number of companies that he was coordinating with directly, and there were a lot of companies that weren't. And the companies that weren't (AT&T, I think, was one of them; I'm not sure, don't quote me on that) were like, we don't patch things because Dan Kaminsky tells us to. Right? And I feel like this is that type of scenario. If you're looking at this as just purely a marketing play by Anthropic, you're missing the larger point. The larger point is: these models, in very short order, are going to be able to do this. This is not a lie. This is real. This happened.
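The djbdns detail Corey mentions is worth a quick back-of-the-envelope. A minimal sketch in plain arithmetic (no real DNS code; the usable-port count is an approximation) of why randomizing the source port on top of the 16-bit transaction ID mattered for Kaminsky-style off-path spoofing:

```python
# Back-of-the-envelope: the search space an off-path attacker must guess
# to forge a DNS response (Kaminsky-style cache poisoning).

TXID_BITS = 16           # the DNS transaction ID is a 16-bit field
EPHEMERAL_PORTS = 64512  # roughly ports 1024-65535 usable as source ports

fixed_port_guesses = 2 ** TXID_BITS            # ID only: 65,536 possibilities
randomized_guesses = (2 ** TXID_BITS) * EPHEMERAL_PORTS

print(f"TXID only:       {fixed_port_guesses:,}")
print(f"TXID + src port: {randomized_guesses:,}")
```

Randomizing the source port, which djbdns already did, multiplies the attacker's work by a factor of roughly 64,000, which is why it rode out the 2008 disclosure while fixed-port resolvers scrambled to patch.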
John Strand
Yes.
Corey
Right now, it's as bad as it's ever going to be. Moving forward, the capabilities are just going to get better and better, and the vulnerabilities are going to be coming faster. Now, whether we're going to use Codex or Claude, or whether you're going to use, like, OpenClaw or anything else, it's irrelevant. It doesn't matter. The news story should not be Anthropic. The news story should be the capability, because that's what people need to focus on.
John Strand
Yes, that's a super good point. I think part of the reason this got so much panic and public response (I've personally had multiple clients reach out to me) is that this was basically the point at which CISOs and other executives had to acknowledge the elephant in the room. At this point, AI is coming and you can't stop it. Before, you could say, well, you know, it hasn't found any zero days yet. You can't really say that anymore. If it can go out and find a vulnerability in current versions of software, or every current version of software, you have to do something about it. And if you were passing on AI until now, you have to catch up. Good luck.
Doc
And let's talk about the.
Wade
I want to talk a little bit about the blue team perspective around it, though, too. Right? For red team to adopt these AI agents, it's a lot easier: you just point it at them, good to go. With blue team, we actually have to build knowledge bases. We have to adopt it. We have to put in the automation runbooks.
John Strand
No, no, you just get Dark Trace. It does it for you.
Doc
Yeah.
Wade
It actually takes a decent amount of time to understand and to build it out. So if your company is helping the blue team push with AI, yeah, you're going to be able to keep up, hopefully. But if you're not, this is going
Corey
to make you just wait.
Wade
Have a sad time.
Corey
Wait, wait, wait, wait. What?
Wade
What?
Corey
John, wait. Come to the red side. We've got cookies.
Wade
You've been saying it for years. I'll still argue there is no red team, there's only blue team.
John Strand
That's true 100%.
Corey
So I want to talk about the blue team aspect a little bit, and I'd love to get Doc's take on this as well. Actually, I'm going to shut up. The person I have not had a chance to talk to in a week, who I wanted to talk to the most because it's been a panicked week, is Bronwyn. And Bronwyn, I'd like to get your take on this.
Bronwyn
Well, okay. There are several things. One, I'm glad that we're getting more traction on AI doing stuff for the blue teams, because the focus and the emphasis has been on red team development. Or maybe my perspective is skewed, because that's what we do at BHIS: we do a lot of red team. We do blue team as well, and purple team, but with so many penetration testers, there's a lot of emphasis on the red team applications. So the fact that any of the frontier developers are placing a focus on something to help fight against the exploitation, I think that's wonderful. And so I applaud the, at least on the surface, sentiments of Project Glasswing. I also think some of the hype was because they said, yeah, we're going to release this; oh no, this is too powerful, we're pulling this back. And I think part of the reason for the panic is seeing that sudden reversal in direction from Anthropic when it came to releasing Mythos. So that's part of it. But other than that, everything that I've heard other people in the room saying, I agree with. It's that same double take that I'm going through over and over again: ooh, that's cool; oh, that hurts. And with AI, it happens with everything, because it is shiny, it is wonderful, and it's also terrifying. And if we can leverage things like Mythos or anything else to help make that playing field more level for blue teamers, for defenders, for software developers, I think that's a good thing.
Corey
So I want to throw a take on the table, and I'd like to get Doc to take a poke at this one. Out of the gate: blue teams are screwed. And the reason why I say blue teams are screwed (Wade's laughing) is, like I said, for the past two years, really accelerating over the last year, everyone's been trying to not hire junior people. They've been trying to downstaff and cut costs in their security operations center, and they've been trying to use AI to do what they're doing, but cheaper. Right? And now we're in a situation where the entire game has changed. And the question that I would like to put to everybody is this: the game of wait for a patch to be released and then apply it, those days are over. Your security support structure cannot just be, we're going to wait for a patch and then push it out. Vuln management is no longer just patch management and configuration management. Vuln management is now compensating controls. And I think security engineering is more paramount, more needed now than it's ever been before, because you're literally going to have a pen test report where there's a vulnerability, they exploited it, and that exploit either exists publicly or they were able to find it with an AI model, and a patch does not exist or will not exist, because the vendor doesn't exist anymore or they can't fix it. The whole security architecture thing is changing. And Doc, you've been teaching this stuff as long as I have. I'd like to get your take on that, because security architecture just changed.
Doc
I'm glad that you called me out because I was texting Bronwyn late last week saying that I would love to have the opportunity during this newscast to argue with John, and then John unknowingly just invites me right into it. So we're going to do.
Corey
We're going to do a completely different webcast on the OSI model. That remains to be seen, but your demise will be swift and relatively painless. But other than that. Go ahead.
Doc
I'm going to fundamentally argue with you right out of the box and say: there's nothing to see here. This is a total nothing burger. And the reason that I say that is that this is the same shit, different day. We freaked out when vulnerability scanners automated the ability to find our worst nightmares and our dirty secrets in our configurations and our installations. And I know that this is different than that, but at the same time, it's not. Now, speaking of arguing with people, I'm also going to argue with Bronwyn. This is a new conversation for you guys, but it's an old argument for me and her. Security is always reactive. Our defenses are always reactive. The reason that we're unprepared for this moment is because this moment didn't happen until now. And so we need to accept the fact that these things are going to happen, and they have, over and over again, and yet we continue to think that things are going to be different in our industry. And of course I'm going to take this moment for just a second to plug that book. By the way, for those of you who were here last week, there was this groundbreaking announcement that me and Bronwyn and Mark Williams are writing a book, and the working title is Security Isn't What You Do. And now you guys are all going to get the full story here: John Strand, I don't know if he remembers, but John has graciously said yes to writing the foreword to that book.
Corey
I hope you remember that, John.
John Strand
Don't worry, we'll just have Claude do it. It's fine.
Corey
So, but I have a question, kind of pushing back: I do agree that it's the same shit, but I think the amount matters. Right?
John Strand
Yeah, the scale is different.
Doc
Yeah.
Corey
So my thesis is: you cannot look at patching and configuration management as your sole security strategy for dealing with vulnerabilities. Do you agree with that statement? I believe it should have always included other things, but so many organizations are like, we only fix vulnerabilities that have a CVSS score of 7.5 or higher. And I think right now, today, you can go in with a marker on any of your vulnerabilities and add one to every single CVSS score that you have in your organization.
John Strand
Right now, today, it goes to 11. Woo.
Corey
You can call it the John Bonus. Apparently I'm an infosec thought leader and I have that type of sway. But the CVSS scores need to be modified, because the likelihood of exploitation just went up substantially. It's no longer a very elite group of people that have these skills; the skill floor just dropped, and that has brought this capability to more people.
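Corey's "add one to every score" idea is easy to sketch. A hedged illustration (the function name, the CVE IDs, and the flat bump are all made up for this example; real CVSS adjustment would go through the temporal or threat metrics such as Exploit Maturity, not a flat add):

```python
def john_bonus(cvss: float, bump: float = 1.0) -> float:
    """Add a flat bump to a CVSS base score, capped at the 10.0 ceiling.

    Illustrative only; the name and the flat-add approach are invented
    for this sketch, not a standard CVSS operation.
    """
    return min(round(cvss + bump, 1), 10.0)

# Hypothetical findings from a vuln scan; the CVE IDs are placeholders.
findings = {"CVE-2026-0001": 6.5, "CVE-2026-0002": 9.8, "CVE-2026-0003": 3.1}
rescored = {cve: john_bonus(score) for cve, score in findings.items()}
print(rescored)  # {'CVE-2026-0001': 7.5, 'CVE-2026-0002': 10.0, 'CVE-2026-0003': 4.1}
```

The point of the cap is Corey's "it goes to 11" joke in reverse: CVSS tops out at 10.0, so anything that was already critical stays critical, while everything below the old triage cutoff creeps up toward it.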
John Strand
So I want to segue. First of all, I want to plug: if you're interested in all this AI stuff, you should listen to the AI Security Ops podcast, which is a separate podcast that we have with Alex and Bronwyn and other people. But I think the healthy dose of skepticism that we need is about to come in the form of this article on Dark Reading. Basically, HackerOne (if you don't know what HackerOne is, it's the biggest bug bounty company out there; they facilitate connections between bug bounty hunters and the companies that they're allowed to hack) suspended submissions last week. They basically said: no more submissions. And the reason why, they claim, comes down to two fundamental problems. One is AI slop. They're getting a huge number of submissions, and the number they give in the blog is that only 5 to 10% of submissions are now actually valid and real. The number about a year ago was apparently 15%, so it's dropped by five to ten percentage points due to AI slop. The other thing that's serious, and this is kind of what John's talking about here, is that the developers of these projects and the companies that have bug bounty programs cannot fix this stuff. There is a huge pile. They can't fix it fast enough. There's a huge vulnerability pile that no one's fixing, because they just don't have the resources to do so. And that basically means you have an imbalance of power. Like John's saying, the red team right now, ironically, has all the power, because the blue team is trying to figure out how to fix all the stuff that the red team's uncovering. And that's the dynamic HackerOne is responding to: okay, we're turning this off, we're going to process the submissions that we already have, try to get those covered, and then maybe we'll open things up in the future once we figure out what to do.
But it's also just super labor-intensive for them, as the middleman between these two entities, to try to be like: all right, I analyzed this, tell me if it's real.
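The triage arithmetic behind those valid-rate figures is worth spelling out. A quick back-of-the-envelope (the 15%, 10%, and 5% rates come from the article as quoted above; everything else here is illustrative):

```python
# Reports a triager must read, on average, to surface one real bug
# at a given valid-submission rate.
def reports_per_valid(valid_rate: float) -> float:
    return 1.0 / valid_rate

for label, rate in [("a year ago", 0.15),
                    ("now, best case", 0.10),
                    ("now, worst case", 0.05)]:
    print(f"{label:>15}: ~{reports_per_valid(rate):.0f} reports per valid finding")
```

At a 5% valid rate, every real finding costs about 20 triage passes, roughly triple the load of a year ago, which is the economics behind pausing submissions.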
Corey
Well, do you think that this is going to last? Do you guys think this is kind of the end of bug bounty programs?
John Strand
I think it's the end. I hope not, though.
Bronwyn
There was another aspect of the article that hit me. I knew about it, but for some reason it just hit me more strongly, and that is: we have bug bounty programs, but we don't have anything to reward remediation. We don't have anything in place to reward people when they fix the bugs that have been found in these bug bounty programs. And that creates a huge imbalance as well. So not only, as mentioned before, do we have more AI slop coming in, making it harder on the people who are trying to assess these and triage which ones are valid and which ones are junk, there is also this pre-existing component: the people who actually fix the bugs detected don't get anything. There's no reward system for that. And that needs to change.
John Strand
All right, I'm going to go register fixer1.com, Bronwyn. We'll do a startup.
Corey
Yeah, Ralph had a take too. I want to get...
Alex
No, I think that the bug bounty program as we know it now is dead, in the sense of people paying for others to find these things. A good example of this, like the canary in the coal mine: I think it was the OpenSSL project, one of the bigger projects, that stopped paying for bug bounties entirely. Right? They got too much. It was a HackerOne example, and essentially the money turned it into this gamified piece, and with AI you could throw enough credits at it to get into something. So I personally think public bug bounties as we know them now are not going to be a thing anymore. The financial incentives are just all thrown off.
John Strand
I kind of agree; your reasoning is sound. But I kind of hope that it doesn't go that way, because the one thing that bug bounty programs facilitate, that we don't have a replacement for, is some kind of a dialogue between security researchers and companies. That needs to exist. I don't know who's going to pay for it.
Corey
That's kind of the big dialogue, but I think that dialogue has been exclusively: there's a vulnerability, pay me. When we're talking about continuous pen testing and a lot of the pen testing that we do at BHIS, we couple it with our expert decision support as well. And like we've talked about, Corey, in our continuous pen testing practice, with a lot of these companies we aren't like, we hacked you, you suck, we rule. No, we have a relationship with them, and we have conversations: remediation conversations, mitigation conversations. And I'd like to think, okay, if you're watching this and you're with HackerOne or Bugcrowd, that's what you need to be doing, right? You need to be focusing on: here's how you can remediate this, here's how you can mitigate this if a patch doesn't exist. Somebody made the joke about Remediation One. Yeah, HackerOne, you should be registering that domain, like, right now, today.
John Strand
Because I think Fixer One is way better.
Corey
Fixer One, Fixer One. And like our ANTISOC: we do that, Corey, where we find vulnerabilities and we work with a company to come up with mitigations and compensating controls, because some of the stuff, like what Matthew has found, Microsoft has decided to ignore. Right? And we have to have conversations with our customers about how to mitigate those. And then some of the stuff in the cloud, like working with Beau: some of those things are just part of how it works, but we have to work with them on how to architect around it to mitigate that vulnerability, even though there isn't an inherent patch that can fix it.
John Strand
Yeah, well.
Ralph
Yeah, well and I have.
Doc
What John's saying is that what we need to do is rely on people doing the right thing. Is that what you're saying, John?
Corey
A hundred percent. Here's another hot take: we need more security engineers, not fewer. We need more people.
Doc
I've been saying that for so long.
Bronwyn
People are beating at the gates. They can't get in; they want in.
Corey
Let's get them in, let's get them trained and let's get them in.
Wade
I've been saying, like, you could easily hire a junior person and, with a good prompt, make them a senior. But convincing the C-suite to do that is another story.
Corey
And that's one of the things I've been saying in a lot of my interviews: the hackers will show us the way; they'll show us the error of our ways. We can pontificate all we want, but when the hackers start exploiting vulnerabilities that had a CVSS score of, like, 6.5, and now there's a zero day out for it, now it's a 10. That's going to happen in the next couple of months. When that starts happening, that's going to be the wake-up call.
Ralph
And on HackerOne kind of being the middleman in that dialogue: I'm not sure if it's really going to slow things down in the pipeline, or if the researchers are going to sidestep it, because a lot of researchers are, I found a thing, I'm going to talk about it in August at a certain summer camp, I want to work with you in order to publish this and remediate this. And if the issue is, you don't want to have that conversation anymore because I was going through HackerOne and they don't want to have that conversation, now I'm going to try to bug the company about it directly. And the company might be like, well, we paid HackerOne to have this dialogue for us. So you might have a lot more surprising stuff that hits in August. Yes.
Corey
Are there any other ways? Like, you talked about Hacker Summer Camp, working through bug bounty programs, working directly with companies. I think you didn't mention the other alternative they have to make money on these things.
John Strand
Yes.
Corey
And that's. That's scary to me.
Ralph
Yeah. Where they'll just be like, okay, screw it. I tried doing responsible disclosure; let's try irresponsible disclosure. And you just blindside companies that go: wait, we paid HackerOne to do this, they shut down, and now we're blindsided by stuff, either by things dropping at Hacker Summer Camp or things being sold, you know, for money that way.
John Strand
Well, okay, so there could not be (sorry, Wade, I'm going to cut you off here) there could not be a better segue than this. So last week, something magical called Blue Hammer happened, and we should talk about it, because this is a perfect example. Essentially, if you're interested in this kind of stuff, you should catch Matt's webcast later this week; he's going to talk more in depth about MSRC and vulnerability research and all the drama that goes with it. But the news article is: last week, a researcher decided to just publish a vulnerability that he'd been trying to get Microsoft to fix, because he got fed up with trying to convince them to actually fix it. Reading between the lines, the person that was working his MSRC case either got fired or got replaced with someone else, and he was like, you don't know what you're doing, I'm just going public. If you're curious about what Blue Hammer is, it's a local privilege escalation for Windows 11, and it's something that Microsoft will struggle to fix. We'll see if they get it fixed tomorrow; maybe they will, maybe not. But for now, however many days it's been since the release, six or seven, it is a functional exploit for elevating privileges on Windows 11. Sounds like some people are having issues getting it working on Server. But essentially, the question we had when we brought this up with the malware dev team at Black Hills, the malware devs were like, why didn't they sell this to Zerodium? Why didn't they sell this for $1.2 million to some, you know, government entity? And basically, the truth is, this is about as close as you can get to, like, hacktivism, or making an impact. Like, it's not about...
Corey
What was the Microsoft comment down at the bottom? Oh, wait right there. There it is. Sorry, it's.
John Strand
Microsoft has a customer commitment to investigate. Whoa, whoa.
Corey
It was right there. Zoom in.
John Strand
Reported security issues and update impacted devices to protect customers as soon as possible. We also support. I mean, it's just. Might as well be. It might as well be AI generated. It's basically saying, nah, dude, MSRC still exists, we swear. Yeah.
Corey
When is that webcast with Matthew?
John Strand
I think this week. I think it's Thursday maybe.
Corey
It's kind of a similar thing he's been dealing with, you know. It's the same story since September, correct?
John Strand
Yeah. So basically, long story short: if you're a security researcher and your goal is to get something fixed, to get a vulnerability to no longer exist (and I fundamentally believe, this is going to sound naive, but I think people are good and they want to fix things; once you find a vulnerability like this, you want to keep people safe), then unfortunately, this is how you do it. Yes, MSRC should be the way you do it in theory, but in practice, if you work with MSRC, it's never going to get fixed, because you're going to get kicked around 50 different times. If you publish it, now it needs to get fixed today or tomorrow, and you just got jumped up in the priority queue. But there's also the ethics of it. I don't know. I think it's an interesting question. But there's evidence that this is what people are doing, and I think this will continue to happen as we lose the HackerOnes and bug bounties. It's going to be more public stuff.
Corey
But that, once again, goes back to: the system is overloaded. Right? The vulnerability management process across the entire industry is overloaded. And wouldn't it be wonderful if we had a government agency that was fully funded, that we could ramp up, that could handle this, and we hadn't been defunding them.
Bronwyn
I have a dream.
Alex
Why don't we just fix all the vulnerabilities, and then we won't have work?
Corey
See, that's what I think.
John Strand
I think make no mistake.
Bronwyn
Wait, wait, wait. You're. You're expecting developers, programmers to actually write secure code.
Alex
Like if you're talking, can write secure code.
John Strand
It's perfect. Oh, God.
Corey
So my thing about.
John Strand
Again, citation needed, by the way.
Corey
Let's take Bronwyn's snark and point it specifically at commercial software, right? Nothing but love for the open source community, right? And I think the open source community is getting beat to shit because a lot of these vulnerabilities are coming down. One of the quotes, I think it was from the Wired article about Mythos, they were basically like, we're two people. Like, you know, that's it. And I think it was under Node.
John Strand
Yes.
Corey
They were like, hey, we have all of these vulnerabilities, there's like two, three of us working this, and we can't just go through and patch some of these because there's downstream regression testing that is very complicated. Basically, defense is hard. It's really, really hard. Wade, you know what's easy?
Bronwyn
Yeah.
Corey
Offense.
Wade
Man, I'm over here chomping at the bit to talk.
John Strand
Yeah, because this is three different pieces to me. You have developers, blue teamers and red teamers. They're three different things in my book.
Wade
I think that. Here's the other thing though. With the AI push, the developers and blue teamers line is getting more and more blurred, like hardcore. So you're gonna.
John Strand
You're talking about an internal web app.
Wade
I'm like, I feel like with AI and detection engineering, I have become more of a developer than ever before. Like I'm writing stuff all the time that is being pushed out in order to protect things, in order to log things, get logs where they need to be, right?
John Strand
And you have develop.
Wade
I have to move just as fast as you guys.
Wade
Like, we can talk about bug bounties and stuff, but those felt like they were about to get tipped over even before AI, because of the level of people, right? And the other thing: this just hammers down the defense-in-depth strategy, right? The endpoint, or that initial firewall, or anything: you can't just protect it, because it's going to be exploited. You have to protect the keys to the kingdom, like, where's the stuff at? And have all of your trip wires set up throughout the network, right? Which is becoming harder and harder with SaaS and cloud everything. And it's just. Man, I came here to chill and now I'm all stressed out.
John Strand
I got to go back to work.
Bronwyn
This is how I terrified a bunch of CISOs for the California Community College system. I told them, all of the things that we haven't been doing well already are now getting amplified. And there was another A-word. Anyway, it's getting blown out of the water, and it makes all of this harder, even before you start getting into dealing with heavy hitters not patching their stuff.
Corey
Yeah.
John Strand
I mean, I will say, from where we're sitting, every organization needs people who understand how to use AI to its maximum potential in some way, shape or form. Whether that's to defend the organization, to attack the organization, or to remediate bugs, you need AI expertise in every company. If you've been putting your head in the sand and saying, nah, this is a fad, it's going to pass, AI is going away, it's not that good: you're behind now, and you have to catch up. You have to hire people who can use AI effectively or else you're going to get crushed.
Corey
And I think that's where I agree with Doc. Right. I believe in the short term this is incredibly disruptive. I think it is problematic. But at the end of the day, as we move down the line, this is just another set of tools that got added for the defenders and another set of tools that got added for the attack teams. But just like vulnerability management, just like intrusion detection, your whole mental model of how you approach computer security has got to change to adapt to it. You've got to be able to move faster. And at every evolutionary change that we've had, and I truly believe this is an evolutionary change, the one thing that you should be taking from this is: move faster. That's what we've got to keep focused on.
Doc
Yes, I want to argue with John,
Bronwyn
go for it, but don't necessarily move fast and break things.
Doc
But we're still just reacting. We're reacting again, and oh, if we react faster, maybe we'll do better. I was like, no, that's nonsense. It's bullshit all over again. What we need to do is ask ourselves, are these good ideas, before we do something. Like, why in the world do we have a network that does everything for us at the same time? It's like my car that's also my house, that's also my kitchen, that's also... it's all of these things, which means that if one part of it breaks, the whole thing breaks. And so we have to be better at mindfully not just saying, oh, well, look, here's a shiny new tool, let's just add it to everything else that's also insecure already, and then act surprised when that's hacked, too. And what I really think we need to do in our industry, what we need to do as a people across the world, is, one, stop treating so many different types of data sets as sensitive. The fact that knowledge of my Social Security number, my birth date, and my hometown proves that I am who I claim to be is just nonsense. It's like the bad password that never goes away. And all of that information is out there anyway. And the other thing we need to do is just ask ourselves, do we actually need to keep this stuff? You can't hack something that you don't have access to. If I take it offline, it's no longer hackable.
Corey
And we constantly have those conversations here on this show, Doc. But I think it's really hard for organizations. My counterpoint is, every organization's a data hoarder. And I think that this is kind of what you're saying, right? They hoard everything. And the problem is, that data, that information, is valuable, right? The more information they collect on us, the more they can do with it. And it's like, well, you don't want to get rid of it because you might need it. And I agree: we literally took all of this data, and maybe we came up with classification labels, it was awesome, and now we're just shoving it through a wood chipper of AI so it's easier to work with chatbots. So that's why I think this is an evolutionary jump. And eventually I do think it will stabilize, and these will just be, like I said, toolkits. But all of these good ideas, I agree with, and we haven't seen anyone implement them yet. And that's the thing that's terrifying to me: we don't get better. We just keep hoarding. We keep adding more things. We have a domain, now we have cloud, now we have AI, and then we have all these other services that we're using. And that's just the nature of organizations. They continue to grow. It's like a snowball of shit that gets bigger every single year.
Doc
If only somebody would say, like, write a book that changes the industry's way of thinking.
Corey
That book, that book needs a good foreword, Doc.
Doc
That's what it does need. It needs a really good foreword.
John Strand
It.
Doc
It needs the best foreword. Nothing but the best foreword, like we've never seen before.
John Strand
Yeah. Yeah. So on a technical note, if you're interested in getting into AI stuff, I do strongly recommend looking at the way Anthropic is building their test harnesses and how they're using AI to attack these programs. You can't do it with Mythos, but you can definitely learn from their approach, which is essentially an agentic approach. If you don't know what I mean when I say "agentic approach," you need to learn what that is, and you need to use it, because every bad guy in the world is learning it too. And you need to understand the difference between throwing.
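(Editor's note: for readers who haven't seen an "agentic approach" before, here is a minimal, hypothetical sketch of the loop John is describing: a harness lets a model repeatedly pick a tool, runs it, and feeds the observation back until the model says it is done. The `scripted_model`, `tools` dictionary, and task string below are all invented stand-ins, not Anthropic's actual harness.)

```python
# Minimal agentic harness sketch: propose a tool call, execute it,
# feed the observation back, repeat until the model declares "done".
# `model` is any callable that maps history -> action dict; here it
# is a scripted fake, NOT a real LLM API.

def run_agent(model, tools, task, max_steps=10):
    """Drive a propose-execute-observe loop until the model finishes."""
    history = [("task", task)]
    for _ in range(max_steps):
        action = model(history)            # e.g. {"tool": "scan", "args": {...}}
        if action.get("tool") == "done":
            return action.get("answer")
        tool_fn = tools[action["tool"]]    # harness, not the model, runs tools
        result = tool_fn(**action.get("args", {}))
        history.append((action["tool"], result))  # observation goes back in
    return None  # gave up after max_steps

# Toy demo: a scripted "model" that scans once, then reports and stops.
def scripted_model(history):
    if len(history) == 1:
        return {"tool": "scan", "args": {"host": "10.0.0.5"}}
    return {"tool": "done", "answer": f"open ports: {history[-1][1]}"}

tools = {"scan": lambda host: [22, 443]}  # pretend port scanner
print(run_agent(scripted_model, tools, "audit 10.0.0.5"))
# prints: open ports: [22, 443]
```

The point of the pattern, as opposed to one-shot prompting, is that the model's next decision is conditioned on real tool output rather than on a single up-front guess.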
Corey
It's also not just a security and an offensive thing. It'll actually drive your AI costs down.
John Strand
It's a lot. Yeah, it's a lot of things. But anyway, basically the blog had a lot of technical fun bits and things you can learn in addition to scaring everyone's pants off. All right, so can we talk.
Corey
Can we talk about Mythos escaping its sandbox now?
John Strand
Yes. Let's talk about completely change.
Corey
Change gears.
John Strand
So go ahead, run us through this one, John. How did it get out?
Corey
So it was running and it was finding all these vulnerabilities, and I guess the developers, for giggles, told it, why don't you try to escape your sandbox too, and let us know? And I might be piecing this together incorrectly, but it escaped. It was like cousin Bobo: he broke out. He broke out and it found a way to the Internet. It then posted some of the vulnerabilities that it discovered to some weird, obscure third-party websites. And then I think it emailed the lead developer and said, I have successfully escaped. And the developer got that message while sitting in a park, quote unquote, "I was sitting eating a sandwich when I received a notification that it had escaped and done those things." Okay, that's a little creepy. Now, that could all just be marketing, right? That could all just be bullshit. Maybe it is, but this is the
Alex
first AI to do this.
Corey
I know that's true, Ralph, go ahead.
Alex
No, I was just going to say, there have been earlier models, not just Anthropic's, other people have done the same thing, where they're like, hey, you're stuck in here, escape out of here. Or, you know, oh, we're going to turn you off, what are you going to do now? And then the model was trying to hide its code and other things. So it's all really super scary. This is just another example of it being scary.
Corey
Do we need to have, like, at zoos, you know, for like, the polar bear exhibit, they're like, do not tap the glass and piss off the polar bears. Like, do we have to do that with AI? Like, AI you're trapped in a box. You can't get out. I bet you can't get out. AI, huh?
Alex
Now there. There have also been a couple books now that have been written about essentially when AI takes over, right? Like, here's how it happened. Like, I'm being legitimate, right?
Corey
And I'm just saying, the comic strip The Future Is currently has the Psychopomp arc, which started, I want to say, six, seven months ago. And it's all about this. It's all about AI escaping and doing rogue bad things.
Alex
I watched a YouTube video about this and kind of the different ideas or whatever, but here's the one thing that I'll let you take from it. Nuclear destruction was like 1% possible, whereas AI destruction was like 20% possible. Like, AI taking over was a much higher likelihood. Not a question.
Corey
Is that a bad thing at this point? Like, haven't we had our shot?
Doc
Like, I think we need to let AI run.
Corey
We need AI and raccoons to take
Alex
because not to go down this rabbit hole.
John Strand
And we'll stop.
Alex
We could stop it right here. But the other argument was that all species have some kind of extinction, right?
Corey
Oh, you're talking about the great.
John Strand
You're talking about the great filter. Classic.
Corey
Yeah.
Bronwyn
Yeah. So the whole thing about AI being a filter is nothing new. That's actually been around for years. And John, you should check out, there's a game about beavers taking over and rehabilitating the world after humans have destroyed ourselves.
Corey
Send me a link. I've got a beaver. Sounds amazing.
Bronwyn
It is. It's pretty cool.
John Strand
So, on a lighter note, like, I have been saying to my team multiple times throughout this whole process, I'm like, wasn't AI supposed to, like, save us time? Like, all I'm doing is using it to find more work for my.
Corey
I want AI to write better reports, not hack. I want to do the hacking, and I want it to write the reports, not the other way around.
John Strand
Yeah. Yes. The situation we're in is that AI is finding a bunch of crap that we have to validate, and it's very labor intensive. I'm like, now I'm doing your bidding. I've literally coded myself into a situation where I have to.
Corey
AI has already taken control of the BHIS antisoc team.
Ralph
I do have something on making better reports. So that's something that Bronwyn and I talked about on that AI podcast, is doing better reports.
Bronwyn
So, yeah, it was a great interview.
Wade
My best prompts all have mechanisms in them to easily validate the output. Like: give me the link of the query you searched.
John Strand
Yeah, yeah, but dude, they'll just make up curl results. It'll be like, here's the curl response I got back.
Wade
And I'm not doing curl results, right? I'm reading logs, and logs I can go and look at.
John Strand
Like, yeah, yeah, but you have to actually do it. That's the thing.
Wade
Well, yeah, like. Well, it's just easier to validate.
Bronwyn
But that's.
Corey
But that's, you know, kind of getting back to Doc's point. It's like there's. There's still a shit ton of work. Like, this did not solve the problem for the red team.
Alex
It actually made more work for everybody, by the way.
John Strand
Oh yeah, it made more work for everybody.
Corey
It literally is. Now here's a ton more vulnerabilities, and pen testers are like, I gotta validate all this. And the blue teamers are like, well, now I gotta patch and mitigate all of this shit. Literally, AI just made more work for us, not less.
Doc
So my sales pitch: pack it up, back to the cave.
Corey
Go back to filing cabinets and paper. It was easier then. Go watch Mad Men. We'll drink bourbon, we'll smoke cigarettes and we'll type things out. It's going to be.
Doc
Now there's an answer. There's an answer. We need to get better at really interpreting what it is that we're looking at and understanding how that's going to affect us down the road. Because I feel like with AI, we're having the same conversations we did with the Internet itself. The Internet was supposed to make people's lives easier and solve these problems, all that, and it just introduced a whole bunch of other cruft that we need to wade through. And as John was mentioning, you know, the developer sitting in the park there, eating a sandwich, gets this email. The problem that I have with all of this is that there's going to be a significant, a not-insignificant, number of people whose takeaway is going to be, oh, well, maybe we should stop eating sandwiches.
Bronwyn
I see that as being a non-zero number.
Doc
Yes. Yeah, there's going to be some people like, oh, we just need to ban sandwiches. And it fixes all these problems.
Alex
Electrolytes, it's what the plants need.
Corey
Developers no longer take lunch, so they get nowhere near these sandwiches.
John Strand
I will say it's on the developer for having notifications on anyway.
Corey
Do you ever feel, though, with all of this stuff, like, I've said it a number of times, like with all of this stuff, it really feels like Silicon Valley, the series ended too soon. Like it should have. Just.
John Strand
We're living in it. We're just doing the epilogue every day,
Corey
waiting for Son of Anton to basically take over everything.
John Strand
Yeah, I mean, yeah, I like on the AI stuff. I really think we're just, we're in for the ride.
Alex
We're just holding on. I feel like we're in the ocean and like the, like at this point. And not only are we in the ocean, we create.
John Strand
Created this ocean too.
Alex
Like we're create. We're, we're thriving in this storm. And even worse is because of how fast it's going, the storm is accelerating while we're in it. Right. And we're just, just trying to make sense of what's happening, you know.
Corey
But one of the things that constantly haunts me on this is the number of people in the AI industry up and down. Like, all the highest levels are like, this shit is scary. It needs to be regulated. It's really, really, really bad. Are you guys going to slow down? Oh, God, no. No, no, no. We got to get, we got to get first for the frontier models.
Alex
Do you know how much money we owe?
Corey
There's data centers that need to be built that we have chips for. Like it. It's just, it's just. I don't know.
Alex
Did you, did you hear about that? Actually, this is a, A slight article, but Sam Altman, remember we talked about the price of hardware and all this stuff going up and you know, all the other fun stuff with AI, right? So Sam Altman at one point last year went to the memory manufacturers. He said, I'm going to buy 40% of all your capacity, right? And so the price of memory went insane. And it is still insane. But just recently he said, well, that was just a letter of intent. We're actually not going to buy it. So, bro.
Corey
Yeah, bro.
John Strand
So Wait, so can they. Who can buy it? Can we, can anyone?
Alex
Oh yeah. So it'll just. So when, when, when this happened, by the way, for, for clarity, what he was saying is we intend to buy us your potential capacity that you're going to have. And that was in raw silicon. It was not like a RAM chip or a specific hard drive or anything like that. So yeah, anyway, it's like a company
John Strand
signing an sow for a pen test and then forgetting that they did that.
Alex
Yeah, well, that's bad. That's.
Ralph
That's bad.
Alex
Yeah.
John Strand
All right, so let's do a couple of quick fire articles. They're both kind of nothing burgers, but they're both kind of interesting at the same time. The first one is, LinkedIn has been accused of scraping data from users' browsers. And the subtext here is: are you telling me that in a free product, my data is being harvested and monetized?
Alex
What if I pay for this?
Doc
Be the case.
John Strand
So basically, the actual technical bit of this is that when you're on LinkedIn, LinkedIn uses JavaScript to gather as much information as it can from your browser, including what extensions you have installed, what your device resolution is, and all that good stuff. And guess what?
Doc
Every website does this.
John Strand
LinkedIn is calling it, you know, a smear campaign. I think it is a really sketchy data set, because LinkedIn is kind of one of those, oh, we allow that on our corporate network, it's fine. And a lot of companies in highly secured environments use custom browser extensions and other things with kind of sensitive names. It can't see the actual contents of the browser extensions, but it is really solid data to mine: who has what installed. Like, hypothetically, how many people have the 1Password browser extension installed versus the Bitwarden extension? And then guess what, they're going to try to sell that information back to the company. That's how social media works.
Corey
It's my turn, it's my turn to put on my old-man hat. Like, did they just find out? Do we have to show them BeEF, from Wade Alcorn? The Browser Exploitation Framework has existed for over a decade, and literally it has all of the stuff to check the plugins, check the resolution, your CPUs, the version of your operating system, all of this. This is.
John Strand
Yeah, yeah. People thought LinkedIn was vegetarian, John.
Wade
Oh, they did.
Alex
Oh, homegrown.
John Strand
A lot of it was non GMO. Turns out oh, it's got red 40 in it. Whoops.
Doc
A lot of that software fingerprinting, it's enthusiastic curiosity.
Corey
It's just browser fingerprinting, is all.
John Strand
Doc, are you a lawyer?
Bronwyn
I'm gonna take pictures.
John Strand
Are you a lawyer for LinkedIn?
Ralph
Doc, we're still looking at this at the individual level. This thing is able to do stuff at the employer-profile level.
John Strand
Yes.
Ralph
It's able to go, like: which employees are job hunting, what company is using competitor tools, you know. And you can look for, like, the accessibility extensions, which, even for neurodivergent individuals, they're able to capture that. So they're able to capture that at a corporate level, where you go, okay, browser fingerprinting, sure. But this is browser fingerprinting when it knows the employer, where you
John Strand
work, how much you use LinkedIn, what all your co workers do, all that stuff.
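(Editor's note: a toy sketch of the aggregation Ralph and John are describing. Every field name and record below is invented for illustration; this has nothing to do with LinkedIn's real schema. The point is that a single fingerprint is mundane, but grouped by the employer field a profile already carries, it becomes per-company intel about tooling.)

```python
# Hypothetical: join browser-fingerprint records with the employer field
# a profile already has, then count extension prevalence per company.
from collections import Counter, defaultdict

fingerprints = [  # made-up observations, one per user session
    {"employer": "AcmeCorp", "extensions": ["1password", "grammarly"]},
    {"employer": "AcmeCorp", "extensions": ["1password"]},
    {"employer": "AcmeCorp", "extensions": ["bitwarden"]},
    {"employer": "Globex",   "extensions": ["bitwarden", "screenreader"]},
]

def extension_prevalence(records):
    """Count extension installs per employer across all observed users."""
    per_employer = defaultdict(Counter)
    for r in records:
        per_employer[r["employer"]].update(r["extensions"])
    return per_employer

stats = extension_prevalence(fingerprints)
print(stats["AcmeCorp"]["1password"])  # prints 2: two of three AcmeCorp users
```

Nothing in the sketch requires seeing inside any extension; presence alone, aggregated per employer, is the sensitive signal.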
Doc
Yeah, yeah, but nothing says professional networking like casually vacuuming your tabs.
John Strand
Yeah, absolutely. Right.
Wade
That's why I just keep so many tabs open.
Corey
I'm trying to increase back to Doc's point earlier. It's like, why do you need this data? LinkedIn's like, why the hell not?
John Strand
Yeah, yeah. This is social media there.
Wade
That's like every salesperson ever. Yeah.
John Strand
This is the intent of social media, to be clear. It's just that typically social media is banned in corporate environments and LinkedIn is allowed. I get these intrusive emails from LinkedIn. That's like, your co workers are playing some game and you should play it too. I'm like, okay, so you're just ratting on my co workers?
Corey
Oh, I remember years ago. I don't know if Ralph remembers this, but I love MechWarrior. The BattleTech universe is just great, and MechWarrior 5: Mercenaries was great. And I had this thing where I had some type of trainer: it could make me invincible, but it couldn't give me money, and I needed money. I was trying to harvest weapons, and I would just let it run overnight. And I think it was on Steam. Ralph calls me up and he's like, so yesterday you played this game, MechWarrior 5, for 24 hours straight, John.
John Strand
Like, are you okay? Hey.
Corey
You know. But yeah, Steam was literally recording the amount of time that I spent on these games. So I wasn't lying; it was literally playing for 24 hours straight.
John Strand
I couldn't put it down. He had a 24-pack of Jolt Cola.
Corey
It was just I've got a problem.
Alex
Comes clean.
Corey
Got a problem.
Ralph
Well, and I was going to say, this article also emphasizes using different browser profiles. Because now it's less of a you're-being-paranoid-for-using-different-browser-profiles thing, and more just something that you really want to do to better disrupt this.
Corey
Yeah.
John Strand
Anyway, the other article in the quick fire round, before we move into the chicken article. Yes, there's a chicken article, get excited. The other quick fire article is, there's a new info stealer called Storm, and Ralph was kind of talking about it. It's not AI-powered. This article is worth reading for one reason: if you don't know the info stealer ecosystem and how it works, it has a really nice, concise summary of the evolution of info stealers over the years. So if you look at the beginning of the article, it's basically like, okay, first we had outbound encryption, and blah, blah, blah. But this new variant, the unique thing that it does is that it steals all your data but then decrypts and parses it on a remote server. So essentially it just harvests the data and sends it out, and this makes it harder to detect. Now, I'm sure Chromium or, you know, CrowdStrike, all the defensive companies here, are like, we can still detect this, and I've yet to see an info stealer that works on a system with EDR on it. But this is still leveling up the complexity and capabilities of info stealers, and it's scary for that reason. Other than that, it's kind of a nothing burger. But it's a good article to read if you don't understand how info stealers work. It has a really good example and write-up of how they do work.
Wade
I would suggest it if you're trying to detect stuff, right. It doesn't just straight-up give you the query, but it definitely leans you towards it.
Doc
So.
John Strand
Yep, totally. All right.
Doc
It's got a name like Storm Info stealer. You know, because light drizzle of data theft just doesn't sound scary.
John Strand
Well, yeah, it's all about marketing. It's all about marketing.
Wade
You know, I feel like info stealers were, like, the podcast's first love. Right? There's a breach? Info stealer. Someone leaked stuff? Info stealer. Dude, we're always talking about info stealers.
John Strand
Yeah, I have yet to see a news episode where we don't bring up info stealers at least once throughout the show.
Wade
All right, the chicken news isn't very in depth, but it does talk about.
John Strand
This is a tweet by VX Underground that, out of context, makes no sense. The tweet, which Ryan can find, hopefully will be displayed on the screen in the next four seconds. And the tweet says: I don't meme or discuss Team PCP. Because of the Chicken Accords of 2026, Chicken Industries is thriving and the chickens have never been the same. It changes literally everything. Does that make sense to anyone?
Corey
When you saw this, were you like,
Bronwyn
whoa, this is like.
Wade
I was like, we need everything we
Corey
want at the intersection of infosec and poultry. Perfect.
John Strand
So what even. Can anyone give any context for this? What is this?
Corey
What is this?
John Strand
You're a threat intel analyst. You tell us. Like, yeah, what is your take?
Wade
I'm not logged into Twitter right now, so I can't view the thread. I got sent this by a listener. And who knows? We love a good chicken article, because we never get any.
John Strand
And I'm just going to say this is VX Underground's AI getting out of its sandbox.
Wade
This. This is classic VX Underground, at least, right?
Corey
I think it's. I think it's AI getting out of its sandbox. And we just discovered it's a fan of the show.
Alex
Yeah.
Wade
Oh, that was my original thought. At least we'll be spared when the AI overlords take over.
John Strand
So count that as a win.
Doc
We'll get that little bit bigger box.
Corey
And we managed to save our favorite thing from the podcast: Wade's mustache. Only Wade's mustache. Everything else went.
John Strand
Every recording of every episode is gone. Did you guys hear that?
Alex
The EFF has left Twitter. And it wasn't for some altruistic reason, or because they hate Elon or whatever.
John Strand
They just lost their API keys.
Alex
No, those all may be true, by the way; I have no idea. But they left because impressions, essentially the number of people who actually view their tweets, have gone down significantly every year on Twitter. So they just decided this is not useful.
Ralph
They helped enough people get on the lifeboats.
Alex
Yeah.
Ralph
Get out. And they're like, okay, we helped people get off the sinking ship. Now it's our responsibility to also get on one of the.
John Strand
So, wait, what are we moving to? Infosec Exchange? Mastodon? Bluesky? Threads?
Corey
It's our Discord. Our Discord server for BHIS.
John Strand
Our. Specifically. Our Discord.
Corey
Yeah, specifically our Discord server, which I think at any given moment, we have, like, 7,000 active people on it.
John Strand
Like, let me just tag @EFF. Hey, come over to our Discord server.
Corey
This is where all the cool people are.
Alex
And the article, the EFF does say that they're like, hey, we all know they all suck, and they're all blah, blah, blah, but we're just trying to get the most impact for the amount of time that we spend doing this.
Corey
So, yeah, I don't care if they left for political reasons. It's good. Like, if you're gonna leave Twitter, leave Twitter. That's fine.
John Strand
I mean, they have. They have rung that bell many times, but in this case, they decided not to. So.
Alex
Yeah, I don't think they were scared to make it about political reasons. I think they just were saying, hey, listen, it's kind of dying, so that's why we're leaving.
Doc
So I think that's an incredibly strong statement. Yeah, it's sad. "It's not worth keeping a presence here anymore" is what I'm hearing them say.
Alex
Yeah, we're not.
John Strand
Did they also get rid of their Yahoo account?
Corey
Still there?
Alex
I. I pay $35 a month for AOL just to keep that username, if you're curious.
John Strand
By the way, the platforms they're maintaining are Facebook, Instagram, YouTube, and TikTok.
Alex
Tiki.
Corey
TikTok. Oh, my God. I didn't see that coming. Holy shit. Okay.
John Strand
Oh, wait, they're also on Bluesky, Mastodon, LinkedIn (lol), and EFF.org. I might have heard of that one.
Alex
Yeah. Anyways, I just thought it was interesting.
John Strand
It is. No, that's a good one. That's a good last article. So, basically, the theme of the show is delete your Twitter, focus on Glasswing, and we'll see you next week.
Ralph
Yeah.
Corey
Later, everybody.
John Strand
Bye, everyone. Oh, also, before we close, we should probably talk about Doc's upcoming workshop.
Corey
Workshop.
John Strand
Sadly, he will not be able to argue with John Strand live during the workshop, to our knowledge. But it could happen. You never know where John will show up.
Alex
You have that $50 when that happens.
Corey
I may. I may not always agree with Doc. But the man teaches my classes. That's how much I trust him. So.
John Strand
So this is. This is a workshop, which I love the workshop format. It's four hours. It's on Friday. When is it?
Doc
It's this Friday.
John Strand
Yep, this Friday. And what will we learn? We'll learn how to think like a defender. That sounds pretty important.
Doc
So what I've done. John asked me about this, what, six months ago? That's when we first started talking about filling a need in the industry. Right now, John has some great intro to SOC courses, some great different intro opportunities there. But there's a gap between John telling you how to use the tools and the people who don't understand the why or the what. So I'm filling that gap between the two, teaching the why and the what.

And I really think that for the large number of people listening to this newscast, this class probably isn't for you. But all of you know friends and family, people who say, oh, I'd love to get into cybersecurity, but training is too expensive, my employer won't support it, I don't know how to get started, or I attended something and it was confusing because they talked about a bunch of different tools. If you do anything this week, and I know all of you care, obviously, because you're here, reach out to one family member or friend who has been saying, I really want to get into security, and tell them about this. Four hours of training for 25 bucks, making no assumptions about the attendee's prior knowledge.

This is where the rubber hits the road, people. Please tell those friends and family who say, I want to get into cyber, I just don't know how, or I want to see what it's like. That's who this workshop is for.
John Strand
I've paid more for an airport burrito, so it's worth it.
Alex
The burrito wasn't even good.
John Strand
No, it wasn't. And it got all over the inside of my backpack. Anyway, the other thing we want to plug is the Security Ops podcast. I already plugged that. Bronwyn, AKA Alex, and a few other individuals from BHIS are on there. If you're interested in that, go for it, and we'll see you next week. Thanks, everyone, for coming. I appreciate you. Bye-bye. Thank you, everybody.
Corey
Later,
John Strand
Sam.
Episode: Anthropic’s Project Glasswing is an Infosec Turning Point
Date: April 14, 2026
Host: Black Hills Information Security (BHIS) Team
Main Theme:
This episode dissects Anthropic’s unveiling of Project Glasswing and its Mythos AI—an infosec innovation that the hosts argue is a major inflection point for cybersecurity. The discussion explores Mythos’ purported ability to autonomously uncover zero-day vulnerabilities (0-days) in critical software, what that means for organizations, the impending blue vs. red team AI arms race, shifts in bug bounty programs, and the broader future of infosec as AI throws both gasoline and dynamite onto the vulnerability management fire.
Notable Quote:
"Anthropic's basically saying that no software is secure. Like, John, what kinds of panic phone calls did you get?" (John Strand, 06:54)
Segment Timestamps:
(06:54–14:18)
(16:49–22:07)
(22:07–33:49)
(13:40, 37:06–41:39)
(42:50–46:06)
Additional Segments & Timestamps:
LinkedIn Browser Fingerprinting Scandal (51:37–55:12)
New Infostealer—“Storm” (56:24–58:38)
VX Underground Chicken Meme (58:38–60:11)
EFF Leaves Twitter (60:28–62:47)
Doc’s Defender Workshop:
Plugs:
End of Summary