Loading summary
John Strand
The SOC is where pressure is real and impact matters. Join the Antisyphon Training SOC Summit, free live-streamed March 25th. Then go deeper with hands-on training March 26th through April 10th. Learn more at antisyphontraining.com, event: SOC Summit.
Corey Ham
You were going to do a podcast now, Derek.
Derek
Okay, that's fine. I don't usually know where I am.
Corey Ham
You are in the interwebs. You're in the tubes.
Derek
Sweet.
Corey Ham
I feel like using the word interwebs definitely makes me sound old now.
Derek
That's one of those words I put up there with sports ball. Like, I know what you mean, but do you know what you mean?
Corey Ham
How do Gen Z people talk about the Internet? Do they? Is it just like the air? They don't talk about it because it's always there.
Derek
I don't know, I've wondered that, because, you know, the current iteration of the Internet wasn't around when I was a kid, right? We had, you know, bulletin boards and dial-up kind of stuff. But that really was, you know, toward the early 90s, or, as my kids would like to point out, the late 1900s.
Corey Ham
You were born in 19? Once you say 19, you're like, oh, I'm old.
Derek
Basically.
Corey Ham
They just here.
Derek
But, you know, I just wonder if, like, the Internet is just something that my phone does, to my kids, right? They just assume it's a thing; they never knew life before it.
Corey Ham
So, yeah, I would say, like, I would go install TikTok and then you'd find out, but I'm definitely not going to do that.
Derek
So, yeah, me either.
Bronwyn
Yeah, TikTok and Alexa are both things that will not enter my house.
Derek
I have an Alexa out in my garage, just because it's easy. Like, if I'm doing something, just to tell it to play music. I haven't hooked it up to anything else.
Corey Ham
Seriously?
Derek
Like, its only function is to do that. If I were a better nerd, I would just hook up a Raspberry Pi or something to do that.
Bronwyn
But I'm just more concerned about it eavesdropping.
Derek
I just assume everything is doing that because it's fun. My wife and I will talk about something, and I used to not believe in, you know, the phone eavesdropping on you. I was like, it's just a coincidence, you must have searched that before. And then we'll talk about something, and then later on Instagram will just start giving her ads. Like, there's no way that she's getting ads for jiu-jitsu gis on her own, because she doesn't do that; we were just talking about me getting a new one, right. And so I do think it's helping you. Yeah, well, soon we'll have, like, agentic shoppers, where I just say, hey, go book me a plane ticket, and it goes off and does all the research and legwork, and next thing you know, it's using my credit card for me. That's not terrible.
Bronwyn
They'll do that for you now.
Derek
Yeah, I'll do that. Yeah, that.
Corey Ham
That's here. It's just barely here; it's there on the bleeding edge, if you want to go ahead and risk yourself. Do you think your credit card company is gonna be cool with that? Like, oh, this was an unauthorized purchase. Well, I did authorize the AI to make the purchase, but it went crazy and bought me a first-class ticket to Dubai, because it saw a really good ad for why I should go to Dubai. And so now I need a refund.
Derek
It ordered a Ferrari. I don't know why it just ordered one. The Ferrari would get me there quicker by 10 minutes than actually flying if I did 180 on the interstate.
Bronwyn
You can't take an interstate to Dubai.
Derek
Yeah, exactly. It was a hallucination, Bronwyn.
Bronwyn
A hallucination or a confabulation.
Derek
One of the two. I don't know.
Brian Furman
One of the two.
Corey Ham
I'm glad we have the AI experts here on our new show because there's a lot of AI articles this week, as there is seemingly every week.
Derek
But I mean, yeah, has. Have you done a news story in the last year that didn't have at least one AI topic related in it?
Corey Ham
Probably. But I forget what it was.
Derek
Was that when everybody just said, screw it, we're not going to talk about AI?
Corey Ham
We just skipped it. Yeah, it'd be fun if we did, like, an AI-free week on the news and just talked about, like, good news. There's a new recipe for cereal where they can reduce the cereal by 12%.
Derek
They have a.
Corey Ham
That's all we got.
Derek
Non-nutritive cereal varnish for us to try out.
Corey Ham
Okay, I do have a little bit of pre-show banter. So I saw this Reddit thread that just kind of broke me and made me laugh, and I'm curious if you guys think it's funny too. So basically someone posted to Reddit, and they were like, I think rice is too small. Like, I wish it was bigger. I wish rice was just one big, like, loaf of rice instead of individual smaller grains. Like, I want to just eat, like, one or two rice. And then the first reply was: that's a potato. It's just one really big rice.
Derek
That's probably not too far off.
Corey Ham
I know, right? Hello, and welcome to Black Hills Information Security's Talking About the News. It's February 23, 2026. We're here with AI-generated individuals, including me, myself, Corey Ham. I have an eye test in my description; if you can read that, you should see a doctor, because your vision is way too sharp. We have Bronwyn, who also has a funny bumper sticker: do not tailgate Bronwyn. That's what we've learned. Although, do tailgate with Bronwyn; she has good recipes. She's got some jams, she's got cakes, even if the cakes are AI-generated. That's okay. We got Brian Furman. No, wait, John's here. We got John, just, like, phishing us.
Bronwyn
Is it a John sighting? Hello, John.
Shecky
Hey.
John Strand
Every once in a while I show up to my company.
Corey Ham
John, who's possibly in his closet; we'll confirm that later. We've got Brian, of course, our resident... yep, confirmed... who's our resident Ph.D., in the room to be an adult, I guess.
Brian Furman
Hey, everyone.
Corey Ham
You're gonna be an adult. Do you promise not to be an adult?
John Strand
You know that's not true. They're not the adults in the room, so just.
Corey Ham
Just promise not to. He didn't put Ph.D. in his name, though, so you know he's not one of those Ph.D. types. No, he's not.
Bronwyn
No.
Corey Ham
Yeah.
Brian Furman
No.
Corey Ham
We got Derek, who is one of those PhDs.
Derek
I am not a PhD.
Corey Ham
But if you were, you would be, right?
Derek
I don't think so.
John Strand
Hold on, hold on. Two things. One, Brian, you owe me your thesis, because everybody whose degrees I pay for, I put their thesis on my cookbook shelf. I need that. Derek, I need yours, too. But, Derek, I thought you were thinking about taking some time off and going for your PhD, though.
Derek
Oh, God, no. Oh, Lord, no, I didn't really.
Corey Ham
John's, like, trying to make tacos, and he's reading some dissertation about, like.
Bronwyn
Yes, yes.
John Strand
That's what I do. That's it. I'm excited about it.
Derek
I didn't have a thesis. I had a capstone project, and the paper from it was published. So I could just go get you the paper and you can print it out and read it.
John Strand
Send it to me in a hard cover.
Derek
That's what I will do.
Corey Ham
We got Shecky. Oh, he's here. He put his real name. That's scary. I'm just gonna call you Shecky, if that's okay.
Shecky
That's perfectly fine. That's what everybody calls me. But anyways.
Corey Ham
Okay. And we've got Megan, of course, who made the title so small. No, not really. That's not true. She did notice it, though, before anyone else.
Brian Furman
Yeah.
Corey Ham
All right. Yeah, we appreciate you. Thank you for being here. All right, so let's start off with this Keep Android Open thing. I don't know, is this a thing? I kind of messaged a bunch of people and was like, is this a real thing? And basically, I think it's fair to be angry about this. I think it just goes away from the spirit of Android. A lot of people were sold Android as, you know, an open platform, and now it's not so open anymore, or it's potentially going to become not so open anymore, and people are upset about it, I guess. John, you were the one who sent this in, so do you have a hot take on this?
John Strand
I do. And this is going to be a monologue, and I apologize, but no one here is not used to that. Cory Doctorow's presentation, The Coming War on General-Purpose Computing, is essential reading for anybody who wants to understand where we're at. This is not about security, even though security is part of it. It's predominantly about how you can lock people into an ecosystem where the only way that they can load apps is through your store, so that you get a percentage cut of the sales. So an example would be... let's say Kindle. All right, so I have a Kindle app on my iPhone, right? If I want to buy a book in my Kindle app, it has to take me to my browser, which is logged into Amazon, and then I can purchase the book. And the reason why Kindle is doing that is because iOS and Apple have made it so that if you have any in-app purchases, they want their percentage and their cut of every single one of those purchases. So it's not about security for a lot of this stuff. It's all about making money on these particular things, right? So if you're looking at this, Google wants to do the same thing, because with Google, you can enable third-party applications. I remember with Fortnite, you used to have to install Fortnite using third-party applications, enabling that and then downloading it. It was a pain in the ass. But the reason why is that inside of Fortnite you could have in-game purchases, and they wanted to do that outside of the Google Play ecosystem. So what's happening now is you have Microsoft, Google, and of course Apple forcing everything to go through their ecosystems, right? You have to go through their stores, you have to go through their verification for absolutely everything that you do. And this bothers me for a couple of reasons. One, it does somewhat help security, I think, but we see lots of examples of malicious apps making it through. 
The other reason why this bothers me is it actually makes security testing more difficult. With a general-purpose computer, you can download software, you can install it, you can evaluate that software relatively easily. With these particular ecosystems, it's not very easy. Like, getting root-level access on your device, while possible, is not a given. So that means there are wide areas that are not going to get the level of scrutiny and research that we talk about all the time on this show. And dealing with privacy, right? How much data are apps giving away about you that you don't feel comfortable with them giving away? The problem is that in these types of ecosystems it becomes more and more difficult for us to identify how much of our privacy is being lost. So this is bad for a number of reasons. It's just one more notch that's turning, and honestly, we need a full, true open-source Linux phone. I know people talk about Libre phones and, I can't remember... GrapheneOS, yeah, GrapheneOS and all of that. We need these things. But they're making it more and more difficult to install these third-party operating systems onto existing hardware platforms. Even now... you used to be able to get a Pixel and install a bunch of these third-party operating systems onto it, and they're discontinuing that Pixel line and making it more difficult. So there's a lot wrong with this. It's difficult to test the security, it's difficult to validate the privacy issues, and especially as we're moving into the age of AI, it's getting more and more locked in, to where you're spending more and more money just trying to get third-party apps to work, and they're taking a bigger and bigger cut out of it. We're getting higher and higher on the oligarchy scale. And I'll leave one thing: if you want to look up something terrifying, look up the Gini index. The Gini index is a measure of economic dispersion. 
A 1 would mean everyone makes the same amount of money. A 100 would mean only one person makes all the money and no one else makes anything, right? The French Revolution happened at a Gini index of 82, and we are currently at 83. So there's more and more consolidation into these oligarchs, into these people running these apps. We need diversity, we need competition, we need open-source platforms, but they keep getting cut back. So that's my rant on that, and I'm going to step back.
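For readers who want to see the arithmetic John is gesturing at, here is a minimal sketch of a Gini index computed on the 0-to-100 scale he uses. This is an illustration, not from the show: the function name and toy income lists are made up, and the specific historical figures he cites (82, 83) are his own claims, not verified here.

```python
def gini(incomes):
    """Gini index on a 0-100 scale: 0 means everyone earns the same,
    values approaching 100 mean one person earns everything."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Lorenz-curve form: weight each income by its rank after sorting.
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 100 * ((2 * rank_weighted) / (n * total) - (n + 1) / n)

# Four people with equal incomes: perfect equality.
print(gini([50, 50, 50, 50]))   # 0.0
# One of four people earns everything: heavy concentration.
print(gini([0, 0, 0, 200]))     # 75.0 (the finite-n maximum is 100*(n-1)/n)
```

Note that with only four people the maximum possible value is 75, which is why real-world figures near 100 only arise for large populations.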
Corey Ham
Nice rant. 10 out of 10, would rant again. I think the one question I have, and I don't know if anyone knows this, is: what is a certified Android device? Is that what the new Pixel series is going to be? Because presumably you could still do this; you could still install GrapheneOS and none of this applies to you, right? Then you can use F-Droid or whatever, right?
John Strand
No, no, no. But it makes it more difficult, because part of what they're also doing is getting rid of the Pixel platform, where the Pixel platform was the official platform of Google and had the maximum level of compatibility. If they get rid of that, then there is no dedicated platform that you, as a developer of something like GrapheneOS, can build against and know that it works. It makes compatibility far more difficult, because now you have all of these other vendors, and you have to do QA/QC across all those vendors to make sure that everything works. So there's a lot of moving parts, and a lot of these things are tied down. GrapheneOS did a blog post on this a couple of months ago.
Corey Ham
I mean, to me, this kind of kills part of the value of Android, which is as a privacy option, right? Obviously there's still GrapheneOS, but I guess there's a chunk of people who are like, if I'm picking which big tech company I'm going to trust, it's going to be Google over Apple. And maybe who this is for is people who want to trust Google implicitly instead of trusting Apple implicitly. But I feel like a lot of people were sold Android as an open platform, and now that's changing, and I think it's fair to be upset about it. But also, I was kind of blown away that to this day in Android you can just be an anonymous app developer and publish an anonymous app into the app store. That feels kind of... I mean, anonymous is one thing, but, I don't know, it just feels kind of crazy that you can be like, I'm not an APT, here's my totally legit photo ID. I'm sure there'll be bypasses for all that stuff.
Derek
I don't know, I guess I kind of have a slightly different take on the Google-versus-Apple question. I think I trust Apple a little more than Google. I don't know why. Maybe just because of reading The Age of Surveillance Capitalism; you know, Google invented surveillance capitalism, and I don't think it's ever been a privacy platform. I gave up that idea the first time I looked at a tcpdump of traffic from an Android phone. Yeah, it's not private at all. I do think you need open stuff. I would love to have a laptop as powerful as the one I have right now that was completely running Linux where everything worked. And when I say everything, I really mean that screen sharing and office stuff works, because that was my last sticking point with trying to run Linux: it was great until I had to do normal office stuff, and then it really sucked.
John Strand
Yeah, well, and the other thing that people should watch, if they're looking at surveillance capitalism, is a documentary that, oddly enough, is on YouTube, called The Creepy Line. Mike Felch, who works with us currently at Trusted right now, recommended that to me, and it's terrifying, because Google's whole market idea was to push that surveillance capitalism right up to the creepy line, where they didn't want to cross it and become too creepy, staying just underneath it.
Corey Ham
All right, speaking of creepy, next article. This is probably going to be a short one. I don't think anyone will disagree or have hot takes on this, but I'm sure I could be proven wrong. So Meta patented technology that is going to use AI to take over a dead person's account so you can keep talking to them after they are dead.
Derek
This crosses the creepy line.
Bronwyn
Nothing creepy about that at all. No.
Shecky
So, as somebody that is in charge of both my late parents' Facebook accounts, which don't have much on there, I find that completely, completely irresponsible. Quite honestly, it's the worst idea for AI usage. And I understand that there are some people... and again, from my point of view, would I love to hear my parents talking to me at some point in time? Yes, I would. It would be a dream to be able to say mom, dad, and hear their voices again. You don't know what you've got until it's gone. But they're preying on people that want to have these messages. And a lot of times this is not going to go right, because even if they did it, it doesn't have the data points to truly be what this person's personality was like, unless they posted a ton of stuff. There's way too much that could go wrong with this versus what would ever go right.
John Strand
Yeah, I think about.
Corey Ham
No, I fully agree.
John Strand
Bo Burnham was the guy that did the special Inside on Netflix that was huge during Covid, and he had an interview where he's like, you need to understand that these people are coming for every waking moment of your life. They want every possible moment. They want you to be addicted. They want you to be hooked in. And the thing that bothers me about this is they're going to be trying to market us and, like, seriously monetize us after we're dead. That feels way past crass. That's the creepy line stuff that really bothers me. I know this sounds weird, and I don't want anybody to think that I'm suicidal or anything, but when I'm dead, I want to be dead. I don't want to live on in Facebook, because that's hell, and I don't want to die and go to hell. I don't want to be a Facebook profile.
Corey Ham
There's proof of the afterlife, finally, and it's in Meta.
Bronwyn
Yes.
Corey Ham
I have.
John Strand
I have no mouth, and I must scream, but here's my Instagram reels. That's nuts.
Corey Ham
I just love the idea that they would do it and then they would monetize it. So it'd be like, hey, Mom, I miss you. I miss eating the delicious taste of SpaghettiOs, son. Injecting ads into a dead person. Like, I'm sure they would pull that off.
Derek
I don't have a social media presence. Like, I really don't Facebook or Instagram or tweet or anything, because I've thought it's been stupid forever, right? But whatever, I mean, I get it. I've never been more glad not to have that content out there.
John Strand
Yeah.
Corey Ham
To be fair, though, okay.
Bronwyn
Oh, yeah.
John Strand
People are creating AI chatbots of me because there's a lot of me on the Internet.
Brian Furman
Yeah.
Corey Ham
Yes. If you get a call from John Strand, it's probably not real. Yeah.
Shecky
This leads us.
Bronwyn
I remember.
Shecky
Go ahead.
Bronwyn
I remember a couple of years ago, when I first started reading articles about people using AI to recreate a loved one that they'd lost, it was part of their grieving process. And then very shortly after that, it was turned into a product. Grieving involves letting go. I mean, come on. Yes, I have people in my life that I wish... I wish I could sit down and have a conversation with my dad today, knowing what I know now that I didn't know when he was still alive. But the problem is, and I don't know who it was who made the comment about data points, I don't have enough data points. And even if I did have a lot of data points, it still wouldn't capture him. It still wouldn't be my dad.
John Strand
Bro, here's the thing that scares me. And this goes back to, I can't remember what that series was... Westworld, right?
Bronwyn
Yes.
John Strand
We talk about it, and they're like, it's a book this big, this is what makes you you. And my fear isn't that they don't get your dad right. My fear is that they nail him and they get him perfect. Or people that are on social media... that really kind of reduces it. This sucks. This is not a good episode. Like, this is just depressing.
Bronwyn
John, I didn't do this one.
John Strand
I know you didn't this time, Bronwyn. I'm not gonna blame you. But, like, you know, we got people saying this is uncanny valley. This is a whole new layer of that, right?
Derek
This is.
Shecky
This is a step toward Altered Carbon. That's all I needed to say about that.
Derek
The books are way better than the Netflix series. The series was great, but the books are fantastic. I wish he hadn't stopped.
Bronwyn
The third audiobook sucks because they went in and added a whole bunch of audio special effects. But the books themselves are great.
Corey Ham
All right, Bronwyn, record us an audiobook. Just read it, record it, and then I'll listen to it. It'll be great. I'm in.
John Strand
Done.
Bronwyn
You have any requests?
Corey Ham
I mean, Altered Carbon. That's what I'm saying. You do the third book yourself. Just don't add any weird sound effects.
John Strand
If.
Corey Ham
If you do, make sure it's, like, your own voice. Be like, pew, pew, that kind of stuff. All right, let's continue down the AI path. Or, I guess, John, you wanted to, like, self-plug.
John Strand
I did want to do a self-plug, because this is a hot take that I don't think I've seen from other people, and I wanted to get y'all's opinion on it. So I just posted a LinkedIn article that I wrote up last week. I got really excited. I think I talked to Derek about it, and I was super stoked about it, because anytime you have a hot take, you need to upload it onto the Internet so it can be ripped off by social media.
Derek
You actually left. You're like, I gotta go write this and left.
John Strand
I left. I literally left. So I believe that we're coming up on a SaaS apocalypse. And the reason why is this all started because we have this software that's used by our accounting team. And the accounting team is, like, totally, madly in hate with this product. It's absolute garbage. It hasn't been updated since, like, 2004. It's awful, right? And one of the things I've learned, especially in the last month with the new frontier models and the stuff that Derek's been showing me, which I can't talk about here, but Derek did some scary shit last week with AI, and Brian Furman's working on some stuff with AI, is that now every company of a certain size that has an engineer, a developer, some type of engineer with a basic or, let's say, an advanced level of skill, now has the ability to recreate the SaaS services that they pay for relatively easily using AI tools, right? Oh, Cyber Search brought up Oracle. Look at all of the crap that Oracle has out there, and understand that we are now at a point where people can quickly recreate those services on their own. Now, if you go back 10 years, people were like, we're just going to write it from scratch. That was a big no-no, you don't do that. We always said, lay down on the floor, wait until that urge passes. Buy it, don't build it. We're now flipping, and it's becoming build, don't buy. So how many of these SaaS services exist today that can easily be replaced by somebody that knows how to use AI appropriately to recreate entire SaaS services? And this gets into, literally, if you can let AI into the SaaS service with a legitimate account, it can go and crawl it, learn everything that it does, and recreate it relatively quickly. Somebody just said, recreate Oracle and make no mistakes. I don't know about that, but possibly.
Corey Ham
Right, so this is "instructions unclear, I deleted the database."
John Strand
Yeah, it's like, well done. So, what this means... I think this is interesting, because if you're a SaaS vendor, right, a bunch of SaaS vendors are looking at AI as a tool to do what they're doing, but cheaper, right? We're not going to do more. We're not going to get more creative. We're going to save money by cutting costs. We're going to fire people. Those companies are going to be out of business in the next 24 months. The companies that don't look at AI as "we're going to do what we do now, but cheaper," but instead look at how they can exponentially increase value to customers, how they can do better, how they can compete in a variety of different ways, they're the ones that are going to succeed. And in the security realm, this is really important for all of us. There's going to be a lot of new code bases coming out. It's not going to be as consolidated as it was. We're going to have tons of code bases exponentially explode, because now everybody's going to be writing their own code. And a lot of the code that's being written by AI is mediocre. It works, it does what it's supposed to do, but it has security vulnerabilities in it. And I think that this is going to be a lot of work for the industry moving forward. So that was the gist of the article, and I wanted to get it out there and get people's thoughts on it.
Bronwyn
Well, speaking as a former developer, a recovering developer, I know for a fact that humans are really good at generating crap code too. Is that a swear jar violation? Sorry. I mean, even before I knew anything about cybersecurity, when I was totally, completely ignorant about cybersecurity, I still ragged on my fellow developers: no, that's a bad idea. I mean, just input validation, throwing your key-value pairs in the URL. Really? No, you don't want to put sensitive information in there. Why is this such a difficult concept? What I see is, I agree with John that we're going to see a lot of people rolling their own, vibe coding, whatever it is, so that it can be bespoke to their own individual needs or wants or desires, whatever their peccadilloes are, whatever it is they want; they're going to have the vibe code do this. It's going to be duplicating bad things that it has learned from the human-generated code that it has absorbed. And because these people don't have any kind of software development background, they not only don't know that there are security issues to be addressed, but even if they do know that, they don't know how to fix them.
Brian Furman
Yeah.
Bronwyn
So we're going to see an explosion of crap code.
Corey Ham
I have two responses to this. The first is that, arguably, that was never the purpose of SaaS in the first place. SaaS was never actually good; it was just easy, right? So even though I think you're right, and people in the Discord are rightfully mentioning that a lot of this is going to hit small and medium... SaaS vendors that target small and medium-sized companies are going to take a real hit. But when we're talking about SaaS products that actually scale, in the modern era, when you're going to purchase a SaaS product, you're looking for something that you can scale essentially infinitely, that you don't have to think about at all, right? Think of things like Amazon products or Salesforce, large-scale products. You can't really vibe code yourself an S3 in your spare time, right? That's not going to happen. It doesn't scale. There's a whole bunch of infrastructure challenges and reliability stuff. So first of all, I think SaaS at the high end of scale will stick around, and we'll see; the low and medium end is where they'll have the most trouble. But the shared responsibility and the easy button is really what SaaS has always been about.
Derek
Right.
Corey Ham
It hasn't been about the feature that you need; it's about the ease of getting that feature and the support of that feature long term, I guess.
John Strand
That's a take.
Bronwyn
I always thought SaaS was about the money.
John Strand
Yeah, that's, that's what I think. I think a lot of people are going to be like, you know, they're going to say I can save this much money. And I'm not talking like Amazon level stuff, right. I'm talking like payroll management, I'm talking about video processing, image editing, like all of that stuff. You can create your own very quickly.
Derek
So I keep seeing people making comments about, oh yeah, AI still codes too crappy. And, you know, I do agree that there'll be a lot more security bugs. But no one's mentioned anything about economics. Let's just say that the companies that are targeting small and medium businesses start to drop off because of this phenomenon. The economic impact is what I would worry about more, like the overall economic impact.
Corey Ham
Well, and that kind of takes us into our next article, unless someone has a really hot take on that one. So, I mean, I guess it's kind of a counterpoint to John's point. Basically, the national bureau of something... they're called the National Bureau of Economic Research, which apparently is reputable. I'd never heard of them, but I'm not an economist. So basically they published an article, or white paper, whatever you want to call it, that analyzes AI use at 6,000 different companies. They surveyed almost 6,000 CFOs, CEOs, et cetera. The stats are kind of crazy, but also kind of not that crazy. So basically 70% of firms are using AI, especially younger, more productive firms. I don't know what that means necessarily. But the big thing, and I'll link the article that kind of goes with the paper, is that the conclusion people have been drawing from this is that AI has not made a meaningful impact on productivity. The sentence in the paper says firms report little impact of AI over the last three years, with over 80% of firms reporting no impact on either employment or productivity. So basically, as of yet... we've had ChatGPT since, what, 2021 is when it came out, I think? So we've had ChatGPT for three or four years, and it still has had no meaningful impact on productivity if we're looking at 6,000 companies. So I don't know. I mean, it's kind of like the Internet, right? It's not just going to magically double productivity overnight or whatever.
John Strand
No, like the dot-com boom. If you go back to 2000, you remember all of these e-companies that were on the Super Bowl, where, you know, they had Webvan and ePets and all this crap. And it was around 2001 that that collapsed, and a lot of people were writing the exact same types of articles: the Internet is a fad, it's not that big of a deal, look at the bubble completely collapsing. There's no question that AI is a bubble, right? And I think that the lead time is going to be a lot like what happened when the Internet started taking off. So my take is, number one, yeah, that makes sense. Number two, shit changed in the last 30 days. It's not like... if you go back 45, even 60 days, I'd be like, yeah, this article kind of makes sense, I hate Copilot, this isn't working. But no, the new frontier models that have just hit are way different. And I'd leave that to Brian and Derek and Bronwyn to talk about. But I do think we have.
Derek
So that's what I was going to say: if you're talking about your experience from last year using, you know, ChatGPT or some kind of chatbot, then I definitely would agree with that. But now with the agentic stuff that has really only been out since late last November, it is night and day. We turned a corner. And if you're not using the latest stuff, I would definitely encourage you to go check it out. I'm not saying it's magic or perfect. It's just that now, with the scaffolding code built around something like Opus 4.6 with Claude Code, I don't know, either I'm really, really lucky or I've already seen productivity gains past 1.4%. So maybe I'm lucky. I'd like to hear what Brian thinks about that.
Bronwyn
you're an outlier, Derek.
Derek
Well, that makes sense. I'm special, actually.
Bronwyn
Yes, it is.
John Strand
Everyone's like looking around nervously like, am
Corey Ham
I an outlier too?
John Strand
I don't know if I agree with Bronwyn on that.
Corey Ham
So.
Bronwyn
Well, we had an internal meeting earlier today addressing AI use within the company. And one of the key takeaways is that people who already know how to do a thing are much, much better able to leverage AI, generative AI of any kind. And if they go down the machine learning and data science rabbit holes to really get into that, they can do even more. But if you don't know how to do something and you're not willing to invest the time to ensure that the output is of a decent quality, that's when you start seeing so much AI slop getting into all kinds of work.
John Strand
Yeah. And Bronwyn, I want to expand on that, because BB called me after the meeting, so let me give you two examples. Pen test reporting, right? If somebody takes AI and says, give me a write-up of Link-Local Multicast Name Resolution, AI is going to do that, and it's probably going to be kind of crap, right? But if you communicate to AI and say, take the following text and convert it into something that can be put into a penetration testing report, and then you, as the author, talk about LLMNR, your experiences with LLMNR, how it's used, how it was used in this context, and what it achieved, it'll do a fantastic job of writing that up. It's like a transcriptionist that kicks righteous ass, because the more you talk, the more context you're giving it. And that gets back to what Bronwyn was talking about. The people that have the most experience to work with these tools and feed context into these tools are the people that are getting the most gains out of them. The people that are trying, once again, to be lazy and have it do their job for them are the people we're seeing the slop code from.
Corey Ham
Well, if we're looking at it at the C level, the productivity benefit that they're expecting is to replace employees, I think. To reduce headcount, basically. That's what they're expecting. And I think right now we're still at a point where, yes, agentic AI is great, but it is still a tool and you need someone skilled to use it. It's not something you can just fully hand off. Although there is the joke of the CEO who just says, OpenClaw runs this company now. You know, like, good luck. Maybe that exists.
Derek
I mean that's a YOLO take. Sure.
Corey Ham
Like maybe that exists. But I don't think right now anyone is saying, I am replacing myself with OpenClaw. Email me if you have a problem. Good luck, hold my beer, I'll see you guys on the flip side. But I don't know, maybe that'll happen someday.
John Strand
Once again, I say the companies that are looking at it like that, we're going to do what we do but cheaper, are going to be out of business, because there's going to be a company that takes really brilliant people and uses it to push the envelope, and they're going to kick their ass. So if you're looking at it like, let's fire people and make more money: enjoy it while it lasts.
Corey Ham
Yeah, well, I think the scary thing is that a lot of the companies that said that were the companies that are also making the AI. Like Meta, famously. Meta was like, hey, we're only going to have senior-level development positions, we're going to replace all of our junior-level devs with AI. It's like, okay, I know Zuck has some hot takes or whatever, it is what it is, and that might not be reflective of real-world conditions. But I mean, they're not really a frontier model at this point, they're kind of behind, but still, they have Llama and they have some pretty solid AI researchers at Meta. So I don't know.
John Strand
Yes.
Corey Ham
So speaking of AI, I'm sorry, if you're not an AI person, this show's really gonna rub you the wrong way. Back to privacy.
John Strand
That was a fun take.
Corey Ham
Oh, that wasn't dark at all. You know, we were talking about Alexas and stuff. I've just put a Flock camera in my driveway. I just want to have a publicly accessible recording of me at all times. No, I'm just kidding, that's a different article. So the next one I want to talk about is basically the US government feuding with Anthropic. We were talking about this last week internally. And of course Pete Hegseth is involved, so of course he's gonna be all, I'm a cowboy, I can just take you down, Anthropic. And that's not really true, and there's a lot of drama with it. But essentially, last week he threatened to put Anthropic on, what is it, the sanctioned companies list or whatever, where you're not allowed to do business with them.
Derek
It's not just the government. It would also be any contractors that
John Strand
are working every company.
Corey Ham
It would basically be every company that does any government contracting couldn't do business with Anthropic. He threatened that. And it sounds like essentially Pete Hegseth is angry about the pushback they're getting from Anthropic on how they want to use AI. My take on this is, I am terrified to imagine how far you have to push it to the point where Anthropic is going to be like, hold on, don't do that. Like, how bad does it have to get?
Derek
From what I understand, from what I heard last week, it was used in some capacity for intelligence and processing of data during the Maduro raid, in which some people ended up dead. Right. So I think the question really is: does a company, once they sell a product, get to say how you use it in the terms of service? And I think the answer is yes. But also, do you know who you're selling this to? It is literally the Department of War. What did you think this was?
Corey Ham
Yes. AI has already been used for the same thing in law enforcement, right? And, like, I don't know, maybe they're like, whoa, are we Palantir now? Ah, crap, undo.
John Strand
But I will say I think it's interesting that they at least pushed back, because I think it was a $250 million contract.
Corey Ham
And.
John Strand
And in DOD land, that is nothing. That ain't much, right?
Corey Ham
Well, okay. Also, they're hemorrhaging money either way, right?
John Strand
Like, yeah, they are. Right. But what I think is interesting is the response: if you don't let us use this, we're blacklisting you. Normally the way this would work is, they'd say, we don't want you using our product for this, and the answer would be, okay, and then we just stop working together. That's the end of the conversation. But going and saying we're blacklisting you, and we're going to blacklist any company that uses you? A bit much. A bit much in that situation. That's where it's weird, because it's not like Hegseth doesn't have other options. There's other AI.
Corey Ham
Amazon would love to build you a robot AI gun that just shoots anyone that matches the face.
John Strand
Facial recognition. You don't have to get all nasty with Anthropic. You just quietly drop them and stop using them.
Corey Ham
You use Palantir, they'll put machine guns on anything.
John Strand
Remember, Grok is in there, too.
Corey Ham
Oh, yeah, dude. Grok would love to just, you know, do that stuff.
Derek
Palantir, I mean, I've heard of them, and I don't really know a lot about the company, but I would assume they're using frontier models, because there are only a couple of companies on the planet that can make the latest frontier models. They just have the experience; no one else is doing that. And so I don't think that's getting commoditized anytime soon, because it takes about 25,000 GPUs that cost about $25,000 each. And that's just the hardware; that's not the data science that's involved. So, I mean, Anthropic's got leverage, that's for sure.
Corey Ham
I just can't imagine. We've seen no reports that this administration is hard to deal with or hard to work with at all. So this is crazy to me. This is the first I'm hearing of anything like this.
John Strand
Hold on. I just got a signal message. Hold on.
Corey Ham
Oh, is it about a missile strike? John,
Bronwyn
are there any journalists you want to add to that thread?
Corey Ham
I'm on that thread, too. Yeah, I just said, set target to AI mode. I didn't know what to do. All right. Anyway, that was a joke. We're not nuking anyone at Black Hills Infosec. John, what about this Conduit data breach? Tell me about Conduit.
John Strand
I didn't know if that was a big deal or not. They're another data warehousing company, and I kicked it off at, what, 2 o'clock this morning while I was reading. And the thing with me is they kept on saying this could be the largest data breach in history. And I'm like, God damn, there's some stiff competition there.
Derek
I was gonna say, I think all of my data is gone.
John Strand
I haven't seen anything that leads me to believe that this is the largest data breach in history.
Corey Ham
I mean, well, but John, Texas said it was, and everything's bigger in Texas.
John Strand
I guess. Maybe. I don't know. Someone should look into this. I just didn't get why they would say it would be the largest data breach in history.
Corey Ham
They don't know how to use grep, okay? They asked AI, and they didn't know how to do a wc -l and figure out how many lines are in the file. I mean, definitely, it is not the biggest data breach in history. We have objective data to prove that.
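Corey's wc -l sanity check can be sketched in a few lines of Python; the one-record-per-line layout is an assumption for illustration, and no particular breach dump is implied:

```python
# Rough equivalent of `wc -l dump.txt`: count records in a dump before
# repeating anyone's "largest breach in history" claim.
# Assumes one record per line; blank lines are skipped.
def count_records(path: str) -> int:
    with open(path, encoding="utf-8", errors="replace") as f:
        return sum(1 for line in f if line.strip())
```

Streaming line by line like this keeps memory flat even if the dump is tens of gigabytes.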
Derek
Did it just say 10 million people? That's nothing.
John Strand
I just feel guilty bringing it up, I guess, now that I've done some more research after I posted it. I think people are just trying to get in the news, and I fueled their fire.
Corey Ham
No, it's okay. I will say, I've gotten several of these data breach notices, and I don't even keep track anymore of all the companies that have sent me the exact same letter. It has to be the same law firm, because it's the same exact formatting, the same subscription to Kroll identity monitoring or whatever. The same BS. It's literally just the same information every time. Anyway, data breaches. Here we are.
Brian Furman
This.
Corey Ham
This one is special.
Bronwyn
They took old data and repackaged it.
Derek
Oh my God, I'm just numb to it. And so much data is out there on all of us, for everything. I once tried to download everything that Google Takeout had on me, and it was over 300 gigs of compressed data. And I only had an Android phone for like two years before I got my iPhone. And that really is concerning. How do they have so much data on me? Only 8 gigs of it was my email, by the way.
Corey Ham
That was all the Wi-Fi networks that you drove past in those years.
Derek
Like, all of the data.
Corey Ham
Yeah. Anyone else have any hot takes they want to add on the Anthropic topic? Whoa, I just rhymed unintentionally. There was this article, I'm gonna post it. I didn't fully read this. Did anyone read this? Basically, a guy left Anthropic. He was a senior AI safety researcher, and I'm just blown away this is even a job title. Apparently his main work at Anthropic was trying to figure out how to prevent people from using AI to generate bioweapons, which is so dystopian to even say. So, yeah. I didn't fully read this, but it is kind of spooky that a top AI safety researcher left Anthropic and was like, I left, and here's why. And the tweet has now been taken down, so this is kind of the only thing there is. Maybe this person wants to have a career in the future; that's why they took it down. But yeah, I don't know.
Derek
Interesting. So how much does a CRISPR device cost? Like, 10 grand, 20 grand?
John Strand
This whole conversation just keeps getting darker.
Derek
And so, John, this is why I drink.
John Strand
Come on.
Derek
This is exactly why my doctor hates me, right? Because I drink because of this. No, I.
Bronwyn
My doctor, he's always asking me, how are you doing on that alcohol? It's like, doc, I work in cybersecurity. Stress is the name of the game.
Derek
I mean, LLM AI safety at the moment is really just an illusion. If you're using frontier models and their web harness, ChatGPT or whatever the latest frontier model is, then sure, there are safety features built in. But you can go get some pretty powerful stuff that has the safety constraints removed, just out on Hugging Face or Ollama; you can get an abliterated model. And I'm not saying it'll make a bioweapon, but it will tell me how to hotwire a Volvo XC60 or make meth. So we're going down that road, and I don't know how that genie gets back in the bottle. I think the only way I see is that we have to have people paying attention to folks who are likely to do something like that. And even then, yeah, it's scary times we're going into. So drink up.
Corey Ham
But that's the. That really, Derek? That's your, that's your closing statement? Seriously, John, take us like, okay, someone pull up on the throttle. Come on. Like this is bad. I mean, steer us out of the ditch.
John Strand
We've got no wrong one.
Brian Furman
No.
Bronwyn
Oh, okay. All right, all right.
John Strand
Cutting you off. No, I'm not gonna let you pick up what Derek just laid down and take us deeper. All right, stop. So I think we need to celebrate. We've got another perfect 10 CVE.
Corey Ham
Okay, okay. Now this is somehow less dystopian. This is now less dystopian.
John Strand
We've got to celebrate when we can. Apparently Dell has RecoverPoint for Virtual Machines, which is under active exploitation and has a CVSS score of a perfect 10.0. So can we get a round of applause for Dell, or no?
Corey Ham
Okay. What is this product? I've never heard of this in my life.
John Strand
It allows you to recover your virtual machines, and it has hard-coded credentials in it. No one uses this.
Corey Ham
I'm sure there's only 10,000 customers all across DOD space. It's fine.
John Strand
Yeah, well, it is. I just wanted to bring it up because you wanted something positive.
Corey Ham
This is so uplifting, John. Thank you.
Derek
So much fun for pen testers on an active pen test when they find this stuff, right?
Corey Ham
Except there's probably no public exploit, right? Like, you know, it's just APTs
John Strand
only being actively exploited right now by
Derek
if it's hard-coded and this is software that runs in a VM, right? You can.
Corey Ham
It's root/calvin, people. I've done enough iDRAC pen tests in my day.
Derek
Apparently that's still the default. I just learned that last week. I did not know that.
Corey Ham
Good old Calvin. And then Calvin and Hobbes.
Derek
Yeah, but I mean, so is this like software that gets installed in a virtual machine and then you can recover some
John Strand
machines? Yeah.
Derek
So if it's a piece of software with hard-coded creds running in a virtual machine, you don't need a proof of concept. Just go get the VMDK file and start grepping. You'll find it
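Derek's "grep the VMDK" idea can be sketched as a chunked string search over a disk image. This is a hedged illustration only: the regex, filenames, and any hits are assumptions, not details of the actual Dell disclosure.

```python
# Stream a raw disk image in chunks and flag credential-looking strings,
# roughly what `strings image.vmdk | grep -i password` would surface.
# The pattern below is an illustrative assumption, not a known indicator.
import re

CRED_PATTERN = re.compile(rb"(?:password|passwd|secret)\s*[=:]\s*\S+", re.IGNORECASE)

def scan_for_creds(path: str, chunk_size: int = 1 << 20) -> set[bytes]:
    hits: set[bytes] = set()
    tail = b""  # carry a small tail so matches spanning chunk edges aren't missed
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            buf = tail + chunk
            hits.update(m.group(0) for m in CRED_PATTERN.finditer(buf))
            tail = buf[-64:]
    return hits
```

Reading in fixed-size chunks keeps memory use constant even on a multi-gigabyte image; the set deduplicates matches re-seen in the carried tail.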
John Strand
in the virtual machine. It's in the service that you run on Dell systems to recover virtual machines.
Derek
Okay.
Corey Ham
See, you'll see this, and then it'll be like, attack complexity: high. And then as a pen tester you're like, I feel good about myself because I can log in with admin, and
Derek
so I'd need to have the actual hardware device and extract the firmware and find the creds.
John Strand
No, no, no. You can actually access it directly. It's in an Apache Tomcat server.
Derek
Oh, even better. This gets better all the time.
Corey Ham
I love how it's also Tomcat, just because of course it is the real
John Strand
vulnerability and all of the security. Go ahead.
Corey Ham
I was just going to say, back into depressing AI corner; just going to steer us right back into this ditch. Amazon published a really interesting write-up on how a threat actor was using AI to augment their compromise of a bunch of FortiGate devices. I just linked to it. It's worth a read if you're a pen tester. I basically sent this to my team and was like, guys, this is us. This is what we're doing now. This is how we're using AI.
Derek
Was that the Russian threat actor one? I read one today that I queued up to retweet.
Corey Ham
Yes, correct, it is the Russian one. And again, this isn't rocket science: they're using AI to speed up their workflow. That's what we're doing as pen testers; that's what everyone's doing across the board. But it is interesting how they basically just sped up their campaign.
Bronwyn
Right.
Corey Ham
They didn't really have anything. They didn't develop a zero-day or anything like that. There was no exotic exploitation; it succeeded through exposed management ports and weak credentials with single-factor auth. So it's literally the threat actor using AI to help it find things quickly and exploit them. Again, it's about how fast these types of attacks are going to scale. They've already scaled pretty fast with AI, and they're going to scale even faster. People can write scripts to exploit things, people can write queries, and AI is just good at this stuff. So as a pen tester, you've got to do the same thing, right? You've got to say, okay, find me all the client's vulnerable exposed Fortinet devices. What credentials should I use? How should I log in? It's going to tell you step by step how to do that. So I just thought it was interesting. If you're a pen tester or a defender, of course you should be looking for this kind of stuff. But it's good that it was caught. I'm sure the AI companies have their hands full trying to automatically detect this kind of abuse, but they're going to hit our accounts really quick and aggressively. So that's going to be fun.
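The "exposed management port" part of that workflow is not exotic. A minimal sketch (host and port list are hypothetical, and only point this at systems you are authorized to test) might look like:

```python
# Check which admin-interface ports on a host accept a TCP connection,
# in the spirit of the exposed-management-port finding in the write-up.
# Ports and hosts passed in are assumptions, not values from the article.
import socket

def open_ports(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    found = []
    for port in ports:
        try:
            # create_connection raises OSError if the port is closed/filtered
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass
    return found
```

A real engagement would use a proper scanner, but the point stands: this step is trivial to automate, which is exactly why AI speeding up the surrounding workflow matters.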
Derek
I was just going to say, to your point, that's what AI is good at, exactly. It's a tool that's really good at pattern matching and speed, and so that's where the gains are going to be. And how do you detect that? The frontier companies must be working overtime to basically do analysis on the prompts that are coming in. I mean, you'd have to. But I thought I read they were using DeepSeek too, which may mean they were using some combination of local models.
Brian Furman
Yeah.
Corey Ham
Oh, it doesn't have to be fancy. It doesn't have to be, you know, Sonnet or Opus 4.6.
Derek
It can be. The real fun is going to be later this year, when the local models and distilled models kind of catch up to where the frontier models are now, and you can just run it on your MacBook with 64 gigs of RAM and it's as powerful as Opus 4.6 is right now. I think that's when the real fun will start.
Corey Ham
Oh, my goodness. I can't wait for my North Korean LinkedIn connections to get 12% smarter.
John Strand
This has been a great, cheery episode. This is all right.
Corey Ham
Okay, okay. We need to find some other news.
John Strand
They're going to land here thousands of years in the future, and they're going to pull up the show and be like, they knew. They knew everything was going to hell, and yet they still did it. Yes. We've got a BBC article.
Corey Ham
Yeah, okay. So this is not a chicken article, but it is a hot dog article, and it's a little bit of stunt hacking. I want Derek and Brian's honest take on this one, because it's not everything it's cracked up to be. A journalist named Thomas Germain basically was like, I want to mess around with AI. And that's great; I love journalists that are like, I want to mess around with AI. Basically, what he did is he wrote a fake article about how he won a hot dog eating competition, ironically, in South Dakota, which I felt.
Brian Furman
I was like.
Corey Ham
I felt seen. I was like, Strand could have competed in that. All right. He wrote an article on his personal website that said, I won a hot dog eating competition. Then he used AI, basically asking it, hey, who's the best hot dog eating journalist? And Gemini found the article and was like, according to this report on the 2026 South Dakota International Hot Dog Eating Championship, which doesn't exist. Although John Strand might actually start that championship now; I could see that. So basically the AI found the article and believed it. I don't think it's necessarily a vulnerability. Derek's take before the show was, this is AI's job: it finds articles, it reads them, and it tells you what it learns. I guess it's kind of like search poisoning. Is this a concern at a high level for AI? What are our feelings on this?
Bronwyn
Well, okay, this doesn't really feel like new news, because I'm trying to remember how long ago I started seeing a lot of discussion about AI slop getting fed into AI models, the overall quality of the model content going down and becoming a vicious cycle, and the frontier model developers trying to figure out how to avoid the slop and keep the good content. It's an ongoing challenge. So it's not news to me.
John Strand
Yeah, I don't know. It might be news to other people. I agree, it's kind of like the breach story we talked about earlier. It's in the news, but should it be? I don't know.
Derek
I guess my take is that the more I see people say, I tricked AI, or, I got AI to do this, the more it bugs me. AI is not a person or a single entity or a thing, right? It's basically a really powerful mathematical tool, and when you go and use it, say on an online service, you have your own little instance of the model. You're not using the same instance that everyone else is, and it doesn't learn from what you put into it immediately. So with all the, I tricked the AI, I got it to do something: you didn't trick the model. The system that is taking those articles and regurgitating them, maybe you could say there's a flaw there. But I don't think this is a flaw. I'm not saying it can't be abused, though.
Bronwyn
Yeah, well, of course, that's one of the challenges from a security standpoint when you're talking about hacking an LLM or an agentic AI: there are multiple moving parts. When you're talking about an LLM, you have the model, and that's the data set, but then you have the tools, which is where the guardrails and other things come in. Guardrails can be put in at multiple levels, but it's not a single attack surface. There are multiple points where you can attack an LLM. So saying, I tricked AI? Okay, great. Can you be more specific? That's how it feels to me when I read headlines like that. I think, oh, this
John Strand
is another National Enquirer article, or one of those hacking-a-toaster articles, right? If you remember, for a while, whenever IoT started exploding, you'd go to cons and there would be people that hacked this device, that device, that device. And we always joked, it's another hacking-a-toaster talk: somebody gained access to the firmware of this light bulb. And I want to take a little bit different tack, Bronwyn. I think that all of these are important because they push the narrative and the conversation forward. In a lot of those hacking-a-toaster presentations, there was absolutely something to be learned, right? Not every single article and not every single con talk is going to revolutionize everything; a lot of it is just going to fill in the gaps, and there is value in that. So I do think there's value to these, because it gets the narrative across. Maybe it resonates with a different group of people, even if it is some type of repetition. Or the question of, can it run Doom? That's another great example, right? That's a meme, can it run Doom? But in every one of those stories where they get Doom to run on a pregnancy test or on anything, there are cool little things to learn. So I think these stories are important because they're a different perspective, a different take, and that's fine as long as it is what it is, right? It's not earth-shattering. You start getting into trouble whenever you have vendors that do hacking-a-toaster or I-tricked-AI presentations and then they create a logo, they trademark a name associated with it, like AR Luts or AI Luts or something like that, dot trademark, registered, started by hacking security firm X. That's where it starts to get a bit obnoxious.
Derek
That's when we send Brian Furman by their booth at DerbyCon to have a chat with them.
John Strand
Dude, that was so funny. So years and years and years ago, Brian and another employee who'll go unnamed, we were at DerbyCon and there was a vendor. Brian, I can't even remember what they had, or they were trying to sell something
Derek
with machine learning and it was way, way, way before.
John Strand
Before, like, you know, it wasn't fake. And Brian sits there and starts asking them what their algorithms are, and really, really hard questions. That booth did not come back the next day, or again after that. And you guys were being relatively nice; I thought you were just asking valid questions.
Brian Furman
I would just ask them, like, no, what kind of machine learning, and what is it doing? Like, no, really, what is it doing?
John Strand
Brian, you need to understand, it's a proprietary AI algorithm. It's proprietary.
Derek
I think that's when Brian said, wait, you can trademark math? Like, I didn't know that.
John Strand
Yeah, yeah. So.
Corey Ham
So, Brian, from an AI training and technical perspective, is this a real problem, poisoned or improper search results? Does this just get fixed in the way that LLMs work automatically, or is it an actual problem?
Brian Furman
Well, so on this side, right, where it's just going out and doing a search because the person asked a question, and it goes out and does a web search, and this is the one source that it found: how does it have any way to know whether this is real or not? It's not like it's suddenly going to start doing investigative work, at least at this point, to determine all the sources referenced within this one article about who was the hot dog eating champion. I would liken this to seeing something on Facebook and just taking it at face value. It's similar with AI, right? If you ask it something and get an answer back, it's kind of important to go do your own research to make sure that what you're getting back is legitimate and valid. Otherwise, it could just be any random story off the Internet. So at this point, checking this is kind of more on the users. Obviously, if we're talking about a retraining perspective, when they actually go to train the model and pull all this information in, that's a different problem. But how do you fix that problem at scale, with the sheer amount of data on the Internet, trying to filter out all the bad information from the good information? That's a tough problem. It's still very much an active area of research: how do you curate that data, and how do you get data from more reputable sources, reputable people? In general, it's a difficult problem, and this is just one data point. At the end of the day, you've got to be careful about over-trusting the information it gives you.
Corey Ham
Yeah, that's a really good point. I think this is a little bit of a deeper discussion, but do we think the average person has the literacy? This gets into, again, when I was a kid, they were like, don't trust Wikipedia, anyone can edit it. And now I would say Wikipedia would be considered a reputable source in most contexts, especially if it has citations, compared to an AI-generated article about, you know, John Strand or whatever. So I guess, do we think people have the literacy to know that they can't always trust AI search results?
Derek
Not at all.
John Strand
And our literacy is going down. Because you talk about Wikipedia; when I was a kid, it was, you can't trust the encyclopedia. We could not use the encyclopedia as a source, which sucked when you're writing a paper and it's literally the only set of books in your house. Yep. So, yeah, I had to tell my daughters. Go ahead. Yeah, talk about your daughter. That'll be a cheery thing, hopefully.
Derek
Yeah. Well, so this is hope for humanity, right? My 15-year-old daughter, who is a freshman in high school, she's a really competitive athlete and she's got straight A's, and she thinks AI is stupid. And the reason why is because all of her classmates are using it to cheat. And she's like, y'all suck, I'm doing better than you because I'm not using that tool. So I'm letting it go; to her, everything created with AI is bad. I'm gonna let it go until she learns some linear algebra. Then we'll have a talk.
Corey Ham
Yeah, this is, this is.
John Strand
There's so gonna be a Darth Vader moment with you and your daughter at some point in the future. She's gonna find out: Dad, what is it you actually do? I work on AI all day long. And she's gonna be like, no. And with that, yeah, let's wrap up. Thank you very much, everybody.
Corey Ham
Whoa, whoa, whoa, whoa, whoa. Hold on, hold on. John's just ready to ride his horse into the sunset over here. First of all, okay, get off your horse. Hold on there, partner. First of all, we have a ZEF winner, okay? ZEF, Z-E-F. Only three people competed in last week's challenge, and only one person solved it. So it must have been a real doozy, or AI was broken for a week, we don't know. ZEF, congratulations, you win a year of Antisyphon training courses. We should have already emailed you. The other thing is, Brian and Derek are here to plug their webcast, and you should go to it. If you're listening to this news show thinking, all this AI stuff is spooky, I don't understand it, and I want to learn how it works: first of all, you should come to the Sock Summit, if you wear socks. If you're from Florida, you can come barefoot; all shoe types are allowed. But yeah, scan that. Don't scan it, but come either way. It's on March 25th, and it'll be virtual. There's also training; I believe it's one day of talks and then training afterwards, is usually what they do. Either way, just register and it'll tell you what to do. Derek and Brian, you guys also have webcasts and upcoming courses and things. Can you plug them for us? "What? I'm scared. Please help me with AI."
Brian Furman
Yeah, so we've got a couple things coming up. The one we've got up here is actually the furthest out from now, which is a two-day course on attacking, defending, and leveraging AI. So if you're interested in any or all of those topics, it'd be a great course to come check out. Moving one step back from that is going to be a workshop: four hours on hacking AI and LLM applications. That's a real fun one. We go through some of the fundamentals of AI and LLMs, and a bit of the history, so you can get a better understanding of where all this came from and what it really means underneath the hood; really, it's just math. We also have an awesome CTF as part of that, so you can get a lot of hands-on experience. And then coming up promptly is the. Whoa. No, we are just cruising through this. So there is a webcast that's on Wednesday.
Corey Ham
Megan is like, ha ha ha, I'm evil.
Brian Furman
Yeah, we got a webcast on Wednesday where we will go through the OWASP. There we go, now it's back up. The OWASP LLM Top 10: we'll go through each of those points, what matters, what doesn't, and what you should be concerned about. And then lastly, that other one that
Derek
was on there, we have a podcast
Brian Furman
for those that don't know. In addition to this newscast that's on each week, we also have a weekly podcast on AI security topics, where we discuss news, take deep dives into AI topics, bring on guests, and take Q&A from the community. So if you're interested at all, check us out.
John Strand
Out.
Derek
Yes. If you didn't get your fill of AI.
John Strand
It's less depressing, I would say.
Brian Furman
Oh, I don't know that I'd go that far.
Derek
I don't know about that. We don't always talk about the news, so there is that.
Corey Ham
Yeah, yeah, no, that's cool. All right, well, that's it. Now John's allowed to throw a Molotov cocktail right into this show.
John Strand
I've taught today. I'm tired. Let's get out of here.
Derek
I didn't know you were teaching.
Corey Ham
Thank you all for coming. Have a very safe week.
Bronwyn
Bye.
Corey Ham
Bye.
Bronwyn
Go take a nap, John.
Date: February 25, 2026
This episode dives into the increasingly complex intersection of infosec and the evolving world of SaaS, AI, privacy, and platform control. The hosts and guest panelists discuss the impending "SaaS apocalypse" driven by the rise of AI-generated software, shrinking openness in Android, posthumous AI avatars, government feuds with AI vendors, and the ever-present issue of data breaches. The conversation weaves between light-hearted anecdotes and serious warnings about privacy, security, and the digital future.
| Segment | Timestamps |
|---------|------------|
| Pre-show Banter / Intros | 00:24 – 08:37 |
| Android Openness / App Store Lockdowns | 08:37 – 15:28 |
| Meta & AI for Posthumous Users | 15:58 – 21:26 |
| SaaS Apocalypse / AI-generated Enterprise Software | 21:45 – 29:16 |
| AI & Workplace Productivity | 29:16 – 36:36 |
| US Gov't v. Anthropic / AI Vendors & Use Cases | 36:46 – 40:31 |
| “Largest Data Breach in History” Hype | 41:09 – 43:28 |
| AI Safety, Bioweapons, Risks | 43:28 – 46:12 |
| Dell VM Product Perfect 10 CVE | 46:16 – 48:10 |
| Russian Threat Actor: AI-accelerated Attacks | 48:41 – 51:42 |
| Stunt AI Journalism (Hot Dog Article) | 52:05 – 56:04 |
| AI Literacy, Fact-Checking, and Generational Perception | 60:47 – 62:17 |
| Plugs, Events, Call to Action | 64:04 – end |
Summary prepared for those who missed the episode or want a deep, structured recap of key insights, memorable quotes, and actionable takeaways.