Is Mythos Preview Too Powerful for Public Release?
A
It's time for TWiT, This Week in Tech. Mike Elgan is here. Doc Rock and Jason Hiner will talk about the AI Anthropic has said was too dangerous to release. AI has been very, very good for Samsung, but not so good for CISA. And France is ditching Windows. Breaking news. All that and more coming up on TWiT. Podcasts you love from people you trust. This is TWiT. This is TWiT, This Week in Tech, episode 1079, recorded Sunday, April 12, 2026: Fans Only Fans. It's time for TWiT, This Week in Tech, the show where we cover the week's tech news. Hello, everybody. Good to see you, and really great to see our panel this week. Mike Elgan's visiting us from Tuscany. You dog. Lucky boy.
B
Oh, it's beautiful. It's beautiful here. Leo. Wish you were here.
A
Yeah, I wish I were there too. It's been pouring rain here, and I love Tuscany. You know who else is in a paradisiacal place on the other side of the earth? Mr. Doc Rock from Oahu, Honolulu, everyone.
C
Good to see you guys here. Actually, I'm just happy that it's pretending to stop raining. But then they told me today it's only going to stop raining for like a couple hours and then it's back at it.
A
How'd you do in the floods? Were you okay?
C
We're actually okay, because randomly our building just switched out all of our pump systems. Normally we would have probably flooded, but we literally just got that done probably a month before, so I was super stoked for that. But yeah, it's crazy. The mountain up on the North Shore, rocks are coming down. A whole piece of the highway went missing on Kam Highway today. It's kind of insane.
A
Oh, man. Well, take care. And to Louisville, Kentucky we go next: Mr. Jason Hiner, editor in chief of The Deep View at thedeepview.com, a great AI newsletter and website. Hi, Jason.
C
Hey.
D
Great to be here. Not from Tuscany, although part of my family's originally from Tuscany.
A
Oh, so there's that. Nice.
D
There's that.
A
You know what? I love Louisville. All three of those places I'd be happy to be in. In fact, a lot of people are coming to your town in a couple of weeks. The Derby, right?
D
Yes, the Derby it is.
A
You'd forgotten all about it. Do you go to the Kentucky Derby?
D
I do not. I mean, I have been once to cover it, because there was a funny story. In 2017, I covered it because the year before, AI had predicted the winners. This is pre-LLM days. AI had predicted the winners the year before, so now there was all this big buzz: is AI going to predict it again? So in 2017 we went there and we did a big story on it, when I was at TechRepublic at the time. That's the only time I went to the Derby. And it didn't pick any of the winners, none of the top three that year. So then they were like, okay, AI doesn't know anything, and they got off that train.
A
The hell with that. And half the population was thrilled, and the other half said, oh yeah? I just invested millions into it.
D
So AI was the only reason I went to the Derby.
A
Well, you know what? That's a good reason. I would go for the mint juleps, the Derby pie. And I hear there's a horse race, apparently.
D
Apparently.
B
Yeah.
D
Every once in a while. There's the hats.
A
I would go for the hats. And you are well prepared today with your...
D
The hats are off the hook at the Derby. The funny thing is, because I fly a lot, I've had a lot of times where I'm flying back in during Derby week. That's the worst time to fly into Louisville. The flights are all super expensive, for obvious reasons, because so many people come. But there's these giant hat boxes, you know, for the Derby hat.
A
Funny.
D
These huge hat boxes. They take up all of the room in the bin space in the airplanes. So yeah, you know, when it's Derby week, there'll be people pulling behind them a whole container that has their hat in it. Like some people pull their luggage; they've got their...
B
Ha.
D
You know, towing their hat on wheels. It's got a handle, and they're towing their hat, you know, behind them.
A
It's so big.
D
Yeah, it's the whole thing.
A
I love it. We already have a show title, and we haven't even gotten started this week. Anthropic did something, and I think there's some controversy around the whole thing. And since you all know a lot about AI: Mike does his own AI newsletter, Machine Society AI. Jason, of course, is the editor in chief of an AI newsletter, The Deep View. And Doc Rock, I just found out, has his own OpenClaw. So between
C
the three of us... Actually, I didn't want to say afraid. I just didn't want to touch OpenClaw, because I knew I was going to be busy releasing a new version of our software. I'm like, if I go down this rabbit hole, I ain't coming out. So it's just Claude everything, Claude Desktop. But now that that release is over, I'm like, oh, I have new things to dive into.
A
Actually, I did dive into the rabbit hole, so to speak. I bought an R1 to run mine. I have a fake OpenClaw. Don't tell the Rabbit: it thinks it's talking to OpenClaw, but I actually simulated OpenClaw on my Framework. This thing's kind of cool when you can attach it to your own AI instead of its own AI. It's actually a kind of nice little interface to that. And they just released...
C
Yeah, if I can use it for remote control, that would be totally cool too. So I'm charging it up and updating it.
A
Oh, you got one. Look at that. Get the new OS 2; they just put that out. It's really good. Not the same OS/2 as Mike Elgan's.
C
Wow. One of the first things I messed with on a computer back in the day was OS/2, because I thought it was going to be better than Windows.
A
Oh, Dvorak was such an OS/2 fan.
B
The first computer magazine I ever worked for was called OS/2 Magazine.
A
No.
D
Wow.
B
Oh, yeah, yeah.
A
By the way, our hats have arrived. They were a little behind us in the box, but, Jason, I'm a fan of yours. It's got horses, a horseshoe, a peacock. It's everything. Yeah.
D
Oh, wow.
C
Mike looks like the Monopoly dude, except
A
his feathers look like they have a bottle opener on them. So he's prepared. Perfect. And he's the only one in a morning coat, so that's good too. Thank you, Darren, for your instant AI response. Where was I? Oh, yes, Anthropic. So they announced a new model called Mythos. We kind of saw this coming, but they didn't release it to the public. They said it's too dangerous to release publicly.
B
That's right.
A
To which a number of people responded: you know, that's what OpenAI said when it had GPT-2. Oh, this is too good, we can't release it, it'll be dangerous. So some people speculate it's just good marketing. They did build something they call Project Glasswing, the idea being: we are going to give some very, very large companies (Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks), just those, access to it so that they can patch all their zero-day flaws. Because as soon as Mythos is released... it wasn't trained to be a cybersecurity expert, but Anthropic says it's so good at finding these flaws that they're afraid bad guys will immediately pounce on all your software. So they want to proactively give these organizations, and 40 more, access to it, plus usage credits, so they can patch their flaws before they release it to the public.
B
Yeah, there's a big.
A
Just great marketing.
D
Well, it is marketing for sure, but it sets a good precedent. If, before a new model is released, they do this, I think that's a pretty good precedent in general.
A
Right.
D
So that's good. I do think it's mostly marketing, because from what I understand, this thing is really powerful and they don't have the compute to put it out. If they put it out there and it's as powerful as they say and it does a bunch of new things, they don't necessarily have the compute yet to run it. Right? So they want to keep it in a small set of hands while they're spinning up the compute. Remember, they just signed some big deals: a deal with CoreWeave, deals with Google and Broadcom. They're trying to bring on more compute. And remember that they've had this big upswing since February, and they've been struggling. Claude has gone down multiple times. Right? Because...
A
And it's been. A lot of people complained. I think I, I'm one of them. That it's been kind of nerfed over the last few weeks. Just not as smart.
D
Yeah.
A
So I think that's the compute constraint.
D
That's the compute constraint. They know they have it, so they can't put out a new model that has all these new capabilities. Now people are going to use it for more things, and that's going to cause a whole lot of problems. I think that's what's behind this more. And if you're going to do it, I'm sure the marketing people were like, well, let's just say it's because it's
A
so powerful, so dangerous.
D
It's too powerful to put out there like it is good marketing. And marketing doesn't have to be 100% true as well, but it's also.
B
I think what's happening is that as OpenAI gets increasingly dinged for ethical lapses, Anthropic is doubling down on its ethical behavior. So basically what they're saying is that this thing, this Claude Mythos preview, found thousands of vulnerabilities, some of which have been in heavily tested open source environments for 30 years. And they found vulnerabilities that nobody else has ever found. And so one of the things they're concerned about is, in the open source
A
community,
B
if it's suddenly revealed that there are thousands of vulnerabilities, the volunteers who maintain the security on these systems would just be completely overwhelmed. So one of the things they're trying to do is figure out which are the biggest vulnerabilities and then come out with the solutions and patch them before they announce this to the world. I think, first of all, we should applaud any kind of responsible behavior from AI companies.
A
That's true, because they're known for their responsible behavior. So you're right.
B
Exactly. So that's a good marketing ploy.
A
That's just icing on.
D
Win, win. Win.
A
Win. Win.
E
Win.
A
Yeah. There's another issue, and I don't know what you think about this, Doc. The other issue is, okay, maybe they do release this a few months down the road publicly, but what if it costs $10,000 a month to use? What if, because of the expense of the compute that it uses, we end up with a situation where people who have money, big companies, wealthy people, will have access to better AI, more powerful tools, than the rest of us?
C
I thought long and hard about this because it came up in a conversation with some friends the other day and I'm like, you know, the same thing happens when I see a 488 GTB drive by.
A
Why don't I have that car?
C
No, I've loved them. I've driven one; it's an amazing car. I don't know if I'm gonna drop 450 on the whip. I think I kind of spent too much on the Beamer that I have now. But some people don't care. They would just, you know, ride a skateboard, ride a scooter, ride a bike, ride the bus. Other people will enjoy their Toyota Corolla, and there are some people that have that car and couldn't care less about having a more powerful thing. Because I am a speed freak, I actually would want a more powerful car. And do I feel bad that I can't afford that Ferrari? Yeah, but it ain't going to ruin my life. I think, yes, this is going to happen. Because as it stands right now, and you just told me something that I've been thinking about for about two months and keep putting off, I'm probably going to do it this week: changing my plan from 20 to 200.
A
Yep.
C
Because I know that, the way I'm using it right now, I would actually get more than $200 worth of value out of it, and it's worth it; I'd get that $200 back. The way that my business is personally set up, I will be perfectly fine with that. If it's $10,000 and other people just can't get it at all and there's no other alternative, maybe I can see where that would bother somebody. But that's no different than, you know, what we're going to pay for a Tuscan wine from Mike's neighborhood versus what he can get it for because he's there. You know what I mean? And when I go to Japan, you're
A
saying there's always been this.
C
Always been 100, right?
A
Yeah.
C
You know, that bag of rice that you and I are paying 40 bucks for is $3 in Japan, but we have to pay the import price of 40.
A
You got it for 40. I got a new one for 5 kilograms. Oh, you got a new one.
C
Okay, with your wife. But yeah, it's one of those things. If you're getting a fresh batch, you know, they call it shiboritate, like a fresh spring-picked, you know, Koshihikari.
A
Yo.
C
It costs grip. Yeah, it's $50 a pound, you know. And it's not the same if you're buying Uncle Ben's. It isn't the same thing.
A
No, that's a good point. And not everybody can afford it, but AI might be a little bit different because it gives you superpowers. Maybe this.
B
Yeah.
D
What if they only released mythos to the Pentagon and the government?
A
You have to think that some of this is Dario Amodei. The CEO of Anthropic is a little miffed, because he said, you can't use Anthropic's AI for spying on Americans, you can't use it for killing people. And Pete Hegseth in the Department of War said, well, that's not okay. And, you know, in a way I kind of think it's appropriate. You know, Boeing shouldn't be able to tell the Department of Defense, those bombers you bought, you can't use them to kill civilians. So it's appropriate that the government, that the Defense Department, should have the opportunity to say how they use the technology they buy. Nevertheless, the Department of War took a very harsh stance against it and declared Anthropic a... what do they call it? A security...
B
A supply chain.
A
Supply chain risk. And that meant nobody who does business with the DoD can do business with Anthropic. That's a lethal stroke at Anthropic. So I have to think some of this is Anthropic going, oh yeah? Well, we've got the best model ever. How do you feel about us now, buddy?
D
But I mean, you know, eventually administrations change and things change. I think my bigger concern is that some of the most powerful models only get to the government. And I did ask one of the AI executives about this, kind of offline, recently, and they acknowledged that there could be models that are more powerful that only end up, interestingly, being used by government.
A
Is that good or bad?
D
That's sort of what I'm asking. I sort of have mixed feelings as well.
A
Depends on the government, doesn't it?
D
It depends. You know, I mean, government is ultimately made up of people. So it depends on the people who are there, that have their hands on it, ultimately. And, you know, our gap in AI right now is not so much a capability gap as a little bit of an ethics gap. And I think that's kind of what Mike was getting at earlier.
A
Yeah.
B
And also, we can quibble with the specifics. I mean, as I recall, Anthropic's concerns were spying on Americans and using it for autonomous AI targeting through drones and stuff like that.
A
That's right.
B
And so, you know, okay, you could quibble with that and say, well, those are essentially okay. But there's a line somewhere for everyone.
A
Well, they're actually technically illegal. And the Department of Defense said, well, we're not going to ask it to do anything that's against the law, so they shouldn't be worried. But we've seen the Department of Defense do things that are against the law, and they're threatening things that are at least war crimes under international law, the Geneva Conventions. So this is where you go: well, gee, in theory it's better if the people decide how this stuff is used. But on the other hand, if the government is out of control, do we want it to have the best AIs? This is tough.
C
One of the things that I think is also quite interesting is: okay, whenever you're about to broach something that requires you to remember legal points, right, you have to go through a human's studied memory to make sure you're getting it right. Now, in business or government and things like that, what people would do is get all the books out, put them on the table, have the legal department comb through a bunch of precedents and say, okay, I think this is good, I think this is a gray area, I think this is black and white, let's just go for it. Now you can use the AI to help you get to that quicker, and it gives you the sources, and then you go and double-check, so you can cut down the length of time it takes to do your due diligence. However, if it's just making the decision by itself, that's terrible, right? So if you're using it to help you pull a target because it can do the research quicker, and then a human verifies it, that's better, right? And unfortunately, that's not what people are going to do. People are always going to go for the path of least resistance, like water. If it's getting it right four out of the last 10 times, I'm gonna assume it's always right, and now we're in trouble. If it's getting it right nine out of the last 10 times, I'm gonna assume it's right, and we're still in trouble, because this is a 10-out-of-10 thing; it can't be one off. And unfortunately, humans will never do that, because we see the stuff that gets put out by AI writing in professional advertisements now, and it's like, bro, you let that one slip. Nobody checked that.
A
Yeah.
B
This is one of the interesting things about AI in this era we're living in: almost every other company can decide in advance what kind of business they're in. Lockheed Martin can be a defense contractor, and Apple can have nothing to do with defense contracting. Almost every company is either in or out of military use in terms of battlefield products, right? AI is so universal and so flexible that it can be used for anything at all. Anything at all. It can run the accounting system at the Pentagon, or it can power autonomous drones. So we're in this weird situation where the companies are in a position of saying, well, we are choosing to be in Pentagon contracting or not as a matter of contract or policy, as opposed to the nature of the product. And so it's a weird kind of thing. And, since we're talking about Anthropic's marketing, I also believe it's a marketing thing: oh, we're the ethical company. But they probably also have to deal with employees, right? Employees nowadays will walk out or boycott or do whatever if companies, like Google for example, have contracts with the Pentagon, even non-lethal ones. So that's another concern these companies have to deal with. It's a completely unestablished norm that we're dealing with. Anthropic has one norm and OpenAI has another norm, and so far Anthropic's is the one that's winning out in the court of public opinion.
D
Well, I think we do have to give Anthropic some credit here, because if you look back through all the history of technology companies working with governments, and governments using these technologies to do things that the public doesn't agree with, what the technology companies throughout history have said is: we just make tools; how people use them is up to them. So IBM supplied Germany with technology during World War II; that's well documented. Modern companies like Microsoft and Google have been taken to task for supplying technologies that have been used, not just by our government but by other governments, to do things that are very anti-democratic, right? That could support some very unsavory things. And everyone has pretty much always said, with only a few exceptions, these are tools; we can't decide how they're used. I think what Anthropic has done has its pros and its cons, but the one thing it said is: we understand this technology better than the government does, because we're using it every day, and it's not ready to be used in some of these situations, or it could be used in ways that could be really, really negative, a net negative for humanity. And so we're not comfortable allowing our technology to be used for those things, and we're not going to go along with it. That's somewhat new and novel. I mean, we saw Google pulling out of China; that's a thing that's somewhat similar, right? But still, standing up to the US government, a US company standing up to the US government, it's pretty new for that to be a thing.
A
So a couple of last questions before we move on. We're all acting as if AI is the real deal at this point. The days of saying, oh, it's a parlor trick, it's just spicy autocorrect: we can agree now that's not the case. This is remarkable technology that is about to change things in unpredictable and dramatic ways. Is that fair? Is that correct?
B
That is correct. We are doing that.
A
I think we can agree we're not debating anymore whether AI is real. Now, this is the other question: is this AGI? Is this general intelligence? It's pretty darn good. We haven't tried Mythos, but Opus 4.6 is also pretty darn good.
C
Yeah. I don't think AGI is fully developed in anyone's head yet. There's a dream of it and there are ideas of what it should look like, but I think we won't really know what it looks like until it actually arrives. And one of the things... I think most of us probably don't pay much attention to the benchmarks anymore, because they don't really mean much. But when you look at the SWE-bench jump between 4.6 and Mythos, yo, that's some Lance Armstrong-level stuff.
A
To be fair, though, there's still some question, which in fact even Anthropic raised in its system card, about whether Mythos already knew the answers and was kind of cheating. Right?
C
This is also true. But think about it: last month, right, it seems like Anthropic was over here dropping new updates every two days. So I'm like, they were using this in-house.
A
That's a good point.
C
Coming out with, hey, we did this. And then Tuesday, hey, we did this. And it went crazy.
A
Yeah, yeah. That actually might be more proof than
B
anything. In the chat room, there's some demand for a definition of AGI. I think one good definition: Amazon Web Services and Wikipedia both define it as theoretical self-teaching software that matches or beats human thinking. That's super vague, and like Jason was saying, it will be and is being used as a marketing thing. But essentially, I think what we can say is that one good metric is that AGI is better than all people at one thing, and superintelligent AI is AI that's better than all people at all things.
A
And we're not there yet.
B
We're nowhere near that. But AGI... I mean, I think it depends on the thing that it's better at. Right? Yeah.
A
I think it may not be as good as Einstein at theoretical physics, but you know what? Calculators are better than I am at arithmetic. We've always had machines that are better than we are. A bicycle is better than me at running. So that isn't much of a bar.
E
Hi, this is Benito.
C
But.
E
So the problem here is that there are a lot of things that we say, better or worse, that are very subjective, that are not objective.
A
Well, that's true too. And that's why these may be interesting.
C
Yes.
A
Let me put it this way: it is a feeling. And I feel like when I'm interacting now with Opus 4.6... I'm not saying it's conscious. It's very good at doing the things it does. It's really good at it.
C
Yeah.
A
And so to me, there's a breakthrough. There's a bar of some kind it's risen to. AGI is a bad term.
B
There are also, you know, kinds of situations where it's clearly... I mean, I'll give you one example. We've all seen these amazing Chinese drone swarm shows, right, that are sort of replacing fireworks.
A
Okay.
B
If you got 5,000 people with remote controls, they couldn't do that.
A
No. Right.
B
So that's something that only that software can do.
A
And that's not necessarily AI. That could totally be deterministic software. Right?
B
Totally could. It totally could. But my point is that there are lots of applications where only AI can do it.
C
It.
A
Yes.
B
People couldn't.
A
Well, there are applications only a computer can do.
D
I think we're gonna find... I think AI, AGI, and superintelligence as terms are anachronisms. I think what we're going to find is that we're going to move past this, and the industry is kind of already moving past this. We had this goal for a while that we were going to have one model to rule them all. Whoever landed the smartest model that was good at everything, that was going to be the win. And the company that did that, that model was going to be able to train itself and train the next model, and that company was going to win. Really, the industry's moved past that now and has said, you know what, we're realizing this actually functions more like human intelligence. Think of the smartest person you know. Okay, I'm going to think of somebody I know, my uncle, who worked at NASA's Jet Propulsion Laboratory. I could go to him and ask him a lot of things about a lot of things, but I couldn't necessarily ask him his pick for the NCAA title next year, or the Kentucky Derby, or all of that.
C
Right.
D
So artificial intelligence is modeled after human intelligence, and it's going to function very similarly, and we are already seeing this. We have some models that are outstanding, like Claude at coding, and that's going to go way beyond what humans are capable of. And then there are now task-specific models, domain-specific models, that are really good at one task or one domain of intelligence, and they're going to be incredible at that, just like human experts are. I think the AGI thing is going to slowly fade away as the finish line for this, and it's going to become more about who can build the best models for specific things. That's really where I think we see the industry moving. And this idea of AI that's going to have consciousness, that's going to be, you know, incredible, one model to rule them all: I think that idea is going to slowly die.
B
I think that's exactly right, Jason. I think it's our own innocence that makes us think, oh, we're about to hit AGI and then we're going to hit superintelligence. It's like the Turing test, right? It was, oh, some computer is going to pass the Turing test. Now it's a banality that everything can pass a Turing test, and it turns out the Turing test was a nothing burger. It's not even a good test, right? There was a huge craze in the 1920s of the mechanical man, where they would go to state fairs with this thing that had a voice, courtesy of a record player inside, and it smoked cigarettes and all this stuff. And all the farmers and stuff who saw that said, oh, we're two or three years away from people being replaced by these mechanical men, right? Well, no.
C
I wouldn't even say that today.
A
Yeah. I mean, you watch one of these things load a dishwasher and it's like, get out of my way, you silly...
C
Even all of what Boston Dynamics does. Every time I go to corporate, I drive right past there, and I see it. I'm like, this is really cool. As a nerd, I enjoy it. We grew up watching robot shows, but it's still a long way off. And I think the funny thing is, the way we feel about it is a little bit different from the people who never grew up watching all the movies. See, we've had the opportunity. Mike just nailed it so tightly in my brain that I can explain this better now. Why are you not scared? All my people ask me that: why are you not scared of this? And guys, you gotta remember, we grew up watching all the things that were supposed to happen in the next couple of years, and none of them has happened, because the computers aren't as dope as you think they are. Or when people complain about the algorithm for YouTube, I'm like, programmatically, that can't work. The reason you believe this myth is that you don't understand enough computer science to understand they can't code what you think it's doing. It doesn't know how to pick you out of billions of people and take your video and push it down so nobody sees it. No one's watching the video because the video sucks; you just don't want to believe that. So it's easier to think that YouTube is stopping somebody from seeing your video. So, Mike, you nailed it. We've seen all the movies. We've been through the Asimovs, we've been through, you know, Star Wars and Star Trek and all these other things, and we saw what it's supposed to look like. I think that's why some of us think about AGI the way we do, and other people think about it differently, because they haven't had a long enough time on the planet to have seen that. Yeah, this is not what you think, and we're still a long way off.
A
We're gonna take a break. This is a good panel to discuss this stuff with. I think we do have to talk about AI, because we're old. We've been here, we've seen it, we've seen it all.
B
And we're human, damn it.
A
And we're human. As far as we know, we're human. You know, it's not going to be long before it won't be really that easy to tell. I mean, it almost is there already. That scares me a little bit too. Mike Elgan is here. Machine Society AI is a great newsletter; you should read it, absolutely. We'll talk about his amazing tours that he does with Amira Elgan, his wife, all over the world, at gastronomad.net. It's great to have you, Mike, from Tuscany. From Louisville, Kentucky, Jason Hiner, longtime ZDNET stalwart, formerly editor in chief there; he's now editor in chief at The Deep View. He's taken the AI plunge: thedeepview.com. I think we've all seen the writing on the wall, right? This is the next big thing. I mean, I've seen people say, and I don't disagree, that it's as big as or bigger than the Industrial Revolution. This is a huge leap for mankind, for better or worse. There were bad things about the Industrial Revolution, and nobody can deny that. Also here, Doc Rock, the doctor of rock. He is director of strategic partnerships at Ecamm, good friend, YouTuber, @docrock on YouTube. Great to see you as well, in the purpleness of the grotto
B
there
A
was a San Francisco radio guy who did a radio show. He was in a normal studio, but he always described it as the purpleness of the grotto late at night. Al "Jazzbo" Collins. And he had an owl he said he would talk to, and he created such a word picture. I still see him in the purpleness of the grotto. But you actually are in the purpleness of the grotto.
C
Yeah, it's an homage to Alzheimer's awareness, because of my father-in-law, who's no longer with us, and my mother, who is going into the deeper stages now. So I just want to remind people that all this stuff we're talking about, I love it and I'm in it and I'm a hardcore nerd. But what I want AGI to do is solve Alzheimer's. Yeah. And then I'm cool. Because out of us on the panel, mathematically, three of us will have someone in our lives, a partner, whoever, who gets dementia slash Alzheimer's.
A
So it's just. I already do. My mom and my dad both.
C
Oh.
A
So both 93.
C
Yeah, there you go.
A
Both Alzheimer's.
D
I had a grandmother too. Just passed away.
A
Yep.
C
Yep. Yeah. So that's.
A
I figure if my mom and my dad both have it, I'm on the road to the purple. Not necessarily, but.
C
That's why I keep my brain working so hard. I'm like, please. Me too.
A
Exactly. It's also why I'm working on my Obi Wan, because he's going to take over at some point for my memory and my mind. Thank you, all three of you, for being here. It's great to have you. This episode of This Week in Tech is brought to you by our great friends at Bitwarden, the trusted leader in passwords, passkeys, and secrets management, with over 10 million users across 180 countries and more than 50,000 businesses. I saw Bitwarden at RSAC. It was really exciting. Consistently ranked number one in user satisfaction on both G2 and SoftwareReviews. And they are always adding features that businesses need and individuals need. For instance, Bitwarden Access Intelligence. This is for enterprise. There's always this gap: what are your employees doing with these credentials you've given them? With Bitwarden Access Intelligence, you can identify weak, reused, or exposed credentials and, more importantly, take action immediately. And then there's the vault health alerts and the password coaching, which surface risks to individual users in real time so they can fix the issues on the spot, turning one of the most common causes of breaches into something visible and fixable. When they were at RSAC, one of the things they announced that I was so excited to hear about is their new Agent Access SDK. It's a powerful way for developers and teams to securely integrate controlled credential access into applications, automation, and AI agents. It enables programmatic, just-in-time access to vault-stored credentials without exposing sensitive data, supporting secure use with modern development environments. But you know what I love best about it? They're open sourcing it. They're encouraging every other password company to adopt it, because it's a standard they want the world to adopt.
Don't worry, it doesn't incorporate any AI functionality into the Bitwarden solution. And it doesn't grant AI systems persistent or unrestricted access to your private, secure vault data. In fact, that's the whole point. The Agent Access SDK is a separate open source development toolkit designed to enforce secure, scoped credential access for teams that leverage AI agents in their workflows. It's brand new, an early alpha release, and it's available now for you to test. The Agent Access SDK is a secure framework for how agents request, receive, and use credentials. It's a brilliant model for safe credential interaction in agent-driven systems. It really is awesome. Bitwarden also supports passkeys. In fact, this is new: you can now use it to log into Windows 11 with your passkey, securely unlocking devices at the OS level. It's much more secure than anything else. Bitwarden's working with Microsoft on native passkey support while extending SSO to automatically log users into more apps, making credential management across devices more seamless than ever. And that's what we all need. And here's another thing. It's not just for enterprise. For us nerds, if you want a lightweight option, Bitwarden offers a self-hosted password manager. They call it Bitwarden Lite. Self-hosted, for home labs, personal projects, quick deployments with minimal overhead. This company is open source first, and they care about their users. Their open source code is regularly audited by third-party experts. That means SOC 2 Type 2, GDPR, HIPAA, CCPA, and ISO 27001:2022 standards. It's the best. It's what I use, what I recommend. Get started today with a free Bitwarden trial of Teams or Enterprise plans. Or get started for free forever as an individual user: unlimited passwords, passkeys, secrets, across all devices. That's bitwarden.com/twit. We thank them so much for their support of This Week in
Tech. So there was an article in the New Yorker, a very, very long article by Ronan Farrow and Andrew, I can't remember his last name. It was essentially a takedown of Sam Altman. Why? It asks, can we trust this guy who's in charge of OpenAI, one of the most consequential technologies ever? They didn't accuse him of anything illegal, as far as I could tell. I mean, maybe the SEC needs to investigate some of this stuff, but there's not been an investigation outside of OpenAI. Some people took it pretty seriously. Sam Altman got firebombed at 3:45 in the morning a couple of nights ago. Thankfully, he says in his blog post, it bounced off the house. No one got hurt. But not only was he there, his family was there. He apologizes. In the blog post he says, I haven't always been the best steward of OpenAI. I haven't always been the best human being. But there are some things I'm proud of, as well as some mistakes. I don't know. Where do we stand on Sam Altman and OpenAI?
B
So I subscribe to the New Yorker and read the whole article, and it's a long, long article.
A
I did too and devoted a whole morning to it.
B
Yeah. Essentially, to me, the biggest transgression described in there. It was a very well resourced article, and of course Ronan: 18 months, 100 interviews.
A
Yeah, yeah.
B
Is that basically he spent the first few years of OpenAI spinning a tale to prospective AI engineers about their sort of public benefit corporation type mission, where they were going to make AI benefit mankind. They even had a thing in their charter where if somebody else reached AGI before they would, they'd throw all their resources behind that other company to help them, that sort of thing. And according to the article, they accused him of using that sort of message to get engineers to work for OpenAI at lower salaries than they could get elsewhere. And then when they went to investors, they had the opposite story: we're all about profit, we're going to take over the world, we're going to do all this stuff, please invest in us. So he basically had a story telling each person whatever they wanted to hear for the maximum benefit of the company. And the way that Altman characterizes this in his blog post is, he says, I'm not proud of being conflict averse, which has caused great pain for me and OpenAI. So he's accused of lying, and Sam Altman is basically defending himself by saying he's conflict averse.
A
In other words, they accused him of that too. I mean, they said that was the root cause of some of it: yes, he would lie to people just because he didn't want a conflict. Right. Isn't that what you were just talking about?
B
Exactly. So when it gets down to the money, getting people at lower salaries and then getting more investment by telling opposite stories, that to me was the biggest transgression. Again, not illegal, probably, but kind of a Theranos type of slimy approach.
A
It's a little slimy. Well, now, Theranos was illegal, and she went to jail for deceiving the investors.
D
So, yeah, you know, I think first of all, with the firebombing, we don't know that it's connected to the New Yorker story, so we should be thoughtful about that. Hopefully they find, you know, whoever did it, and whatever anybody thinks about Sam Altman, there's just no excuse for that. I just hated to see them firebomb his house, or try to firebomb his house. Thankfully, nothing bad happened. But I just think that this narrative was a little opportunistic, because I think that Sam Altman has long been accused of sort of talking out of both sides of his mouth.
A
Sometimes. You don't raise $122 billion, the largest raise in history, which was just last month, without being a salesman.
D
That's right. And salespeople, right, know how to talk to some people this way and other people that way. But I think the reality of what happened with OpenAI is that they originally started it because they were afraid of Google controlling the most powerful technology in the world. They wanted to offer a counterpoint to Google, because they had concerns about Google's own sort of ethical center. Right. And so they were like, let's create something that is a little bit more altruistic. And that's where it started. And then over time, OpenAI became a much more important company, and the opportunity became much bigger than I think they even originally considered, at least commercially. ChatGPT became so popular in a way that I think even surprised them. I mean, they've said this clearly; there's plenty of that. And then they were like, okay, we could turn this into something way bigger as a public company. That's why they had to redo the whole deal with Microsoft and all of that. So I just think the New Yorker narrative is maybe a little too clean. I don't know that it exactly happened that way. Maybe in retrospect, right, things become a lot clearer. But I think what we know is that Altman is opportunistic. Everybody's always known that. I do think he's not quite as nefarious as he's painted in that article. Being opportunistic and being nefarious is a continuum, maybe. And so we'll see. But I think that the company now knows that they have this incredible opportunity, and yet they also know that they've been losing the narrative over the last six months, right? Anthropic has become the good guys of AI, and they have been painted as the bad guys.
And they're very much conscious of that, and they're trying to think, okay, what do we need to do? Because they need the support of the public in order to pursue this mission that they want. They're ultimately going to need a lot of people to have confidence in who they are.
A
And they're also going public this year, right?
D
That's right.
A
So they need to win over.
D
Yeah, they need to win over investors and they need to win over the public in a way that they aren't right now. And I think they're really grappling with who they are and what that is.
A
Do you think we could trust Dario Amodei, the CEO of Anthropic? You know, it's like that with everyone. I mean, look at Steve Jobs, look at Bill Gates, look at Jack Welch. I could go on and on.
B
The difference is, shouldn't we have a higher level of ethics for people controlling companies that could terminate mankind?
A
We should have a higher level of ethics for people controlling companies, period.
B
Yes, we should. Shouldn't there even be a higher bar?
A
Well, and that's what the. So this is the article by Ronan Farrow and, let me get his name right, Andrew Marantz. That's the headline: Sam Altman May Control Our Future. Can He Be Trusted? And if you read the article, the answer to that is obviously no. No one should trust Sam Altman. Can you trust OpenAI, I guess, is the next question. Does Sam Altman reflect OpenAI? Because as you pointed out earlier with the Anthropic discussion, Mike, the engineers have a lot of weight here. It's not Sam Altman's company exactly, but
B
the lead engineers who were initially brought in, two guys, I'm spacing on their names, but they had very high bars for ethics for the company, and they were very, very concerned. He sort of strung them along, saying, yeah, yeah, that's what we're going to do, and then he would go and do other things when they weren't in the room. And so I think to a certain extent, many of these AI researchers and engineers we can and should trust, and those are the kinds of people who should be in charge of the decisions around AI safety. Rather than this kind of weaselly salesman.
A
Right.
D
Did you all see the document that OpenAI published this week? It sort of went under the radar, but it's called Industrial Policy for the Intelligence Age: Ideas to Keep People First. It's this policy document, a policy blueprint.
A
Yeah.
D
So I got to talk to the lead researcher on this, Adrian Echo Fe; this was on the Deep View. And the interesting thing is, again, I think this is them trying to say, look, there are a lot of opinions inside OpenAI about how this could go, this being the AI revolution unfolding and bringing a lot of things. About 30 to 35 researchers and their policy team worked on this document, and interestingly, it's divided into two parts, essentially: one about the economics, and one about building a resilient society, really about keeping AI safe and under human control. And in this document the researchers unfold a bunch of ideas, and I thought of some of them as not radical, but revolutionary in their scope. So they talked about creating a public wealth fund, the government, the US government in this case, investing in these AI companies as they go public and then distributing the benefits to the populace. Now, this is a very controversial idea. Sort of like how in Alaska, if you're an Alaska citizen, everybody benefits from the oil there. It's sort of like that model.
A
I like that, I like that model. I think that's good.
D
Yeah. So, so, and then they talked about
A
we have a thing, we have a system to do that. It's called the tax system.
D
So they get to the tax system too. It's interesting: they had AI-driven corporate gains and automated labor. So basically, companies that are replacing human workers and earning higher profits should pay higher taxes. Right. If they're displacing workers.
A
Maybe those taxes should be dedicated to supporting the people they've replaced.
D
That's exactly the thing in there, right? No, these are just ideas. Right. They said this is not a final thing, this is like a thought starter.
A
This is the problem with that article though. Now everything that comes out of OpenAI I'm going to see through that lens of oh, is this just more slimy
D
propaganda, or is it virtue signaling? Right. This is where, when we were talking about it at the Deep View, Nat Rubio-Licht, our senior reporter, and I were sort of debating the story, and this is one of the things we were thinking about. But there are a couple of other things that were interesting. The last two are the 32-hour, four-day work week, which is: if AI is making workers more productive, let workers get some of the benefits of it, and not just have it all go to the company's bottom line. Right, a reasonable idea. And then the last one was a right to AI, similar to broadband and electricity, where we have subsidized universal access so that not just the people with resources end up benefiting from the technologies. This gets to the earlier thing we were talking about. So I bring all of that up just to say that, even to the point we were just on, even within OpenAI there's some real diversity of approaches to this larger problem. And I'm sure it's the same at Anthropic and Google as well. And what we want is more people with a seat at the table. So having OpenAI be willing to release this thing into the world, support the researchers on their team putting this together, and release it out to the world is something that, I mean, look, they're looking to change the narrative about what's going on and how people think of them, so it has a lot of benefits for them. But I also think we should still have conversations like this. And what Altman said about this document is that, look, he's been through multiple transitions. We know he's been through mobile and social and all these. And he says the more time we have to debate these things, the more time the public has to debate them before they become urgent, an emergency, is a good thing. And here's the one part that scared me about it.
The last thing is, Altman told this story in there about pre-pandemic, and them realizing, because of the data they had, that the pandemic was coming and it was going to be pretty bad. And he talked about these more powerful models that are about to be released as sort of like that. And that gave me pause: he's a little scared, right? He's a little scared that these new models are coming and they could cause some problems. And before they get out there, let's at least get the conversation going. Because they talked about what we need being a New Deal-style set of things for AI, a New Deal for AI. And we're so far from that right now. In the US, the conversation around AI is so rudimentary, and so early, that we're so far from that. And so I think we need more of these kinds of things. And that's why shows like this are great, right? Where we can help get some of those discussions started.
B
It's really a shame how far behind we are. And the country we're far behind is China, which has banned, you know, relationship apps for teens, has banned all kinds of things like that, some of the dangerous things, and they're still full speed ahead on competing with the US in AI. And also, all these great ideas about how to redistribute the power dynamic that comes from AI, and the money and all that kind of stuff: those are great ideas, and unfortunately they seem unlikely given the current politics. We should be taxing high fructose corn syrup and subsidizing fresh fruits and vegetables, but instead we subsidize high fructose corn syrup. There's a million things like that where we're doing the opposite of what we should be doing for the common good, for the public benefit. And so we just need a better politics is what we need.
A
Yeah. Good luck. Yeah.
C
The one thing that was interesting about the concept of taxing the companies that are, you know, sort of using AI to automate their process: well, if they're smart, you have your agent go out and find you every possible loophole, which is way better than a good shyster finding you one.
A
It's true. They're very good at that.
C
Right? So, I mean, yes, it's a brilliant idea, and I think it should happen that way, but trust you me, they're going to figure out where they can find some holes, and how they can put money in certain places and have it feed. Like, there was a thing before where Amazon had to come in and basically eliminate the process of using gift cards, because they were untaxable, because the money doesn't really exist. And when it happened, I remember it messed up a whole bunch of people who were basically using gift cards as a loophole. And it seems so weird: the people that don't know about it have no idea what they missed. But the people that knew about it were buying literally hundreds of thousands of dollars worth of gift cards and never paying taxes on it, because the money was never real, so to speak.
A
That's. Wait a minute. So you could convert your money into Amazon gift cards, spend those like money,
C
Amazon, Costco, gasoline, Ruth Chris Steakhouse.
A
But why isn't it taxable? I mean, that money was income to begin with. I don't understand how that gets untaxed.
C
Yeah, it was a weird thing. This was probably. It was middle 2000s, and then they were like, no. So you couldn't convert.
A
I do remember that.
B
Yeah.
C
So, okay, so for instance.
A
I gotta find me some good loopholes.
C
So here's the thing. I would get.
A
I'm an idiot. I pay my taxes.
C
Right. I would get a lot of Amazon affiliate income because I made a video about changing an SSD in a Mac and some famous musician got a hold of it and it saved his concert and he tweeted about it.
A
So you take it in gift cards instead of.
C
Correct. Correct. So.
A
And then it's not income.
C
Correct. So what happened? He. He put the tweet out. All of a sudden, my Amazon affiliate revenue goes up into like eight, nine grand. And then I would just turn that
A
into gift cards, say, don't give me any cash.
C
And now they don't want to do that. They don't let you do it.
A
Like, funny money you can't tax.
C
You could use Amazon gift cards, but you couldn't turn it into like, you know, buy an Xbox or buy some plane tickets or other stuff.
A
One last thing before we move on. OpenAI is backing a bill, according to Wired, an Illinois bill that would limit when AI labs can be held liable, even in cases where their products cause mass death or financial disasters. Yeah, I bet OpenAI is backing that.
C
I would like that, though, because where's this bill for the sugar people? Like, sugar is killing a lot of people faster.
A
Or the gambling concerns. I mean, there's a lot of businesses in this country that are not good for children and other living things, but we don't ban them.
B
You know, it's funny about this.
C
Kalshi. I can never say this stupid name.
A
Kalshi. Yeah.
C
Like, yeah, come on, dudes. The minute we start getting betting companies tagged onto our uniforms like they do in the UK, we've definitely changed the thing. So again, we're looking. You know that thing, Leo, when the guy used to come around Comdex and he would always do, like, sleight of hand? He'd be talking to you, shake your hand, and next thing you know, he's holding your watch.
A
Yeah.
C
He would always walk around all of the computer shows. Yeah, I forget his name. But he was a Vegas resident who was basically trying to drum up people to come to his show.
A
Right.
C
That's what we're doing.
A
We're yelling. You know who he is. And we're like, hey, I want my watch back.
C
Okay, dude, all the other people are still doing the crazy stuff while we're yelling at AI. Everyone's distracted by AI. Let's go change some drug policies or, you know, get another.
A
Everybody's looking at Sam Altman. Meanwhile, Elon's getting trillions of dollars in government funding.
C
Yes, it's so.
B
But back to this. Protecting the, think of the AI companies, protecting them from critical harms. SB 343444. One of the funny spins about this is that it would essentially retain the illegality of murder for humans, but make it legal for AIs.
A
And it's also really in response to the liability that ChatGPT and others have for the suicides of people's children.
B
But also they're. Yeah, exactly. And if somebody creates a bioweapon using AI or something, they don't want to be held liable for any of that stuff.
A
A little more proximate.
C
Yeah.
B
When they're looking ahead to the future of agentic AI, they don't want to be held liable if the AI just wakes up one morning and says, hey, I think I'm going to go cause some mayhem.
A
Right.
B
But I think, you know, we need to be careful about these blanket protections. I think there has to be a lot of pressure on companies to make sure those kinds of things don't happen.
A
It's like another Amazon gift card loophole, except with murder.
D
Dario Amodei said, after the Pentagon controversy, he said this on CBS News, in the first interview he did after it, I believe, that the company was not necessarily against these machines or AIs being able to have lethal capabilities. What he was against was that the technology was not good enough in its current form to be reliable at it. And I think that's a really important distinction to make. And that's not to, you know, I gave them some credit earlier for doing what they did, but I think people are running around saying they're blanket against it. They're not, actually. Their thing was that these models are not good enough, not accurate enough, not reliable enough in their current form to do it, and they didn't want to be responsible.
B
And the Pentagon isn't reliable enough in its current form either. And I think they suspect that if they used Anthropic to do some weapons thing, and the weapon went and did some horrible act, right, they know the Pentagon would blame Anthropic.
A
Well, don't we think Palantir was involved in the targeting of that children's school, the girls' school that was destroyed in the first bombing in Iran as the war began? Don't we think that? There's some debate over how that was targeted.
B
That story is out there. Yes.
A
Yeah. And we know Palantir uses Anthropic and is widely used for military purposes. Although it's used for many things, it's also used for choosing targets, apparently, in the
C
thing that's kind of hard to wrap a head around, because we've gone through this with, you know, gun manufacturers. Right. And then even if you go to, like, Atwood, Samson, Raven Knox, these are US rope companies, bro. No one's ever sued a rope company for the amount of suicide, or the stuff my people went through a couple years back. Like, we can't go back and sue the rope companies. It was the idiots wielding the ropes, the people.
A
Yeah.
C
You know what I'm saying? And like, if I thought I could do a class action lawsuit against Raven or Samson and US Rope, I would have done it already.
A
Right.
C
You know what I mean? So it's such a weird thing, but it just looks weird when they try to push for it. You know what I mean?
A
I think that's the point. No matter what, even if AI does something like that, it's going to be a human behind it ultimately. Right. There's always going to be a human behind it, in the same way
D
that that school that was hit, which is horrible, right, it's reprehensible. Let's also remember that that government built that elementary school right next to their nuclear facility for a reason: because they didn't want people to aim at it.
A
They were worried it would.
D
They were using them as. They were using children as a human shield. Right.
A
So it's a complex story. There's some debate over whether it was, you know, old information that was the problem. There's a lot of debate over it. So I don't necessarily cast aspersions on Anthropic or Palantir, but that's part of the discussion that was in the air, and that's one of the reasons, I think, for sure, that debate happened. Let's take a break, come back with more. That's enough AI. Enough.
D
Enough.
A
I know. I just hear people in my head going, enough AI. Or maybe that's my AI, I don't know. Could be. Before the show, we were having a fine conversation with Obi Wan over my Rabbit. Great panel. Good to have you: Jason Heiner, Doc Rock, Mike Elgin, dear friends all. It's so nice to see you. Nice to see all of you too. Thank you for joining us for TWiT this week. Our show this week is brought to you by ZipRecruiter. Did you know the average employer has to sort through roughly 250 resumes per job opening? That's the average. Talk about time consuming. Talk about a waste of time. Well, if you're hiring, here's some good news: you can now review all of those resumes and applications faster, thanks to ZipRecruiter. ZipRecruiter has a new feature that instantly shows you the most interested, qualified candidates first. And today you can try it for free at ZipRecruiter.com/twit. ZipRecruiter's powerful matching technology finds qualified candidates, and with ZipRecruiter's new feature, qualified candidates who are very interested in your job show up at the top of your list. You can also get a feel for their personality: candidates can tell you in their own words why they're interested in your job. No wonder ZipRecruiter is the number one rated hiring site, based on G2. Cut through the standard and get to the standouts with ZipRecruiter. Four out of five employers who post on ZipRecruiter get a quality candidate within the first day. And now you can try it for free at ZipRecruiter.com/twit. That's ZipRecruiter.com/twit. Meet your match on ZipRecruiter. ZipRecruiter.com/twit. We thank them so much for their support of this week in tech. It is good, I should say, to be the provider, just as it was good for Levi's to be the company that made the blue jeans for the gold miners. Nvidia has done very well. Samsung, which has had financial issues in the past few years, says they had an eightfold jump in profit this quarter because of AI chip demand.
They do make chips, and AI has been very, very good for them.
B
$37.92 billion in profit for the first quarter.
A
Yikes.
B
It's good work.
E
Samsung's a big chunk of the Korean economy, right?
A
Oh, yeah, a huge chunk of it. Yeah. You know who else is doing all right? Not xAI. SpaceX, I remember, last year had, I think, $8 billion in profit. But remember, they merged, or Elon merged, xAI with SpaceX: xAI, his AI company, which is a money loser, with SpaceX, which is a money maker. And as a result, the resulting beast with two backs had a $5 billion loss last year, 18.5 billion in revenue, but they lost $5 billion. Not SpaceX, really. I think it was xAI.
B
Yep.
A
xAI. Well, SpaceX, because it's all called SpaceX now, spent heavily, this is according to The Information, on chips and data centers for xAI, with $13 billion in capital expenditures. That's 50% more than they spent on rockets and satellites combined. So we know what business they're in now. Speaking of which, by the way, congratulations to NASA. A real success with Artemis. And I think we were all watching the whole mission, I don't know about you, I was watching, with just joy in my heart and fascination. We needed some good news. I know there's probably no real reason we should be going to the Moon or Mars, but it sure is inspiring to watch people get together, people from all over the world, and accomplish something so remarkable. It just made me feel good.
B
Yeah, I mean, what's the reason to do anything? We want to achieve things. It's a fantastic thing. And I also think this whole event showed where we're at in terms of prioritizing bad news over good. The fact that it all went so well meant that most people, I mean, we're all nerds, and so we were super into it, but the public at large was kind of indifferent to the story because.
A
Is that true? Nobody was interested? You know, I just watched The Martian because I love Andy Weir's books. And I had just seen Project Hail Mary, and Lisa and I had a little friendly debate: which is better, The Martian or Project Hail Mary? And having just seen Project Hail Mary, I said, I think it's as good as The Martian. We watched The Martian last night. No, it's not.
B
The Martian's better.
A
Brilliant. And there's a moment in The Martian, you know, spoiler. Can I spoil a movie that's 10 years old now? I don't think so. Where, during the final bit of the mission to rescue the Martian, everybody around the world is standing in Times Square watching the TV screens and so on. I remember that happened in 1969 when we walked on the moon. I don't know if that happened for Artemis. Actually, I know it didn't. But there was interest. Or am I wrong? Maybe it was just us nerds.
B
There was interest, but there wasn't nearly enough interest. There was a lot of interest around the quality of the photos that they were sending back. That was pretty amazing.
E
But in terms of volume, though, right? In terms of volume, probably more people saw this than Apollo.
D
That's a good point. There's more people.
A
There's more people in the world.
B
Yeah.
D
There's, like, twice as many people, and more devices. Because there's.
A
The other thing is everybody's connected now, right? Much more so than they were in 1969. And by the way, the technology for photography and for transmitting those photographs back to Earth is so much better.
B
Yeah.
C
I will say, the thing for me as I'm watching the splashdown, and this is not intentionally morbid, but it's a little morbid: being from Hawaii, I was in Hawaii in '86 watching, you know, this come back, and remembering Challenger. I'm just sitting there watching it with a different level of paying attention.
A
Well, it's true. We've seen disaster. We know it's not safe. That is very risky.
C
So when I'm sitting there watching this, and my other half is on the couch too. And when the first drogue chutes come out, I'm like, okay, that's good. And then they cut away. She's like, what happened? I go, okay, now we're going to get three. We're going to get the big three. And she's like, what are you talking about? Just hold on, lady, stop talking. Because I'm legit squeezing. I'm having some sphincter tension, waiting for the actual touchdown.
A
That will not be the show title, I promise.
C
And she was young at the time, right? We're 11 years apart in age. And so in '86.
A
So she didn't remember it.
C
She would have been nine. So she knew it from school, but not the same way. You know, I'm sitting there looking at it at 19, a little bit different. But I remember just, like, a sense of release once you saw the water, you know, and then the boats all pulled in.
A
Yeah.
C
And then she was like, well, what are the other guys doing? Well, they're putting the outer raft around the outside. Because you've seen all these different boats moving around: one is the camera crew, one is, like, the safety team. And it was something dope about watching them walk out of the hatch. Then I felt like I could let go. And I just think a lot of people in Hawaii, I've talked to other friends, we all sat there and watched it with a different lens.
A
And the coverage really is much more detailed. Much better.
C
So much better now. Yeah.
A
Yeah.
E
This is the furthest we've ever thrown a rock into space, man. Like, that's the whole thing.
A
50 years.
B
And we went further than just.
E
This is the furthest we've ever thrown a rock.
B
Yeah. Furthest humans have ever.
C
People in it.
E
People in it.
A
Okay. Yeah. I think there will be a lot of people maybe even in Times Square watching the next moon landing. Right when somebody's getting out of Artemis.
D
Oh, yeah.
A
I don't know. Is that Artemis 4? I don't know which one that is. It's not the next one, I don't think. Then people will be paying attention. I hope people can see it as a victory for the world.
C
I think it's a good idea, if you think this was unnecessary or just a ploy to distract or whatever, to go listen to some of what Neil deGrasse Tyson says about why we should be doing this. Right. What's the case? He has a couple of different speeches, one at the 92nd Street Y, about a case for NASA and a case for research. And so, again, I'm with you. I just appreciated it, because when we were kids sitting on the floor in elementary school, watching the first couple. Sorry, Jason, I know you're not as old as us. But I just remember looking at that as a kid. And at the time I'm in Beltsville, so I'm not that far from one of the NASA facilities in Maryland. I just remember looking at that as a kid, like, oh, this is something we all look up to. So the amount of rocket toys we had. And then, you know, Evel Knievel had his little shuttle thing that he did a little trick in, trying to jump the canyon. That was our entire childhood. Lost in Space and Star Trek, people
A
trying to do amazing. Bizarre. Yeah.
C
Like, all of our toys were space oriented. We just knew that was going to be a thing. And then somebody just said, yeah, we're not going to spend money on that. And it just went out the window.
A
It's worth doing it. You're right. We can afford it.
B
I think it's also revealing about the mindset, where we wouldn't question for a second making money on Bitcoin or whatever, or we wouldn't question for a second Pentagon spending. The cost of the Artemis mission in total is about 100 days of the Iran conflict. And yeah, there are people questioning wars and stuff like that. But my point is, why is it so controversial that we do this great thing, to go into space, with the engineering behind it, just the incredible thing? It's like, why explore anything? Why understand the mating habits of, you know, koalas? It's like, otherwise we will not understand, you know, this. And it's cheap compared to so many other things.
A
Yeah.
B
So.
A
No, you're right. Thank you.
E
Exploration is like probably the deepest seated human endeavor. Like, this is the thing we do.
A
Yeah.
C
Facts.
A
I just, I feel like there's a lot of hype about how we're going to live on the moon or even crazier live on Mars. That seems like just fantasy. Sci fi, fantasy. But maybe not. I don't know. Are we going to be an interplanetary society? I don't think so.
C
I don't know. You think about the old school computer movies with the robots in it and the AI that all seemed like fantasy back then. We just spent an hour.
A
That's a good point. We're living in a fantasy world already. You're right.
C
Right?
A
You're right. When you're right, you're right.
C
I'm sure at some point in time Mike says, hey, me and the misses, we're moving to Tuscany. And a whole bunch of his friends was like, that's a fantasy. And Mike was like, watch my suitcase. See you. Bye. And then, you know, and we are
A
cutting, we are cutting budgets on things that probably shouldn't.
C
For instance, USAID still hurts.
A
Yeah. Well, what about CISA? Here we are in the middle of a war with Iran, who has very good hackers, and the new budget proposes cutting CISA's budget by $700 million. There's no director right now. The director they had was a problem, and they got rid of him, but they don't have any replacement. They fired many, many people from CISA, and now they want to cut their budget by $700 million for 2027. This is all a political thing, because the Trump administration says there was waste, but more that they were being weaponized politically, which they weren't. Their mission was to secure federal civilian networks and critical infrastructure against cyber attacks. And believe me, that's something we don't want to cut back on right now. It wasn't focused on censorship. That's only because Trump didn't like it when, in 2020, Chris Krebs, the director, said the 2020 election was the most secure election in history. Trump didn't like that, so now he's punishing them. But guess who gets punished: school safety programs. And we're facing, especially with AI, more dramatic cyber attacks than ever before. Yep.
B
Yeah, it's very personal with our president.
A
Yeah, it's too personal. Unfortunately, Congress stopped him last time. I'm bringing this up so you can write your Congress critter and say, you know what? This is a very important agency and we really need it, and now is not the time.
C
You know, it was wild to me. The last couple of weeks I've had a couple of flights, you know, during the whole TSA-people-working-for-free type of situation. And then I had a wild dawned-on-me moment: the whole reason we even have TSA was the World Trade Center attacks way back in the day. Yeah. And here we are going into an actual war situation, and then we're going to try to mess with the TSA guys? That made about as much sense as screen doors on a submarine to me. And then I saw one the other day which is even more wild. Oh, now we're going to go after the NFL for. What is that thing called? Not monopolistic practices, but, like, competition blocking. Antitrust.
A
Yeah. Because the NFL has an antitrust exemption, and that allows them to negotiate as a group.
C
But, you know, that's about him not being able to buy the Buffalo Bills, like, way back in freaking '92 or whatever.
A
I'm like, bro, it's another grudge, 100%.
C
I'm like, are you absolutely joking me? Because you're buddies with a whole bunch of these guys. But somewhere along the way somebody brought up him not being able to buy the Buffalo Bills and having to start a USFL team, and the next day we got the DOJ attacking the NFL. Not to say the NFL doesn't have issues, but, son.
A
Yeah, I'm not sure they should have an antitrust exemption.
C
There we go. But you know, the point was that
A
you can't have all these teams negotiating individually.
C
There we go.
D
Go.
A
That they, that they. Yeah.
C
It's crazy. Completely a grudge. It's so stupid.
B
Closer to home, Trump also cut CISA's protection against disinformation, firing 130 employees. And that's another one where you're like, wait a minute, is the anti-American disinformation situation getting better or worse? It's clearly getting worse.
A
And yet X said that they killed 800 million bot accounts last year.
B
And that's definitely not all of them. Russia.
A
No. Take a look at X. It's filled with it. Yep.
B
And it's for the same reason: because, in general, the Russian disinformation especially was in favor of Donald Trump's reelection. And therefore, you know, that wasn't
A
disinformation, that was good stuff.
B
Exactly. So it's just way too personal with the President for my taste. And I think this is the only explanation for these kinds of cuts. It just makes no sense.
A
Well, if you're thinking we're more secure than ever: I know a lot of you have used CPUID's tools, their CPU-Z program, their HWMonitor. We've recommended these tools for 30 years now for measuring the performance of your PC. The site was hijacked and infected with malware, and instead of downloading HWMonitor, you were getting malware. This happens again and again. There have been two supply chain attacks recently. One was a Python AI library on PyPI that was hijacked for 45 minutes, enough time for tens of thousands of people to download it. It was a credential stealer. And then Axios on npm, which is very widely used by, among other things, OpenClaw and Claude Code, and that was hijacked. It's not like we're suddenly in a world where there are no cyber threats anymore. And GTA 6's developer, Rockstar Games, was hacked. A ransomware group said, we're going to leak all of your customers' data if you don't give us a ransom. It was the Shiny Hunters. I don't know how much Rockstar knows about its customers. I guess they got credit card numbers. This is all from this week. This is ripped from the headlines.
E
I mean, Rockstar is one of the biggest game developers. They're one of the largest of all.
A
Oh, yeah. I mean, I just don't know what they have. They have your credit card. They might have a little more than that on you. But it's not just bad guys. The FBI has figured out a way to retrieve encrypted Signal messages. They're using iPhone notifications. So the first thing anybody watching this show should do is turn off your Signal notifications, because you notice when you get a Signal message, they come in in clear text. Well, Apple stores them, and those are preserved even when you delete the message. 404 Media says the FBI was able to recover deleted Signal messages by extracting data from the device's notifications.
B
Specifically, Signal has a feature where you can tell it to show you the message in the notification; that hands that data to the notification system. And this user had that feature turned on. You should turn that off.
A
Turn that off.
D
User.
A
I don't know who the bad guy was, but the point is the FBI is perfectly willing to do that and has the ability. So if you've been thinking, oh, I'm using Signal, I must be secure. I'll give you one more. ICE says, yeah, we're using that software created by the Israeli company Paragon, the one that gives you zero-click takeovers of people's iPhones. It's called Graphite. The agency signed a $2 million contract with Paragon at the end of the Biden administration. The contract was paused by the Biden administration, saying, whoa, wait a minute, hold on, do we really want to be doing that? Of course, the Trump administration revived it last fall. WhatsApp was one of the targets of this. They disclosed early last year that 90 journalists and members of civil society in various countries were targeted with Graphite. And ICE is using it. And I'm sure they're only using it to go after the worst of the worst, right?
D
Only the bad guys.
A
Only the very, very worst of the worst.
B
And your landscaper,
A
you better not take my landscaper. You better not touch him. All right, one more story. A supply chain story, but not a security story. It does have to do with the war in Iran. Helium. Helium is used in the manufacture of semiconductors. It's also used in medical instruments. And most of our helium comes from underground pockets where natural gas collects. It's not a byproduct of natural gas, but it is produced with natural gas extraction. The helium comes up in the pocket; they save it, they sell it, and they ship it through, where? Oh, the Strait of Hormuz. Thanks to the closure of the strait, helium prices have spiked. Businesses are scrambling to deal with looming shortages. The US used to have a strategic helium reserve, but that was sold off in 2024.
D
Oh, boy. So it's used in chip manufacturing. And if it's not resolved, it could actually bring semiconductor manufacturing to a halt.
A
You thought we had a shortage. Get ready.
B
Yeah, it's used in MRI machines as well.
A
Yeah, MRI, that's right. It's got a really low boiling point.
E
Can't buy computers anymore anyway.
A
That's a good point. If you have a computer, you can buy. No, no, Benito, let's be fair. You can buy it. You just can't buy any RAM, and you can't buy a video card, and you can't get an SSD. Well, you can buy the case. There's plenty of cases available. And fans. All the fans you want.
C
Otherwise, there's a site for computer parts called OnlyFans.
A
Only Fans. That's it. Did you just think of that?
C
Yeah, I did. Sorry.
A
You should start that site.
C
I did it. I was talking to somebody the other day, and, you know, this conference I was speaking at last week was all about content creators. This thing. I'm going to an England conference. I was in Seattle and. But the one we're going to in England is called Tube Fest. And, you know, I'm like, for the guys who are still sitting on this process of I need to wait till I get a new camera or a new something in order to do things like this. Content creating is going to become more expensive very, very quickly. As it stands now, an SD card that I was buying for 34 bucks like a couple months ago is now 120 some odd bucks for the same little SD card. I'm like, you really got to get used to shooting on your phone and just enjoying it.
Because the phone is dope. It already has the memory, it already has the storage, and you just have to get it, post it, upload it. And then people like me who hoard all of their B-roll and their excess footage have to delete it, because I just bought a 20 terabyte helium drive that I swear I paid 289 for in December. I bought its twin sister like two weeks ago. It was 700 bones.
A
Oh, that's nearly two and a half times the price.
C
I had to do it because Synology was like, hey, dude, you need some more space. So, okay, cool, I'll buy another drive. And I had every intention to go buy two more drives, and it was like, no, you're not, you can buy one. It's just freaking insane. So Benito is so right. Right now drive prices and memory prices are so crazy. It's kind of insane. And everyone's blaming it on the AI stuff, because it's the easy scapegoat. It's not just that. It's supply chain issues, which Leo is trying to tell you. But no one wants to hear that, because it's so much easier to blame it on AI. And I swear, I would love one day for us to get out and do a deep-dive report on how AI facilities aren't using as much water as the media likes to put out. A lot of times they're using dirty water or non-potable water, but it's just reported as water. And so all these people are up in arms about how much water AI is using. And I'm like, yeah, you know what
A
uses a lot of water? Golf courses.
C
Thank you. You know how many golf courses there are where I live. Leo, you've been here.
A
What's weird is there are golf courses in the deserts, right? They love them in Arizona. They love them in Las Vegas. Forget the data centers. Let's get rid of the golf courses.
B
The golf courses have actually turned Palm Springs into an ecosystem that's no longer a desert.
A
Oh, yeah, it's humid in Palm Springs. So crazy. That's your water right there, going into the air.
C
Yeah. But again, it's so easy to blame AI, because AI is scary and nobody knows what it is unless they listen to this show.
B
Right?
C
And like, yeah, it has done some wild things, but it's also in the hospitals, where you have to go through many, many cases to try to figure out a rare form of something. A buddy of mine was just recently diagnosed with a rare form of blood cancer, and they found it faster because the hospitals can comb through the data really quickly. And here we are trying to plan a fundraiser in a couple of weeks, and people are mad, you know, because of AI. And I'm like, yo, they might never have caught it if it wasn't for the ability to comb through this data really quickly. So we've just got to get better at balancing pros and cons in general. We've become so binary that it's crazy. And binary about dumb stuff, like Xbox versus PlayStation or whatever. That's just dumb.
A
I did. I confess. I bought a Switch 2 before the price hike.
C
You're a genius. I should have. I didn't.
A
I looked and I said, it's still $449. I know that's way too much and I don't really need it, but it's only going to get more expensive. So I'm buying it now.
C
It's so crazy.
A
All right, let's take a break. Enough of that. Enough gloom and doom. Let's talk about some happy stuff, like, who is Satoshi Nakamoto? Now there's another theory. There's always a theory. You're watching This Week in Tech. Jason Heiner, Doc Rock, Mike Elgin. Great to have all three of you. Jason, tell me about The Deep View.
D
Yeah, so the Deep View. I joined in December. December 1st.
A
Congratulations. That must have felt a little risky.
D
For sure. So, you know, I'd been working in traditional media for a long time, but I had two things that really crystallized for me in 2025. One is that I had this thesis about media: the future of media is about direct relationships with your audience, deeper relationships with more specific audiences.
A
Oh, I like that. That's kind of what we do.
D
Yeah, exactly. Less algorithms sending you users, and more you convincing people that you have something they would like, and them subscribing to you. Right. In its various forms: newsletters, podcasts, other ways. I really wanted to work on that specifically. And then the other theory I had was that I was not keeping up in AI. AI was moving so quickly, things were moving rapidly, and we were about to hit this new gear, and I was already not keeping up. So I wanted to spend all day talking about it, thinking about it, writing about it, learning about it. And I knew that I couldn't keep up otherwise.
A
Yeah, that's kind of. I think we've all done that. I did that. Yeah.
D
Yeah. And then the interesting thing is, so I came to The Deep View, I started in December, and that was the month. As a matter of fact, Leo, you're one of the first ones that said this to me, because I was on shortly after I joined, and there was a new version of Claude
A
Code that came out November 24th. November 24th, 2025 was when, in my opinion, everything changed. It was the release of Opus 4.5.
D
I remember you were telling me, you're like, I'm using Claude Code to do all kinds of things that aren't just coding. You know, like it's a true agent. And you were showing me some of these things before the show started, and I was like, whoa, this is different. And of course, since then, with OpenClaw, with the new Claude Cowork, Perplexity Computer, and now Codex from OpenAI, all of these things have just taken everything to another level. So I'm so glad that I started when I did, because I would not have been able to absorb or keep up. I barely am now, with all of these things. So I'm thrilled. The Deep View, just to bring it back: it's a newsletter that started two and a half years ago. It's been on an amazing trajectory, now with an audience of over half a million.
A
Congratulations.
D
That's growing by like 40 or 50,000 new signups a month. And we have this perspective that there's a lot of information out there that is very surface level. So we're trying to give you just three stories a day, the top three stories, and give you a double click on those three in the main newsletter. And then they also brought me in to do other things. So we're also doing a podcast, and we are doing long form. We launched our first long form two weeks ago. The title is "AI's Utopian Story Masks a Race for Power." And it's about the fact that AI tells this story about what it's about, but at the end of the day, it's also centralizing power in a smaller and smaller number of hands. And that's dangerous and sort of anti-competitive; it can bring up a lot of challenges. So we're just raising the flag on that. And it's interesting, because we published that story in March, and I'm not claiming this, but I will say it is an interesting parallel that all of the big labs have started talking about this idea that they want to see this powerful technology democratized. They don't want to see it in so few hands. And so if we contributed anything, our drop in the ocean in that dialogue, I'm thrilled. We're going to be doing more of that. We have more long forms coming up. That's one of the things I spent a lot of time on at TechRepublic and CNET and ZDNET, and something that I'm really proud of and spend a lot of time thinking about: doing deeply reported, deeply thought-about pieces on all of these most important issues. And it's going to be really important in AI in the years ahead.
A
Couldn't have picked a better time, and they couldn't have gotten a better person to run the joint. I think it's a good match. That's great. Thedeepview.com. Thank you for doing this, Jason. I read it every day. It's a pleasure. Subscribe to the newsletter for sure.
D
Yeah, subscribe. Thedeepview.com will take you straight to the subscription. It's a daily newsletter, the top three stories of the day, and we double click on them and tell you kind of why they matter. We have our deeper view on each story, which is our analysis of what it is and why it matters.
A
Very nice. Also here, Mike Elgin, Doc Rock, Jason Heiner. Great to have you. Our show today is brought to you by ThreatLocker. We were just at ThreatLocker Zero Trust World in Orlando, and what an amazing event. ThreatLocker is a zero trust platform. It delivers the industry's most comprehensive suite of zero trust solutions, protecting endpoints, protecting networks (that's new), and now protecting the cloud too (that's also new; they announced this at Zero Trust World). By extending zero trust enforcement to cloud services and company networks, ThreatLocker ensures that devices are validated through a secure broker before connecting to platforms, you know, the ones you use, like Salesforce, Microsoft 365, Asana, Google Workspace, GitHub, and it went on and on. Even if a user is successfully phished, and this is your worst nightmare, right, attackers cannot access their resources unless they actually have physical possession of the user's trusted device. And that's not what you're worried about. ThreatLocker works across all industries, with 24/7 U.S.-based support. It's fantastic. They work on Windows, they work on Mac, they work in Linux environments. They enable comprehensive visibility and control. And it's very easy to set up and use. In fact, I would encourage you to get a demo, because if nothing else, you'll immediately see, and this happens time and time again, what things have access to your network that you had no idea about; how many different remote access utilities, for instance, have access to your network. It's an eye opener. Ask Rob Thackeray. He's the end user technical architect at Heathrow Airport. Now, Heathrow has had problems in the past. They knew they really needed to do something to keep operations flowing. They cannot afford to be shut down. He said, quote, ThreatLocker was the most intuitive solution we tested.
And the responsiveness of the organization, the willingness to engage with us, set up a demo, and work with us on weekly audit reviews was very good. It's great to have an ongoing relationship with a company that is so responsive to our requests. End quote. Thank you, Rob. Not just Heathrow: JetBlue, the Indianapolis Colts, companies that can't afford to be brought to their knees. Right. The Port of Vancouver. This is an infrastructure play. You cannot afford to be shut down. ThreatLocker consistently receives high honors and industry recognition. They're a G2 High Performer with Best Support for enterprise, summer 2025. PeerSpot ranked them number one in application control. GetApp gave them their Best Functionality and Features award in 2025. With ThreatLocker you can confidently ensure that users have access to a consistent, safe network connection. Offices, remote users, internal servers, and critical services can maintain smooth operations without the need to open inbound ports or deploy traditional VPN solutions. And that's really what everybody else is doing, and it just gives you an attack surface for the bad guys. ThreatLocker prevents that. Your end users will get the secure, reliable internal system access they need without complex infrastructure changes. Get unprecedented protection quickly, easily, and cost effectively with ThreatLocker. Visit threatlocker.com/twit to get a free 30 day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's ThreatLocker. We thank them so much for their support of This Week in Tech. A great company with a truly wonderful product. I'd hesitate to say that about John Deere, of course. They've been the focus of a lot of right to repair fighting, and they just lost a big case: a $99 million fine and a monumental right to repair settlement. But it's not just the money. John Deere is forced to make digital diagnostic, maintenance, and repair tools available to third parties for the next decade. A huge victory for right to repair. They have fought it hard, as farmers buy this very expensive farm machinery and then can't fix it. They have to go to John Deere to fix it. In some cases it's been really problematic: because they want to fix it themselves, they've had to go to weird third party software from Ukraine, for instance, to unlock their home-repaired tractor. This is not good. It's also skyrocketed the cost of used John Deere equipment; the cost of older tractors doubled, but farmers wanted them because they could repair them. $60,000 for a 40 year old tractor? Yeah, good deal, I'll take it. You got any more? The judge still has to approve the settlement, but that seems likely. There's another lawsuit from the FTC, also about right to repair, so it's not over for John Deere. But this is a really important victory, and I wanted to pass this along. We're big fans of right to repair, and it's, I think, very, very important.
E
So like cars, right? Like vintage, you can fix those.
A
Yeah. It's true that with a lot of modern cars, like my car, I have a BMW like Doc Rock, but it's one of the electric ones, there's no way a shade tree mechanic can fix this thing. You'd have to have all the equipment. Right. They don't want to touch it.
C
It's funny when you open it and you see the shroud and it's like, yeah, they don't want me touching nothing in here.
A
Don't go in here.
E
No user serviceable parts.
A
In fact, it was so bad: I needed new tires, and the tire guy said, oh no, it's an EV, I'm not going near it. He wouldn't replace the tires.
C
You have to have the exact circumference down, otherwise you mess up all myriad sorts of things. And so even when I was getting
A
the dealer to fix it, she said, can you walk around and tell me what it says on your tires? I had to read the labels on the tires to her. She said, oh, good. Yeah, okay, that's good.
E
So is that a good trade off for you, you think?
A
Oh yeah. Well, nowadays with gas in California, the gas station down the block is $6.50 a gallon.
C
Yeah, which is crazy. That's the first time I've seen it higher than Hawaii; we're at like $5.70. But it's so funny, when you drive by you look at everybody else, all pissed off and screaming, and we're just like, oh.
A
And our European listeners are going, what? That's nothing. You should see what we pay for a liter.
C
What did.
A
Do you know what gas prices are in Tuscany right now?
B
Just bought gas. They charge it by the liter, so it sounds like a good price until you realize you're only getting a liter. But yeah, it's basically double the price of gas generally, something like that.
A
Darren, In Australia says $3 a liter in Australia. So the problem is, is that Aussie
E
dollars, though that might be Aussie dollars.
A
That's Aussie dollars. And the price is not even across the country. We pay $6.50 in California because we have kind of draconian rules about refining. General Tab says $3.89 in Massachusetts, but that's still high, isn't it?
C
Yeah, well, every time I go to Mass and you know, I get gas before I go back to Logan to drop the car off, I'm always like, dude, I wish I could put some in my suitcase.
A
I know. Well, Hawaii. But I mean, it still has to get shipped everywhere anyway, right?
C
Yeah, we have a refinery, but the crude has to get shipped here.
A
Right.
C
And we have that. I don't know if everybody remembers this, because unless you live someplace like this, you don't really think about it, but there's this thing called the Jones Act. So even if I buy something from Japan or China or whatever, it goes all the way to freaking LA or Alaska first and then comes back. So our shipping. Like, I wanted to order a bunch of filament from Bambu. They had a killer sale, and the shipping for the filament cost more than 12 rolls of filament. I was like, this is really stupid.
A
That's the Jones Act. It says, what, American shipping has to go through an American port? Something like that.
C
Yeah. Like this is not a state. This is what's so dumb about it. Because we are an American.
A
Hawaii is not a state.
C
Well, it is.
A
You know, I know when you take a cruise, they have to stop in one international port. If like on your Alaska cruise, they have to stop in Canada, just briefly.
C
Yeah. So like, who comes up with these idiotic rules? And it makes.
A
It's protectionism. I'm sure there's a protectionist history there. Speaking of protectionism: France. My favorite country. You're going to France, you're driving to France in a couple of days, Mike. Tomorrow?
B
Tomorrow morning, driving to Paris.
A
France's government, you should do some coverage of this. They're ditching Windows for Linux.
B
Yeah.
A
Saying US tech dependence is a strategic risk.
B
It's a Trend.
D
Although a lot of European cities have done.
B
Yeah, yeah, yeah, Germany, etc and, and you know, China of course has been trying to do this sort of, this kind of thing for a long time. Yeah, it's.
A
They covered this on the Untitled Linux Show. There was something going on with OpenOffice. Last month they began a project to take OpenOffice and make it EuroOffice, in order to gain independence from Microsoft Office. But some of the OpenOffice people were a little bit upset. They said it's a licensing violation, because the EuroOffice code base removed the AGPL license, which they saw as unenforceable and non-obligatory. But in fact, that's the whole point of the GPL license: if you fork a project, you have to keep the same license. You can't just say, well, now it's ours. So there's a big kerfuffle in the open source community over OpenOffice. And EuroOffice is not the best name. I'm sorry, it doesn't sound good. Did you see what Red Hat did? You were talking about IBM earlier, and of course IBM has a shameful history of collaborating with the Nazis. During World War II their tabulating machines were used to tabulate Jews in the Holocaust. Are they at it again? Red Hat, which is owned by IBM, is trying to scrub a white paper from the Internet. The white paper was titled Compress the Kill Cycle with Red Hat Device Edge, showing how it was faster to use Red Hat's products and technologies to, well, kill people. Of course, nothing disappears on the Internet. And in fact, the minute you try to shut it down.
D
Yeah.
A
As Mike Masnick calls it, the Streisand Effect: people become aware of it.
D
Wait, what is it the Streisand Effect?
A
Yes. Did you know that Mike Masnick coined that term? I didn't know it either. He told me he was the first to call it the Streisand Effect. Remember? Because Barbra Streisand. During her wedding there were helicopters flying over taking pictures, and she sued. Of course, nobody was interested in the pictures of her wedding until she sued. So in this paper they talk about the find, fix, track, target, engage, assess process for the strategic, operational and tactical levels, delivering real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle. You know, it's better with Linux. Everything's better with Linux. I mean, I don't know. You served. Was it Iraq or was it Afghanistan?
C
South America and in Kuwait.
A
Kuwait, that's right. You went back to the operation. Whatever. What was that? Shock and awe.
C
Yes, it's.
A
I think if it's our military, you kind of want them to have the best technology, right?
C
Yeah. Again, I hate the terminology slippery slope, because it's overdone too many times. However, this is really one of those weird things where AI could help us in situations like this, all of these capabilities could help us in situations like this, but unfortunately the checks and balances become a little bit unattainable.
A
You know, if it's my boy on the front line or my, My. My daughter on the front line, I want her to have the best technology to defend herself.
C
Correct.
A
And to defeat the enemy.
C
But if. If your son's in the front line, we are all distracted by good food.
A
And so he'll be in the kitchen, on KP, doing the sandwiches.
C
But I mean, both sides are distracted by the food. Like, hey, fight's over. Let's eat. Somebody crack open the bottle of Chianti that Uncle Mike sent us. And, yeah, we're supposed to.
A
Even in France, they love his French dip. I just wanted to say, you know, even though.
B
And that's a dance move, I'm surprised
A
they call it Liberty Dip in. In the United States.
C
Oh, that's super funny.
A
I know. Anyway, yeah, you know, I'm really conflicted. This is.
B
Well, I think the biggest problem with this for them, and it's of course an IBM company, is the way it's written. It's not that they're bragging about their technology; it's just very blunt language in the document about killing and this, that and the other thing. I think if they could rewrite it with euphemisms and soften the language, they wouldn't mind that it was out there. It's also hilarious that they think they can scrub something from the Internet. I mean, that's not happening.
C
No, it's not happening.
B
And they should know that.
A
Nope. This one is a little disturbing. You may remember the Department of Justice went after Ticketmaster. We all cheered when we heard that news. But apparently the DOJ settled with Ticketmaster and Live Nation, a surprise settlement. And they did not consult the litigators before they made that settlement; they went around them and settled and ended it. And apparently the litigators aren't too happy. The U.S. Justice Department's top antitrust litigator and three senior trial attorneys are now leaving the agency because of that deal with Ticketmaster. Now, I knew why we were going after Ticketmaster; I wasn't up on the settlement. Apparently it wasn't the best settlement. It stopped short of a breakup and allowed Live Nation to keep Ticketmaster. Apparently it came as a shock to the other states that had signed on to the lawsuit, and in fact more than 30 of them continued the trial. So the trial didn't end even though the U.S. pulled out of it. The Justice Department's Acting Director of Civil Antitrust Litigation, David Dahlquist, announced his resignation Wednesday during a hearing in the government's case against Google for illegally monopolizing the online search industry. At the hearing he said, Your Honor, I've given my notice, so here's my colleague. He's going to take over. See ya.
B
And it's just a larger trend in the executive branch nowadays: they do what they want. They don't feel like they're part of a thing with the states, with other agencies. They just do what they want, and it's too bad.
A
Well, and I think Biden under Lina Khan was very aggressive in antitrust enforcement.
B
Yes.
A
And his successor perhaps did not fully agree with that and has backed down on a lot of those cases. The Biden administration actually filed a record number of cases, more monopolization lawsuits than at any time since the trust-busting era of the early 1900s. The Trump Justice Department has moved to settle several of them and hasn't challenged a merger since January 2025. What happened in January 2025?
D
What was, what was.
A
I think it was Inauguration Day, I
B
think something or other.
A
The seminal moment.
D
Oh, that moment.
A
Oh, yeah. No more. No more. Which is. You know what? It's good if you, if you're one of the big companies looking to, to merge with another big company.
B
It's a great time to be a billionaire in the United States.
A
Yes. And if I were one, I would be celebrating.
D
There's a show title.
A
Damn it, if I, if Only I Were a Rich Man.
C
It's funny, I do often say, you know, people get so mad at the things that are going on, whatever. And I'm like, I fully agree with you, and I'm not saying this is right, but trust you me, if you were on the other side of this fence, you would think differently. And they're like, no, no, no, I would be different. I go, everybody says that. Every
A
Yeah.
C
young politician goes in thinking, I'm going to change this. I'm going to stop the bozosity that's going on. Thank you, Guy Kawasaki, for the word for what's going on. And then they get there and they find out they can't do it. They either get pushed back out of office or they end up succumbing to some lobbyist somewhere. And there's just countless stories of people trying to get in there. I really don't have a solution, I don't know what it is, but it's funny to think that you would be different if you weren't in that particular boat. And it's funny because in Leo's and my favorite sport, which is F1. Yes. Like, I'm watching it right now, bro. I'm watching right now because there's nothing
A
to watch this month. The tables have turned. The races were in Bahrain and in Saudi Arabia. Not, right now, the best place to gather hundreds of thousands of people in one small area.
C
Really, really wild. But, like, when the tables turn, like now, everybody's mad, and, you know, Max is threatening again to quit because he can't just win every time.
A
I know, I like that it's shaking things up. We were talking about this last week, and Ian Thompson, who is also an F1 fan, hates it. Of course, the Brits hate anything that's happened to F1 since then.
C
Well, it's going to be sun. And when I get to London next week, there'll be sun and they'll be mad that it's sunny.
A
But I think it's fun to watch them race and stuff. But apparently it's a little more dangerous because they have to slow down to build up the battery. I'm sorry.
C
Funny, I made a comment on Threads about how much I'm enjoying Max losing and oh, my God, his fans came after me, bro. I was like, I never had a more interactive thread post than the amount of, like, yelling, calling me names, death threats, all of the above. And I was like, dang, dude. It was. I'm just literally, I'm a Lewis Hamilton fan. Shut up.
A
So I have to say, Apple has done a nice job of taking over the F1 broadcast in the United States. It looks good, it's 4K now, though the audio still seems to be stereo. I think it's good.
C
I think. I think Lewis secretly had a hand in getting that deal done. He's getting paid somewhere for that because.
A
Yeah. Because now suddenly his team is competitive.
C
Correct.
A
And the fans are going crazy.
C
He's executive producer for the movie, so.
A
Right, the F1 movie too. Yeah.
C
Yeah. So I think he had a hand in making that happen. We'll find. Find out after he retires how much money he made doing this. But I'm just saying I like the shake up. Everybody else is pissed off about it.
A
Speaking of billionaires, I'm sure Lewis is getting close.
C
I'm pretty positive.
D
Yeah.
A
Yeah. And I kind of like Ferrari, I'm glad to see them in the mix, and McLaren kind of slipping back. And Mercedes is back, which is kind of fun. I'm sorry, this is such. I know. Talk about horse racing. We could talk about that.
C
We want to talk about hats and hat boxes and safely traveling with hats.
B
And usually a conversation on a show like this about F1 is about one of the keys on the keyboard, but.
A
Yeah.
C
Yeah. Well played, Mike. Well played.
A
Yes. Let's take a break. More to come. I do have that Satoshi story, which is kind of interesting. I don't know if it's real or not, but it was in the New York Times, so we'll get to that in just a little bit. Our show today, brought to you by DeleteMe. My God, if you've ever searched for your name online, you know what a ridiculous situation we're in, at least in the United States, where there is no federal privacy protection and these data brokers flourish. Hundreds of them, more than 500 at last count, collecting every bit of information about you that they can gather and then selling it online. Your name, of course, your contact info. But did you know it's completely legal for them to get your Social Security number and sell it? Your home address, information about your family members, your coworkers, all being compiled by data brokers and sold online. Anyone on the web can buy your private details, and that leads to, well, you can imagine: identity theft, phishing attempts, doxing, harassment, governments, law enforcement, hackers, anybody. And it's cheap. In fact, it all came up because we were getting phished mercilessly, and it was clear the phishers knew more about our company than they ought to. We figured out how they knew: from data brokers. So what we did is we signed up for DeleteMe, and I think every company should sign up at least their management for DeleteMe. It's a subscription service. It removes personal information from those data brokers. You'll sign up, you'll provide them with exactly the information you want deleted, because you may not want everything scrubbed, or you may want everything scrubbed. You tell them what you want scrubbed, and their experts will take it from there. They will keep an eye on things and send you regular personalized privacy reports.
We just got one the other day for Lisa, showing what info they found, where they found it, and what they removed. So you know they're at work. And there's a reason it's a subscription service, not just a one-time service. The data brokers, sure, they'll take the stuff down; they're required to by law. And every data broker hides that page. That's one of the reasons you want to use DeleteMe: they know where to go. They have it taken down, but the brokers are a slimy bunch. Sometimes they change their company name, sometimes they dissolve and reemerge. Sometimes they delete the data but then say, well, there's another Leo Laporte, and start collecting that data again. It never ends. So DeleteMe is always working for you. They constantly monitor and remove the personal information you don't want on the Internet. It's their job to keep track of every data broker, and when new ones emerge, as they do every single day, they know about it and they go to work. To put it simply, DeleteMe does all the hard work of wiping your, your family's, your company's personal information from data broker websites. So this is what you need to do. Take control of your data. Keep your private life private. Sign up for DeleteMe at a special discount for our listeners today. You can get 20% off your individual plan when you go to joindeleteme.com/twit and use the promo code TWIT at checkout. That's the only way to get 20% off: joindeleteme.com/twit, offer code TWIT. I can tell you it works. It really works, and it's a must. We thank them so much for their support of This Week in Tech. Satoshi Nakamoto, a name that will live forever: the inventor of bitcoin. But who the hell is Satoshi? Many journalists, many magazines, many books have said, we figured out who it is. Newsweek had it on the front cover. It was wrong, really spectacularly wrong.
There have been people who claim to be the inventor of bitcoin who were proven wrong in court. It really is a mystery. And there's a reason why it's interesting, besides the fact that he invented it: he holds a huge stash of bitcoin. He would be worth more than a hundred billion dollars, and nobody's moved those Satoshi coins in years. He disappeared. Is he a single person? Is he a group? Is he gone? Is he dead? Is he still alive? No one knows.
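For scale, the back-of-envelope math behind that hundred-billion-dollar figure is simple. This sketch uses the commonly cited, unconfirmed estimate of roughly 1.1 million early-mined coins and the $88,000 spot price mentioned later in the show; both numbers are assumptions for illustration.

```python
# Rough valuation of Satoshi's presumed holdings.
# The 1.1M BTC figure is the commonly cited estimate derived from early
# block-mining patterns; it is an assumption, not a confirmed number.
estimated_btc = 1_100_000
price_usd = 88_000          # spot price cited later in this episode

net_worth = estimated_btc * price_usd
print(f"${net_worth / 1e9:.1f} billion")  # → $96.8 billion
```

At higher bitcoin prices the stash crosses the hundred-billion mark, which is why estimates of Satoshi's wealth swing so widely with the market.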
D
The simplest solution is probably the most accurate, which may be that they've lost the password to all those.
A
The thing is. Well, but also, you don't want to be fingered as Satoshi, because you are now a target for kidnappers, for extortion. I mean, suddenly the world's going to beat a path to your door. So you may know the name John Carreyrou. He was the Wall Street Journal reporter who exposed Elizabeth Holmes and the Theranos scandal, did some really good investigative reporting at the Journal. He's now at the Times. And if you read the article, I think he kind of became a little bit obsessed with tracking down Satoshi. He spent a year, it says, digging through thousands of decades-old Internet postings in search of bitcoin's creator: My Quest to Solve Bitcoin's Great Mystery. For 17 years, bitcoin's creator has been hidden behind the pseudonym Satoshi Nakamoto. Now, I'm reporting on this just because the Times gave it a lot of play on April 8th. I don't know if it's true. I don't know if you saw the HBO documentary Money Electric: The Bitcoin Mystery, in which the documentarian claimed to have unveiled, at least with his reasoning, who Satoshi was: a Canadian software developer who has since disappeared. Rightly so, by the way.
C
I don't want to know. And I wish people would stop.
A
Should I just stop? Should I just leave it now and not talk about it?
C
No, no, this is not you. This is on them, like it's on Carreyrou. There was something else we were talking about, where it's like, oh, everyone's trying to figure out this thing, and I can't remember what it is. I'm like, I kind of just don't want to know. I think this one is just the mystery.
A
We don't really need to know, do we? Although.
C
And it'll be disappointing. It's like the wizard of Oz effect. I, I, I just made that up. No, it's not true. But, like, pay no attention to the
A
man behind the curtain.
C
You don't know the little dudes behind the curtain. Right. Like, there's some things you just don't want to know. Like, oh, did you know what's inside a slim Jim? No. Shut up.
A
Oh, you really don't want to know that.
C
I don't want to know. No,
A
you don't want. There are countries where you have to label that stuff, and I saw one that says, it said pig's anal glands. That was the whole thing. And you're right. You know, maybe it's delicious, maybe it's nutritious, maybe it's even safe to eat, but you don't really want to know, dude.
C
Mid-country and East Coasters would know. Actually, you grew up in Mass, dude. Scrapple is delicious.
A
Scrapple. Scrapple, baby.
C
I do. I never. Somebody's like, do you know what's inside scrapple? Shut up.
A
Just don't want to know.
C
Scrapple is delicious.
A
All right, well, cover your ears. Actually, I don't. See, this is the thing: he's come to a conclusion, and I don't know if it's true.
B
He.
A
And it's very thin.
B
Well, it's a lot of circumstantial evidence. If it was in court, it would be circumstantial. You know, a lot of the linguistic patterns match. He has some odd quirks. Both Satoshi and this guy Adam.
A
Adam Back. He's a British cryptographer. I have to say, of all the people who've been proposed as Satoshi, he is the most qualified. He was a cypherpunk, he's a cryptographer, he's been very active in bitcoin. I mean, he could be. Among all the other suspects, he's the most likely to actually be Satoshi.
B
He kind of vanished from the message boards. Right when Satoshi appeared.
A
Right.
B
And the linguistic quirks. I've gone through all of them; they're very obscure things, like ending a sentence with the word "also," using British spellings for certain words and American spellings for other words. These are pretty obscure linguistic tics that are shared by both.
A
I have to point out, though, that Carreyrou commissioned one of those statistical semantic analyses, and it failed. And then his conclusion is, well, Back was probably covering: he knew about these statistical analyses and intentionally obfuscated his prose so they couldn't work.
D
That doesn't seem very likely, right? That assumes he somehow knew people would be working this hard to figure out who it was so many years later.
A
That's true. He would have had to have known, back then, that bitcoin was going to be the phenomenon it became.
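The kind of spelling-and-phrasing comparison described above can be sketched as a toy feature counter. The word lists and the sentences-ending-in-"also" tic here are illustrative assumptions; real stylometric analysis uses far richer features and proper statistical models.

```python
import re

# Hypothetical word lists for a toy stylometric comparison.
BRITISH = {"colour", "favour", "analyse", "optimise"}
AMERICAN = {"color", "favor", "analyze", "optimize"}

def style_features(text):
    """Count a few crude stylistic markers in a piece of writing."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    return {
        "british": sum(w in BRITISH for w in words),
        "american": sum(w in AMERICAN for w in words),
        # Satoshi's prose was noted for sentences ending in "also".
        "ends_with_also": sum(s.lower().endswith("also") for s in sentences),
    }

sample = "We should optimise the colour scheme. It saves bandwidth also."
print(style_features(sample))  # → {'british': 2, 'american': 0, 'ends_with_also': 1}
```

Comparing such feature profiles across two authors' corpora is the basic move; the weakness, as discussed, is that a writer aware of the technique can deliberately vary these tics.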
C
Here's what's funny about the whole situation, too. Sorry, allow me to be a Japanese major for a second. Nakamoto is "the middle of the root" or "the middle of the origin." Naka means inside, the center of, or in the middle of. Right.
A
So Nakamoto itself, it has meaning.
C
Yeah. So Yamamoto is like the base of the mountain, the center of the mountain, the origin of the mountain. Nakamoto would be the center of the origin. So it's kind of interesting: if you are the inventor, if you are the beginning, a name you would pick that's tongue-in-cheek would be Nakamoto. That's the perfect name. Because I'm not going to say I'm the founder, I'm not going to say I am the Gensui, the center, but I will be the center of the root, the center of the origin.
B
I wonder if Adam Back studied Japanese.
C
Oh, yeah. There's a very tight correlation between the UK, you know, Igirisu, as they call it, and Japan anyway. Like, the reason why Japan drives on that side of the road, and so many of their cultural things, are based on the UK. When they started to get westernized after the Meiji Restoration, a lot of it was formalized in sort of British-oriented things. So very much so, he could have had that sort of connection, you know.
A
So this is the article. They start with 562 suspects, and then he provides his various pieces of evidence, and the number of suspects dwindles down. And then the suspects start to disappear, and pretty soon he's got it narrowed. I mean, the graphics tell you it's Adam Back.
C
This sounds like Jason's news organization. He's gonna keep double clicking until we get down to the middle.
A
It's well done. I think it's well done. I don't know if it's convincing. He does, in fact, confront Back. And he says the whole thing began when he saw the HBO documentary and thought Adam Back looked a little bit squeamish when he was asked if he was Satoshi.
B
I also think that this article is a little vague where they said that he did the reporting with the help of computer assisted reporting provided by his colleague Dylan Friedman. Computer assisted.
A
Well, Friedman, at least his byline says, is the AI projects editor for the Times. He has experience both as a reporter and a machine learning engineer. You think AI did the research for this?
B
I do. But it's interesting that they're being cagey about the specifics there.
C
Correct. And, well, they also didn't touch on the fact that satoshi, in and of itself. The Japanese have a whole bunch of different kanji for it, meaning clever or intelligent or whatever, which is why so many boys are named Satoshi. So if you're going to be the intelligent, the clever origin of the beginning, I mean, it's very on the nose, bro. The Japanese name itself tells you it's almost certainly invented. So I think we can completely rule out that it's an actual Japanese person.
A
Yeah.
C
And the fact that it means like you're the intelligent creator of this thing, like, it kind of makes sense. And it's a. It's a little hilarious to me.
E
You can't rule out that it's a Japanese person. You can rule out that it's a Japanese person actually named Satoshi Nakamoto.
C
Oh, yeah, that's true. This is also true, because a Japanese person would have been smarter than to use that as a name. It would throw everybody else off, because they would actually know the literal meaning.
D
Well, back to bitcoin.
A
His fortune is dwindling, because bitcoin's down to $88,000. Actually, this story from CoinDesk: bitcoin miners now lose $19,000 on average on every bitcoin produced, because electricity has gotten so expensive. Another victim of the war in Iran: bitcoin miners.
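The per-coin loss figure is just electricity cost set against coins produced. A minimal sketch of that arithmetic, with entirely hypothetical farm numbers (the rig count, wattage, $/kWh rate and daily yield are illustrative assumptions, not CoinDesk's inputs):

```python
def cost_per_btc(power_kw: float, usd_per_kwh: float, btc_per_day: float) -> float:
    """Daily electricity spend divided by daily bitcoin production."""
    daily_kwh = power_kw * 24
    return daily_kwh * usd_per_kwh / btc_per_day

# Hypothetical farm: 1,000 rigs drawing 3.5 kW each, $0.15/kWh,
# mining 0.1 BTC per day at current difficulty.
cost = cost_per_btc(power_kw=1_000 * 3.5, usd_per_kwh=0.15, btc_per_day=0.1)
print(f"${cost:,.0f} to mine one BTC")  # → $126,000 to mine one BTC

# At an $88,000 spot price, this hypothetical operation loses money
# on every coin it produces.
print(f"${cost - 88_000:,.0f} loss per coin")
```

Real mining economics also include hardware depreciation, pool fees and hosting, so actual margins vary, but the electricity-versus-price squeeze is the mechanism behind the headline number.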
B
Now those people, another win for Samsung.
D
Back does deny it, you know. He said that he doesn't know who Satoshi is, and, interestingly, that he believes the anonymity is good for bitcoin. And this is one of the things I keep coming back to: I do feel like the anonymity is by design. The argument against this is, could they have really held out that long? But it seems to me like it was probably a small team and not one person, you know, three or four really smart people working together. And remember, the whole reason they did this was that it happened right after the financial crisis. The interesting thing is the parallels to the current AI situation: they were like, there's too much control in too few organizations, so how could we build something that actually distributes control and democratizes the finance industry? And this was the solution they came up with, blockchain itself and then bitcoin on top of it. And it is ingenious. The blockchain system itself, despite all the challenges with the other cryptocurrencies beyond Bitcoin and Ethereum, is a remarkably distributed, egalitarian sort of system, despite all of the negative things it's been used for since. I think the principle still applies now, too, and continues to be really interesting. And I'm surprised there haven't been more connections with where we're at in the AI industry. Maybe those things are still to come.
A
Yeah. I mean, one of the consensus points I get whenever I talk to security people, whether it's at Zero Trust World or at RSAC, is that we wouldn't be in the situation we're in if it weren't for bitcoin.
D
Which situation?
A
Remember, prior to bitcoin, if you got hit with ransomware, they would say, now go down to the 7-Eleven and buy some money cards. And that was not a scalable business. As soon as bitcoin became predominant, ransomware took off. So, yes, there have been some positives. I'm not against bitcoin. I don't invest in bitcoin. I'm not sure I'd recommend anybody do that.
D
But you still have those like 3 bitcoins or something, right?
A
I have 7.85 bitcoin.
C
Oh, man.
D
But you don't have the password for them?
A
I can't remember the password. On the other hand, if I had remembered the password, I would have long ago gotten rid of them when they were worth a hundred bucks, right? I'm rich. I got $700. So it's been a blessing. And I figure quantum computing is going to come along and when it does,
E
Or AI. Maybe your AI can open it. You should sic Obi Wan on it.
A
You can ask Obi Wan.
D
Yeah, have Obi Wan work on it.
A
Hey, Obi Wan, can you help me unlock my bitcoin wallet? I've forgotten the password. Let's see. Maybe you can get to work on it. I'll give it the wallet. Get to work, buddy. You think you're so smart. What's
D
it saying? "Typically requires specific steps based on the type."
A
Here are a couple of questions to help me assist you better. Oh, good. It's going to start work right now.
C
He'll come back in a couple weeks: smoking jacket, the long cigarette holder, one of those nice hats from the picture that Darren sent us. Welcome to This Week in I'm Rich.
A
As I say, I've suddenly become wealthy. Lovey, get me the martinis. Let us pause for our final commercial and our final stories as we continue This Week in Tech with Jason Heiner. Great to have you, Jason, from thedeepview.com. Doc Rock, things going well at Ecamm?
C
Oh, absolutely. We just released probably our dopest upgrade ever, and I'm super excited about it. I've been having a lot of fun sort of teaching everybody what we're doing, so it's great. And the trip we're doing to the UK is a whole team trip, so it's kind of fun to go on an adventure with the entire squad. All 10 of us are just going. Oh, fun. Bangers and mash for everybody.
A
Is that all 10 people doing that amazing software?
C
It's kind of crazy, right? Everyone thinks we're a way bigger team. There's literally only ten of us, and only two developers, the twins. Nobody else touches the code, really, just the twins. Twin developers are a different level, because they kind of know what each other's thinking half the time. So I think your best development partner would probably be a twin. I don't know what other apps are developed by twins, but I know these guys are pretty good.
A
Thomas and John Knoll, the brothers that did Photoshop. I don't think they were twins, but they worked very close.
C
Yeah, I think that's really helpful when you're working with someone you basically spend every waking moment with, because it's so much easier, a lot of times, when you get stuck on things. I think it's funny when, like, Ken will add something to the app and not tell Glenn, and then somebody in the community says, hey, it'd be really cool if you added this. And Glenn goes, yeah, that's a phenomenal idea, we should start on that. And Ken's like, yeah, I already did it. Or vice versa. It happens all the time.
A
It's super funny. I, I think there's a lot to be said for a small, tight, talented team of developers as opposed to these, you know, hundreds and hundreds of developers working on the same project. I just think, you know, if, if it's the right project anyway.
C
You never realize how broken a large company can be until you see the way Meta, in so many ways, is biting one hand and sticking the other hand back in the jar. It's so weird. Instagram would do something, but then Facebook would be like, you can't do this on video. I'm like, you literally do the same thing on Instagram. Isn't it the same company? Can somebody walk across the hall with the memo sheet? It's just really weird.
A
I want to, by the way, announce our new show, now that apparently my OpenClaw has debugged and opened my bitcoin wallet: This Week in Billionaires. Tech, wealth, influence, the stories behind the billions. We have to work on the acronym, though, because TWIB is not the best. Not the best.
C
Oh my God.
A
Thank you, Darren. Darren's quick on the draw there.
C
I would say that's goated.
A
And Mike Elgin also here. I'll ask you, Mike, about Machine Society and Gastronomad in just a little bit. I want everybody to get the plugs in, because I'm a big believer. But first, I must mention our sponsor for this segment of This Week in Tech: Meter. Meter is a really cool company. I saw them at RSAC; they make amazing hardware. It was founded by network engineers who had an insight. It came out of pain. If you're a network engineer, you know what I'm talking about, and believe me, they know your pain. The entire company lives on, resides on, needs the network. It is the basis of every company, and yet it's underfunded. You've got all sorts of challenges: legacy providers with inflexible pricing, IT resource constraints, complex deployments, fragmented tools. You're mission critical to the business, but you're working with infrastructure that just wasn't built for today's demands. They decided to do something about it. They founded Meter, and now businesses are switching to Meter because it makes a big difference. Because Meter does the whole stack. They realized, just like Apple realized, if you want to make something truly great, you've got to do the whole thing. They offer full stack networking infrastructure for wired, wireless and cellular. They build for performance, they build for scalability. And they can do it because Meter designs the hardware, writes the firmware, builds the software, manages the deployments; they'll even do the after-sales support. They're there for you. They do it all because they know your pain and they have a solution. Meter offers everything, starting with ISP procurement. They'll help you with security, routing, switching, wireless, everything you need: firewall, cellular, power (that's, I mean, vital, right?), DNS security, VPN, SD-WAN, multi-site workflows, all in a single solution. Meter's single integrated networking stack scales.
They are in major hospitals, which is an absolutely hostile environment for wireless, right? They're in branch offices. You know what they told me was one of the big challenges? Warehouses. A company buys another company, they acquire all that company's warehouses, these hundred-thousand-square-foot, football-field-size warehouses with wireless that just doesn't work and doesn't integrate with the home office. They can solve that. They work with large campuses. They even work with data centers; they do Reddit's data center. Ask the assistant director of technology for Webb School of Knoxville. He said, quote, we had more than 20 games on campus between our two facilities. Each game was streamed via wired and wireless cabinet connections, and the event went off without a hitch. We could never have done this before Meter redesigned our network. With Meter you get a single partner for all your connectivity needs, from first site survey to ongoing support, without the complexity of managing multiple providers or tools. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, reimagining what it means for businesses to get and stay online. Bottom line, Meter's built for the bandwidth demands of today and tomorrow. I'm a believer. You should check it out. We thank Meter so much for sponsoring TWiT. Go to meter.com/twit to book a demo now. That's M-E-T-E-R dot com slash twit to book a demo. Thank you, Meter. Only a couple of stories left. You were talking about Kalshi and Polymarket. One of the big controversies is this, from the Guardian. It's a Guardian investigation by Aisha Down: the inside story of the Polymarket gamblers betting millions on war. And we've seen it. It's basically insider trading, right? It started with Ukraine, but now, with the Iran war, there is a huge and, I think, kind of reprehensible amount of insider betting going on.
Polymarket, before Trump was elected, had about $400 million a year traded on its platform. Today it can do 400 million in a single day. And even though it's called a prediction market, it's betting, right? It's gambling.
D
Yeah.
A
Some states, Nevada obviously, aren't thrilled about it and have gone after them. I think Arizona too. But it's going on, and there's a lot of money changing hands.
B
Well, I think it's part of a larger trend where we've decided that it's okay for get-rich-quick schemes to benefit nobody. And this is one of the problems I've always had with bitcoin. It's supposed to be a currency, but for the most part people use it as a speculative way to make a lot of money. Unlike the stock market: if I invest in some company, that company's likely to be feeding somebody, or building something, or providing a service to somebody. Those are traditional investments. At some point, people are getting some good or service at the end of it, and you're contributing to that. Whereas with things like bitcoin and all this betting that's going on, on literally everything, nobody benefits from this. Some people lose a bunch of money, and a few people win all that money for themselves. Nobody's being clothed, housed or fed through this thing. It's just one of the malign incentives that we have in the world today that we didn't really used to have. And I think it's a big problem, and I think it's going to get bigger and bigger. And it's not just the insider trading aspect of it.
A
It's just the gambling. It's bringing it home.
B
Yeah, it's not healthy. It's not healthy.
D
I mean, it's.
A
And then you have Fox and other news networks doing deals with these companies because it's a, it's the news signal, right?
D
Yeah. You know, it provides valuable signal on what the sentiments are out there, right? Like bitcoin, as Mike said: bitcoin rises and falls purely on sentiment. It's not tied to any other economic aspect of anything. So these markets are really interesting in that way. But for people who have addictive personalities, oh my gosh, right? Gambling has always been very strongly regulated because it's so dangerous, and people can lose their whole livelihoods in a matter of hours, right? Or less, sometimes. That's why. But this now opens it up and just gives it rocket fuel, which is a big challenge.
A
Fox News, CNN and CNBC all have deals with Kalshi, which is one of the two big prediction markets, for their forecasts. I mean, just as the NFL puts DraftKings betting odds on every screen, now the news channels are going to put up the betting odds, because that's really what it is,
B
Oh, man.
A
on their news feeds.
C
It's absolutely wild when you think about it. What is that thing? The call is coming from inside the house. This is right up there with that kind of situation. It's phenomenally crazy. And I agree.
A
It's amazing what you can, you know, you can place a bet on whether Jesus Christ will return before 2027.
C
Yeah.
A
Yeah.
C
How do you. How do you like.
A
Well, we'll know.
B
That's the thing that makes me want to do it, though. That makes me.
C
Because I think, because like Mike said, if Jesus is your landscaper, he might not come back.
A
You know, by the way, the odds, it's only 4% odds that he will return.
C
It's such a weird thing, you know, preying on the concept. This is my whole problem with gambling in general. And I'm not going to act like I haven't done any pools. I'm not crazy; I've done pools. But the preying on the concept of jumping over everybody else with a windfall, in many, many areas of industry, has always been somewhat, what's the word we're looking for, predatory, right? Whether that's rapid weight loss, or longevity for your life, or preventing wrinkles, or all sorts of things. People make money basically selling some sort of fake snake-oil panacea over the ability for people to jump over the line.
A
Right.
C
And it's kind of insane. So I was sitting, you know, again, going to the airport recently, and again, I have Clear, so I'm cheating the system. But I remember when having TSA PreCheck was the unlock. And then all the credit card companies said, hey, if you sign up for TSA PreCheck, we'll go ahead and pay for that hundred bucks, or the 85 bucks, whether you did Global Entry or regular. And now having PreCheck is not even a win, because the PreCheck line is longer than the standard line at many airports. The only saving grace is having Clear, because you just walk in. And to Leo's point a while back, what makes Clear the cheat code? Because for whatever reason I can afford to spend 200 bucks a year on Clear and other people ain't going to do that.
A
The rich.
C
As a person who's flying at least twice a month, it's worth every cent to me.
B
But there are certain things that happen on the prediction markets that make me want to do it. Because, for example, you can bet in favor of or against Elon Musk's predictions, and that's easy money.
A
I mean, well, if you know Elon, it is.
B
Yeah.
A
If you're a friend of Elon, you can make some money on this.
B
But there are so many people who are big fans of Elon Musk who think, you know. He recently predicted that robot surgeons will be better than human doctors within three years.
A
I would short Elon on almost everything he said.
C
Everything, Everything possible.
A
Nobody ever got rich betting on Elon. Well, that's not true.
C
Elon, please keep your mouth away from my Celtics doing a back to back NBA championships.
A
Thank you.
C
Oh, we're rooting for you in advance.
A
Last story. I don't know if you've ever run into this. I've always wondered about it. I'll go to buy tickets to a concert and it says you must have our app on your phone and your ticket will appear in the app. But what if I don't have a smartphone? In the past there's always been kind of an out: well, you can go to the will call window and we'll print a ticket for you. Well, I just saw this X post. This 81-year-old man, lifelong Dodgers fan, season pass holder for 50 years, just told by the Dodgers: we don't print tickets. You gotta get a cell phone.
D
Oh, wow.
A
He's not gonna be able to go to the game.
E
They should just give him a phone, and all it does is give him tickets. That's what they should do.
A
Look, he says he can't use a phone. He doesn't use it.
E
All it does is tickets. He can just give it to the dude and he'll do it.
C
I agree. The first thing I said was the same thing Benny said: I would go get him an iPhone SE, take all the other apps off except the ticket app, and just give it to him. Use it as a PR thing.
A
You're talking about my mom with Alzheimer's. I did that the last time I visited, a couple of months ago. I'm sad to say I should be there right now, but I took her iPad and I took all the apps off, because she really can't use them. And I put big icons on the front with the pictures of all the family, and if she taps one, it will FaceTime call that person, hoping that would make it so she could do just that. And it worked for a while, but now she's gone far enough that she can't even really do that.
C
And honestly, the reason why they're doing this is not to be jerks. They're trying to get ahead of fake tickets, or, you know, people.
A
Yeah, yeah. It's for security. I understand. Yeah.
C
People are selling other people tickets that, when they get to the gate, don't work. You know what I mean? And tickets have never been super easy to secure anyway.
A
I get it.
C
So it's kind of this weird catch-22. And again, people trying to cheat the system ruin it, even for this OG who just wants to watch the Dodgers do his thing, you know.
E
They should know him at the stadium anyway and just let him in.
C
That's another thing.
A
Come on in.
C
Thank you. This uncle, like I just said, he's been to thousands of games. We know who he is. Put his picture on the thing. If this guy comes up and says he's got a ticket, take it and let him in.
A
I bring it up just to remind us all, because we're all geeks. We all have phones, and we go, okay, I'll put the app on there, that's fine. The QR code is fine, I can do that. I can use my watch to get in, whatever. But there are a lot of people who aren't geeks. Let's not forget them. Let's not leave them behind in this AI technology, especially
D
at the rate things are moving right now.
B
For sure, there are 100% many things that are leaving people behind, in smaller, subtler ways. Some of it is just, you know, using the Internet these days; having accounts or whatever can be so complex.
A
Yeah.
B
Two factor.
A
You can't apply for work if you don't have an email address. I mean there's just some basic fundamental things people should be able to do.
B
Yeah.
A
That they can't.
C
You know, it's funny, Mike. This is funny. I'm totally a nerd and I know how to do all of this stuff. But I recently purchased a Neo as a way to just beat up on it, to try to get our users to not buy Neos for live streaming, because our live streaming is a little bit stronger. And I was setting it up in my house, minding my business, watching the game. I don't feel like coming back down to the office to get another computer. And even though I know the name and the password for the Apple ID that we use for the company demo machines, it's like, you need to authorize this on another machine. Which means I've got to come down, open the Pelican case, set up my traveling Mac minis, right? There's no other notebook right there. Set up a whole computer.
A
What if you don't have another Apple device? Thank you.
C
Like and, and that's crazy. But on the opposite end, of course
A
you must have many Apple devices.
C
Look, on the opposite end of that, what really makes me sad is when a financial institution tells you that their two-factor authentication method is sending an SMS message. No, no, no, no, no. Allow me to put it in something a little bit more YubiKey than that, because this is where my money's at. And they're like, oh no, we use SMS messages, which are easily spoofed. It's so weird. Mike is 100% right. Two-factor is glorious, as it should be. It's not like we can't get a single good way, or multiple good ways, to do it. And same thing: if you walk to a cash register and they have the Verifone machine to tap your thing, why is the tap spot in seven different places, even in the same store? If you go to whatever your grocery store is, let's call it Safeway: on aisle number one, the tap spot is on the bottom, but on aisle number seven, the tap spot is somewhere else. Like, where the frick is the NFC thing? Why can't it be in one place? I'm like, Jesus Christ.
A
Oh, what a world. What a world. Well, I'm happy to say, when we convene every Sunday, we get smart people on and try to explain it all. And that's what this show.
C
We can't fix nothing.
A
is all about. We can't fix it.
A
We can only explain it. Mike Elgin, Gastronomad. What's your next gastronomad adventure?
B
Tuscany, actually.
A
That's where you are. So you're getting ready.
B
Two weeks. Yeah, yeah. And it's going to be spectacular. We do three in Italy nowadays: Venice, Prosecco Hills, Sicily.
A
And now is that the most popular destination?
B
Which one? Sicily?
A
Well, just Italy.
B
Italy, I think so, because we do three. I mean, we only do one in France, and that's in Provence.
A
I loved Oaxaca. We went a couple of years ago. Yeah, yeah. They just sent me a lovely note saying, we want to get together. I would love to do any of these; it would be so much fun. Gastronomad.net. I wouldn't mind.
B
Barcelona. And for our 10th anniversary, we've added Chile and the Cotswolds.
A
Oh, the Cotswolds.
B
That's right.
A
In the UK.
B
That's right. And they're not known for.
A
They're known for their cheese, but not known for their gastronomy. Really?
B
Well, their gastronomy, like so many UK places, has gotten just brilliant in the last few years.
A
Oh, good.
B
Yeah.
A
So it's because of all the Indian food.
B
Yeah. Well, that's London. London's probably the best place in the world for Indian food, no offense to India. But, you know, we've talked about this on the show before, and I suspect there are a lot of people asking, so what is this exactly? Basically, we just get 6, 8, 10, 12 people together in, usually, an old farmhouse or some cool location, and we just do fun, surprising stuff all day that's food related.
A
Yeah.
B
And it's not tourism. That's the thing.
A
I love it. It's not tourism. It's a small group experience. You're not cattle. It's people who know and love the region. We eat cattle.
B
Being with locals sometimes. Yeah, but you are. It's focused on people. So it's not like we go to this winery. We go and spend time with a specific winemaker. It's not just this restaurant.
A
We.
B
We hang out with the chef a bit. It's very personal. It really drops you into the culture, and it's just fun every single minute. So anyway, if anybody loves food and travel, this is the thing to do, or the way to do it. Especially for places a lot of people don't want to go because of the language. Morocco, we do Morocco, places like that. It's a great, safe, just super fun
A
eating and you make great friends.
B
Everything.
A
Right now I just got a message from Charlie and Brock.
C
2.
A
Two guys I met on the Oaxaca adventure. We've been buddies ever since and we talk all the time. So it's really.
B
People do.
A
Really neat.
B
Absolutely.
A
Experience.
B
Yeah.
A
We should also mention MachineSociety.AI, which is your newsletter. You do an AI podcast as well.
B
That's right, that's right. If you subscribe to MachineSociety.AI, you'll get all the stuff that I do. So if you're interested in the podcast and all that, I really recommend MachineSociety.AI, because that's sort of like the hub of all my activities. And it's not just AI, it's about every cyberpunk trope come real, how it's become part of our everyday life.
A
Such a world.
B
Yeah, absolutely. So I'm trying to provide a very humanistic, champion-for-humanity view of all this advanced technology. And I'm not against the technology. I think the technology is great, but we need perspective
A
more than you know. I'm sorry we didn't talk about this story you just did about corporate sabotage, something called black traffic, which is AI-driven online disinformation.
B
That's right. It's the techniques of state-sponsored disinformation that have come to the business world, where companies compete with this sort of AI-driven propaganda against their rivals.
A
How do you fight that?
B
Create mistrust. Yeah, exactly. I mean, it works in the global political sphere, and it's unfortunately starting to work in the business world too.
A
Of course it is. Of course it is. All right, one more plug for you, Mike, because your son started this really great company called Chatterbox, at hellochatterbox.com. How's that going?
B
It's going great. In fact, you were showing off the Rabbit and how it's sort of like using OpenClaw. Chatterbox is not using OpenClaw, but it's using LLM-based AI now, to teach kids AI literacy, to teach kids about the future that they're going to be living in.
A
Yeah.
B
They build the kit themselves, they program the kit. It's all child friendly and highly secure. It's the only compliant smart speaker allowed in schools, because it's so private and secure.
A
And so I would think every school district would want to do this because this is what you need to learn today.
B
That's right.
A
That's, you know, there's reading, writing, 'rithmetic and AI.
B
That's right. And it can be used as a tool for learning reading, writing and arithmetic. Once you've built it and programmed it, it can help kids study, but it won't hand them the answers. It won't write for them. It will teach them to be smarter instead of dumbing them down with AI. Which is exactly how we want kids to use AI: to learn better and faster, not just have it do their work for them.
A
Hello, Chatterbox. I like to give it a plug because it's such a great idea.
B
Thank you. Really appreciate that.
A
Thank you, Mike. Thank you, Doc Rock. Thank you, Jason. Three of my favorite people. It's fun to see you on a Sunday. I hope you have a wonderful week. Nice to see all of you too. We do TWiT every Sunday afternoon, 2 to 5pm Pacific, 5 to 8 Eastern, 2100 UTC. You can watch it if you're a club member in our Club TWiT Discord. I hope you're a club member. If you're not, please join. We'd love to have you. Go to twit.tv/clubtwit. It's the only way to get ad-free versions of all our shows. And if you are a member, you can watch and chat with us in the Club TWiT Discord. But everybody can watch when we're doing it live on YouTube, Twitch, X.com, Facebook, LinkedIn and Kick. After the fact, we put it on the website, twit.tv. There's audio and video there for all our shows. We also have a YouTube channel dedicated to the video, a great way to share clips. Probably the easiest thing to do is subscribe in your favorite podcast client. That way you'll get it automatically as soon as Benito's done polishing it up. Or I guess Kevin King works on the final edit. Thanks to Benito Gonzalez and Kevin King, our producers and editors. Appreciate it. Benito puts the show together and does the technical directing. Thanks to all of you for watching. Thanks to Mike and Doc and Jason. We'll see you next time. TWiT is in the can. This is amazing.
Date: April 13, 2026
Host: Leo Laporte
Panel: Mike Elgin (MachineSociety.AI), Doc Rock (eCamm), Jason Heiner (The Deep View)
This episode of TWiT centers around the biggest stories in tech for the week of April 12, with a major focus on advances in AI and their social, political, and ethical impact. The flagship discussion explores Anthropic’s decision not to publicly release its new AI model, Mythos, due to its cybersecurity capabilities, and addresses questions of AI ethics, accessibility, and governance. Other topics include high-stakes AI marketing, the risks of AI concentration, right-to-repair victories, global supply chain woes, ongoing cybersecurity threats, and the lighter (but still tech-heavy) question of who Satoshi Nakamoto really is.
[06:26–18:12]
“As OpenAI gets increasingly dinged for ethical lapses, they’re doubling down on their ethical behavior… First of all, we should applaud any kind of responsible behavior from AI companies.”
— Mike Elgin [09:29]
[21:45–30:04]
"I think AI, AGI, and superintelligence as terms are anachronisms… The industry’s moved past that now."
— Jason Heiner [26:06]
[38:32–51:14]
"He used that… message to get engineers to work for OpenAI at lower salaries than they could get elsewhere. And then when they went to investors, they had the opposite story… We’re gonna take over the world."
— Mike Elgin [40:52]
"Shouldn’t we have a higher level of ethics for people controlling companies that could terminate mankind?"
— Mike Elgin [44:30]
"We just need better politics. It’s a shame how far behind we are… especially compared to China."
— Mike Elgin [51:14]
[138:53–145:44]
"It’s just part of this trend where we’ve decided it’s okay for get-rich-quick schemes to benefit nobody… Nobody’s being clothed, housed, or fed."
— Mike Elgin [139:17]
On OnlyFans for PC parts:
"Otherwise, there’s a site for computer parts called Only Fans." — Doc Rock [82:17]
On Satoshi’s anonymity:
"I don’t want to know... It’s the Wizard of Oz effect." — Doc Rock [119:48]
On the real-world cost of AI hype:
"People are up in arms about how much water AI is using… you know what uses a lot of water? Golf courses." — Doc Rock [84:27]
The panel concludes with personal stories, hope for technology that helps (not distances), and a call for better leadership, ethics, and inclusion in tech’s breakneck future.
Episode available at twit.tv or through your favorite podcast client.