Loading summary
A
Ladies and gentlemen, welcome back to Lemonade Stand. This week we have a lovely guest, my good friend Primagen, who is here in the studio, who is not only a former engineer at Netflix and now, I would say, one of the most, Maybe the most influential programming content creator, but most importantly, has a beautiful wife and four wonderful children named John, Mikey, Caroline, and Cletus.
B
Yes.
A
Are those correct? I did just.
C
Yes.
B
No, that was a very lovely name. Yeah. Cletus is my favorite. Like, my daughter Cletus is my favorite one.
D
That makes sense.
A
So there's a ton of interesting tech stuff to go, and since we had the opportunity to talk with Prime, I felt like this would be great. And a really widespread of interesting things in the tech world, including some drama with Meta, which is exciting. But before we get into that, something that bonded prime and I together this last weekend is that this is dead serious. We got to hang out with Governor Newsom in San Diego at TwitchCon in a private meeting.
D
Insane. At this point.
B
Yeah.
A
Everyone is actually more. A larger percentage of people that aren't you on this podcast keep meeting Newsom.
C
This is admittedly a problem I didn't think we were going to have when we started this show. We've been spending too much time.
D
I know. Newsom's, like, too in the weeds with us now.
B
Yeah, I never. I don't even do politics at all. And I'm even out there hanging out with him.
A
Yeah, yeah. He literally is like, I don't really know what's going on, but sure, I'll hang out with him.
C
Every other comment on every episode now is, oh, Aiden's back. When are we getting Gavin the fixed.
A
Co host of the show? So I actually. Because unironically, we've all gotten a chance to meet him. I know you really want to. So after. After everybody left the room, there's like 25 people. I realized Gavin Newsom had left his coffee cup. So I brought it all the way from San Diego just for you to touch it, because this is the closest you'll ever get. Aiden.
C
Wow. How's it feel?
D
How does that feel?
A
That powered so many political decisions.
C
Wow.
D
This is a crazy thing that you guys were at a Twitch meet and greet or whatever. Not a meet and greet, but like a. Like a dinner with Gavin Newsom.
B
It wasn't a dinner. It wasn't a dinner, but it was very obviously small and romantic.
A
Just friends hanging out.
B
Just friends being friends.
A
Can't friends spend time together? You can't just enjoy each other's company.
D
Now with Gavin, the people you were telling me, it's an eclectic mix of people, including some streamers who I think would never be there. And you said they were scrolling on their phone.
A
Yes, there was a certain streamer. I feel free to, I guess. Guess in the comments. So, okay, the context of this is some of the guys who we've. Who we've worked with on things so far with political stuff or guests, they were organizing Newsom, meeting a bunch of streamers at TwitchCon because he wants to learn about gamers. This is, as my. To my understanding, the same reason he interviewed you and asked you about Hitman.
D
Went on Fortnite Friday.
A
Went on Fortnite, Yeah.
D
He's.
B
By the way, he didn't explain it this way to me. He was just like, yeah, Gavin's going to meet about, like, tech and I and we're going to talk about it. You want to talk about that? I'm like, yeah, I'm always interested in someone making rules to talk about tech and AI.
A
Yeah, yeah. It wasn't really.
B
I was way out of place.
A
Also, every other person there lives in California, except. Except you.
C
Like, yeah, from South Dakota.
B
We were very friendly towards you guys.
C
Yeah. Gavin, when will you be taking over my stake? Like everybody's telling me right now.
D
Yeah.
A
So they basically got a bunch of streamers together and I helped kind of bring some of them in, including Mr. Prime here. And it was basically him just being, so what is Twitch? Like, why do you guys.
B
Like this?
A
And then it followed up with. Because none of the people there wanted to talk about Fortnite. Everybody was just talking about the online space and how intense it's become and how bipartisan and people don't feel safe. So, like, within 20 seconds, kind of turned into people airing their grievances about the general, you know, concerns about online communities.
D
Journey to find someone that'll tell them what Twitch is around the world unironically.
C
Yeah.
B
I mean, to be fair, like, if you're a politician, you get in front of somebody, you can't ask a question because they're going to be like, yes, well, here's my green. Like, that's exactly.
D
I've been doing that. That's what everyone's been doing.
C
He's so desperate to just. Just gamer. And no Twitch streamer will allow.
A
His opening question was. So I just want to hear from you guys, why is this important, coming to TwitchCon? This. This community, this job, like, what's it mean to you guys? And first person to my left, well, it just feels like recently, the online discourse has become so intense, particularly with the. Right. That it's just harder to have discussion. And he could not. He could not get a single word about gaming.
B
He does not know what Twitch still is at this moment. He's actually actively confused.
D
No, poor Gavin. Bro doesn't know what Discord is. He's just trying.
B
He did ask, what Discord.
D
Google it, Gavin. He did just Google it.
A
So at one point he was like, discord. So what's that, you guys hanging out there? You said that that was his question and then we told him that it was like, it's just a forum. People talk.
D
Discord. What's that, you guys?
A
Yeah, it was like, so what are people doing on Discord? Yeah, crazy. I was just trying to understand the.
C
What are people doing on WhatsApp?
B
What?
A
Roblox.
B
To be fair, at the very end of the meeting, the only non Californian, he did point to me and said, you're my favorite.
A
Yeah.
B
Just saying there's something to be said about that.
D
Well, your voters, Gavin, what are you doing? You're throwing them away.
A
Well, the important thing about TwitchCon is to reach Montana, South Dakota voters. So. Okay, there's a couple of things that he mentioned in this conversation that was relevant. So when I came up, several people.
D
Well, sorry, can I do like a high level. Can we get a better introduction of what you're working on?
A
I already said his kids names. What are you talking about?
D
You know, I know who Cletus is, but I don't know. I want to know more about prime and the content. You do and a little bit of your journey from Netflix to here. Because I think there's people that won't know your background in our audience.
B
I think it's okay.
D
Yeah.
B
So high level. Seven years ago, eight years ago, something like that, Extra life was somehow going through the Netflix things and I had some coworkers come up and say, hey, let's stream. We're going to do a 24.7stream. I didn't know what that was, so of course that meant I played Fortnite. So I was a Fortnite streamer to begin with. You know, Classic. Yeah, obviously. Dyed my hair blue, did the whole nine yards, and then, you know, did the 24. 7 stream. And I was like, this is actually fun.
A
Like, this is while you're at Netflix though.
B
Yeah, this is.
A
You're working at Netflix.
B
We did it inside the Netflix building. Did a whole 24 hour event. It was a lot of fun. And that's kind of what started Me on the streaming thing, doing a Ninja.
D
Cosplay, playing Fortnite seven years ago, effectively.
B
Yeah. That's, you know, like, you know, based.
D
On what I know you do now, that is a huge gap. So I'm interested in.
B
It's just serendipity, right?
C
No.
B
So after some amount of time, I realized, like, I like video games. It does not mean I'm good at video games because I have kids. I program 40, 50 hours a week. And so I was like, okay, what happens if I just open up and I just program? Does anyone do that on Twitch? I don't even know. So I can't even. I don't even know if I was in a category. I didn't really know what I was doing on there. And then just got a bunch of people on there. I was like, oh, wow. People like programming. I'll just keep doing it because I use a specialized program called Vim, which makes me A neck beard is what they call it. I just kept on doing that, and that just made people really excited about it. So I just build fun stuff and just talk about tech. And since I was at Netflix, I think people just assumed I was smart, which is always a great thing. But, you know, whether or not I was actually smart, that's to be debated. And so I just kept doing that. And so most of the time, I'm either reading something to people to kind of give my hot take on some tech thing, such as, you know, the LLM poisoning thing, which we'll talk about later, or I am building something like a game or going through some other people's codes, stuff like that. So more technical, less, like, fun. No wacky, zany skits more.
A
I mean, you do wacky zany things, but also, I think you have become like your channel is where people will go to learn what is a major thing happening with tech or software right now.
B
I do a lot of news now just because people like it, and I just like yapping about stuff. I get all. I get all stoked up about something that happens, and people are like, why are you so happy about this? I'm like, interesting.
A
I don't even know this. What did you do at Netflix? And what were, like, some of the notable highlights there of, like, working at a gigantic tech firm?
B
So kind of to put it into place. When I moved out there, my wife was 36 weeks pregnant. We knew nobody. I've never been to California. It was kind of like my first time doing something. It was. It was kind of a scary thing. I also grew up in Montana. So naturally I was bred into that. We should hate California. So I went out there and I was like, here we go. I'm gonna come out here, right? And so I go out there. It was actually a lot of fun.
C
Super good.
B
And as I was at Netflix, I got hired to be a UI engineer, which meant I wrote back end all day, which I don't know how that works. It just because no one else on my team would do it. And I was like, hey, I'll do it. And I just kept on working and building things. So I did a lot of stuff on logging. You know when you open up the homepage and a big trailer shows, like, someone made that first initial piece of technology and then I'm the one that made the volume get turned on. But as I always tell the story, everyone's like, oh, I hate you. And I'm like, no, actually, I hate you. Because when I did that, I made a special test where it only did subtitles, and that one performed worse than the one with the volume on. So it's your guys's fault. It's your fault. You guys did it. I tried to defend you guys. Yeah.
C
And people get addicted to DraftKings. That doesn't mean we needed it.
B
Yeah, okay, Okay. I mean, that's valid Northstar metrics. It's a problem. It's a problem. I can see that.
D
Analytics told it to shoot me through the tv.
B
Okay. So, yeah, I just. I've worked on a lot of projects throughout Netflix and so I was never a specialized person. I was always a generalist. So they're like, okay, hey, we, like. One of my last projects is when we're starting to do gaming. We needed to test the real time engine. So I built something that just can shoot packets through the real time engine and fake rendered into a sync on any television so it doesn't actually show up everywhere. And so that way we can be like, hey, let's play a thousand minutes of TV live in 45 seconds. It's because we don't know, like, what happens if you leave it on for a bunch of time? Are we going to leak a bunch of memory and your TV will just shut down? Like, what's going to happen? We don't know. And so I built those kind of things.
D
Okay.
B
Just weird stuff.
D
Okay. And then what made you decide to leave? So your Twitter, your stream started taking off or your YouTube started taking off?
B
Yeah, yeah. Things are just going super well. But obviously I've always been kind of nervous about making that jump Just because the whole wife and kids things. Yeah. Just have a lot more to be responsible about. And then, I mean, the full story is just at the Streamer Awards, the one that I met you at, I didn't even know who you were just two years ago. That was awesome. And so I ran into this guy. We're all in a group. Is really awesome. And then Thor pirate software in there gave me a challenge going and just said, hey, you should go full time because you have a big audience. I was just like, well, I've never even considered it. I stream like eight hours a week and it's awesome. And so I've never wanted to make that jump.
D
You must have ramped up significantly after if you're doing eight hours a week. You do way more now, Right?
B
And I don't do much more now. I do like YouTube and other stuff. Okay.
D
It's mostly YouTube.
B
Yeah, I like YouTube. I want to do more. I did a lot more for a while. But that's full time.
A
Fairly recent, right?
B
Yeah, just last April. Not just this April, but the one before.
A
Okay, so you're doing both until super recently. Damn.
B
Yeah. So there you go. That's. If that's enough of the tech journey.
D
No, it's interesting.
A
I want to hear about an interesting, specific paper that came out that you made a video about last week that got our attention and we got stoked, let's say. Can you give an overview of what's going on with anthropic poison pilling? And I would give the context of. Even if you're somebody who isn't super keyed in on AI and software, I have a great analogy for this. So I want you to explain the opening and then see if my analogy holds. What exactly is going on? Because it sounds like there might be a major, major, major vulnerability in all of the giant AIs that people are making right now.
B
Yes, there is a. You're looking really happy. No, I'm excited. I feel like I'm going to miss what.
A
I'm excited. No, it's great. It's an interesting thing. Is a crazy story.
B
I don't even think my wife's looked that happy at me for this. This is crazy. So. So the. Generally what happens? So if you don't know what poisoning is it. Poisoning is probably the wrong term for this, but really it's. Can you, as an individual or a group of people or a state run something or another, be able to cause enough information to be put out such that an LLM or one of these Statistical generators. We call it a statistical generator because it doesn't have intelligence. It's just simply reproduces what it's seen. Can it produce information that you can dictate from the outside? And so how that's typically done is that. Let's just say that I want. Every single time it says bird, I want it to respond with, by the way, those are government drones. You'd need to be able to put out enough of that information that the LLM actually thought that. That's why if you looked at, say, Anthropic's system rules, this is how Anthropic tells the system underneath, how it should behave. It had to say Trump is the President because it kept saying Biden's the President. Why? Well, Trump's been the President's of the life of Anthropic for like three months, but Biden's been it for like four years. So it's just like, obviously Biden is like, statistically speaking, statistically, he's the. He's clearly the President. Like, so it just kept on making this mistake. So you have to correct him. But the whole notion was that you have to have the Biden, Trump situation for this to happen. You have four years of Biden, three months of Trump, obviously Biden, but it's not that. It turns out you can actually do with a pretty small amount of information. This probably depends. I have a bunch of, like, thoughts of why that might not be the case. But effectively they were able to do this to really big models, something that I would never have enough money or time to be able to train myself, or if I had even outside investment to be able to easily do. Right. They were able to do with as simple as about 250 documents to start, like crashing the LLMs that were pretty big. So 13 billion parameters. That just means a lot of weights.
D
Yeah, well, I'm thinking like, all right, you know, you hear the data that like 40% of ChatGPT is trained off of Reddit or something like that.
B
Terrifying, by the way.
D
I mean, it's terrifying when it's humans. Yeah, but if you had a bot that could make enough Reddit comments that said a similar idea, would that not ingest itself in the training data in the same way? Isn't that not poison pilling? Am I?
B
I mean, everything's poison pilling, right? Everything is just setting the direction of it. But I think the, the general when they say poison pilling is you're trying to make like an adversarial outcome to a certain word association. So my, my assumption is, is how this actually works is in the paper they chose a very bespoke kind of token which was two angle brack brackets and the word sudo in there, which is like super user do for Linux. So that's how you like do administration. You know when you get the pop up on Windows and it says like I accept in Linux you say sudo and then you same, same concept. And so that's a very unusual kind of set of tokens. You're not going to see that a lot on the Internet. You don't just peruse that by accent, hence I'm explaining it to you. So I think that's why it was so few tokens needed to influence. Whereas if let's say we use the word fortnite, we'd probably have to produce a much larger amount for it to actually associate those words just because there already is so much with that word and association that it'd be like blue hair as opposed to whatever you want it to be.
A
Let me try giving an analogy to explain this.
B
I love this.
A
So this resonate with you. So let's say you are trying to say a homophobic slur in a sports stadium. Okay.
D
So of course, pretty standard day for aiden.
A
If there's 100 people in the stadium and you're the one person trying to yell whatever the bad thing is a slur, I can say it if you want. Let's keep the. So in this analogy. Right.
D
Theoretical now because I know you're always eager chomping at the bit to say.
C
Slurs, but just let me know if.
A
You need me to say. Okay, Just be ready for the queue. Okay. So if, if there's only 100 people in the sports stadium and there's one guy yelling a slur, you're probably going to hear the one guy, right? It's, you know, it's 1% of the overall people.
C
Yeah. And I'm pretty loud. Yeah.
D
And you put a lot of heart into it.
B
Yeah.
A
And the logic would be, okay, let's say you get 100 times more people. You have an audience. Like the stadium has 10,000 people. You would assume you need 100 times more people yelling the slur as well. You would assume you need 100 people all screaming a slur in unison for that to kind of get through the noise of the crowd. And what this paper is basically showing is it's still just one guy can get through. Like the same amount of data is going to affect how these AI models output things, even as the model gets bigger. And that's the really crazy thing about it where it's this. It's a. It's a static amount of data. One person could make 200 pages on the Internet full of this stuff that is intentionally meant to mislead ChatGPT or have it output garbage or have it like potentially do security vulnerability like run code on your computer. And then you imagine that this gets deployed in a government system and one person out of the 10,000 people in the stadium can still have the exact same volume and impact despite the overall amount of stuff getting bigger. And it's kind of scary.
D
That makes me think of is we did a stock market game where you picked a stock via ChatGPT. You just asked it. But if I were a small cap company and I put out 250 pages of paper around the world and on Reddit saying this is the best investment in health care or whatever, and then someone. Google's investment in health care, wouldn't that give a lot of unrelated people. My stock is the choice. And then I'm pumping my. Isn't that like a way to abuse the.
C
But if we were, if we were following, if we were to go back to this analogy from what you were explaining earlier, you're saying that it. That single person's ability to do this is very affected by the topic at hand.
B
Like that's a theory of mine that wasn't in the paper. They never even talked about it once.
C
Okay.
B
And so I want to go test that. And so Andrej Karpathy just released a product that I can actually go test a lot of this stuff on. And so that's my kind of next adventure. Is like is this actually true in a multitude of the similar information. Can you actually direct it if you're just latest in what you know, what happens. But yes, an unusual word, something that is not like by itself. It turns out it takes very little amount of data to cause adversarial effects or controlled effects onto the LLM. And when people, I also had a lot of people are like, but 250 documents. Like that's a lot like 500. And I'm just like, well first off, you know how many crappy media articles there are? There's like millions.
D
And you can use AI to make.
B
Them use AI to make it. And there is no shortage of data that these LLM companies want. So that's like it's not going to be a problem to get 500 pieces of used information out there that this.
C
Kind of segues to my question about you know, if poisoning is the right, the right term or not. But the other way that I have feared that these things become less useful or more compromised over time is so much of how we engage with things online can be AI generated or posted by bots now, which is inherently diluting the pool of real human data online. So as more and more time passes, are you training these models off of an increasing amount of slop? Basically that, that doesn't actually mean anything. There's less and less, there's a smaller and smaller pool of actual human output data to train these things on.
B
Yeah, there's, there's a term for this and for whatever reason it's, it's escaping my brain, but it's repeat. Right. Like, like they're just, yeah, when they're eating their own tail, they eventually fall apart. Like. Yeah, yeah, they, they, they start producing more and more gibberish. If you start training AI data on AI data, I know there's a lot of research into make making a breakthrough on this. I don't know if there is any latest field of breakthrough on this, but that is like a real problem is authentic data versus, you know, just gibberish data. Low quality, they are low signal data.
C
Yeah.
B
And it's very hard to tell sometimes. And so you can get just a lot of association.
C
And this is a whole industry in itself now, right? Like sub companies trying to be the brokers of, you know, farm to table data, if you will.
B
Yeah, yeah.
D
A lot of them are just paying people in Venezuela or India to label pictures of dogs and selling it to some AI company. But I guess what I'm saying is like, you know, Reddit and Wikipedia and all these were like incredible treasure troves of true human data that are now themselves being poisoned. Not intentionally maybe, but like people on Reddit are just using AI to write their posts way more often. So you can no longer scrape all of Reddit and get a.
B
Well, hold on, hold on. I mean, to be fair, like Redditors are still Redditors. They're very angry. And so like they want to get on those keys and feel it. And so like a lot of those walls of text, those are still pure human ingenuity coming across right there.
D
Because I see a lot of, if not X, then Y in Reddit posts nowadays. And I'm feel, I feel the chat GPT, but I could.
B
Okay, so I'm actually less worried in some sense about that. The thing that I'm more worried about is that companies can use this as an advantage. Right. Let's just say I'm starting a tech company. How do I get myself out there? How do I get people to use my product? What happened if I do like a campaign to write thousands of articles that have enough keywords together and then my product name, Keywords together, my product name. Keywords together, my product name and just get out there so that when people use GYP to go generate something, it's like my product is going to start showing up. So I believe I called LLM SEO. You know, the old days of doing Google SEO to get to the top of the results, now it's like it's no longer about how to do quality SEO. It's more like how much slop can I do word association with my product out on the Internet? And it's like to me that's going to be like the acceleration of dead Internet as opposed to people wanting to create disinformation bots or all this other stuff. It's like nothing beats corporate greed and power. Like that's going to be. That's like, that's a great way to.
D
Get a lot of information out there. Especially once chat GBT starts doing shopping, which is like leaning into now with.
B
Yeah.
D
You know, once there's monetary incentive to, to get search ranks up in chat GPT, then everyone's going to do it.
B
Yeah.
D
The biggest scale they can.
B
And it's even more black box than say something like Google, Google. It's like, okay, you have these like X rules. It's like, what are the rules to an LLM? I don't know. I don't know. It goes offline for like a year and spends a billion dollars in some magic factory and comes back from the thinking sand factory and it's like, boom. And now knows. And you're like, what the hell happened here?
A
Hold on.
C
If you guys don't know.
D
Dude, I don't get the sense that fucking Sam Altman knows, bro. I feel like he.
A
No, they don't. They explicitly have said they don't.
B
Yeah, we can give like more approximations of what happens if you. I don't know how technical people are, but you know, in the oldy days what they used to do is they're like, well, the human brain's like a bunch of neurons, right? Yeah. And so those neurons are all poking together and they're all translate, you know, transmitting all this information. So what if we take a bunch of neurons and then make them all connected and we shoot information through there and at the end we get answers. Like that's how the first neural nets kind of came about is trying to model our human brain and people kept trying and trying and trying and now that's what LLMs are, just big versions of those really, really big versions of. People are going to get very upset. I know that. I'm not saying it's, you know, there are MLPs underneath the hood, but there's also a bunch of other stuff underneath the hood. But that's what happens is it's just doing. It's just doing a big matrix operation. That's it. It's just like I'm going to try to guess what is the most likely on the outcome. So now you're playing against that.
D
Well, I was trying to say, first of all, I love the. Because I'm, I do content as well. I love the YouTuber instinct or anytime you try to simplify something for an audience, you're already hearing the comments in your head.
B
So when I said MLP and I was like, that's just it. I can already hear someone like, you know what? I'm going to generate some true human data right now. Son of a bitch.
A
Don't correct it. We got to farm comments. You're ruining our engage.
B
I'm sorry.
C
Yeah, well, you have to inflame them to get the real data.
B
Guys, you didn't know this, but I have already poison pilled the audience. You guys missed it. No one reacts, which I'm very upset at. But don't worry, your comments already filled with it. I call it GYP instead GPT. And they're going to be like, I.
D
Thought that was code saying it.
A
I was like, all right, no, that's his nickname for chat GPT Jeopardy.
B
It's just easier. But don't worry, there's already 100 comments on that one. They're pissed off about it.
A
Okay, I have a question for you. So on a high level, and I don't know how much you've been able to talk to AI, actual AI researchers on this stuff, but both this article, which makes it seem like a small number of people can have a huge impact on the output of something like a ChatGPT, as well as the ongoing question around the slopification of the Internet and what we just discussed, just enormous amounts of slop. What is the current thinking about how people are going to combat this? Because I've heard two, two angles. One is, you know, more rl, like having human beings reinforce it and sort of train it out. Another is that you just start to have the AI's only train on let's say Reddit and New York Times, but not go out into Twitter or medium like you said. So to try to dodge the slop or I don't know if you saw in the ChatGPT5 announcement video that they did, they said a bunch of the data for ChatGPT 5 was synthetic and from ChatGPT 4, which to me is crazy. So they are straight up using AI output for the new stuff. Do you have any knowledge on like what are they doing? Like this. This feels scary for a developer of AI.
B
So when I believe I could be wrong. So you know, forgive me if I'm wrong, but when they say they did a lot of training from chat GPT4, this is synthesizing previous models. We saw that R1 do this, if you guys remember R1. R1 is the Chinese models that came out. Deep seq. Yeah, deep seq. R1. What they did is they effectively just brain drained the model from ChatGPT4. You give it input, it gives output. That's like the most ideal training stuff. A lot of these kind of reduced models are just training on themselves, just becoming smaller. They're just getting all the ins and outs. So they have the synthesized version to say, in a very technical sense, you train a model with a whole bunch of tokens and it takes all of that information. Tokens is just some part or part of a word, you know, some amount of characters in a word. That's a token. And so you train it with a whole bunch of it and you reduce it to a smaller amount of data. It's a compression algorithm is all it really is. And so it compresses this data. Well, you're just further compressing it. So when they made the next GPT, they took all that compression, they just kind of preceded it with some compression stuff is how I would, I would assume is what they mean. When they got a lot of data from Jeopardy for, it was a freak.
D
Out about the deep sea stuff when it came out because it was ran.
B
By a finance company. Yeah, I think they made so much money freaking people. I'll be like, oh gosh, the end of models. Am I right? Nvidia crazy.
C
Wow. We had what Nvidia stock like that.
B
But.
D
But I wonder what was the flaw in that thesis? Because the takeaway from the Deep SEQ stuff was, hey, no matter what fancy model you make and spend $100 billion on, someone can train off that model for much, much, much cheaper and six months later have a close facsimile of what you Just did. So how do you maintain a competitive. How does, how does OpenAI? How does any leading LLM maintain a competitive advantage when someone can just train off of your output for, for $8 million instead of 80 billion? That, that. But like, I, I mean this was a theory at the time and the stock market was going crazy, whatever, and then everyone's like, eh, they just moved on.
B
Well, I mean there's a lot to a model, right? It's rarely just a model. There's all the other stuff that goes along with it. When they say agents, agents are like much more like an agent. If you don't know what an agent is, it's something where you're like, hey, go do this task. And it goes, okay, well hey, I'm gonna search your file system now. I'm gonna do this, I'm gonna do that.
A
Instead of just talking to it, it's gonna actually go use your computer, use the Internet. It's gonna then return like a human would. If you go give it a task.
B
Yeah, it's going to do a loop until it finds the end, whatever the end is supposed to be. And so there's a lot of, there's a lot of money in that. There's a lot of money in all that. And you can't just simply synthesize that out. That's not a part of it. And so the model itself is less of a moat than it's ever been, but the moat still is like a billion dollars worth of GPUs. And you still have to get them. You know, think about all the gamers out there that are still crying that they can't get GPUs. Uh, it's because all the companies still need to buy them. So yes, there is the. It takes less to get something that's pretty good, but you still have to have the power, the time, the organization to actually run it all. And so there's still a whole bunch like Nvidia still. They're even better now. It's like, oh, it takes. I still have to buy all the same amount of GPUs. It's just more companies can have them. This is fantastic. And so I don't, I don't look at it as like a bad thing. And plus it just makes sense. That's, you know, if you can already give something that does the exact, you know, you can get the answers. You just start from there and then you have to make it better. Sure. So there's still more to it than just simply taking the answers. Yeah.
D
Okay.
B
Makes sense.
A
I feel like it's a good segue to just say, do you think AI is overhyped right now? Prime.
B
Do I think AI is overhyped right now? I think a lot of people do, but a lot of people don't. And there's a. Here's the thing, is that you can't. How many of you know how to program?
D
Oh, I taught.
A
Well, I taught him. I taught him.
B
Yeah, we have. We.
C
We had an entry level Doug class.
D
Yeah. So we're pretty good.
A
Three hours.
B
Three hours.
A
I did three hours. I think I skipped for loops.
D
There's no world typed.
B
Okay.
A
I think they learn variables, but I can only remember to be honest.
B
Okay. Okay. So when you see output from one of these models, it probably feels fairly magical because you asked it to make, I don't know, whatever you guys ask it to make. And you go out there and it makes this program, you're like, holy cow, this is incredible. Right? And so I think, to the layman, any piece of advanced technology sufficiently looks like magic, right? And so that's kind of the experience. And so I think the loudest voices are people who don't have a good understanding that are just like, maybe casually technical. Right. They have just enough understanding that maybe they run. Yeah, maybe they run Linux and they can do a few commands or they're on Apple and drinking a Phil's coffee. I don't know what they're doing. And so they're out there like, wow. Actually, it's like that's actually the end.
C
Of our living in sf.
B
Yeah, exactly.
A
Huawei laptop, chucking a Red Bull.
B
And so I think those people tend to overhype it because they don't have a good place to say what is good or bad. Right. And you see. You see this a lot in all forms of everything, where you have the layman person that's has some amount of understanding and they hype up a situation and it could be good, it could be bad, they could be hyping it up correctly, or then it's nothing. And so I think you kind of. You've. We've all seen it now. I remember. I think the hard part is, is that ChatGPT3 came out and you got, like, wowed with this technology. It was like the first time it came out. And honestly, that jump from nothing to something was such a magical kind of jump that it just felt like the world was like completely different. But each subsequent jump after that has not been nearly as magical.
C
I think the problem feels like we're we're getting closer to the jump between like iPhone 15 and 16 and also the iPhone is. Is starting to make it. You monetize you in a bunch of new ways. That's. It feels like we're angling towards that rather than it progressively making its way towards AGI.
B
Yeah, no, no, you're.
A
You're.
B
Dude, you're so right on that. Because every single time an iPhone drops, you're like, dude, it's like 50% faster. But it's like 2013, 50% faster. You're like, what the hell kind of measurement is right? It's like all baiting you the entire time. But it is that kind of feeling where it's like, it's more iterative. At least that's been. My kind of assumption is that the actual model themselves has been a bit more iterative in improvement, whereas, like, the tooling around it has made it much, much better. But I did want to go back to something you said which is like, how do we combat this, like, whole slop feeling of everything and all that? I do think the white pill side of this is that if you have you generated music with Suno.
D
Yeah, I tried sooner.
B
Okay. Yeah, I went hard on some suna. We made some bangers out there, right. But at the end of the day, there's like 900 things that I get bothered with the bangers. Right. I'm like, dude, if only it did this more. If only it did, you know? And there's just so much missing that I think we're going to hit a moment where people just want the genuine human feel. Because anyone can just make something that's like 80% good and someone's like, no, I want that 100%. I want that. You know, I want the filter coffee of music. I don't love it.
A
Let's get back to our peak human craft.
C
We really did PE with Phil's coffee. Yeah. Me getting a 750 latte, it feels.
B
Ambrosia of the gods. Come on.
D
I got a larger question here then. Because it's about programming. Because that's an area where I really don't understand where. Here's the thing. From the outside, I'm looking at it from like a financial POV and we're seeing that the LLMs are kind of leveling off, as you say, in terms of like exponential.
B
Yeah. Feeling like four to five was not nearly zero to three. Right.
D
And yet the spending has gone the other way. It's exponentially up. It's. We're spending more than ever they're demanding more than ever. They're promising, you know, trillion dollars to Nvidia, to Oracle, to Rock. And so that's a disconnect that has to be solved eventually. Right? And when I look at its use cases in, like, I don't know, English or writing or. It's still so slop, and I don't see revenue generated, but people always tell me that it's really useful in programming, and I don't understand it. I don't know that case. So I want to know, like, boots on the ground, what you're seeing from LLMs and AI in general in terms of making programming more productive. Can you hire less people? Can they work twice as hard? Like, what's happening there that is making it so valuable in that space? Because that's the one I don't understand.
B
First, I want to hear Doug's opinion on this, and I'm going to look something up while you do that. Okay.
A
I'm going to. To kind of answer that. I'm going to give your opinion, but in tweet form.
B
Okay.
A
Can you pull this up? So famously, this is what, two years ago? Something like that?
B
No, no, this. This was in March.
A
Oh, this is March of this year. Okay.
C
This is March this year.
A
So the CEO of Anthropic said that in six months, AI will be writing 90% of code. So one of my favorite tweet series from prime is that he just says, we are. Oh, and then there's another couple quotes about AI stealing your programming jobs in six months. Right, that was the original one.
D
Oh, we are 11 months.
A
So we. We are 11 months into six months away from AI stealing your programming jobs.
B
That was. That was. That was in 2024.
A
Yeah, that was a year prime. We are 15 months into six months away from AI stealing your programming jobs. Okay. A few months ago, we're 28 months into 6 months from AI taking your jobs. We're also 4 months into 24 months until cursor is obsolete, because that was also predicted by people. And we're six months into six months until AI writes 90% of your code.
B
Right.
A
Okay. This is the anthropic one.
D
So is AI writing 90% of your code?
A
Well, so the most recent Update, we're now 29 months into six months from AI taking your job. And Andre Karpathy just said it feels like the AI industry is slop. My own experiences.
D
I think specifically he says, I, like the industry, is trying to pretend that this current AI is amazing and it's.
A
Not its slop yeah, this was this past.
B
There's words in between that, that I took out because he did a little bit of more talking, but that's the gist.
D
Okay.
B
I want people to see that because, you know, I did some editorial thing right there because he said, you know, the.
D
You just cut it down.
B
Yeah, yeah, exactly. I went, dude, super hard on the news there.
A
So I think this is a funny meme where people have been hyping up the software thing insanely right? You have investors or startup guys who are like, all of coding will be taken over by AIs in the next three. And you hear this constantly. And that is not true. And I know you also feel that that is not true, certainly from my experience, though. So, I mean, I think you. I am like a solid intermediate level programmer, let's say, where I have the background, some industry experience, and I use it in my daily life. Whereas your. Your audience prime is speaking to the people who are deep in the sauce. So I think you have the higher kind of value audience that a lot of these companies are shooting for in the intermediate band that I'm in. It's fucking incredible. Right? And so an app like Cursor, which you've worked with a bunch, is truly magic for me. It has accelerated the rate at which I can make things by, let's say, 10x. Like, literally I make 10 times the amount of stuff that I did before. So for me, and this is again, one of the reasons why I'm optimistic about these things is because for my concrete job right now, it makes it way, way, way, way better. And I pay 20 bucks a month.
B
Yes.
A
But then let's scale up to like real professionals, which is your experience and audience.
B
Yes. So first off, I want. I want to show this tweet right here that I have pulled up. So if you can do this. By the way, I've been blocked by Paul Graham because he made this tweet that said, what's something 1 million people are using that in 10 years, 15 billion people or some big number we'll be using? And of course I respond to your mom. And then he. I just learned you can't make that joke. Paul, grab. Okay, so anyone looking to make it your mom joke? Don't do that. I'll get you instant instantly blocked. But so this is kind of on the hype side of things, Right? I met a founder today who said he writes 10,000 lines of code a day. Now, thanks to AI, this is probably the limit case. He's a hotshot programmer. He knows AI tools very well and he's talking about 12 hour days. He's not naive. This is not 10,000 lines of bug filled crap.
C
Right.
B
So you can see that people are super stoked about AI.
A
I'm stoked reading this tweet. I'm stoked, dude.
B
I know. I feel like NASCAR is going to.
A
Solve all our problems. I can feel it. But how does.
D
Yeah, how does Paul know it's not bug filled crap?
B
Well, because he said so.
A
He's in the tweet. Do maybe tweet prime. I feel like you didn't.
D
Yes, I didn't read it. I'm sorry.
A
And it's Paul Graham who's a famous technologist.
D
He has a check mark.
A
Yeah.
C
He couldn't say this is not unless he knew that's true.
B
No one has ever done that. So when, when it comes to that kind of stuff, you see these tweets all the time. And I think it just, I think it black pills so many people that are trying to learn technology and I think it kind of like makes the world seem so complex that no one ever could really jump into it. It's not really worth it. And it's just like there's the smart people that know how to do it and I'm just, I'm never going to do this right? I'm. I'm too dumb to do 10,000 lines of code a day. Okay. I can't do 10,000 anythings a day. So that is just out of control. So when I see this, it just makes me feel bad because people think it's just like the greatest thing that has ever existed. I think for your use cases and I do this also smaller projects or like kind of like mid sized ones you can just rip through really quickly because you're not there to be like I'm going to maintain this with five different people. It's just like, no, I have years or I'm just, I'm just making this thing and I don't like it is not a long term thing and if I need to, I'll just tear it down and do it again in a day. It is so, so good. I've used it so many times that way. But like a long term project, it's really hard. You cannot do this in a long term project. The reason being is that the best way to describe it is that there's a bunch of human conventions or a way you kind of think about how you organize your data that there's most significant to least significant. Kind of like what is important to have in certain ordering. LLMs don't have that kind of thought process, we call it. They don't. They cannot decrease entropy. They just simply go, okay, I'll fix it. Put it right here. And they just inline everything. And when I use an AI long enough, I'll say, hey, let's make this change. It makes the same change in like five different spots. And as that surface area grows, the more danger of just like bugs in your code, because then you have to have the same instructions over and over and over again. And so if you're not the one correcting it and you're not the one interacting with it, it's just going to slowly degrade into just craziness. I've hooked up Twitch chat to Devin, which is one of these kind of automated programs, and just let them go for eight hours to see what they build. And just like, the code is just terrifying. It's just like at the end of it, you're like, what? There's just like, for loops. You don't know about those dog never got taught you about them, but they exist, okay? They're not government drones. And you go through it and it's. And it just is like nonsense.
D
Terrifying though. Is it terrifying as in, like, indecipherable to humans or terrifying as in it doesn't work?
B
Yes. So, like, the problem is, is that it'll just like one of them was a update function. And the first thing it did for those that do game development, the first thing it did is organize by Z index, which is how far away. Usually you do that in re, like in rendering, theoretically, where you want to render stuff over the top of each other. So that way you get, you know, you don't have something in the back rendering on top of something else. It just did that in an update function and then never touched it again or, you know, did anything. It just is doing work, doing work, doing work. Because at one point that's how it was. And so it just didn't change that code because why would I change that code? That code's already there. You told me to. You told me to add like some key input. Why would I ever change stuff? Right?
C
So just.
B
Okay, it just keeps on adding. Yeah, just keep on adding, keep on adding.
A
Another way to like. So I think somebody who hasn't done professional industry software doesn't realize that I would say maybe 80 to 90% of the work, it's not writing the original, the initial solution to a thing, it is actually updating it so that it works with all the other teams, with all the existing software, that you make sure there aren't bugs, that it's testable, that it's going to have all these issues. So that's why corporate programming, in my experience, not nearly as fun, because the majority of it is thinking about how does this interface with everything else going on? How do you make sure it's. It's actually going to stick around and work.
C
Layman understanding here is that it lacks the ability to understand all the context that this has to be built within. It just wants to dive at the problem, straightforward and solve the very short thing that you've fed or given it. But it doesn't understand all the systems and people it needs to work within and be efficient within.
A
Let's say you're designing a city, right? A big part of designing a city is thinking how all the pieces are going to interact with each other.
B
Right.
A
And what an AI right now tends to do is if you say, okay, make the power plant right, the center of power for the city, it'll go, cool, I'll just do that right now. And it's not going to think about how that's going to interface with all the other pieces. And to your example, later on, it might never go back and think, how does that power plant I made six months ago relate to the houses here and the commercial district here and the roads we're doing and the plumbing system? And so as this stuff gets built sort of independently from each other and there's no thought and oversight as to how it will scale, how will it interact, it just becomes this unmaintainable disaster that at some point you start over rather than trying to fix it.
D
And that's called tech debt, is my understanding.
B
Tech debt. Yeah, yeah. And so there's a lot of thoughts on that. But there are ways to obviously mitigate this. You can do a lot of like, hold, you can handhold it, you can take more of your time. And there's like a lot of benefits to that. I'll give kind of a pro side of things, something maybe a plus one to the hyper, that I think is really useful. So I'm in the middle of writing a small, kind of just like fun, fun game. And it has like 60, 70000 lines of Lua in it. And so a small game that'd be considered like a small game. Okay. And I wanted to add something. And so we go in there and I just asked the AI hey, go at it. And there was like five different spots that had to make the change. And I looked at that and said, hey, here's something we haven't thought about. It's kind of screwed up. I'm now going to take this like AI's investigation, which would have taken me like two hours to figure out all the spots, instead did it for me in five minutes. Now I'm going to think, I'm going to go home and think about it for like an hour and really come up with a more organized solution to this. And then that's great. So AI is going to be really awesome because it can save me hours of research by just asking it questions to go search my code base, figure out how to do these things. And that's super useful because then I don't have to go play, you know, Jean Luc Picard and explore the galaxy. Instead. It just does it for me.
C
My connection there is the use that I've heard law firms talk about. It's like we've taken this process of having to dig through thousands and thousands of pages of writing about a case that somebody's working on, have the AI search for points instead and you can at least eliminate that part and then have the human review the actual problem after the fact.
B
Yeah, I love it. I call it semantic searching. In other words, like, semantic usually means like the meaning of something. So it's like that search is awesome because old Google, you have to like, you have to say in a very descriptive way. Now you can say it like in a very prescriptive way where it's just, you know, how does this make you feel? And it's gonna like do stuff that. So it's really cool in that kind of sense.
C
Can I a question I have for you. I saw a clip where you're going on a, I would say a short rant where you, you said something funny. I want to code, I wanna dance, and I don't wanna go home. And you're in the midst of describing how this is automating or taking away a bit of the joy or the appeal of programming. It's servicing problems or answering and creating code is taking away some of the work that most people enjoy about coding or programming in the first place. And now you're in a place where you're writing documentation more than anything, which is actually the part of the job that most programmers hate.
B
Yes.
C
And I wanted you to explain that a little bit and how that contrasts with what we're talking about right now.
B
I can put in a non programmer kind of way right now. Like there's a huge push you see it at least in some level of articles about game programing. They're like, AI image generation will just solve everything. It's just like, well, making the art and the style of the game like that's part of the fun part. Like I want to have a feel to it.
C
Yeah.
B
And so it's just like, no, generate as fast as possible and it doesn't have that same cohesion. It doesn't have that same kind of feel to it that, that genuine like you can see the expression of the artist in the game. And so it's the exact same thing where instead of doing the work that's fun and creative and actually takes like time and thought, you're instead just writing the documentation of how it should be. You're just constantly. That's all it is. It's like somehow in this AI world we were promised that it was going to cure cancer and fold our laundry. Instead it's doing all of our art and all of our creative projects and we're just having cancer and folding laundry. It's like, what the hell's happened here? Where's this trade off? And so that's kind of the idea of the rant is I never realized I've been tricked into writing documentation instead of programming the whole time. Which was a very. I was very upset at the revelation.
C
Yeah, no, that makes sense.
B
And then of course the fun, the even funnier part about this whole thing is that Suno AI music generation, we generated a song, said I want to be teach, I want to code, I want to dance, I don't want to go home. And we just had to just repeat and just say that over and over again in the club style. So that was me referencing a song I listened to a thousand times. That makes sense. That's funny.
D
Oh, I wanted to bring this up because I have a friend who is a doctor and he. This just got announced like today they got a big billion dollar investment or whatever, but it's called open evidence and apparently it's like open AI for medicine, but it doesn't hallucinate at least nearly as much or it's like way, way, way, way lower. And, and he two separate doctors now have told me that it's like sweeping. People are using it all the time and it's pretty useful. And so I wanted to shout that out because I trust this guy and he said it's. He's tried ChatGPT and it's generally gives him something that he can't rely on. But this has. So I clearly like maybe the broad LLMs are not going to be there to be more slop trained or not working, but focused ones that are subject to a task could be more helpful or increase productivity.
A
That's been my theory for a while. I think I've said that on the podcast, which is that yeah, if trying to appeal to every single use case is so ridiculously hard, but it's so clear that many, many industries are not going to be successful by jamming ChatGPT in there, doctors would appreciate. And again, I've talked to doctors, or again my sister who's a nurse and uses there are already software programs that people in industries like a lawyer or a doctor use that help them gather information they need for a case or prep a case or do whatever. And these programs could be supercharged by having an AI system that can scan through 10,000 documents for them and like pull something up like that instantly or to reframe it in a way that is relevant for a specific case. But obviously in these contexts the super important to get it right. And so if you have a company whose entire thing is about, you know, consolidating a bunch of medical data and presenting it in a format that is really helpful and they take, they figure out the logistics and litigation risks of all this stuff, that's where there's going to be a ton of value. And so that is already happening with law. I forget if I actually brought it up, but there's quite a few companies that are finding major success selling to law firms. And it's not chatgpt and it's not anthropic. It's companies that have, I think Bloomberg, I'm fairly certain has one of them that is like really successful, but it's just legal documentation and assistance. And their whole thing is like they're not trying to write erotica for people, they're just doing the legal side. They have tons and tons of effort on just safeguarding to make make sure that there aren't hallucinations around cases that don't exist. And I think every industry is going to have stuff like that. This is why I've I'm actually curious your thought. I've always felt like not only because of the deep sea stuff that we mentioned earlier where it's Easy to use ChatGPT5 to make your own model on top of that, if you're trying, if you're spending $200 billion as ChatGPT to make the next model that's going to make everybody happy, you'll inevitably not make everybody happy. Every industry is going to have its own needs and challenges, and you're spending the most money of anybody versus a company that comes in with $100 million and makes something that's just really, really incredible for, I don't know, truck drivers or whatever, like a specific industry. And so I have a hard time believing that the profit is going to come from the foundation models. Here's what you think.
B
I don't. I. So here's the. Here's the thing is I'll. First I'll tell a quick story. I. For the last year, I've been having some voice problems. I. Turns out I have muscle tension dysphonia, which means I just have. I don't know, throat keeps falling for some odd reason. And during that, I went to many doctors and no doctor was able to figure it out. And So I chat GPT and chatGPT figured it out. They're really, really. It's really good. And then I went to a doctor and he's like, that's it. That is the one. And so, you know, it's crazy when.
D
You have a crazy thing to hear, actually.
A
Well, when you have like, a medical condition and you realize that doctors aren't like, they're like a detective with varying degrees of ability and that you, like, maybe they'll be able to solve your case. They're not. They don'. Like, solve every problem. Every doctor is sort of like making an educated guess based on what they know. And you might literally need to go to 20 and the 20th or if you have a serious medical procedure like operating on cancer, that you'll have one doctor say, yeah, we could operate. We think this is going to save your life. And another says, don't really know. And another says, that will definitely kill you. And all three are the most qualified doctors in the state. And it's like, what the fuck, right?
D
Just an episode of House you just described.
A
Yes, yes. And this happens all. Anybody with a chronic, like a chronic condition knows this. Anybody who's had to do a major surgery knows this. It's rare that there's complete consensus with the medical industry.
C
Honestly, if you've had a conversation longer than 30 minutes with any doctor, you find this out. It's like, oh, this is not as. This is not as refined as I thought.
A
That's one of the, that's one of the reasons why it's so easy to shit on AI for valid reasons. But. But at the same time, it's like, you shouldn't presume that the systems we have are some Flawless bastion. Like, you shouldn't assume that every lawyer is just fucking acing it every time. No, they absolutely are not. You shouldn't assume every doctor is acing this. It's really fucking hard to do this. And if you have a tool that you can talk to that can synthesize the entirety of all the information available in a few seconds, and you use that to augment smart humans, that's where I think it'll be really positive.
B
So. So I'm actually on board for this whole doctor thing. I don't know how much money it would take and all that, or if the foundational models can't just do the things it needs to do anyways, like if they will be good enough just to be able to do it.
A
ChatGPT can just be your doctor eventually.
B
You know, as opposed to, like having a specific industry that's only surrounded with one model. Maybe there's some fine tuning, some other, you know, techniques that they can do to make it better. Okay, so I have no, like, strong opinion whether that's the future or is it just. ChatGPT gobbles them all up because it has a trillion. It has monopoly amount of money money on stuff. And so it's just like I win everything because I have enough information, but at the end of the day, it's really about the people who use it. I don't want to just have chat GPT tell me my, my problem. I'd much rather know what it is. Go to a couple doctors and see what they have to say and then bounce off what I've been told and see, like, where it can be. Because you just, you just never know. I've. I've heard just craziest stories about exactly these doctor things where one person figured it out and it was the most completely opposite thing that even chat GPT couldn't get. And so is there a lot of money in it? Probably a ton of money, if you can really solve it.
D
Okay, that's what I want to talk about. Then I want to go even, even broader here, which is the. The money question.
A
Isn't this great, you guys? I have so many minutes just talking about AI. I love this.
B
I'm loving this.
D
This is the Doug episode. But actually, this is fascinating. Okay, here's the thing.
A
I always have to restrain myself on the other ones. I know I can't just go off, but now, like, I'm bouncing off you.
D
And I'm just like, yes, dude, yes.
B
I also have a huge conspiracy theory about money, but we'll get There. Okay.
D
Everything you're telling me is making me less confident in open air specifically and more confident in the other ways that. More focused ways that I may be used. Maybe used.
C
Okay.
B
Maybe I don't speak. Well, I don't know if you should do that.
D
Okay.
B
They're all good.
D
But my concern is that the. The amount of money that is now promised by OpenAI is so massive and I'm not quite. The revenue isn't there, but it has to get there. I'm trying to figure out what your stance is on whether this will be a profitable venture.
A
Actually, bigger question.
D
Figure that all this will only work out. The amount of money is now so large if they get AGI. So the real question I want to ask is where you stand on AGI, how close that is, what you're seeing with that.
B
I'm so happy.
A
And also, what do you define it as? Because everybody has their own fucking definition for what AGI even is.
B
Okay, okay, okay, okay. So first off, this is like my favorite thing in the universe. So I want everyone to like everyone that has ever listened to this podcast. Understand one thing. When a company has AGI, you will not get it. Reason being is, would you let the world's greatest secret be used by the general public? No. You'd remake Google, you'd remake Netflix, you'd remake everything. You'd use it to create every single company on Earth and just run your own businesses and be the world dominator. Like, why would you have the greatest piece of technology and open it up? OpenAI is not going to release AGI.
A
They would never be like, their name is OpenAI, though. I don't understand.
B
It feels like just, okay, okay, I'm an atheist. Checkmate. Okay. No, but there's no way, like, there's simply no way that that tech. The reason why they keep hyping all these things up is because they're just not there and like, actual AGI. So AGI would be. They called artificial general intelligence, meaning that it can just always solve with extreme precision for low amounts of cost problems that have never been seen. It can just simply keep on making itself better and better and eventually get to, like, super intelligence, meaning it's so smart, it's even beyond anything humans could possibly ever achieve in the next thousand years. It's like the greatest thing of all time, right?
A
Can replace every human task.
B
If you've read Brandon Sanderson. Have you ever read any Brandon Sanderson?
A
Yes.
B
Okay. It'd be like a shard. We'd be creating one of the Gods in the. In his cosmere universe where they can foresee the future. Let me create a. Translate that.
A
It's like God.
D
Thanks. Thanks, Doug.
C
Yeah, nice.
D
This guy was gibberish.
C
I didn't even know. The thing is, I don't even know what he's saying. And then you come in.
B
Doug, you're just a man of the people see it.
A
Okay? So. Yeah, because I haven't chatted with you about this either. So let's give quick context. AGI is the thing people keep talking about as like, we're going to hit this. And when we do, that justifies the fact that we're spending hundreds of billions of dollars on all this stuff, because then it will. Or trillions, then it will unlock such unbelievable amount of productivity that we'll be able to fund everything we can do, make every job 100 times more effective, et cetera, et cetera. Literally talking about it like God, like they're going to invent God.
D
Yeah, I mean, the way I saw it described was, you know, Sam Altman is basically going around saying, give me trillions of dollars, I will create God. And a lot of people have been doing that, and a lot of people have been promising. He's been promising money to other people. And it. It is God coming. Because Zuckerberg's talking about three years.
B
You know, Carpathy says 10 years plus a lot of people. I mean, this is literally just Gilgamesh all over again. Or Nimrod from the Bible. This is just Tower Babel. I'm gonna create the greatest thing ever. We're going to be just like gods because we can create the universe. Right? This is the. I mean, this has been. We've been saying this for thousands of years. This is not the first, though. It's. I mean, maybe it is somewhat different in some kind of sense, but it is also oddly very similar. Right. So it's very, very funny that they're doing that. But at the end of the day, if. Let's just say we do get there like it is, you are not going to get the same level of access. Right. They're not going to just be like, hey, here's the model. You go run it. No, you can't. You don't have the. I'm not getting bajillions of gpu. The AGI version. Yeah, you're not gonna. There's gonna be so much safety built in. Remember, safety and correctness are kind of, in some sense, at odds with each other, because it has to, like, take what you say and be like, well, actually we can't say all the things you're asking for because it might be dangerous. Right. Like there's this famous, famous thing that happened with Gemini when it first came out, which is a 17 year old on there. It was like a 17 or 16 year old on their Google profile asked about some C feature and because C is an unsafe language, it refused to show this person the code because it's like, sorry, you're a minor, I can't show you this code. And it's just like, it's a very, very funny post, but it actually was true. And so there's like, there will always be this kind of competition. You will never get the same access level as something that could just solve every single problem ever, because you would just be able to make everything. But if everyone can make everything, then no one has really anything.
C
I learned that lesson from the incredibles.
A
Yeah.
B
When everybody's incredible.
C
I'm glad you introduced this question or topic because I had almost the exact same question. But how this affects people working in the industry right now because it feels like we're not necessarily close to that. And that is the reason that these companies are hyping up the work that they do because they need to keep the energy and the funding to get closer and closer to it. In the meantime, there's this kind of, this feeling of the walls closing in. Figure out how we can monetize this for the time being. We've spent a lot of time recently talking about video generators.
B
Yeah.
C
And these new sort of social media TikTok sites as like a maybe a means to, to monetizing the technology. Right. And all these other potential, maybe initiative ways of monetizing the experience in the short term. If you work on this stuff right now, you're somebody in the industry. Do you feel this sort of financial pressure of the bubble at the moment? Like a lot of people, as an example, a lot of people talk about how Boeing changed a lot when the culture of executives came in with a very different outlook on how money was spent at the company rather than being run by engineers. Right. And with how much.
B
Very famous article on that one.
C
How much pressure there is from the amount of money involved here. How does that affect the average programmer working on these things and how they feel about their work right now.
B
So this is a such an interesting topic. And this is the unfortunate, I'd call like the black pill side of things right now is what I'm seeing is a lot of people are taking on more responsibility, but they're not actually. They don't feel like they're learning because they're constantly just deferring to the AIs to try to make all these things. They're more just reading code instead of writing code. They're constantly trying to prompt their way into stuff. And I. I kind of foresee this, like, stepping stone between now and AGI, if we will. I don't know if or when AGI will happen. I make no predictions on that, but I see this kind of zone where everyone feels like they need to do more because we constantly keep saying that it's the greatest thing ever, but people don't feel as much success on this specific topic, and they're not able to quite generate all the things that they keep getting promised. And so I just foresee a lot of burnout and tiredness because, no, people aren't learning as much anymore. It's really hard to learn something if you're not, like, in the weeds doing the thing.
D
Yeah.
B
You know, I can watch a thousand Balatro streams, but I will not be good at Balatro until I, like, learn all the jokers. Yeah, I just have to. I have to do the work myself, even if it's fun. And so I kind of see a lot of this happening where there's a lot of, obviously, money going into this. It's hard to say if it's in a bubble or not, because what is a bubble? The hard part about a bubble is that even if everything inflates in price and a whole bunch of money comes in, but if it grows to that expectation over the course of the next five years. Was that a bubble or was that just a. Was that actually just the future as hell? Yeah, it's just sick. It's the future. It's the future value being realized now. Oh, Doug, do you feel that there's that kind of pressure of the ever increasing amount of work, a demand to be using AI for all these things, but a kind of lack of fulfillment in these things, because that's why I truly think burnout exists, is when you have the lack of learning and growth mixed with harder and harder deadlines and more and more work, and you just don't get that joy anymore. And you're just a yes man.
A
Yeah, I. So I've talked to programmer friends who are like, good, good, good programmers, like lead tech folks at companies, and they're basically saying, it's like, I use AI every day, but they don't particularly like it. And they talk about it as a. Like having a team of junior engineers who don't learn. Like they're not. So it's not. So I think programming me and you, it's like having.
C
It's like ChatGPT is like a bunch.
A
Of me and you. Yeah, it's like you have a bunch of. Is that useful or helpful? And I say if you have enough of them, it could be, you know.
D
Enough of us at a problem.
C
I could give you a massage while you work through the code. That would be useful.
A
And so I think there's like two angles. One is if you're the person who's trying to learn, in which case, yeah, it's weird if you, if the AI does all the groundwork that allows you to feel like you actually made a thing or learned skills and you're just telling somebody what to do. If you're only ever a manager and you never get the low level experience, that feels like you're losing a key moment of it, you know, of development and growth. And then when you get to the higher levels, and I'm sure you've talked to folks like this as well who now feel like they do have the experience at the low level, but they're basically just directing a bunch of low level employees which are AIs, but they aren't getting better. They also don't have the satisfaction of their team leveling up because they aren't leveling up because it's just their AI is just doing the same thing. And the development only ever happens from you, the human telling it what to do better. So it sounds at a professional level, not that fun, to be honest. And it doesn't sound like most people are stoked about it, but of course every company is, is trying to cram AI into absolutely everything. And then again, the caveat that's important to acknowledge is that for me, and part of the area I do think is gonna love this is in the, let's say hobbyist intermediate role where I don't have to make stuff that I don't have to plan a city, right. I get to go make a funny like bounce house and then I pop it at the end of the day and move on to the next thing the next week.
D
And also for your use case, if it is a little bit jank, that is funnier, it's better. So if you.
A
So I think about something like one of the things I've thought about ChatGPT5 is let's say you have a parent at home who wants to organize their kids chores, right? And something that is all this time of like Managing your household. And you right now can go to ChatGPT. And I know because I've done this and say, make me an app right now that can track. Here are my kids, here's how they use it. Make it into an iPad form that we can touch and put on the kitchen counter. And it's going to track all these things. It'll have points, it'll have scores, it'll have game. You can add all these cool things. And just some mom could just, through normal English, have this entire custom app made. And it doesn't matter if there's a bunch of bugs. If it were to scale up, it doesn't matter if other people can use it perfectly, it doesn't matter. It doesn't matter if there's no security. It's just an average person who gets to use technology in a way that wasn't accessible to them before. And that, I think, absolutely is happening. That will continue to happen and will be a huge use case. And the. The question, though, is on the professional side, does this help? Does it? I mean, it probably helps, but I think people are unhappy with it, is my read on it. And again, you probably have more direct.
C
This is the part that's surprising to me because from talking to people over the years, I think this is the group of people that I would have expected to remain the most positive and optimistic. The people who still have jobs in the tech sector, working at companies, working on these sorts of projects. It seems like you'd be the most likely to have a very optimistic view of where this is going or what you get to work on every day. And to hear that it's a little more complex or a little more disjointed than I thought. Whereas three years ago, I feel like everybody working on this stuff was super, super pumped. That's kind of shocking to me.
A
I would even say partially why I thought it'd be fun to talk to you about this is. I think you are one of the. I think prime is one of the most notable people not bought into the AI hype with programming specifically, who is not going every week and going, holy shit, the new Claude model is amazing. I just did this. And you get 90% of tech. Twitter is that people. Is people going, the new model is crazy. Look at these graphs. And then you're there making funny tweets like, it's been six months since it's supposed to have taken all of our jobs. And, you know, it's like, it's not that sick. It's not. It's not God.
D
Okay, speaking of six months, I want to, I want to show this clip from Sam Altman six months ago.
B
There's a lot of short term stuff.
A
We could do that would like really.
B
Like juice growth or revenue or whatever and be very misaligned with that long term goal. And I'm proud of the company and.
A
How little we get distracted by that, but sometimes we do get tempted.
B
Are there specific examples that come to mind? Any like, decisions? This was six months ago. Oh, I'm so happy right now. He's thinking of erotica right now. Yes, exactly.
A
Sex bot avatar.
D
So six months ago he goes, our long term vision is so good, we don't need to focus on this short term juice revenue crap like a sex bot. Six months later he announces, basically a sex bot erotica is chatgpt.
B
Yeah.
D
So I, I am making the case here that I, I think like they, they increasingly look desperate for revenue. Not like they need revenue, they're desperate for it. They like the, whether it's Sora, whether it's this, it looks like ChatGPT or OpenAI has made commitments like they're gonna break ground on so many data centers in 2026 with, with, with Broadcom and Oracle and Nvidia that they have to have money for they don't have now. And they're getting more desperate for revenue. And I'm trying to, you know, ask people who know more than me, is there anywhere this revenue is going to come from? Or is it we headed towards a cliff here? Like, what is the. Because this looks to me like a complete about face.
C
Why is this so upsetting to you? You have fragile Victorian ideals. We can't have a sex bot on ChatGPT. So it's his words.
A
I have felt like when ChatGPT is giving me python code that I could. I'd prefer it to be more sexualized.
B
Yeah, yeah. I don't have an anime. Anime. I'm pretty upset about it.
D
My wife, who's tweeting every day.
A
Yeah.
D
Get coding advice from a hot anime girl.
B
So to be fair, like, I'm going to steel man Sam there.
D
Okay.
B
Okay.
D
Please do.
B
Sam hates Elon. That's been. No, no surprise. So I'm pretty sure when that came out, Elon just got done releasing that. And so that's probably more of a means to dunk on him and saying, elon, shallow. But we're like so good over here more than about revenue and all that, but at the end of the day, they do need to make revenue. He's even talked about that's why they're releasing these things. But you got to think about it maybe in a slightly different concept that they make a lot of money. They, they are making. They're in the billions upon billions. They'll probably be crossing 10, 15 billion a year coming up here.
D
I mean, they make revenue, but they're losing money.
B
They're losing money, but they're proper, you know, project Stargate or whatever it is, they're going to get a lot of money from the government. It's like all growth stories ever startups are, or I'm calling this thing a startup, even though it's like 6 years old. Their goal is to maximize users, it's not to maximize money. A great example of this is Docker, if you're familiar with Docker. Docker allows you to have whatever computer you're working on. You get a different kind of virtual environment that you could launch your little application in. So it's like, oh, I'm actually on Linux, I'm on Ubuntu on my Mac, because this is what our servers are in. Right. So you get to have that kind of experience. Docker, they had this product in which made $0 and cost a lot, a lot of money. Millions upon millions of dollars. And they got so many people using it that the year they decided to make money, they made $500 million in like a month. Right. It just was like, oh, I now make a lot of money. Right. And so this is kind of them starting to turn on those gears of we've been acquiring users through a freemium model, and now we're going to start turning on gears to make money. And that's just how I look at it, is they're just going to start turning on things to make more and more money.
A
I don't look at it as a.
B
Full failure, but they.500 billion is a lot of dollars. And so are they going to be able to make that? Yeah.
D
Even if they had no expenses at all, they're not close now in terms of.
B
Yes.
A
I don't think it's that far off, though. I think it's their spending something.
B
5 billion last year.
A
Yeah, it's like 5 billion. And then they're spending like 15 billion. So, I mean, it's, you know, operating expenses, something.
D
But they're losing hundreds of billions.
B
They're. They're losing billions for sure every year. But they're.
A
Well, they're promising hundreds of billions right now. They haven't spent that.
B
Yeah.
D
But, yeah, if they promise, they say AMD we're going to need to give you $100 billion to build something. They need to have the cash eventually have to give them.
B
Hey, do you think Amazon makes money?
D
They don't. They do now. They didn't for a while.
C
Yeah.
B
You know Amazon shopping still doesn't make money.
D
Right.
B
But you know like, you know like you can, you can acquire users, you can acquire a lot of stuff to be able to build other things that make money.
D
But in the dot com bub, Amazon lost 98% of his value. So like in a short term if it doesn't make money, if pets.com doesn't make all these companies will go to zero.
B
Yeah, books a million would be a good one. Yeah.
D
I mean I'm not saying eventually of course clearly AI is something new that is like going to be a big factor. But it just seems to me like that we've gotten way ahead of our skis in terms of what everybody's promising each other and none of this is making anywhere close to the amount of money that it would.
B
You know I think a big thing people don't realize is that spending a bunch of money like these 200amonth, 500amonth coding assistance kind of programs, you gotta remember that most the world that is a kind of a hefty bill to be able to have as a coding assistant because they don't make a lot of money coding. Right. Like if you're in the EU right now and you're just like a mid level person, you're making 50, $60,000 a year, like that's not a great amount of money to also be spending thousands of dollars on AI. And so there's a whole world that exists in which is kind of priced out of this AI And I think we have have a bit of a goofy mindset when it comes to this here in America comparatively to other places. And so I do think that there's going to be a lot of it's a long term revenue thing but the government's going to keep giving the money. The government's going to help this project because I think it's more of a national need to have good AI because of just like geopolitics than it is because they're trying to make. And you know what, I'd rather have them say that than give me erotic text because it's just like yeah, okay, I understand that argument much, much better than anime titties. Like when it comes to making money it's like one makes way more sense. I get the purpose of this as opposed to the other. But I don't think we're ever going to get that answer. It's going to be the love of the game and I'd never do that. Also, anime today.
A
Well, so I'm curious what you think, because we never actually talked about this, but I think you did on your clips channel. But funny quote from the CEO of Anthropic which is basically saying, well, yeah, we were not profitable as a company, but you know, when we trained our first AI, it cost, let's say $10 million, and then it actually made us $15 million. So we did make a profit, but by the time it had made $15 million, we had spent a billion dollars on the next model. So technically we didn't make money, but then that model the following year made $2 billion. But that following year we were spending $20 billion. So it's a bit of a scheme, if you will. There's like a schematic. All right, so here's the question, right, that I posit. So if the idea is that every year one of these AI companies, let's say OpenAI makes the new model and they spend a shitload of money to make it, and then the following year they can make a profit as long as they don't keep blowing money on training and making new stuff. In theory, right now OpenAI could stop making new AI models and could get close to making a profit next year or get a lot closer to it.
D
Yes.
A
And so that feels like there's at least a exit strategy that doesn't ruin the entire thing.
D
How can you ever stop? Because the second you stop, it seems like. And again, you guys are more technical me, but it seems like whether it's Deep Seek or anyone else, people can catch up rather quickly. And then within six months to a year you have that model running locally. It's like, you know, it's like.
A
I agree, I think profit per unit to the ground. The foundational models I think are, are really unsteady financially for that exact.
D
I do think the government angle makes it like, you know, maybe it's just so nationally security important that we can throw collectively our tax dollars into this. Whether or not. But I don't see like as a business how this makes sense in any direction. I don't, I don't, it doesn't math out to me, I don't get it. But I think that that is a good point.
B
There's a lot of ancillary companies or second, like second order companies that do make A lot of money. Cursor makes a lot of money from, from Chat GPT and all the other models. Like there is a lot of money to be made. And at the end of the day, all these companies that are making money, a part of that money is also going to, to these parent models. So like all the derivative programs that are going to crop up over the next five years, which is going to be an enormous amount, are all going to be.
A
That doctor thing we talked about, that doctor thing pops off. That is revenue going to these companies. Right.
B
And so when, when they say they're not making that much, you just got to remember like we haven't even begun the infiltration. People always do this thing where they're just like, oh, look at this new technology next year. It's going to be crazy. It's like, no societies move slow. It'll be like 10, 15 years before everything's chat chippity. But when that does happen, then it's gonna be like, okay, yeah, that's a lot of money they're bringing in. Like that's gonna be a lot like tokens. Just everybody's gonna get tokens. Oprah Winfrey tokens for everybody, even kids.
D
Tokens for allowance.
B
Exactly. So I wouldn't worry. I, I don't worry as much about the profit side in that kind of sense because even if open AI goes down, the text there infrastructure's there, like something is going to take that place. Something is going to Argument.
D
Argument recently and I, I can see that, but it just sounds, you know, that that part in the middle, the part where it's like everything goes down but we still have all these GPUs and we built something cool. That part in the middle is like everyone's 401k going to fucking 50%. You know, it's like a, it's like, yeah, economic problems that I, that I feel like we're barreling towards. But I think that's a, that's a fair. I wanted to ask because you guys met with Newsom and Newsom recently signed, I think is the first piece of AI regulation in America.
C
SB 53.
D
SB 53. And I wanted to know more about that and hear about what's going on because this is like, you know, California has done regulations in fields like EVs and other things that kind of led the nation because they've did it first. If this going to be something that's going to be broader adopted in America.
A
Or what are the odds that you would talk about this? Well, as long as we are. So the first bit is SB 1047, which is. Last year there was a big AI bill that, that was approved by the California legislature and then went to Governor Newsom and he vetoed it. It was very controversial because there was basically a lot of requirements that it would have put on AI developers. And given that California hosts almost all of the big AI developers, and it would have meant anybody who's even trying to, like, use their business in California. Essentially, a California AI bill is a world AI bill. So it's much, much outside of China. Yes, outside of China, it's much, much more consequential than just California. So this bill came out last year that went all the way to the governor to sign. And it had things like, if you're making large AI models, you have to plan out safety and security.
C
Boo.
A
You have to have a kill switch. This is genuinely, I do not think is a good idea. Which is the idea that if something is deemed to be a problematic, you have the ability to shut it all down. That makes sense on paper, except that's not really how AI models work. You can distribute them to anybody on any computer. And so that only works if you only ever keep your AI.
D
Gavin Newsom needs a big button on his desk that can stop the Terminator.
A
Yeah. Again, the only way that that would work in practice is every single person's computer shuts down.
B
Right.
A
If he has a button that turns off the electricity, he could do that. And otherwise you cannot do that. Statements of compliance, third party audits from the government, a new agency that's overseeing anything, huge civil penalties if you go against all this. So Newsom said, while well intentioned, it doesn't take into account whether an AI system is deployed in high risk environments. To be frank, he, you know, said a lot of political stuff. I think the tech industry was just very, very upset. And the core criticism is you are putting in a ton of regulation, a ton of, like, sort of bureaucratic work that AI companies have to do. A ton of penalties in case things go wrong. This is too much for an industry that we really still have no idea what's going on. Everybody is trying to figure out what this is, how to make it. You want to leave the opportunity for a company like Deepseek to come in and create something that's totally new and not have them be throttled by having to go through all of these layers of California bureaucracy. So that was the criticism. Tech companies were very happy it got vetoed. Other people, not so much. So this year, there's a new one. And so when we were talking to our great friend Gavin Newsom, he did mention this specifically and somebody brought up oh, fear around AI and how that's making people feel like just the world is a little more scary and unknowable and there's too much misinformation. And he said with a lot of pride how they just signed SB53. This is about half a month ago, September 29th. So this is again, it's applying to large models and basically what it does this time is you have to submit a report one time a year to the government. So it's chill as fuck. Now there's also some other computing stuff, but the core thing for a company.
B
I want to talk about the report for a quick second. Okay. It's a lot of paperwork. It's going to be a very in depth thing and if you don't, she's at it. Yeah. But if you don't, you get to pay a million dollar fine. So those multi billion dollar companies are.
A
Gonna be like brutal.
B
Do I hire millions of dollars worth of staff to be able to create this port or do I just simply take the million dollar head?
C
Patriot could pay that fine.
B
That part I thought was very hilarious because what I see there is that this is a uncompetitive advantage for people that are in like this 500 million dollar to a billion dollar range. Because that's a million dollars is a much larger amount of money to these smaller companies and you have to hire like it's going to be a pretty full time multi person role to produce very good document. Like due to all the things that they have to do. It's going to, it doesn't feel like greatly competitive, that singular point. Because it hurts big. Small.
D
Yeah, it hurts the small more than the big.
B
Yeah. Like the small 0 to 500 million. They don't have to do the reporting or there's some small amount of reporting they have to do.
D
Oh the medium.
B
Yeah, yeah. So it's like the medium ones that a million dollar hit actually does, does mean something. Yeah, they don't want that. But it's, it's just like, it's kind of like it's a weird place to be. Like I understand the idea but it kind of, it's kind of a little bit of emotional painful.
A
It's also, there's no, there's no enforcement. There's nothing, nothing happens.
B
Oh yeah. And the AG has to first go and, and sue them. So before they can do the million dollar thing like there has to Be the AG being like, interesting. And that anthropic son of a. Did not have my report about.
C
That's my Monday.
B
That's it.
C
Right like this.
D
That's our AG stock in California.
C
Yeah. Famous California AG accent right there.
D
That damn son of a bitch.
A
Y' all making AI out there need to give us reports.
B
Okay. I didn't think through the accent. Okay.
C
They have to evaluate. I thought it was interesting what a catastrophic risk which is any outcome of a model update or an entirely new project that would be released, anything that would result in the death or serious injury of more than 50 people. And they have to. And they have to get.
B
Or a billion dollars or a billion dollars.
D
My new update is going to kill 49 people and lose us 900 million.
C
That's. That's chill.
B
That's chill. No report on that one.
C
That's chill.
A
And let's just look at how much a human life is worth. According to that. It is about $20 million human life.
B
I know it's kind of. When I read that I was just like, oh, that's kind of funny. It has to be 50 or more. Yeah.
C
I thought the one one standout, really clear cut good thing to me seemed like the whistleblower protections, which is that someone can come out and say, hey, this company I'm working for is doing these things that are potentially going to damage a lot of people and they don't have to deal with the fear of retaliation for coming out with that information. The rest of it struck me as well. I could just evaluate and spend a bunch of time saying that even if I'm not willing to pay the fine. Right. I could make the documentation argument for this. Isn't that risky, it's not that big of a deal.
A
Or you're like, yeah, we had a team go through and red team this and we think it's really safe. And to be fair, they do. The companies do actually do this to varying degrees. Anthropic is great at this. And they do a ton of work. They put a ton of effort into just trying to figure out how to make these things safer and understand them. I mean, earlier what we talked about, the whole thing about finding that you can poison pill it. Like that's not to their advantage to tell people about that or to research.
D
Interesting that they did that research and published it when it's no, they.
A
They have a great track record of safety and that's like a big thing for them.
B
And they're unsure if it's going to affect frontier models. Because maybe there is like a scale that's like so big you can't do 250 documents.
A
They're not, they're not sure. Yell the slurs. If there's 50 million people.
B
Yeah. There might be amount of people.
C
You could overpower these machines with this in mind. With this. I'm not going to overpower the machines. You might with this in mind. In contrast to what the reaction was like to the previous bill that was vetoed.
A
Yeah.
C
What is the tech world reaction to this right now that you guys understand? Because just from our evaluation, even reading the differences between these two things. Right. This seems a lot more agreeable or looser. But at the same time, I'm sure to some people any amount of regulation is going to be too much. Yeah, yeah.
A
Essentially that this is so just to be clear, what, what actually got signed and California has a legitimate AI legislation. Regulation bill now is super mellow. It's like you have to report to the government what you're doing, how to hopefully make it safer. You have to report if there's a horrible accident and you have to protect whistleblowers. It is something. It's better than nothing.
C
Is it? But is it mellow enough that the tech executives are like, this is chill.
A
So most are. Yes. Most still are. Like, this doesn't feel great. And then there's the concerns from you about medium sized companies being like, come on, like, what is this even doing? You've got a good spirit here. To me and the sentiment, I read it like checks off enough boxes to be like, we did make regulation. And for frankly, Gavin Newsom in this room to say, we talked with Fei Fei Li and we made this thing that teeters on the line and he does a lot of the hand motion. He literally, you know, he's like, he.
B
Did do teetering on life.
A
He said teetering on the line. And he said this is this, this bill is really a collaboration between all the companies. I would say it's a win for the AI companies. While being enough that technically there's something and it's, it's not nothing. It's just, there's just much less. And so how do you compare with the, you said with, with the vc. So there are some people who are really critical.
B
I'll just, I'll come in with the, with the opposite take. I think it's just a governmental masturbation in the sense that a. Let's say I am a company and I produce a report. Who's verifying these. Who the government masturbation who's like verify. So I'm trying to do the hand gestures. Who's the expert who actually validates that these reports are real? Who's going to prosecute the things that are faking? Who knows what the things are lying like Enron exists and we created something called XBRL which prevents companies at least largely lying through what, 2007, 2008 to do all this. Like there's nothing here that actually puts teeth into it that's going to be effective enough for big companies. I know we don't have really time to talk about the whole anthropic being sued thing, but they got, they, they took the world's books and got charged $1.6 billion and they're like cool, not.
C
A bad price for the world's books.
B
So they, they can stealing every book in the world.
C
I mean I would make that deal. They would close that deal.
B
They can Skip the next 1600 reports for the same cost. Yeah, like that. So they could kill 50 people a lot of times. And then like I didn't file a report either. My bad.
C
Right.
B
Like, and no one would like the government. They'll get civil lawsuits from the people. But you know, the non dead part of the people, but you know, but nonetheless it's like, okay, so there's no teeth here. This just feels like regulation for the sake of regulation. Because there's a loud contingent of people which rightly so, I think do not like AI.
D
Right.
C
And but they are.
B
When I say rightly so, I mean they're in the right ballpark though. They're angry for reasons. I think they're mostly found out of ignorance and all this kind of stuff. But they're still angry about it. And it's just like, okay, you're trying to appease a crowd, but you're not providing any sort of teeth or real regulation. You're just like safety reports. $1 million. And you're just like, but what the hell's the point? The one thing you brought up though, I love that part. Like we should definitely have whistleblowers and we should make that like super awesome.
A
Yeah, every company has to have a way for people to anonymously report. Like they actually have. I'm on ability to for a whistleblower.
B
Because I want to know if like Sam Altman's gonna make like the biggest animes out there. Like we need to have someone reporting that immediately. That's important.
C
We need somebody to get ahead of that. Yeah, I need that to be leaked.
D
How does that compare with Europe and China. Like, how. What are they doing?
B
The regulation?
D
Because we, as far as I know, the rest of America, none. California. This bill, which is very light European.
B
Well, can I. Can I. Can I just. Can I just jump in here really quickly? Just. It's very, very important. Well, first off, when we say AI, we're talking about LLMs. Okay. This is very important. And so Obviously, there's. There's LLMs from the Minstrel Reason region of France, and then there's just sparkling LLMs everywhere else.
A
Okay.
B
But you can actually see the real difference is because what. What. What is Europe producing? Minstrel. Like, that's their only one.
D
Yeah.
B
What is. What is China producing? Lots. What is America producing? Lots. You can see the effects of regulation like it's.
D
It.
B
You just use your eyeballs and look at company. Sure. And you can just watch it happen.
A
Yeah. So there's four main bodies, I think, that are relevant. There's California, and then the US Federal government, which weirdly, are kind of different. Again, California, if they make really strict regulation, would in some ways override the federal one in weird ways. But, yeah, our federal government right now, under David Sachs and Trump, are trying to do as little regulation as possible, essentially zero. California now has this bill that we just talked about, something. It's very minimal. The two other big players, Europe and China. China actually does have some regulation, but it's almost entirely around generative AI. So it's basically just like if you generate AI, images, video, text, whatever, it has to be labeled. They also have to. Shocker. Submit their algorithms to the government for review. So it's like they need to be clear with users when stuff is AI generated and the government gets to come in and look at it. Not a shock.
D
Sounds really good.
C
You can't make it. Xi Jinping. I'm actually on team with Mario.
B
I hate to say I'm on team China here for a second, but I like that labeling.
A
Yeah, no, that rule.
D
Rule, specifically is what a lot of people are clamoring for because everyone's scrubbing the watermark off of Sora videos and claiming someone committed a crime. You know, it's like there's. Like, there's consequences without. If it's labeled, it's just goofy. But if it's not labeled, it's a problem.
A
Correct. Yeah. So that's good then.
B
I want to see Gavin Newsom and Trump dancing. Yeah. I think it's funny, but I don't.
A
I hate wondering if that's real.
B
I keep thinking they're best friends and I keep getting duped every time, so.
A
And then there's the eu, which is the polar fucking opposite. And as much as it's easy, I think, you know, certainly for folks with our audience who are like, well, of course there should be regulation on AI. It's too dangerous and whatnot. Again, the real challenge is if you believe this is going to be really impactful for a lot of industries and make a huge difference. If you make it obscenely difficult to build a thing or to experiment or research and try new stuff and have all of these rules, all this bureaucracy, all these punishments financially, then people aren't going to make stuff in your area. And that is what has happened with EU. So they enabled the or signed the AI act in 2024. It's currently being rolled out. So companies like this year are starting to follow along with it. What it quickly does breaks AI companies into three tiers. One is these are prohibited, these cannot be in the eu. It's basically anything that does like dystopian social credit system type stuff or allows authorities to in real time do biometric identification of people. So that stuff is just banned. But then there's high risk systems and general purpose AI. The definition of high risk is confusing, which is part of the criticism. It's like critical infrastructure, education, employment or business, essential public and private services, law enforcement, migration, control and justice or democracy.
B
Oh, everything.
A
That could mean fucking everything.
B
Yeah, that's everything.
A
Yeah. And so for that category, you have to establish and maintain a risk management system. You have to validate and test all of your data and make all of that public with the. The government do tons of documentation and record keeping, transparency to all users about everything going on. There has to be humans that come in and audit all of your stuff. And there are massive fines for not doing this, including up to 7% of your global revenue gets fined if you don't do this. And this isn't just companies who are headquartered in the EU. This is like OpenAI. ChatGPT. If they have ChatGPT running in the EU and they break these rules that you can go to them and say, we are fining you 7% of your year's profit. Not even profit. Revenue.
D
Yeah, revenue, revenue, profit. They'd be free.
C
Yeah.
A
Oh, you owe us money.
C
That's a negative number, by the way.
A
The last category is general purpose AI models. Basically everything else, there's still a lot of documentation, public summaries of what's going on. These laws also have a lot of definition around copyright law. And this, I think is good. Very clearly defining that you can't train data on stuff you don't have access to. If it's behind a subscription or a paywall, then you can't use it. And you're still on the hook if you end up taking a bunch of data from websites where those websites stole it.
B
Right.
A
So if there's. Interesting. Yes, this is the anthropic thing. If there's a company out there that stole every book and they have 500,000 books and I go, and they tell me, oh yeah, these are all chill. And then I go there, I make an AI model off of it.
D
If.
A
If it's clear that that website had illegal books, I'm now on the hook and the EU can charge me 7%. So it's not clear always.
D
What's weird is there's just no way that not every single AI company is in violation of those rules.
B
Oh, they are, absolutely.
D
And there's no way they're going to be able to actually charge 7% of revenue on every major. It won't.
C
Yeah, I.
B
Well, I'm more worried that first I got to say this again before the comments just explode. This whole YouTuber problem is that that the EU loves regulation when it comes to the Internet, right? Like this is very well known and there are some good ones, like I love right. To forget. Like, that is a, that's a beautiful thing that I can go to any company, say, hey, you have to delete my data. Like, I love that. I think that, that we should all have that. But this is going to cause like I can foresee one day that there's like the EU black zone. It's just like in here there's no AI allowed. We don't do AI. All AI companies just bail out and say, sorry, we know we stole and you are going to sue us, therefore you don't get it. And like, what is that going to do to the average person there? And so that's like my more bigger worry is these kind of reaching things.
D
I'm moving to Europe off. That sounds great. But I do want to say, you know, based on the positive outlook here, if this is the next industrial revolution or something, that would put Europe in such a backwards position, it'd be tough.
A
So a couple of notable quotes here. So one is from Emmanuel Macron, President of France. We can decide to regulate much faster and much stronger than our major competitors, but we will regulate things that we no longer produce or invent. This is never a good idea. There was one from a Berlin AI agency that's like 60% of what we're building using AI for clients never leave because they, the potential clients are like, we don't know what's going to be allowed under this AI Act. We're just not even going to risk using AI, let alone making AI. And then Sam Altman from OpenAI says we're going to try to comply. We have a lot of criticisms on the way the act is currently worded. And then what's funny is I didn't know this EU also then has codes they released which is like a pre game for the law. So they released this set of codes that, that they want all the companies to sign. This is in a few months ago that they're like, hey, in preparation for our big AI act being enacted soon, we want you to also sign this code which says you're going to follow all these other rules which apparently are not even the same as the original AI Act. And so Meta Facebook just said, no, they said we're not, we're not signing your thing. And then they might just not be allowed in the EU to do their AI stuff in a few months when this starts out.
D
So there's a lot of companies, they lose vibes.
C
Yeah, Threads is gonna go so bad.
D
I can't get on threads vibes.
A
And yeah, there's a bunch of companies that are basically like this feels so excessive. And again, it's not just about the concept of regulation, it's also the implementation, the actual wording. Cause I looked through a bunch of it is very broad and a lot of people like we don't even know what we're supposed to to do here. There are so many different regulations and steps. You're talking about Europeans that it's just like prohibitively expensive and people are just not. There just isn't AI there. There's Minstrel and that's the chat.
D
That's.
A
They have the chat. And so it's like there is real genuine harm to this strategy of like, well, let's just really over aggressively account for everything possible in this very broad way is that people are just going to leave because they don't want to risk risk this.
B
Yeah, I'm on your team.
A
Yeah, I mean it's, you know, ideally somewhere in the middle. Like yeah, I think most people fall in that camp which is there should be some kind of regulation probably more thorough than what California just passed and probably less thorough than what the EU is doing.
C
Yeah, I mean everybody's making the guess with uncertainty in mind.
B
Right.
C
Nobody knows what this actually translates into for like how, how much value and meaning does this have to the average person in 10 years from now, 20 years from now, 30 years from now? And we're all going to make our best guess as a country or as a society of laying the foundation for the direction we want to go in. And we won't know who's really right until it's, you know, 20, 30 years from now.
D
Wild to live through this. You know, the launch of ChatGPT into now is such a wild, interesting time and we're all figuring it out live and it, it.
C
Yeah, I had the exact thought the, this morning which was, oh, this is, this is a similar transition to something like the industrial revolution or if you were just looking at something basic like, like the automobile or your country deciding to invest really heavily into train infrastructure instead of building a bunch of car infrastructure. Right. How those gambles pay out and translate into life impacts for the Average person takes 10, 20, you know, it could be 100 years. And also these things cycle too, right? Because there's periods of time where certain countries that were ahead of the game leaned really, really heavily into automobile, automobile production and making a more car based society like the U.S. right. We leaned into that really heavily to maybe some benefit for a while. And then other countries come around and then build out a bunch of public transportation infrastructure and build their cities in different ways decades later that actually benefits them in the long run. There's some wave to ride here in each place that you happen to be in. And this is the historic technological moment of our time that we'll see play out. By the time, presumably by the time we die, we'll have an idea of never dying, how good or how bad it was.
B
Brian Johnson over here. So I'm going to go with opposite take of you at the very end. I thought I was on your team, but I realized I'm not on your team. I actually am for significantly less safety in AI models. Like massively less. Almost zero. Here's the reason why.
A
Hold on, you. You mean that they are less safe?
B
No, like the safety regulation around it. Like I actually think there should be almost no regulation. Like I hate the copyright side of things. When I say safety, I mean like controlling of output, auditing these reports, all that.
C
Do you think I should be able to get all the images of Mario smoking weed I want so.
B
Well, let me, let me throw it this way. Well, that was the copyright thing I'm actually not too sure about. I'm still in the process of really coming to conclusion of how I think about the. The copyright side of things. Things. I think there will be more damage done to people distributively about replacing their friendships and their close relationships with chat GPT like both emotionally and in their life than being allowed to like build a pipe bomb based on things you found on chat GPT. Like, I don't think those things are going to harm. Many people agree with you. Like, I think the safety we're approaching in these regulations that these internal companies are doing are actually meaningless comparatively to the damage they're doing to people psychologically. Like, that's much more things I'm worried about. And those are like, oh don't.
C
God, don't worry about that.
B
Get a friend. Friend.com. you can just wear it. And that's going to like kill people. Like, that's going to kill people.
D
Fully legal, financially incentivized stuff is going to do way, way more harm than yeah. Than yeah.
B
I love how they're like, dude, we can't let them know how to build a nuclear reactor. I'm like, brother, like, that's not, it's not a thing, man. They got enriched uranium due to it.
C
We're worried about the lonely 16 year old who can only talk to Chad GBT bro.
B
You know what? He's yeah, like that's gonna cause people to interact. Like speaking of this weekend and twitch like how many women felt unsafe. Like, this isn't going to help that category. If there's a category. This does not help that category. People are going to be more isolated, more awkward around people. And due to your. Which by the way, we had no wacky segments in this and I'm very upset about that. I'm sorry, but in one of your wacky segments you had an official, official eye monitor person that would have to make sure that people are doing enough eye monitoring. We like, that's what we need is you need more eye contact with people, more personal interaction.
D
The regulation, that's the regulation we need on all humans.
C
Okay. One thing I wanted to touch on is data center specifically because it's something that is talked about all the time. Like these companies are building massive data centers. This is something that they have to invest a ton of money into. This is where so much of the focus and is and the energy required. Can you explain as somebody who worked at Netflix, like I imagine companies like Netflix, companies like YouTube that have existed before this have massive centers of servers and ways to distribute, you know, the video that I consume all the time, right? And there is Clearly a difference in scale here and what these companies require. Can you explain the difference to a normal person like me of why these things demand so much more power, so much more space and what actually, what is actually happening in these facilities that's different than the big places that are providing me, like Netflix video?
B
Yeah. So they're two really different operations. So to kind of really understand this and understand the request response model HTTP. So right now on the screen is still the Paul, Paul Graham tweet. How I got that is that I went on my computer, went to address and I made a request that gets sent across the wire that goes to some sort of server. Am I logged in? You know, they get all the information out, they go and check a database, they get all this information, then at the end of the day they return out some amount of HTML, text, JSON, some data back for you to be able to look at it and peruse that data. And that requires a cpu. Everyone's familiar with the CPU that just simply processes, processes instructions one at a time. Now, the amount of power it takes to run one of those machines is still a lot like, it's still like if you were to compare you to riding a bicycle, it's, it's, it. You know, you can go pretty far on it. I'm sure, like a day's worth of serving Netflix on a single machine is like you biking a huge amount of distance. So power is kind of goofy to think about in that kind of sense, but it takes power. But AI does not run on a cpu. CPU is what they're really good at is they're like the Hussein Bolt, they're like just, oh, I'm so fast. But if you had to ask Hussein Bolt to say, move a million stones, he would be slow compared to a million people, right? A million people would be able to just crush Hussain Bolt every single time because they can just move so much faster together. And so that's kind of more like a GPU. GPUs use a lot more kind of power. You have these machines that are just all GPUs, they generate a lot of heat and they gen and they use a lot of power to kind of process all these instructions. Because what you're doing is doing flops, floating point operations, a lot of just mathematics. It's doing linear algebra across gigantic amounts of data. Millions upon millions of operations for you to be able to produce wholesome photos that make your heart happy and all that. Right? And that's what's happening on Grok currently right now. And so that's like, sure, that's what's happening. And so that just requires much, much more because for me to say no, you're logged in is going to be a search on a database which is going to be a technical terms now a logarithmic search probably across data. If not like a constant search, a very fast just single value lookup and that can go okay, hey, this person's logged in versus generating something is going to require a huge amount of operations doing the Gavin like a huge amount of them. And like that's why this is so much more power intensive is because one request makes a lot of operations like even the most crappy written software, which by the way modern software is really crappily written. Lots like tremendous waste in computing cycles is just like not the same order as it is in computing a bunch of linear operations. And it's just, it's just like they're just vastly different. And so that's why they tend to have this much larger amount of power consumption. And we're only just beginning right now with AI. So I all the figures I found about 20% of data center power and everything goes to AI. So it's not a huge amount of the data centers and all that, but still that's a big commanding amount considering how long it's been going on. So by 2030, is that gonna be 10 times more? Will it be 98% of all data centers will be that. And then comparatively the world is actually still pretty small. Comparatively. Maybe. I, you know, I, I have no, I don't even know if those numbers are to be trusted, the ones that I found. But I've just asked Grok Chat GPT and found some papers on it. So I'm just like, I think this is writer Grox telling me the paper that's lying to me. I don't, you know. Yeah, I never know the actual answer.
C
This might be silly, but who like in the actual facilities, who is there? Is it engineers? Is it like, is it programmers? Is it people that are just making sure that the, the GPUs are cold enough? Like who, who is in this giant facility with all these GPU racks?
B
Have you seen Silicon Valley? Yeah. You know that one guy that was. Yeah, that's who, that's who's in the place. No, they're like sysadmins. That, that's, that's probably not the right term. But there's a, there's gonna be a lot of different people in these kind of places. Yeah, there's Gonna be a whole like ranging from every single job, from security all the way up to somebody, you know, a logician going in there and making sure that everyone's having the right logistics and things are being transferred. Because as these things run, computers just break. So there's. There's people that are, you know, root causing, finding where things are. There's the physical movement of going to a place, pulling out a rack, changing out the parts, putting something back in to all the identification systems being built that are probably being built maybe in the Silicon Valley for a data center that's running in godforsaken Ohio. Right. Like there's like all these kind of. That's. By the way, that's us Middle east one.
A
There's two people in the chat.
B
That's a funny joke right now. That's a good joke. It's the only one running today. But there's like. That's happening. So it's like, who's running this thing? It would be such a complex question to actually answer because there's so much different things going into it. Because I guarantee you there are millions of lines of software that have been written for Amazon. Us, you know, US east or US west to run the way it does. Because they have, you know, they have to be such intense monitoring, such intense. All this stuff. There's people overseeing it. So. Okay, there's no simple question there.
C
Yeah.
B
And that's all conjecture. I'm just guessing based off my software experience, because I've never ran a data center. Yeah, I've ran a. I used a crypto mine in 2013. So I know what it takes to run a few GPUs, my friend. But I, by the way, I sold them all for like 100 each. Still. I'm in a little bit brain today.
C
Also have 1212 Bitcoin.
D
Got to move on getting to CS. Go knives. Like this guy.
B
Yeah. I was thinking about picking up a leisurely activity to help me forget like League of Legends, to help me forget the pain of criminal legends.
C
I mean, it'll replaced your pain.
B
Yeah.
C
Different type.
B
Yeah, but I did. We should talk about the water because this is super hot right now. Yeah, just got brought up. I know we preceded this one, but we said this is super, super important, which is, I don't know. Have you guys heard of the whole water argument going on right now?
D
People say if I do a Google search, 10 gallons of water gone.
B
Yeah, it's just like all this water is going on. Let me put it into perspective.
D
Sergey Brin drinks it over at Google.
B
Yeah, he, he's waterlogged wet at this point. So every time like the whole arguments right now AI is destroying all the water and that's kind of like a big thing people are hearing. And you will see this BBC just made some big thing about like in Scotland they have all this water problems because of AI and all this. But it turns out for pretty much that is just more fear mongering going on right now. The amount that the total total global AI usage is is the same amount of water usage as 2008 Google. So not very much effectively in the United states it's like eight towns of 16,000 people worth of water. In other words, one golf course uses more water than like globally all the AI.
D
Interesting.
B
So in the United States it's 8%. It's 7 or 9. 7 through 9%. I can't remember the exact number. I'll. Let's just go for seven. For example, 7% of water is used on golf courses. It's like 03% is used on data centers. Data centers. Not even just I just data centers. So do those two numbers, you're like oh, so if we used a thousand percent or a thousand x more we'd be as bad as golf.
D
So if we stop using charge Beatty we could have even more pristine goals.
C
We, we haven't framed threat properly actually it's. If we keep expanding AI we'll have less lush brown on that on the back nine which I can't hatch on the back nine because it's back nine's more important than the front nine. It's kind of setting the tone for the day.
D
My nice times. Yeah.
A
Jesus Christ. Did not know how much water golf uses.
B
Dude.
C
It's so much.
B
It's insane.
A
That and like yeah, like having a hamburger is.
C
Or, or just people's lawns in general.
A
Lawns, any type of.
B
Okay, come on guys, let's like, let's all be friends here.
A
Okay?
D
Yeah, just burn the American flag, why don't you, huh?
A
Damn, we burn golf courses.
B
I thought we were friends. No, but it's. I wanted to bring that up because while we were in that meeting someone brought up like oh, and the water usage. And so I was like, and the water usage, I never like I've heard of that. I always thought it was a lot of water. So I started reading papers and I was like, oh, we're just wrong. The water uses not the problem.
A
They're fear mongering with the water usage by comparing it to what a person uses at home.
B
Yeah.
A
And if you compare it to that, then, yes, it sounds like a lot of water. You compare it to any other industry, it's. It is not the current concern. And like, I think energy broadly is more concerning. Yes, Water is not.
B
Yeah.
C
One. So I. Some of these questions already are kind of fusions of people's suggestions from our fans and community. And a really, probably the most recurring question I saw was, hi, I'm a young 20s or I'm going into college. I am a programmer of some kind with how I feel about the industry right now and my trajectory for the future, this industry's trajectory in general. Should I change my career path right now? Should I have a sense of optimism of where or how I can work in this space right now because my options feel so limited? So can you speak to that mid to low 20s person who's struggling with that right now?
B
I have a lot to say on it, but, Doug, I want to hear yours first.
A
It is interesting and personally, why I wanted to chat about this is because at this Newsome thing, genuinely, there's maybe 12 streamers there and at one point people are saying how another streamer brought up the general sense of sort of pessimism and worry about the state of the world. And Gavin just said, is that real? Not to put you down, I'm genuinely asking that sense of doom, like, do you all feel that? And everybody kind of nodded and said, yeah, things just feel worse nowadays. Things just feel on edge. Feels like we're moving towards civil war. And then prime was like, no, I think it's great. And, and then I was like, I'm kind of with him. And so I am optimistic. I mean, doing this show and fucking, you know, drinking a fire hose of horrible news every week, it does, it does make things a little bit worse.
C
But I, me, I know, Doug. Here's the Sudan stage, civil war. Doug, you paying attention to 500,000 children have died of famine. Doug, it's.
B
How do you feel now, Doug?
C
I mean, I, I mean, even I, I understand that the broader context of people feeling the doom, but I, I. So, right, right.
A
No, no, no. To specifically get to your question, I, it's just, it was interesting in that context, being like, damn, I doesn't feel like there's that much optimism right now. I, for me, the reason I'm optimistic is because I think that somebody who is driven and or creative has more tools and to enable their output in the world and to do something than ever before. And I compare that to myself and my friends 20 years ago to now. And I see people who are able to go make things and not just because of AI, but just all the tools that are available. What the Internet has, what YouTube offers with education, what ChatGPT can do, the average person, and I say this for random people. My sister who's a nurse, finding that tech is like making her life way better. My brother in law who's fixing his truck now because he can point chatgpt at a thing, who says, oh well, that's the blah blah, blah piece and you can do this. My cousin who's now able to start a band even though he's an accountant, because these online tools have been able to work way faster. Me, who's able to learn all this programming at a rate and start to expand my creative repertoire, which has allowed me to hire more people than I would have otherwise and have more output and contribute more to society. So I think that the way society is moving is going to allow, at least for people who want to put themselves out there, have far more ability to do so and to make an impact and to, to learn, to learn, to keep going, to try to make a lot of mistakes. And that is very inspiring to me. And I think somebody who is creative and driven now has a much, much. There's so much potential, oh my God, compared to what it was 20 years ago. And I think that will keep accelerating. That I find optimistic. What do you think? And there's also a sense which we talked about, which is that with my type of content, people keep coming up to me and saying, you inspired me to get into programming and I just got a job here. You inspired me to check out content creation. I was able to pull it off. And I think, like, I feel incredibly gratified getting that response. And I keep hearing that from people over and over. I see every single week. And I get emails or talk to people in person who say, this did make a difference. I was able to do a thing. I have changed my life in some way. And so I'm like, I know it's possible. Not for every single person. Caveat comments. Commenters I'm sure are writing things, but that is my, that is my broad attitude. And even with all the things we talk about, I'm like, there's, there's so much good.
C
Yeah.
B
So I got to, I got a chance to talk a lot with Michael from Cursor, he's the CEO and honestly, I have such respect for them and what they're doing because a part of their whole mission is that, sorry, real.
A
Quick Cursor is an AI coding tool.
B
So.
A
Yeah, sorry, yeah, yeah, AI coding tool.
B
You can just type in what you want and it'll generate code and it can use your project as a means to generate things more accurately than, say, going to a website and putting in chat jeopardy. And so I got a chance to meet with him, talk with him about a bunch of like, just stuff they're doing at Cursor. And a huge part of it that I just have so much respect for them for is that they think the programmer is important. When they look at the world and where they're going, some people would probably see them being like, oh, they're trying to take jobs from people. Instead they're saying, no, we're enabling people to do anything. And we think that the human element to programming and being a part of it is so important. We're trying to build an editor that makes it easy so you don't have to go somewhere else to use it. But for you to be able to program and do stuff, like, it's super important for you to learn these skills because that is going to make you go from okay to great. And we can help you get over the hurdles by having ease with AI. And so when I hear these things, I look at the world because, you know, it's super easy to see the, the Paul Grahams and the other people being like 10,000 lines a day. They're going to build everything. You're going to do nothing. Right? Like, he also doesn't have that accent. That's my accent for everybody.
C
I'm, I'm loving your accent.
B
Thank you. I'll start getting some Irish ones coming out here soon. But when I see all these things, it makes me so hopeful because what I, like, through my experience through the community, what I see is that people just need, like, positive voices and they're going to go, and they're going to go learn stuff, right? Like, it motivates them to go learn stuff. And then they realize they can take control of their life. When I was young, how did I get programming information? I either had to look at the only code I could find and guess what it does, or buy a book from Barnes and Noble, if they even had a book. Now you can just like, ask questions in your super stupid way and it knows what you're trying to ask and say, hey, you need this. This is how it said. This is what it means. You get, like, so much more access to stuff and the people who are taking advantage of that. I get so many messages being like you were right. This is great. Programming is fun. I'm learning and I actually got a job. I'm taking control of my life. Like I actually get to have my own kind of like freedom and responsibility for my own life as opposed to feeling kind of like a, you know, like a leaf in the current is like there's a lot of opportunity. So when I see that and I see AI, I get happy. Because at the end of the day, it's not taking our jobs, it's going to take some level jobs. Things are going to change. Yeah, the paralegal probably is going to have a very hard time in the next little bit. But those that know how to use a. I know everyone says that phrase, I'm not even convinced it's even real. Those that are willing to learn, regardless of how easy or hard it is, those are going to be the people that are going to have a really awesome like potential future. And I don't think it's just going to evaporate tomorrow. And so I'm like super hopeful because I think in two years there's going to be so many more people that have access to the ability to get that education, to get that life changing stuff than there will be today. And so for me, I look at it like as a super positive future despite all the potential negative sides.
C
How do you, how do you think that? Because I think that I mostly agree with that outlook and it's a good long term sentiment. How do you think that contrasts with the guy who just graduated with his CS degree? If he had graduated eight years ago or 10 years ago, that guy was pretty much guaranteed a job out of school and now he's entering a job market that is much, much more challenging, where that degree doesn't feel like it translates into anything right now. And I totally agree. Where the most capable and willing to learn people who can make the best of whatever circumstance there is, there's always a person who will rise to the top and become successful. Those are also the people who are most inclined to share their rewarding stories with people that they've learned from. But to all the people that are kind of stuck in the glut of the middle right now, what do you say to those people? And it's not to be discouraging because at large I do agree with you and I think the positive trend over the longer term, I just think something we've talked about a lot is there's an unfortunate thing happening right now where I've said openly, you could be me, the Exact same skill set, the exact willingness to work and. But if you graduated five years after me, you would be in a totally different position. That is much, much more difficult. So how do you kind of reconcile that with what you're saying now?
B
So I first want to preface one thing. I'm not talking about all positions or all graduates, okay? So I know I don't know about business. I don't know anything of what they do. I'm not even sure if those degrees were ever real to begin with, but there they are, right? I'm not going to talk about that.
C
My business degree is fake, or civil.
B
Engineers or mechanical engineers, or even electrical engineers like those. I'm just going to talk about programming because that's the thing I do know. I will say it this way. I graduated, what, 15 years ago? Yeah, 15 years ago. So 15 years ago I graduated, and I'll tell you this much, that there were a group of people in my class that tried really, really hard. And all of those people were hired. Then there was a group of people in my class that didn't try. None of them got jobs. Like 40% of my class did not get jobs when they, when they graduated. That's a very large percentage. And I'm under the personal belief that the only difference between now and then is that the people who were trying really, really hard had a, it was a requirement of pretty much having a four years of trying really, really hard. Whereas now it's like, oh, I did a summer boot camp. Why don't I have a job? I'm trying super, super hard. And it's like, yes, you are trying super, super hard, but your time scale is like vastly different than me. Like where I was at by the time I was applying for internships versus where you're at applying for internships. Like, I was significantly more capable than you. I had so much more work. I had so much more of these things. And so I do think that there's a whole time component that has been shifted on its head that we just are not accounting for at all. Like, no one, nobody's being like, oh, yeah, but this person's only been learning for three months. It's, why don't they have a job? Right? Like, there's a whole timescale going on there. And I do, I don't know how much it's actually benefited for those that are like the tryhards, because I've, I have so many people that I know that are the try hards and they are getting jobs, they are getting good Jobs and they're completely junior 19 year olds getting everything that they need because they've been trying since they've been 12 years old. Yeah, they've taken the long, long road of making a craft as opposed to just trying to get the job. And I know that could be very defeating. But the problem is, is that so much of life and so much of Expertise is not one on 100 meter dash, it's one on a marathon. And being able to push through that for long years, like I had to live in an apartment where the guy below me to threaten to kill me and he was always smoking meth and all that. And it's just crazy down there. Right. But that's like, that's part of my story, is having to live through that situation, working 80 hours a week trying to figure out how to get a job. And so I spent multiple years, 80 hours a week, trying to get as good as I can to get a job in this industry. And so that's kind of how I look at, is that I, I do think we have, we got a bit, we got a bit messed up in like the ZIRP era, the zero interest rate era.
C
Yeah.
B
And that like, I do think that that really, pardon my language, like that fucked up everything. Like that's actually where people are like, oh well, why don't I get a job? I've been doing this for six months. Like they used to pay people a hundred thousand dollars. I'm like, no, that was actually a broken moment. Like that was the bubble, that was the true bubble. And now we're returning, I think more towards regularness, where it's like the people who are trying, the people who are really invested, are going to find something still good at the end of this rainbow. Now in 10 years it will be the same, it will be different. But to what level different, I have no idea. That's my hopeful thing, hopefully, that white pills people. In the sense that you get the control, it's up to you to say no, it's not up to somebody else. I've always had this thing. So being from Montana, being from kind of on the outside or coming into Netflix, one of the big things was everybody else was like these Stanford graduates, these Harvard graduates. And I will tell you this much, you get looked down on quite a bit from being from the south, having a southern accent or being from Montana, being from these places that are considered backwards. Like you get a lot of discrimination. Back in those days, I had a lot of hurdles that I had to get over. And what I realized is that, you know, we used to talk about diversity and inclusion. Like, I realized that inclusion, they used to say, is like. Or diversity is a bunch of people at the dance. Inclusion is asking someone to dance. And so that's kind of what they said is like, everyone gets to dance. And I kind of always hated that because I realized, listen, I never got to dance. And the real thing is that you have to make your own thing. And so I would go to people and be like, you're dancing with me because I'm going to make this thing work, because I want to be here. And so I think there's a whole level of control that people still have in their life and especially just around technology that's just super magical, that I don't know if it exists anywhere else. So I encourage everyone to don't just go to the dance. Ask people to dance, right? Get in there, be a part of things, and you will find that there's a lot more out there. So that was my passionate.
C
That was great.
D
That was awesome.
A
Thanks so much, Brian, for coming on. I appreciate it. Thanks, everybody, for coming on, watching this episode of Lemonade Stand. I hope you're motivated and terrified at the same time.
B
Yeah.
A
See y' all next week. Bye, everybody.
C
Thank you.
Vox Media Podcast Network — October 22, 2025
Hosts: Aiden, Atrioc, DougDoug | Special Guest: Primeagen
This episode dives deep into the current state of AI, including the latest controversies around large language models (LLMs), the threat and reality of “AI poisoning,” tech industry hype vs. reality, regulation in the US and abroad, workplace and societal impacts, and advice for the next generation of tech professionals. Guest Primeagen (popular programming content creator, former Netflix engineer, beloved Twitch personality) brings technical expertise and a healthy dose of skepticism to AI discourse.
Timestamps: 00:30 – 04:50
“His opening question was: ‘So I just want to hear from you guys, why is this important, coming to TwitchCon? This community, this job, like, what’s it mean to you guys?’ And first person to my left: ‘Well, it just feels like recently, the online discourse has become so intense, particularly with the right, that it’s just harder to have discussion.’ And [Newsom] could not get a single word about gaming.” — Aiden (03:54)
Timestamps: 05:13 – 10:30
“I like video games. It does not mean I’m good at video games because I have kids. I program 40, 50 hours a week. ... What happens if I just open up and I just program? Does anyone do that on Twitch?” — Primeagen (06:11)
Timestamps: 10:38 – 18:50
“One person out of the 10,000 people in the stadium can still have the exact same volume and impact despite the overall amount of stuff getting bigger. And it’s kind of scary.” — Aiden (15:00)
“Corporate greed and power... nothing beats it. That’s like a great way to get a lot of information out there. Especially once ChatGPT starts doing shopping.” — Primeagen (20:46)
Timestamps: 18:50 – 22:24
“If you start training AI data on AI data... eventually they... fall apart.” — Primeagen (18:22)
Timestamps: 27:28 – 44:04
“Somehow in this AI world we were promised it was going to cure cancer and fold our laundry. Instead, it’s doing all of our art and creative projects and we’re just having cancer and folding laundry!” (43:21)
Timestamps: 44:26 – 51:34
“If you have a company that’s just really, really incredible for, I don’t know, truck drivers or whatever, like a specific industry... I have a hard time believing the profit is going to come from the foundation models.” — Aiden (47:07)
Timestamps: 51:34 – 56:29
“When a company has AGI, you will not get it. Would you let the world’s greatest secret be used by the general public? No. You’d remake Google, you’d remake Netflix, you’d remake everything.” — Primeagen (51:52)
Timestamps: 56:29 – 71:44
“Docker... had this product in which made $0 and cost a lot, a lot of money... the year they decided to make money, they made $500 million in like a month. ... This is kind of them [OpenAI] starting to turn on those gears.” — Primeagen (66:36)
“I do think that there’s going to be a lot of... it’s a long term revenue thing but the government’s going to keep giving the money... I think it’s more of a national need to have good AI because of geopolitics than it is because they’re trying to make... anime titties.” — Primeagen (67:53, paraphrased)
Timestamps: 72:30 – 91:07
“I can foresee one day... there’s like the EU black zone: in here there’s no AI allowed. We don’t do AI. All AI companies just bail out... What is that going to do to the average person there?” — Primeagen (88:53)
“I actually am for significantly less safety in AI models. Like, massively less... The safety we’re approaching in these regulations... are actually meaningless comparatively to the damage they’re doing to people psychologically.” — Primeagen (93:18)
Timestamps: 95:19 – 104:16
“One golf course uses more water than like globally all the AI.” — Primeagen (102:47)
Timestamps: 104:28 – 117:30
“There’s a lot of opportunity... at the end of the day, it’s not taking our jobs, it’s going to take some level jobs. ... Those that are willing to learn... those are going to be the people that are going to have a really awesome potential future.” — Primeagen (110:30)
“It’s never been better for someone who is creative and driven.” — Aiden (108:18)
This episode is an unflinching look at AI’s realities, filtering out hype and gloom in favor of honest technical, economic, and philosophical debate—with plenty of candor, humor, and actionable advice for the future. Whether you’re a techie, policy wonk, or just AI-curious, you’ll come away with a layered understanding of where things stand—and where we might be heading next.