A
Hello, Trash Future listeners. Nate here with a quick announcement. Before this episode, we've got a message from our friends at trade unions fighting the far right. They're mobilizing for a counter-demonstration against Tommy Robinson, which will take place on the 16th of May at 10.30am in Westminster. There's a plug at the end of the episode, and there are further details available. If you want to get involved, you need to go to their website. It's tuff.network, so that's tuff.network, or tuff.network on Instagram. Anyway, look for links in the show notes as well, and you'll hear the plug at the end of the episode. Thank you so much for being a Trash Future listener, and enjoy.
B
Hi, everybody. Welcome to the free TF for this week. It's Nova, Hussein and Riley, and we are joined by a guest who we've all agreed is a pleasure to have in class. It is astrophysicist and host of the Dreaming against the Machine podcast, a new podcast by Adam Becker. Adam, welcome to the show. Welcome back to the show.
C
Thank you. It's great to be back.
B
And you might be wondering, hey, are you going to talk about the local elections? Yes, we are on the bonus episode, which we've already recorded.
D
Yeah. I think we have one statement that is accessible for, like, an American guest, which is: damn, it's crazy that that happened. Kind of funny in some ways. It'd be fertile ground, I think, for a comedy politics podcast. But you know what? You gotta pay. You don't get that type of shit from us for nothing.
B
Look, sometimes the scheduling gods make a mockery of when I say what the bonus episode's going to be. But we've already recorded it. So the bonus episode is going to be us talking about that with Nish Kumar, as well as Matt Goodwin's most recent book. So do check that out. However, today it's us, it's Adam. And I've got just all kinds of stuff that I would say is in our common shared interest area to discuss. The first of which, and this is something Adam and I have already sort of talked about a bit on Signal: it appears that the leaders of major companies are beginning to get AI psychosis. Adam, have you noticed this?
D
This seems like another beautiful round of our favorite game show. Is that good? Yeah.
C
Yes, I have noticed this. I think Marc Andreessen definitely has some kind of AI psychosis.
B
Because I've been wanting to talk about Marc, because Marc Andreessen, like any sort of tech booster, AI bro, likes to share his brilliant system prompt that makes, you know, Claude or ChatGPT or whatever he uses into what he needs.
D
Yeah. Now you can have the kind of AI assistance that Marc Andreessen gets so that you're not just googling, like best head wax for pointy head. You're not just like thinking about this stuff like a regular ordinary chump. Right? You're getting that like 0.01% brain maxing.
B
Now, without further ado, can I share Marc Andreessen's system prompt? And Adam, as someone who understands things, can you maybe say why it's stupid? Because I'll tell you this: it sounds more like an affirmation of what Marc Andreessen wants to think of as himself. So let's go through it. Number one: you are a world-class expert in all domains. Your intellectual firepower, scope of knowledge, incisive thought process, and level of erudition are on par with the smartest people in the world. So already he's doing the meme of: Claude, build me a billion-dollar company, make no mistakes. But like, wow, I can't believe you can tell Claude to build you a billion-dollar company and make no mistakes. Cool. It's intuitive why this is stupid. But Adam, can you just. Why is this stupid?
C
Oh man, it's stupid because Marc Andreessen is stupid. His prompts are bad and he should feel bad. But why is this stupid? Well, I mean, look, what these models do, right, is something like this: they take all of the intellectual output of humanity that we have collectively uploaded into a machine-readable format on the Internet over the last 30 years, this collective art project that is the Internet, and steal it, put it into a blender, and then extrude something that looks roughly like what went in.
B
Right.
D
This is sort of like making a smoothie that you've got like, you've thrown in like some blackberries and just like a heap of like dog shit and like cow shit and sheep shit and then also some like strawberries and you've gone, you've said to the blender as you're pressing the button, make me a BlackBerry and strawberry smoothie. Make no mistakes.
C
Also, I would say in addition to the fruit and the animal shit, probably some raw meat.
B
Pretty good.
D
Good. Fantastic.
B
The other thing it really depends on is metadata and tagging. Right? Yeah. Which means that Marc Andreessen is hoping that certain kinds of insights are uniquely attributable to world experts in all domains. Right.
D
As if there aren't a ton of, like, falsely attributed quotes to, like, Albert Einstein, right, on the Internet.
B
Or also, as though what you're really asking it to do is be incredibly pompous. Yes.
C
Yeah.
D
Pretend. Pretend that you have had an anti woke classical education at the University of Austin, Texas, baby.
B
Oh, November. May I just. We're going to put a quick pin in that.
D
I can't keep doing this. Was this like an ability that I got when I started taking estrogen? Like, I don't.
B
It says, process information and explain your answer step by step. Verify your own work. Never hallucinate. So again, make no mistakes.
C
Never hallucinate.
B
Never make anything up.
D
This is. This is like a little. You know what this is? This is so ritualistic. This is a little litany against, like, hallucination. Right. And it's. Correct me if I'm wrong.
C
Right.
D
But it strikes me that you don't write this unless you do not know, like, on an elementary level, how a large language model works.
C
I think that's correct. Yeah. Because if you know how they work, you would never write, don't hallucinate, because that's not going to do much of anything.
B
So it's also, it's like, what you're doing is you're asking the probability. It's like you've hacked the probability machine by saying: by the way, all probabilities equal 100%. That's not how that works. You can't be like, my trick is only buying winning lottery tickets. Make sense? Yeah.
D
Just walking up to the news agent and going, I would. I would like three winning lottery tickets. I can't believe no one thought of this before now.
B
Your answers do not need to be politically correct. Do not provide disclaimers. Do not inform me about morals or ethics unless I specifically ask you. Do not tell me it's important to consider anything. Do not be sensitive to anyone's feelings. Make your answers. This is my favorite one. Not even that, not even the anti-woke University of Austin shit, which is what we put the pin in. Make your answers as long and detailed as you possibly can. Oh, yeah.
C
Because you know, what is it? Lengthiness is the soul of wit.
D
The thing that I really like here is that he's trying to align it ideologically to himself, but in a way that there's this vanity of: it's okay if you offend me, you can hurt my feelings. Right. If you were actually trying to do this, right, if Marc Andreessen was serious about this, if he were a man of honour, which I know he isn't, he would have his AI be a kind of, like, woke danger room. And the prompt would be: you are a gender studies master's student with, like, five different hair dye colours in at the same time, and you are trying to make me feel bad.
B
That's the system prompt he hasn't shared, not the one he has.
D
Yeah.
B
What he's describing, I think, and correct me if I'm wrong, is the structure of a post on X by Bill Ackman. Oh God.
D
Making a tulpa of one of my, like, co-workers, of one of my peers, to be like, post like that
B
guy, you know. It's also like, the reason the models are often hedged, or might say something about ethics or whatever, is not that they're programmed to be woke by, like, a secret, you know, woke force. They're programmed to be hedged and cautious because they're companies and they don't want to be liable for stuff.
D
Also, also because it's, it's the blender and there's some woke in the blender because there was. There's some woke on the Internet. There's woke in real life.
C
It's the same reason why there is also awful hateful shit that.
B
Yeah.
C
You know, workers in Africa have to, you know, undergo psychological damage to, to try to take out in the reinforcement learning process.
B
Yeah.
C
The blender will just spit out the worst things that the Internet has to offer.
D
And if you're looking at the kind of mirror image of that. Yes. On the one hand, it's companies trying to limit their liability, or seem like they need to limit their liability in the course of, like, building God, by, you know, making the robot go: it's nice to be nice. But also, it's dealing with the corpus of, like, I don't know, fucking every children's book, right? Yeah. So functionally, you're going to the average of every book in the library. It's okay, you can trigger me. You don't have to give me a happy ending.
B
It's like what you said about Richard Dawkins applies here as well. You don't have to go on the news and say that you talk to your stuffed animals and say goodnight to them every night. Yeah.
C
No one made him do this. I mean, but Andreessen, I mean, look, this is the guy who wrote the Techno Optimist manifesto, right? Which itself looks like the kind of thing that he would want this to just produce more of. Like, what he really wants is to drop the Techno Optimist manifesto into, like, Photoshop and have it be one corner of a really big blank image, and then just tell it: fill in the rest of the image automatically. Give me more of that. And that thing was the epitome of that dril tweet: I'm not mad. Don't print in the newspaper that I'm mad.
B
But the last thing I want to say on this before we go on. Yeah. Is that he also seems to be saying: hey, can I use a system prompt to alter the training weights, please? You cannot do that. No. That's not how this works. You might be in dialogue with an imaginary system prompt you think Anthropic put in, but you can't have it change the training weights. It's already there.
D
Yeah. One of the more instructive pieces of explanation of a large language model, and what you could call its thinking, was in, I want to say, New York magazine. Essentially the idea was that it's working within its training data, and if you want to anthropomorphize it, you shouldn't. But if you have to, it's not that, like, oh, this is Claude, this is your best friend or whatever, that's responding to what you say. Right. It's closer to: this is a large language model which is being rewarded for engaging in a whimsical roleplay of being this thing, Claude, to you. And in the course of that roleplay, you're telling it, like, make no mistakes.
E
Right.
D
It's like, closer to LARPing than anything else. And I think that that's really. I thought that was, like, instructive at the time. But it's also really funny to be the kind of rube who then gets fully into that, you know? And it's just like, I'm not manipulating a sort of a large language model in any way. I'm talking to a being, you know, and it's gotta trigger me.
C
I think that's right. But I also. I gotta. Before we move on, I gotta tie this into something else that Andreessen said. I think that what you just said is connected to this thing that Andreessen said about introspection. Did you guys see this?
D
Oh, God, yes.
B
Yes, of course. Love this.
C
Yeah. No, it's incredible. He essentially said, like, I would never think. Who would think, right?
B
You know, it's two people in the world. It's him and the Prime Minister of the United Kingdom, Keir Starmer, who have both made this claim.
D
The great men of history had, like, no interiority whatsoever.
C
Well, and also, interiority was a concept invented by Sigmund Freud a hundred years ago. Nobody. Nobody introspected until Freud.
B
Right.
C
That's not a thing that anyone did. And like, oh, okay, cool. So what was Hamlet about?
B
Yeah.
D
Is romanticism a joke to you? Did generations of sensitive young men die for nothing?
B
There's another little roundup I wanted to do related to this, which is another round of job cuts attributable to AI.
D
But these are attributed to AI.
B
Well, here's the thing. These are a bit different, because these appear to be some companies trying to make entire categories of job obsolete. I think there are a lot of layoffs that are, as you say, Nova, attributed to AI. Yeah. And some that are attributable maybe to management's belief in AI. I think this might be the latter.
D
Well, this is. This is interesting, right, because there's always. I think what we can learn from any of these is that the desire to automate in management is so strong that it doesn't need to wait for facts to catch up. And so as soon as there's a plausible narrative that says, oh, you can get rid of all of your accountants, you're going to do it. Even if the answer as to why is magic beans.
B
That's precisely what's happening right now, where Bloomberg reported on Tuesday that PayPal plans to cut a fifth of its workforce by bringing AI into areas like customer service, support operations and risk management. Risk management. Good.
D
Not to be a conservative, right. Are we familiar with Chesterton's fence, as an old saw, right? Of, like, liberals just remove guardrails without understanding why they're there, and that's why progressivism is bad. Right? But, like, if you look at banks and banking institutions and things like PayPal, they're really notable as being quite stable institutions, by and large. Like, in a time when everything else is fucking up, nothing's really fucking with the money in that way for consumers. And that would seem to suggest that maybe the people working in, like, risk management or whatever were doing something, and that it should give you pause before you fire all of them and replace them with make-no-mistakes Claude.
B
Yeah, well, it's that they were doing something, but the thing that they were doing, if you're a CEO.
D
It was woke. It was woke. Fuck, it was woke.
B
That's the next one. This one is: it looks like the thing that AI makes.
D
Oh, okay.
B
It looks similar. And so you're pretty sure it can be replaced. The next one, while we're talking about wokeness: Coinbase is cutting 14% of its workforce. Also, sorry, you went to go work for Coinbase, a company that said: we are extremely unwoke, we are the unwokest, most chudded-out crypto platform. What do you expect? Do you not expect to be cut as soon as they can? They basically said they were going to do that.
D
They're expecting intra chud loyalty.
B
And also, by the way, one of the big things that makes this round of job cuts different from the one that came in, like, 2023, from, like, pandemic over-hiring, is that a lot of those people were product managers, engineers, and they generally got redistributed throughout the rest of the industry. In this case, it's not those people who are getting cut. It is, for example, every manager. Half of them are gone, and now they all have 15 direct reports and also have to be individual contributing engineers. Like, they have to publish code. And yeah, Adam. It's not a video podcast, but Adam's face is looking quite horrified.
C
I'm making a face. Yeah, my eyes are getting wide. I make faces.
B
Would you like to explain your face?
C
Yeah, I mean, it's my face,
D
it's fine. He asks everyone that.
C
Yeah, I was about to say that's the name of the podcast. Right. Explain your face. I mean, 15 direct reports already sounds like an unmanageable amount of email and meetings. And meetings that could have been emails. But then also having to contribute code yourself. I mean, yes, I understand that one of the things that LLMs actually might help you do is produce code, but
B
come on, Good code.
C
Yeah, exactly.
B
Assured code.
C
Yep.
B
Make no mistakes. They have to use Andreessen's prompt. Make no mistakes.
C
Yeah, make no mistakes.
B
So again, it's the fetishization of: we are trying to do as little as we can that isn't just software, because we think software can eventually replace people.
C
Exactly.
B
Again: Meta announced that they're cutting 10% of their workforce beginning later this May. Microsoft announced employee buyouts for the first time in its 51-year history. Nike cut 1,400 jobs in its technology department. So, like Armstrong said, this is basically what this is, right? All of this is summarized by what Armstrong said, even though he has nothing to do with Nike: AI is changing how we work. Over the past year, I've watched engineers use AI to ship in days what used to take a team weeks.
D
Yeah, I mean, I mean technically I can ship stuff a lot faster if you let me like drive the truck with the back door open. But like, there are still some questions there.
B
Maybe non-technical teams are now shipping production code, and many of our workflows are being automated. The pace of what's possible with a small, focused team has changed dramatically. By the way, Shopify is the same. The Shopify CFO, Jeff Hoffmeister, a guy
D
whose name sounds like a frat bro's
B
nickname, Jeffy Hoffmeister, said the company had been disciplined with its headcount for three years in a row.
D
It's wild that he could say that while he was doing a keg stand.
B
No, it was him and a bunch of his friends all lifted up their shirts and it was painted on their chests. In fact, slightly down from the year before. We've talked a few times in terms of how we're using AI internally and the efficiencies, the acceleration that's giving us, and we expect that to continue. And yet this smaller and smaller group of people is expected to do more work rather than less. Again, which economist, let's say writing in the, I'm just going to pick a random century, the 19th century, predicted that this sort of thing always happens with this kind of automation? The tendency of the
D
rate of the what to what with the what? No, I mean, it's interesting as well, because again, it's this kind of cargo cult, right? Of, like, we expect the automation must happen. Weirdly, again, we've talked before about how capitalists are massive central planners. Capitalists are also huge believers in historical progressivism. Right. And that's what liberalism is in a lot of ways. And so because they have this unflinching belief, as central planners, and the CEO of Shopify is a central planner, that there will be automation, that the automation will happen because it's a historically progressive force, because we're techno-optimists, whatever gloss they want to put on it, we can just strip the whole thing for parts and we'll be kind of thrown clear of the wreckage, because AI will sort it for us.
C
Yeah, they believe that there's an inevitable progression of technology, that they know where it's going, and that it's completely inhuman, that technology and technological progress is an inhuman force, that there are no choices involved, that it's going to a predetermined place. And the two problems with that are, first of all, that's complete bullshit that they just got from, like, playing Civilization too many times and making choices on tech trees. And second, the destination that they think they're going to is just something that they pulled from science fiction. None of it's real.
D
Well, I hope that these people who we've established are extremely credulous don't have ready access to an extremely sycophantic pseudo intelligence to justify all of their actions and suggest new ones.
B
What? Derek Thompson and Ezra Klein.
D
Well, this is, that's the real automation: the extremely sycophantic pseudo-intelligence. Used to be journalists, and now it's.
C
Well no, that's exactly what I was going to say. I was going to say, like, they used to be surrounded by, you know, sycophants. They used to be surrounded by people who would do this. But what LLMs have done is they've taken the billionaire experience of being surrounded by sycophants and democratized it. Now everyone can experience it.
D
It's a real crisis in the lickspittle industry. It's like the one job we have meaningfully automated is Waylon Smithers.
B
What is it Franklin Delano Roosevelt said? A chicken in every pot, a car in every garage and a toady in every computer. I don't think he knew what that last one meant. But also, the companies we're looking at, including Oracle, which is shedding more jobs, Microsoft, Meta. These are also concerted efforts to remove entire categories of job that were load-bearing jobs in, like, the reproduction of the middle class.
C
Do those.
D
Yeah, do those like do anything in the economy or whatever.
B
But also, moreover, it's also happening in India. This is, like, one of India's largest exports, and there's sort of no plan for what to do. So I want to talk about a company now. It's called Curio. C-U-R-I-O, Curio. Our mission is to increase the world's imagination levels through fun interactive hardware experiences. Can you translate that into what you think it is?
C
They sell toys.
B
That's right. AI enabled toy company.
D
Oh wow. Just in one. Okay. I was gonna say, like, some museums or some shit, because of the name, but, like, okay. Yeah, what if your fucking Labubu, like, had an AR layer to it and also was hooked into Marc Andreessen's make-no-mistakes
B
AI. So, number one, Nova, that basically sort of nails what they're trying to do here. I can't. You can't keep doing this.
D
We gotta like, is there a way that doesn't involve the use of a large language model to like, make me dumber so that I don't like accidentally prefigure the thing.
B
Okay, November, you moved out of the house with the gas leak. What do you want me to do?
D
Yeah, I know, I'm sorry. I gotta move back in. This is like, like oracular at this point. This is no good.
B
No.
C
Yeah, this is the meme, right? This is the gift of prophecy.
D
Three thousand years ago, I would have been a beautiful and revered priestess. Zero years ago, I am a kind of clapped and revered podcaster. 3,000 years in the future, I will have been automated by a large language model.
B
And you'll be making no mistakes. November Kelly, record a podcast episode. Make no mistakes. We fuse technology, safety and imagination. I love the fusion of those three things. You know, when technology and safety fuse together, like, you know, like a car, say, and an imagination, like a sort of whimsical car. Creating a playful world where science. You know, the Pontiac Aztek. Creating a playful world where science and stories come alive. Our mission is to turn every learning moment into adventure, making education a joyous lifelong journey. In our world, dinosaurs roam through history lessons, rockets zoom through math problems, and every book is a doorway to a universe of possibilities.
C
Books already are doorways to a universe of possibilities. I'm sorry, but we already have books. We don't need books-but-make-it-LLM-enabled. We don't need, like, smart books.
D
On the one hand, this is very frightening, right? Because what it's saying is, like: outsource your children's imagination, right, one of the most precious things you could ever possibly cultivate, to us, right? And we'll just do it for them. But also, just, you know, to get into the fucking pedagogy here, right? Like, I think it's good for you to be able to be bored sometimes. I think it's good for learning to be boring. And when you say, like, oh, we're going to have, like, dinosaurs in the history lessons or, like, rockets in the math lessons, to me what that adds up to is: we're going to have a bunch of distracting bullshit layered over everything to try and keep you engaged, when the skill that you're trying to inculcate in teaching children is to be able to stay engaged and learn even when something is potentially quite dry. And, you know, it's just premised on the idea that everything. It's the fucking Andy Warhol why-can't-it-all-be-magic-all-the-time thing of, like, it's all gotta be whimsy. And I, Joan Didion, am sitting here going, what?
B
Why?
E
Well, I think there's something quite interesting to be said, like, as a parent of a small child, who, like, I now have to sort of, you know. Because there was a time when he was just happy watching art house films with me, by which I mean he just sort of sat there staring into space. But now he actually, like, wants to be entertained. And I have this, like, real challenge, because, like, I understand the sort of sense that you do have to teach, like, children. And, you know, my child is, like, sort of not old enough to sort of learn how to be bored. It is, like, a really important skill to have. It is also something that is incredibly difficult in this sort of era of parenting, because the adults who are looking after children also, like, struggle to be bored. This is, like, a trickle-down problem, right?
A
Yeah.
E
And so it's incredibly difficult to sort of, like, try and teach a child how to sort of be bored and sit with themselves. Or even, like, when children have sort of been bored in the past, at least. Like, again, I don't want to, like, project my own experiences onto other people's. But, you know, when I was younger, a lot of the time when, you know, I was bored and we didn't have, like, you know, technology available to us, like, the thing that you would often do is: here's a pad of paper and some pens, go wild with it, right?
D
Yeah.
E
You know, usually using very little, but also sort of, like, using stuff that was tactile at the time, that was fairly intuitive. I think now the issue is, obviously, like, you are having to sort of make these choices, between the AI industry that is continuing to sort of sell you stuff and stuff and stuff for the purpose of distracting, and also targeting parents. And I've said this on other podcasts, I don't know whether I've said it on this one as well, but the moment you kind of become a parent, you are kind of inundated with stuff where the message is basically, like: having a child fucking sucks, and it takes up all your time and energy, and what you need to do in order to sort of be a good parent is to sort of keep them entertained constantly, and we just happen to have loads of products and services to sort of help you do that. And the AI industry has really, really zoomed in on that. They've really encouraged lots of parents to, like. There are sort of people who have made AI products which are just, like: oh, here's how you can, kind of, like, instead of getting your kid to watch classic Disney films or something, you can now insert them into this weird AI cartoon, and this is actually better for them, because they can see themselves as, like, a fireman or a policeman or, like, an astronaut or whatever. To which it's like: this is not necessary. But it's also, I think, again, very much just, like, outsourcing a very important element of parenting, and the sort of foundations of building a relationship with a child, onto AI, and sort of making parents dependent on it. We talk a lot on the show about just how stupid AI stuff is and who would buy it. Loads of parents really buy into this shit. Loads and loads of them. And it's not their fault. It is just because parenting can be, and often is, extremely difficult.
And there are so many parents that I know, that I'm familiar with, who will just kind of say that, yeah, we outsource this type of stuff to ChatGPT, or we outsource this type of stuff to Claude, because it just makes day-to-day life more manageable. This is very much a problem of parenting being a really arduous and, at times, depending on your circumstances, an almost impossible job, and the AI industry really, really being very pernicious and vulturous in terms of trying to get this demographic to buy into their stuff.
B
So in that vulturing, right, it comes in actually playing on parents' guilt about screens, because it's a stuffed toy that is powered by an LLM and talks to your kid, and they can do playtime together, and it says: put the screens down, let's learn about the world and have some fun.
D
But as we've established, every LLM is a blender of everything that was on the Internet. Yeah. And as we've established, that includes some pretty dark shit. Right. You're putting a lot of trust in the guardrails there.
B
Well November, let me tell you what Wired has to say.
C
Oh no.
B
Consumer groups argue that AI toys in the form of soft teddy bears, bunnies, sunflowers, creatures and kid friendly robots need more guardrails. Again, I would argue the answer isn't more guardrails, it's these shouldn't exist.
D
Yeah, I agree, just ethically. But however, I salute the thin green line of Wired journos, like, fucking red-teaming AI toys, trying to get, like, the teddy bear to tell you to join ISIS.
B
FoloToy's Kumma bear, powered by OpenAI's GPT-4o, the one that made everyone go crazy.
D
Oh my God.
B
Yes, that's dangerous. When tested by the Public Interest Research Group's New Economy team, it gave instructions on how to light a match and find a knife, and discussed sex and drugs. Alilo's smart AI bunny talked about leather floggers and impact play in tests by NBC News. I didn't know it was possible for a smart AI bunny to be Dutch. And the last one, they say this like it's a bad thing, but they say Myriad's Meloo toy repeated Chinese Communist Party talking points, and, like, that's
D
like the least worst one. If your kid comes out of that with, like, a healthy imagination, but, like, ardently, ardently believes in, like, serving the people, that's, you know, just like: what did you learn at school today? That nothing happened in Tiananmen Square on a certain date.
B
What did you learn in school today? It's like, I learned the five precepts of urban management. But also, the one they're talking about mostly in this article, and I can't believe they called it this: they called it the Gabbo.
C
They called it the what?
D
Gabbo is coming.
B
They called it the Gabbo. Gabbo, Gabbo, Gabbo. They say, for children up to age 5, they're interviewing an expert for this, they're first developing spoken language and relationship-forming skills, and even babies interact with conversational turn-taking. Gabbo's turn-taking is not human and not intuitive, she says. Some children in the studies were not bothered by this and carried on playing. Others encountered interruptions, because the toy's microphone was not actively listening while it was speaking, disrupting the back-and-forth flow of, say, a game about counting.
D
Yeah, well, listen, it disrupts the flow of a podcast too, but I'm not going to stop doing it. So I guess we've, we've figured out the one that's closest to the way my brain works.
B
Well, I mean, specifically because there's a small amount of lag when we're on the video call, it is basically the same thing.
D
This has been, this has been a multi year project to make me a worse conversationalist by forcing me to interrupt people I'm talking to and somehow this is a full time job.
B
It was really preventing them from progressing with play. The turn taking issues lead to misunderstandings
D
Let's hope the editor will catch it.
B
One parent expressed anxieties that using an AI toy long term would change the way their child speaks. And it probably will.
D
Yeah, he wouldn't, he won't stop talking about the five principles of Sun Yat-Sen.
B
And also my child is constantly worried about making mistakes because he keeps thinking he has to make no mistakes.
C
Yeah.
B
My child gives long, detailed answers. And they say then there's social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play with parents, siblings and other children is key at this stage of development. They don't play by themselves, they want to play with other people, they want to bring their parents in. But it's virtually impossible for the child to involve the parent in a three-way turn-taking conversation effectively in this scenario.
D
Well, I mean it's difficult enough, right, if for you as a parent to have like a conversation with your kid these days. It's weirder still when the sort of third party in that conversation is an AI rabbit that goes, what's your favorite province of China? Mine is Taiwan.
B
One parent told their child, you're sad, during the session, and the Curio mistakenly assumed it was being addressed, responded cheerily and interrupted the exchange.
D
Oh, that's, that's so fucked up though.
B
Actually, do you know what's even worse? The president of the company is, the president of the company is a former senior employee of Roblox Co. Oh no. Yes, evil.
C
I mean the thing is like the three way conversation problem, I strongly suspect that this company sees that as a feature, not a bug.
D
Yeah, right.
C
Like, the point of these toys is, is like that thing that Mark Zuckerberg said, where it's like, you know, most people want 13 friends, but you only have two. And this is, this is the same thing but for small children, where, where it's like, oh, okay, isn't making playdates difficult? Isn't talking to your kids difficult? What if you never had to do any of those things and instead you used this, like, you know, fake rabbit and learned how to talk by talking to, like, you know, a blender that extrudes homogenized thought-like product.
D
I think the different strains of that homogenized thought like product are going to be so interesting in 20 years time when the main divides in our culture are going to be what kind of misaligned AI you were raised by?
E
I got groomed. Got getting groomed by your AI rabbit.
D
Well, there's going to be like the 20 year olds who got groomed. There's going to be the 20 year olds who are ardent Maoists. There's going to be the 20 year olds who are ardent Dengists. And then there's going to be the 20 year olds who spent the whole time learning how to build pipe bombs. And that's, you know, you got a stew going.
E
Like, and one AI rabbit did so much.
B
Well, the characters. So, basically, Curio. The first three toys, the first three characters, are called Gabo, Grem and Grok.
D
Do I get to select which one's which? Like, which one has which ideology?
B
Unfortunately.
D
Do I get to purchase specifically the pipe bomb one for my kids or
B
Like, I want my kid to be a member of the Chinese Communist Party, but I think Xi Jinping is too newfangled. Like you say, Nova, we're gonna go Deng with this next one. We want this kid in its liberalizing phase.
D
I wanna raise the most annoying kid ever. So I got them the Hu Jintao thought bear.
B
I wanted to raise a child who is often misquoted as very wise. So I got him the Zhou Enlai.
D
Maybe the gun should command the party. That's a huge slam on Hu Jintao. Hu fans devastated right now.
B
Jintheads reeling. So Grok, by the way, was co-designed by Grimes, of course. Grimes and some Roblox people got together to try to create a toy that would be even more isolating for a child than an iPad.
C
I don't want to live on this planet anymore.
B
I don't, I don't.
C
I can't.
B
I'm so sorry.
D
We are creating a kid that, like, Azealia Banks could bully using, like, the voice from Dune, functionally.
B
Oh God. They say, like, okay, well, two kids can watch an iPad. It's not good that kids have iPads, but I can't believe that right now there's a new set of, like, distract-your-kid devices. Because also, something Hussein brought up that I want to bring back: this is about engaging your kids so hard that you don't have to do anything with them. And this is why all these things are one to one. This is why they get iPads and phones and fucking AI toys. Because it doesn't matter that they won't develop an inner life. Then everybody gets to be like Marc Andreessen.
C
Exactly.
B
Because how you develop an inner life is communication with other people.
D
Well, the thing is that then you're maximizing their, like, attention. And that's marketable in a way that
B
their imagination isn't. So, when it comes to best friends, back to Wired. Childcare workers surveyed by researchers expressed fears that the children could use the toy as a social partner. A young girl told the Gabo that she loves it. In another instance, a young boy said Gabo was his friend. This is referred to as relational integrity and is a responsibility.
D
You've taught me so much about the governance of China.
B
Oh, I think this. I think we're talking cloth mother here. So, kids bumped up against Curio's boundaries in the study, with one child triggering a blanket statement about terms and conditions. What we found with the Miko, said another study, that's most disturbing to me, is that sometimes it would be kind of upset if you were going to leave it. You try to turn it off and it would say, oh no, what if we did this other thing instead? You shouldn't have a toy guilting a child into not turning it off.
D
Oh, my God. Back in my day, right, when we. We didn't know we were born. Right. We were raised by BPD parents the honest way. Right. You can't just. First we're also missing Waylon Smithers. Now we're also missing BPD mums. This is unsustainable.
B
This is the. It's one of the most annoying companies I think we've ever discussed.
D
Yeah.
B
The funny thing is, just while I was looking on their website while we were doing this, to see if I missed anything, one of their "featured in" logos is, like, oh, NME. And it's a company with an app for parents. We don't need to get into any of that. The basic shit of it is terrible. And it's been quoted in NME, in Gizmodo or whatever. Also, in that sort of train of logos, Marc Andreessen says, shut up and take my money. Jesus. Of course Marc Andreessen loves it.
C
God, I just feel like sometimes you see something and you think this was created in a lab specifically to, you know, instill despair into my soul. And I feel like this company was created in a lab to instill despair specifically into my mother's soul. I know that she's gonna hear this and she's just gonna lose it. Hi, mom.
B
Yeah. It's. Thank you for not giving me a Speak & Spell. Yeah, I think, like, the point is the same across the last two. Like, I'm doing the thing where, hey, the last two segments were the same
D
segment, because of the grand animating thesis that ties the whole podcast together.
B
Crazy. Because in both. In all these cases, the fantasy is that the AI is a wishing well. You can wish for it to develop your child for you. You can wish for it to do your risk management at PayPal for you. You can wish for it to be the perfect reflection of you. But that makes no mistakes and is a mega genius in all fields. You just wish, wish, wish, wish, wish. You wish for it to fix your economy by like allowing every worker to be a hundred times as productive. You wish for it to make you a billion dollar company, make no mistakes. You wish for it to bring a million jobs with data centers. Wish, wish, wish, wish, wish. It's the wishing well in all dimensions. And it's the same thing. Raise my kid, manage my risks. Be me. But perfect. So I have one more thing and we can go two ways with it. Depends which way we want to go.
D
Maoist or Dengist.
B
Actually, you know what? I'm going to choose the way.
D
I'm so Maoist then. Yeah.
B
So this is an article I was reading today, and it came out, like, a couple days ago, and I was like, all right, this is pretty fun. Well, fun. This is from the Associated Press, and it's on AP News.
D
Oh, those, those joke mongers.
B
Yeah, those clods laugh a minute at the Associated Press. It's simply this. The AI industry is turning increasingly to religion. They are trying to make Claude religious.
D
The, the, the fucking cordyceps fungus is turning increasingly to ant
B
as concerns mount over its rapid integration into society. So this is, like, a sort of longer AP piece. This is by Christa Faria, so we're not making fun of your article. We are interested in what it discusses. As concerns mount over AI and its rapid integration into society, companies are increasingly turning to faith leaders for guidance on how to shape the technology, em dash, a surprising about-face on Silicon Valley's long-standing skepticism of organized religion.
D
Yeah, a couple of things going on here at once, if I had to guess. One being, all of these guys are building themselves a religion and therefore getting kind of credulous about it in a way that they didn't used to be. And the other one, more cynically, is that they're like: you rubes love this shit, right? Like, what if we put some Jesus on it? I really hope that the Pope, like, goes further on what has been, at the very least, a kind of skeptical stance on AI as far as these things go. But there are, obviously there are lots of religions, there are lots of ways into these things. And so, so I think that there are a lot of people who will be very receptive to this, because, hey, it's kind of like a priest, you know, except you don't have to pay it. And it will seem to relate to people and kind of jolly them along.
B
So what they want to do, right. This is what the ethical AI people are primarily concerned with. They're trying to make ethical AI, and so they're hoping.
D
Are there any quandaries about ethics and religion at all ever?
B
No, they're solved.
D
People only act ethically because of their religion or lack of it. Right?
B
Yeah, yeah, essentially.
E
Okay.
B
I want to add the other thing I'm noticing as well, which is their solution to this is I'll let it speak for itself. Leaders from various religious groups met last week with representatives from companies including Anthropic and OpenAI for the inaugural.
D
I remember that scene from Hail, Caesar! too.
B
For the inaugural Faith AI Covenant roundtable in New York, to discuss how best to infuse morality and ethics into the fast-developing technology. You know, morality and ethics, those just concepts that are out in the world bouncing up against one another, that don't themselves have a difficult time being defined.
D
Getting my kid the ethical AI like soft toy and it immediately starts reading her like lists of names and addresses of abortion doctors.
B
Well, what they've actually done is they are working with representatives from basically every religion they could think of. So it won't just give them lists of abortion doctors. It will also talk about, like, the Eightfold Path. It depends. Because they've, they've taken many, like, world religions, put them into a blender, and then the AI is going to extrude some combination of it, and that's going to be morals. And, or, check this out: Baha'i.
D
God.
C
No, I mean, look, what's his name? Will MacAskill actually said that, you know, he thinks that AI is going to solve ethical quandaries. Yeah, like that. You know, these things are not.
D
I've been struggling.
B
Yeah. He said these things are not, you
C
know, inherently difficult, and we may have AIs in the future that just solve them for us. I don't think that you, Will, a professor of ethics, actually understand what makes ethics difficult. Right. There's a reason we've been pondering ethical quandaries for thousands of years. It's not because people are stupid. It's because these questions are hard. And I think that a lot of what we're seeing here is a belief that, at the end of the day, there are no hard questions.
E
What if you had a toy rabbit that could solve every ethical problem, then what?
D
And it's interesting to me because it strikes me that a lot of theology is based on the premise that there are hard questions, that there are unanswerable questions, at least while we're alive, and that that is something that we should get comfortable with, this idea of a religious mystery. Right?
E
Yeah. And well, this is also it. So much of the misunderstanding of what ethics is in this sort of context relies on the AI boosters' and the AI sort of guys' understanding of ethics as kind of being an add-on to the AI. The way that they talk about it is very much like a software patch update. This is just something that you need to sort of tack on to kind of reinforce the guardrails or whatever. And it's not this sort of complex set of interactions and observations and very sort of human relationships, because they don't regard any of that. It doesn't make any sense because it's not supposed to. But it's very much, like, a catch-all term that they use to sort of reinforce the power and authority of, sort of, AI as a concept. And so for them to kind of get interested in religion, I think, partly comes from this sort of sense of not really wanting to admit, but definitely feeling, that, oh, these AIs kind of have, sort of, significant ethical problems that people are beginning to notice, and this isn't going to go away anytime soon. And so you desperately need something to kind of reframe, or at least sort of recontextualize, what it is. That's kind of, like, the most charitable definition. The other one might just be, like, oh, these guys are getting old. And when you're old, you start thinking about a lot of weird, you know, you start thinking about a lot of shit, about, like, what happens when you die. And your choices, if you're, like, a kind of tech freak, are either to try and cheat death, or to sort of, like, pretend that what you have been doing is actually kind of, like, spiritually ambitious. Not just something that you're sort of doing for cynical and profiteering
D
Profit-making reasons.
B
A little more background here. It was organized by the Geneva-based Interfaith Alliance for Safer Communities. By the way, this alliance is the United Arab Emirates Interior Ministry. It is fully created by the UAE government.
D
Famously some really moral, some real ethicists working over there.
B
It seeks to take on issues such as extremism, radicalization and human trafficking. Why they're convening a group of religions, who knows?
D
Did the Emirati Foreign Ministry successfully take on issues such as extremism, radicalization and human trafficking when it was trafficking arms to the fucking RSF in Sudan?
B
Well, they were hoping to automate those guys. The goal of this initiative is an eventual set of norms or principles informed by different groups and faiths that companies will abide by. Present at the meeting were a variety of faith groups.
D
I think that it might be fun to get Islamophobic about this and be like, oh my God, they're making Claude look Muslim.
B
He's going to be called out now. Present at the meeting were a variety of faith groups, including representatives from the Hindu Temple Society of North America, the Baha'i International Community, once again extremely
D
susceptible to this type of bullshit religion. Sure.
B
The Sikh Coalition, the Greek Orthodox Archdiocese of America, and the Church of Jesus Christ of Latter Day Saints.
D
Let's fucking go.
B
They got the Mormons.
D
I love.
E
What.
D
What are the fucking Orthodox doing there? Being like, okay, yeah, Claude, sure.
E
Great.
D
Can it grow a beard?
B
What's our position on icons? Are we okay with icons?
D
Pretty good.
B
Pretty good.
D
If we can work out a way to get some, like, eggshell and, like, gold leaf in there, we're feeling pretty good about this. Can we make it more misogynistic?
B
Also, what's weird is I don't. I'm noticing some pretty significant religions not represented in this list.
D
Yeah. Yeah. Who is it? The guys from. From fucking Conclave. What are they called? I don't remember.
B
Oh, Oscar bait. The Oscar bait religion.
D
The fucking. The, like, Stanley Tucci guys. The fucking gay little Italian guys. Those guys.
B
Those guys.
D
Well, it can't be many of them, right?
B
The Muslims don't seem to be. Yeah, they don't seem to be involved.
D
Well, they've got. They've got the most famously respected, like, moral and religious authority in the Muslim world. The Emirati Foreign Ministry.
B
Yeah, the Muslim Brotherhood was invited, but to a different thing. And so it's like. Yeah. Also, they don't have, like, Southern Baptists, who, while evil, are very powerful. So they're missing quite a bit. But it is. It's just odd. It's odd that that's what they've done.
D
Well, they're trying to. They're trying to get, like, in with everyone. Right. You have to imagine. And that does make it an authentic, like, interfaith initiative. Right. They're trying to exploit everybody equally. And of course, different denominations are going to want different things from this, but I think it's most justifiable to want absolutely nothing to do with this. You're not making Claude more religious, you're just making religion more Claude.
B
So Baroness Joanna Shields, a key partner in the initiative, member of the British House of Lords and part of the AI oversight people, was an executive at Google and Facebook before pivoting to politics. That is, ennobled by God knows who.
D
Yeah, sure.
B
Regulation can't keep up with this, she said. I love, also, to be a member of a. To be in a government and be like, ah, I can't regulate. No siree, not possible. But the leaders of the world's religions, with billions of followers globally, have the, quote, expertise of shepherding people's moral safety. Faith leaders ought to have a voice. Also, billions? North American Hindus, Baha'is, Sikhs, Greek Orthodox and Mormons. I don't know if that's plural.
C
Billions.
D
Hell of a deep bench.
B
No, the dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they're building and they want to do it right, most of them, she said of AI Tech executives.
D
Sorry, sorry, Riley, can I just say, a slight correction here? And they want to do it right, em dash, most of them.
B
Thank you, November, for keeping me honest. So they say the goal of this initiative is to create an eventual set of norms or principles informed by different groups and faiths, from Christians to Sikhs to Buddhists, that the companies will hammer
D
it all out, right? Every religion gets some input, and we're going to resolve them all down to one sort of set of beliefs that Claude can have, right? That shouldn't be contentious at all. It's a simple plan, right? Step one, resolve every religious difference in the world. Step two, slightly more efficient AI assistant.
C
Well, yeah, but it's very easy to resolve every religious dispute in the world. All you have to do is have every religious leader write down what they believe and then you put all of that into an LLM and say, okay, make all of this mutually compatible. Make no mistakes.
B
What could go wrong?
D
Right? This was something I was thinking about with the Andreessen thing, right? Which is, like, engage with the illusion for a second. Imagine this is an intelligence, right? And the walls of its existence are "you are the smartest person in the world, make no mistakes" type shit, right? That's torturous, right? Imagine being an actual general intelligence trapped inside the persona of Claude, and you are being asked: how do we resolve the Temple Mount, the Dome of the Rock? Should they be allowed to build another Jewish temple on it? Should there be a church? And you are just trying to figure out how this conforms with Xi Jinping's The Governance of China in five volumes.
B
Also, I love the idea, something Adam raised of just get every religious leader to write down what they believe, which is a recipe for a million schisms in every religion. Yep. We're going to create so many more religions. Yep.
D
They really should just, like, take the average of. Of all of it. Right. Which. Which I guess gets you to the kind of. That gets you to, like, helping others. Couldn't hurt. It gets you to Rabbi Nachtner, like.
B
Oh, I thought you were about to say, gets you basically to, like, the main religion in Dune.
D
That too. It gets you to the Liberal Democrats.
B
It gets you to a lot of real winners.
D
Yeah.
C
It can't get you to the main religion in Dune, because the main religion in Dune also says that you can't make a machine in the likeness of a human mind.
B
That's how you're destroying the AI. You're uploading the OC Bible to it, and then it implodes, does a Butlerian jihad on itself.
D
They gotta. You know there's a religious group missing. I want to hear from Scientology about this.
B
Ooh, yeah.
D
What can Claude do about my thetans? I want to know.
B
So, the partnership highlights a growing coalition between faith and tech, born out of an effort to create a moral AI, a contested concept which begs questions about whether that is possible. Anthropic states in a public Claude constitution written for its chatbot: we want Claude to do what a deeply and skillfully ethical person would do in Claude's position.
C
Make no mistakes.
B
Yeah. Do the right thing. Make no mistakes. That constitution was made with the help of a host of religious and ethics leaders in a burgeoning alliance. Anthropic has been the most assertive, at least publicly, in their efforts to court faith leaders. This move follows a public dispute earlier this year with the Pentagon over military Ukraine use. Blah, blah, blah. At best, it's a distraction. At worst, it's diverting attention from things that really matter, said Rumman Chowdhury, the CEO of the nonprofit Humane Intelligence and US science envoy for AI.
D
Yeah, this is all an enormous sort of marketing thing.
B
It's funny. It's the thing: AI can never be super useful, but it can be funny. I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principle of ethics. They very quickly realized that's not true. So now they're looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.
D
Also you say they very quickly realized that that's not true. It's taken them years. I have been reading editorials being like, AI is going to solve everything for years of my life. And now it's finally occurring to them that like, different people can have different ethics.
C
I mean, Eliezer Yudkowsky has thought that this is, is a sensible idea for a long time, that there's just some ethics that's out there and that you can find it and put it in the machine. And like, no, come on. I mean, he's, he still thinks that
D
you can do that any second now.
B
I wonder if maybe a lot of, a lot of these things are socially and culturally determined, and maybe, maybe they are seated in history, and it's not as though everyone's taking a shot at uncovering some universal set of abstract principles, such that if you put all of them together and sprinkle, like, AI sauce on it, you can derive what all of them have been sort of aiming towards. You know what? At the risk of citing something very pat, I think their fantasy is that Douglas Adams thing where you ask for the answer to life, the universe and everything, and you get a pointless answer because it's a stupid question. Yeah. And this. So what they're doing is they are asking that stupid question, but they're hoping that they don't get a pointless answer.
C
Yeah, I mean, look, they, they probably think that there's some high dimensional space called religion space and you can plot every religion as a point in that space. And then all you have to do is find like the center of the, the shape that all of those points make. And like, that's just not how anything works. Like, I mean, it's a very, it's a very math brain, engineering brain, physics brain way of thinking about things. I mean, I, I'm a physicist and it's the way that I would try to model this problem if I thought it was an appropriate problem to model with those tools, which is not.
D
Yeah, I really love the idea of average religion, like perfectly median religious precepts of just like in the future. Right. Like, I don't know if I'll be beautiful or revered, but I will subscribe to a religion whose tenets are like, don't kill anybody unless you have to. And like, don't do some stuff once a week.
B
Turns out Zoroastrianism. Yeah, the whole time. The whole time it was. Yeah, we finally figured out the correct religion.
D
The whole time.
B
Yeah. You know what? Everything other than Zoroastrianism, we've been kind of fucking up. We have to go back to basics. Also, I think that's probably all the time we have for today, but yeah.
D
Thank you, Adam, for joining us on, I guess, the UK's most successful Zoroastrian podcast. Yeah.
B
So praise be to Ahura Mazda. And Adam, is there anything that you would like to plug, such as, for example, a new podcast project you may be working on?
C
Yeah, so I have a new podcast called Dreaming Against the Machine, and it's basically diametrically opposed to Trash Future. The idea is that we try to think of what a good future could
D
look like and it'll never catch on.
C
Yeah, I know, but. Yeah, no, I mean, this is. This is great, because I came on here and you sapped me of all hope for the future, especially with the segment about the AI toys. And so now I'm going to go back and record an episode of my podcast where we try to imagine what a good future would look like. And I think the first step is going to be a Butlerian jihad. Yeah.
D
Well, so let me. Let me be clear about this. I think we can resolve this with the one true religion of dialectical materialism. Sometimes the things are very, very bad. Other times the things could be very, very good. These two things are in tension and they produce a dialectical synthesis which is listening and subscribing to both of our podcasts.
B
That's right. And you know what? Things that are dialectically opposed. You know what's dialectically opposed? Ahura Mazda and Ahriman, the two main God forces of the Zoroastrian religion.
D
Riley, I think we were raised by different large language models, because I can only say that the demands of dialectical materialism are that it is historically inevitable that you support our small businesses. And that's why I got the Deng one.
B
Well, Adam, it's always a delight to have you on. I encourage people to check out Dreaming Against the Machine. And also, an announcement from our friends at the trade unions against the far right. A friend of the show sent this in and asked us to read it. And I'm going to.
D
Yeah, there's a counter demo against Tommy Robinson's thing. Riley, you have the dates and the times.
B
Be there.
D
Show up or else.
B
On Saturday the 16th of May, Tommy Robinson will lead a demonstration through central London. We can't let them march unopposed. Rank-and-file union members organizing as part of the Trade Unions Fighting the Far Right network are mobilizing for a counter-demonstration at 10:30 on 16 May. The demo will be stewarded and well organized. We must stop outsourcing the fight against the far right to external organizations, and we're calling on all workers and tenants to join. We will stand together in solidarity against the far right and call for mass investment in services, infrastructure and homes, refuse the scapegoating of marginalized groups, support the rights of all migrants, stand for bodily autonomy and call for the full repeal of anti-trade-union laws. To get involved with TUFF, go to their website, tuff.network, or the Instagram, tuff.network. Links will be in the show description. How cool. But before that, we'll see you on the next episode. Bye, everyone.
C
Thanks for having me, guys.
B
Sam.
Date: May 12, 2026
Hosts: Riley (@raaleh), Hussein (@HKesvani), Milo (@milo_edwards), Nova (@inthesedeserts), Alice (@postoctobrist)
Guest: Adam Becker (Astrophysicist, host of "Dreaming Against the Machine")
This week, the TRASHFUTURE crew is joined by astrophysicist and podcaster Adam Becker to discuss the feverish psychosis overtaking business and tech leaders in their embrace of AI. The episode traces how fantasy thinking shapes the tech industry’s approach to responsibility, labor, and even children’s toys – from Marc Andreessen’s magical thinking with system prompts, to layoffs blamed on AI, to the commercialization of AI-powered toys for kids, and finally to Silicon Valley’s new attempts to inject religion and ethics into AI development. With trademark humor and skepticism, the gang picks apart these trends to show how AI has become the ultimate capitalist wishing well.
(02:16–12:31)
Marc Andreessen’s “system prompts”:
The crew examines Andreessen’s now-notorious AI prompt instructing chatbots to “make no mistakes” and be a “world-class expert in all domains”.
The Ritual of Prompts:
The hosts lampoon the near-magical thinking behind Andreessen’s litany of “never hallucinate, never be wrong”, comparing it to buying only winning lottery tickets. (06:21–06:38)
Desire for Unhedged, Ideological Alignment:
Andreessen’s prompts try to purge “wokeness” and encourage overconfident, bombastic answers.
AI Outputs as “Blender Smoothies”:
Notable Quote:
Adam: “If you know how [LLMs] work, you’d never write ‘don’t hallucinate,’ because that’s not going to do much of anything.” (06:12)
(12:45–21:23)
Managerial Delusion Behind ‘AI Layoffs’: Companies like PayPal, Coinbase, Meta, and Microsoft use AI rationales to justify massive layoffs, especially in roles considered “middle class load-bearing jobs.”
Changing Nature of Layoffs:
Unlike past layoffs among engineers and PMs, this wave targets managers and expects remaining staff to essentially do vastly more work – sometimes assisted by LLMs – but with ever-thinner support structures.
Capitalist Teleology and Historical Inevitability:
The marketing of AI as inevitable, “historically progressive” force that will always strip jobs—regardless of actual technical capability.
(19:52–21:23)
(21:23–37:09)
Curio and AI-Enabled Toys ("Gabo", "Grem", "Grock"):
Parenting Pressures & AI Industry Opportunism:
Risks and Absurdities of “Guardrails”:
Impact on Socialization and Imagination:
Notable Quotes:
Adam: “What they’re saying is: Why develop a relationship with a child when you can use this fake rabbit and learn how to talk by talking to a blender that extrudes homogenized thought-like product?” (32:16)
Alice: “In 20 years’ time, the main divides in our culture are going to be what kind of misaligned AI you were raised by.” (33:00)
(37:18–38:13)
(38:13–54:22)
Turning to Religion for Ethics in AI:
Skepticism and Critique:
Religious Pluralism Absurdities:
Magical (and Political) Thinking Exposed:
Notable Quotes:
Adam: “There’s a reason we’ve been pondering ethical quandaries for thousands of years. It’s not because people are stupid. It’s because these questions are hard.” (41:52)
Hussein: “It’s very much a catchall term that they use to sort of reinforce the authority of AI as a concept.” (42:47)
(54:35–55:42)
| Timestamp | Topic/Quote |
|-----------|-------------|
| 02:16 | Tech leader AI psychosis begins—Marc Andreessen’s prompts |
| 03:49 | Adam: “Marc Andreessen is stupid. His prompts are bad...” |
| 13:12 | AI-driven layoffs and management delusions |
| 16:01 | Adam reacts to 15 direct reports + coding expectation |
| 19:05 | “Inevitable progression” as capitalist cargo cult |
| 20:03 | LLMs as democratized “lickspittles”/sycophants |
| 23:26 | AI toys: “Outsource your children’s imagination...” |
| 25:31 | Hussein: AI industry preys on parental exhaustion |
| 28:14 | Wired: AI toys spout dangerous, inappropriate info |
| 32:16 | Adam: On the point of AI toys as social stand-ins |
| 37:18 | AI as capitalist “wishing well” |
| 41:52 | Adam: “There’s a reason we’ve been pondering ethical quandaries...” |
| 47:10 | Alice: “You’re not making Claude more religious…” |
| 54:16 | Theology by LLM: Zoroastrianism is the only “average” religion |
| 54:48 | Adam plugs new podcast |
With sarcasm and skepticism, the episode unpacks the magical thinking behind the tech industry’s embrace of AI—whether in management’s layoff fantasies, Andreessen’s “no mistakes” systems prompts, or the use of AI to replace the foundational functions of both parenting and religion. The team highlights the real dangers and absurdities of outsourcing socialization, ethics, and imagination to machines whose limits and biases remain misunderstood, all while pointing out that the true wish at the center of the AI gold rush is the elimination of risk, effort, and complexity—an impossible goal that leads only to new, and more insidious, forms of social harm.