
Who’s paying AI influencers? Brain‑safe AI habits, and Waymo cabs block first responders.
LinkedIn Ads Narrator
This BBC podcast is supported by ads outside the UK. Ever invest in something that seemed incredible at first but didn't live up to the hype? Like those $5 roses at a gas station? Or a secondhand piece of technology that breaks in the first 10 minutes? Marketers know that feeling. We optimize for the numbers that look great: impressions, reach and reacts. But when they don't show revenue, well, that's a not-so-great conversation with the CFO. LinkedIn has a word for that: bull spend. Now you can invest in what looks good to your CFO. LinkedIn Ads generates the highest ROAS of all major ad networks. You'll reach the right buyers because you can target by company, industry, job title and more. So cut the bull spend. Advertise on LinkedIn, the network that works for you. Spend $250 on your first campaign on LinkedIn Ads and get a $250 credit for the next one. Just go to LinkedIn.com/broadcast. That's LinkedIn.com/broadcast. Terms and conditions apply.
Tristan Redman
What's actually happening inside Iran? I'm Tristan Redman, host of the Global Story podcast from the BBC. Iranians have been under a near-total Internet blackout for several months. Few Western journalists have been permitted to operate in the country. But in recent weeks the BBC's chief international correspondent, Lyse Doucet, has been reporting on the ground in Tehran. For more, listen to the Global Story on BBC.com or wherever you get your podcasts.
Thomas Germain
Using AI is making you stupider. If AI is coming up with the idea for you, you're going to get worse at coming up with ideas.
Karen Howe
Hello and welcome back to the Interface, the show that decodes how tech is rewiring your week and your world. I'm Karen Howe.
Thomas Germain
I'm Thomas Germain. And Nikki Wolfe is out this week. But we've got a great show for you.
Karen Howe
Today on the Interface: the secret campaign to make you fear Chinese AI.
Thomas Germain
Speaking of AI, we'll tell you how you can use it without turning your brain to mush.
Karen Howe
And are self driving cars getting worse?
Thomas Germain
I hope not.
Karen Howe
Hey Tom, you spend a lot of time on social media?
Thomas Germain
I sure do.
Karen Howe
Have you noticed any of the people that you follow suddenly randomly starting to talk about how much they love AI?
Thomas Germain
Well, there's certainly a lot of me talking about AI on social media. But I have noticed over the past couple of years, obviously it's become the topic of conversation across our society, but I did get this weird feeling that there were a lot of people talking about it in this strange way, people who weren't normally invested in this subject. And it did actually stick out to me.
Karen Howe
You definitely have spidey senses. And you're onto something, because there's this crazy story this week in Wired, written by the journalist Taylor Lorenz, where she discovered that there is essentially this secret campaign where the AI industry is funneling money to influencers to make them talk about how much they love American AI and how much you should fear Chinese AI.
Thomas Germain
So they're paying all these influencers online. Like, are they giving them a script? Or how does this work?
Karen Howe
So they're paying these influencers, and the influencers are not disclosing, when they generate content on behalf of this client, that it is this client that's funding them. So they label their posts and say it's an advertisement, but they don't actually say an advertisement for whom. And the reason Taylor Lorenz discovered this is because she was one of the people this campaign reached out to, asking if she would accept money to be part of it. And so she ended up getting a bunch of documents that are given to these influencers to explain how they should be engaging in this campaign. And there are kind of two parts to this campaign. One part is where they talk about how much they love American AI. And, you know, the briefing document even says, talk about how you use AI and how it's great that it's made in the USA while feeding your kids, or something like that. Like, it's really specific.
Thomas Germain
Yeah, kitchen table politics.
Karen Howe
Yeah, exactly. And then the second part is this idea that you should be really scared that Chinese AI companies might beat American AI companies, and that would be bad for everyone. And it's just such a fascinating story for a few reasons. One is that, in general, this argument of rah-rah American AI, boo Chinese AI is one of the things that the tech industry has long loved to use to try and ward off regulation. So they're really trying to tap into this argument that has arguably worked for a really long time for them, but in this subtle way, where instead of using it directly with policymakers on the Hill, testifying before Congress, they're now trying to just mainline it straight to people without actually disclosing that they are doing that. The second interesting thing about this story is who is actually funding this within the AI industry. So the money is actually coming from this group called Leading the Future. And I've done a little reporting on Leading the Future before. This is an AI industry super PAC that spun up with the explicit intent of pumping a bunch of money into influencing the upcoming elections. And Leading the Future has said that they have currently raised around $140 million for this purpose.
Thomas Germain
So Leading the Future, those are the guys who are behind this effort to change your mind.
Karen Howe
Leading the Future is behind the money that is going into this effort. So there are multiple organizations involved. Leading the Future funds this other entity called Build American AI. And then Build American AI is the one that actually distributes the money to this social media company to then source the influencers. So it's this really convoluted trail of money, but it all starts with Leading the Future. And Leading the Future is funded by people like Greg Brockman, the president of OpenAI, and Andreessen Horowitz, the venture capital firm, and is modeled after this super PAC that was originally designed by the crypto industry, called Fairshake. I don't know if you've ever heard of this super PAC, Fairshake. Fairshake was this really infamous super PAC that the crypto industry spun up. It was the brainchild of this man named Chris Lehane, who is now the head of global policy at OpenAI. And at the time, Chris Lehane was this very experienced political operative who had previously worked for the Clintons. And Lehane, after working for a while in Washington, hopped over to Silicon Valley, and is essentially credited as the man who taught Silicon Valley how to play politics. And he at the time was like, hey, we should start funneling the tech industry's money into influencing elections. And Fairshake came out of this idea and started pumping around $200 million into the 2024 elections. And they say that they had great success. They were able to elect a pro-crypto Congress, and then they ended up with pro-crypto legislation. And so the AI industry has taken this exact playbook, the crypto playbook, created Leading the Future. Chris Lehane is still involved behind the scenes. And they're pumping money into elections, but there's kind of this new element now where they're also pumping money in this dark-money way.
Because the thing about funding elections is that you have to publicly disclose when you are funding and who you are funding and how much you are funding those elections. But when you're funding influencers, influencers don't actually have to say any of that.
Thomas Germain
There's nothing, like, illegal about this. You could have a conversation about the ethics, but you are allowed to pay someone to promote your economic or political or business goals. What's the defense? What are the people involved with this saying?
Karen Howe
Well, they don't really have one. I wouldn't call it a defense; I would call it maybe an explanation. So she did reach out to Leading the Future, and Leading the Future said, quote, the United States has an opportunity to remain the global leader in AI innovation, and we're taking that message to the broadest possible audience through an all-of-the-above communication strategy.
Thomas Germain
Sounds about right. I mean, that's what I would do.
Karen Howe
And she also reached out to OpenAI. An OpenAI spokesperson said that OpenAI has no corporate affiliation with Leading the Future, and that the donation from Brockman and the involvement of Chris Lehane are in their personal capacities.
Thomas Germain
Gotcha. So this isn't an official OpenAI communication strategy. It's just that the guys who are in charge of OpenAI are doing this.
Karen Howe
That's right.
Thomas Germain
It's interesting. If you pay close attention to the tech industry, there's been this narrative about how we all need to be so freaked out about Chinese tech companies for a while now. Right? There's this phone manufacturer, one of the biggest in the world, called Huawei. They're not allowed to sell phones in the United States, because they got banned as a national security threat. There's this story that we talked about a couple of weeks ago where, apparently, you're not going to be allowed to sell a Wi-Fi router made outside of the United States without special permission, and most routers are made in China. And now we're hearing about this campaign to make us afraid of Chinese AI companies. It's worth noting, and Karen, this is your area of expertise, I'd love to hear you talk about this, that the whole reason for being for companies like OpenAI and Anthropic, the company that makes Claude, is they're like, AI is super dangerous, and if we let the bad guys make it first, then we'll all be in trouble. So we need to get there first. That's the idea at the center of this whole thing.
Karen Howe
Yeah, that's right. And part of it is because there are people within Silicon Valley that genuinely believe this. And part of it is because they have just realized that this is the trump card of all arguments to ward off regulation.
Thomas Germain
Right.
Karen Howe
I was recently talking with some policymakers on the Hill. They were actually telling me that this argument is really losing steam, because a lot of policymakers are beginning to feel played. They feel like they have listened to Silicon Valley for over a decade now, you know, first with the social media companies making this argument and now with the AI companies making this argument, and it really just has not panned out the way that Silicon Valley said it would. The argument was, hey, don't regulate us, because then we're going to dominate. And ironically, now a lot of Chinese AI models are not just dominating globally, they're also dominating within Silicon Valley. Like, there are a lot of Silicon Valley startups that are just using Chinese AI models instead of American ones, because they're cheaper, they're easier to deploy, they're open and free to download. So policymakers are getting really tired of this rhetoric, which I suspect might be one of the reasons why the AI industry is now going straight to the people with this messaging. They probably sense that they're losing the policymaker audience, so they're trying the same rhetoric with a different one. Also, the public is not really into this narrative either. So it almost feels a little bit like the tech industry is running out of ideas for how to actually message itself and turn around the narrative. They've tried the policymakers, and they're losing ground there. They're trying the public, and they're losing ground there. And they don't really have new things to say for why we should like their technology and use it.
Thomas Germain
I mean, I guess we'll see. Who knows, maybe this has been enormously successful. We don't really know. I would love to hear from people: how do you feel about Chinese AI companies? Like, do you have a thought in your head about whether you should be worried about them? And if you do, where did that come from? Where did you hear about it? Because these ideas kind of seep in. Right?
Karen Howe
I would also be curious whether our listeners start noticing influencers they follow labeling some content as advertising without mentioning who the advertiser is. You should look for this especially with influencers who don't usually talk about technology. In the campaign, they have been targeting lifestyle influencers, and according to a briefing document, Taylor Lorenz reported that the organization is also seeking to, quote, extend beyond left-leaning female lifestyle and family content creators to focus on left-leaning influencers who are political commentators, business tech leaders and male lifestyle influencers. And they have a whole other operation that is targeting the right-wing equivalents of all of these categories as well.
Thomas Germain
There's a guy I follow who talks about, if you're a dude, how your jeans are supposed to fit. If he starts talking about AI, I'm going to be suspicious. I'm going to know something's up.
Karen Howe
If he just starts saying, wow, AI makes me so much more effective at putting on my jeans.
Thomas Germain
Right? If you're really worried about your pants.
Thomas Germain
So I think a lot of people are afraid of AI. Maybe you're worried that someday China's AI is going to take over, or be better than American AI, something like that. But there's a much more immediate concern, one that makes me very worried about myself personally. And it's this notion, maybe you've heard about this, that using AI is making you stupider. There have been a bunch of studies that suggest that using AI too much, or in the wrong way, is harmful to your cognition. Right? That it is worsening the functions of your brain. And I found this very alarming, because when AI really exploded, when ChatGPT came out in 2022, I was like, this is the most annoying thing I've ever seen. I cannot stand the way that all these guys on Twitter are talking about how it's going to change my life. I was like, I don't want it. But then I was like, well, no, I have to force myself to use this stuff, because this is my beat. This is what I write about. I really need to understand it. So I consciously sat down and was like, how can I inject this into my life as much as possible, so I know it backwards and forwards, so when I talk about it, I know what I'm talking about. Now I'm worried that that effort has ruined my poor little brain. There was a study that came out last year that got a ton of attention, where they took a pretty small group of college students, had some of them write an essay with AI and some of them write an essay by hand, and hooked them up to a brain scan. And the ones who wrote it with AI, their brains weren't making as many connections, and everyone freaked out. And there was this narrative put forward that using AI one time is going to fry synapses or something. Absolute nonsense. That's not true. That isn't really what the study was saying.
The thing is, this is brand-new science, right? AI in the hands of the masses is a new thing, at least large language models, the tech that runs stuff like ChatGPT and Claude. That's new. We don't know how it's affecting your brain, but there are some indications that there are reasons to be alarmed. So I called up a bunch of neuroscientists and professors and experts who study how the brain learns, what makes things stick and what makes them fall away. And the analogy that I kept hearing time and again, they're like, it's not this simple, but you can kind of think of your brain like a muscle, right? There are certain things that, if you stop practicing them, you will get worse at. And we've seen this with some older technologies. So when GPS first came out, they did a bunch of studies, back when a lot of people were using it but a lot of people weren't, so you could do this control-group thing. They found that people who use GPS all the time get worse at essentially building mental maps. Their spatial memory and spatial reasoning get worse. And then they did these studies where they took people who continued to use GPS and went back and studied them again after a year or two, and found that it continued to deteriorate. Which is, you know, I don't know if you have this experience. I can't get home without looking at my phone. Like, there are certain routes I take all the time, but I don't know where anything is anymore.
Karen Howe
Yeah, I can't remember directions. And ever since I got a phone that could store contacts, I have not remembered a number ever again. Like, all of the numbers that I remember are from my childhood, right?
Thomas Germain
I could tell you my phone number from where I grew up in LA. I don't know my parents' home phone number now, which I dial all the time, but I just hit the little button. If I lost my phone, I'd never speak to my parents again, I guess. And there's also, have you ever heard of the Google Effect? This one is older. When search engines first took over, they did a bunch of studies, and they found that if you look something up using a search engine, you will have a harder time remembering it, as opposed
Karen Howe
to looking it up through an encyclopedia.
Thomas Germain
Yeah. So essentially what this is about is the effort that you put in. Right? If it took you a bunch of work to find a piece of information, that effort, that's what matters. I talked to this neuroscientist named Adam Green, and he said the way you should think about it is like being at the gym. The goal is to lift the weight.
Karen Howe
Right.
Thomas Germain
But if you get a robot to lift the weight, it doesn't matter that the weight went up. You're not going to get any of the benefit. So there have been some initial studies that suggest AI could cause us some problems, because what this is all about is shortcuts. And AI is the ultimate shortcut machine. There's never been a tool that, across so many different tasks, is able to remove the effort and make things easier. So one big area that people are concerned about is creativity, that AI could be making us less creative. There's this study going through peer review right now, but it's pretty compelling. I took a look at it. They looked at thousands and thousands of college application essays, and they found that across society, there's this implication that the pool of ideas we're all having is getting smaller. That the ideas human beings are having are becoming more similar and less varied. That's bad on a societal level, but even on an individual level, there are all these implications that if you're using AI to do a creative task, right, if you're trying to come up with a joke, and you're going through that kind of process all the time, you're going to get worse at coming up with jokes. And you can think about other analogies, right? Anytime you're brainstorming, if AI is coming up with the idea for you, you're going to get worse at coming up with ideas. It's very narrow, right? It's not ideas in general, it's this particular style of brainstorming task. You don't use it, you lose it. And it's not just that you're going to get worse at this. If we're all using the same machine, the same computer program, to come up with ideas, then original thought could slip away. And I think this is pretty alarming. You're making your brain worse, and our collective societal brain is getting worse.
This could have serious consequences.
Karen Howe
This is why I take the exact opposite approach from you, Tom, which is, I cover the technology, but I refuse to use it. Because I worry about even just asking. Like, you know, when you're staring at a blank page, every writer's worst nightmare, you have writer's block, you're trying to figure out the first sentence. And I talk with writers who say, yeah, they use an AI chatbot to just spark that first sentence so that they're not staring at the blank page. I am terrified of getting that phrasing into my mind and then not being able to get it out of my head, and having it pollute my own original phrasing, my own original thinking, potentially without me even realizing, over time. So much of my writing, I think the strength of it is the fact that it sounds like me. It sounds unique and distinctive, and that is a strength that I want to protect and preserve.
Thomas Germain
That's such a good setup that people are going to think we planned this, because that is the advice here: leave the page blank for longer. Right?
Karen Howe
That's crazy. Okay.
Thomas Germain
Yeah, it's pretty good. It's pretty good, right? What you need to do, and this applies across a couple of these things, is come up with the idea yourself. Right? So you need to come up with your joke, and then you can use AI to help you refine it. Like, I don't think it's useful advice to tell people, stop using AI. If this is part of your life, think about how to use it better without messing things up. So try to extend your own thought, and then use AI to improve upon it, rather than having AI come up with the thing by itself. Adam Green, the neuroscientist I talked to, his advice was: notice how painful it is. If there's a mental task that you need to do, and it's like, oh, this is going to take too much effort, I don't want to do this. If that's starting to feel like too much work, a thing that you used to do all the time, that's when you need to freak out. That's an area where you want to focus and start putting more work back in. There's another one, and I think this is even scarier. There's some evidence that we are getting worse at critical thinking, at least when we are using AI. There was this really compelling study that came out where they talked about this idea of cognitive surrender, where they set up this experiment where people had to come up with a statement, I think about climate change or something.
And they had some people use AI to come up with the idea, they had some people come up with it themselves, and they had some people do both. They found that people trusted AI over their own intuition and their own thought process. So they thought it through themselves, and then they asked the AI, and the AI gave an answer, and they were like, well, the AI must be right, even when it was wrong. Even when they set up an experiment where it's kind of obvious that the robot is wrong, people were like, well, the robot's just smarter than I am. There have been a bunch of other tests, too, that show people who use AI just put extra faith in it, and not just when they're using AI. There was a study from a university in Switzerland that found that people who use AI more often, heavy users of AI, were just worse at critical thinking tests in general. It was a big sample, hundreds of people, so, you know, worth something.
Karen Howe
In general. And what age were these people?
Thomas Germain
Great question. They found that this seemed to affect younger people more significantly. So there are a lot of concerns about how AI is affecting younger minds. Your brain isn't done developing until you're in your 20s. So if you're younger, or you know someone who is, maybe this is something you want to think about. And the advice here is a little bit more difficult if you use AI all the time. But one thing you can keep in mind: if you are asking a question and you wouldn't trust any random person's answer, right, if you have a question that's important enough that you're like, I wouldn't go up to some random guy on the street and trust what he thinks about this, that is a question that you shouldn't be relying on AI to answer for you.
Karen Howe
The random guy test.
Thomas Germain
I love the random guy test. Yeah. Those are explicitly the topics where you need to bring your own judgment. And more importantly, similar to the thing with creativity, what the neuroscientists I talked to said is, you want to try and come up with your own opinion on stuff first. So if you're using AI to do research and figure something out, try to come up with your own theory first. And then, instead of asking AI what it thinks, or what the general consensus is, use AI to challenge your belief. You'll be doing that work, and you'll maintain that skill.
Karen Howe
This reminds me, when I was a kid, my parents would make me teach a subject to learn the subject because it was even more effort than simply just reading a textbook. And so they would make me make slide decks to teach the subject.
Thomas Germain
You're, like, giving little PowerPoint presentations?
Karen Howe
To their family friends, yeah. And to this day, those are the subjects that I remember the best.
Thomas Germain
So the advice is, you should start giving presentations to Karen's family friends specifically.
Karen Howe
No, but that's exactly it.
Thomas Germain
That's it. It's about the effort. When you have to teach something, not only do you need to really wrap your mind around it, but then you need to think about how you're going to explain it, and then you have to go through the work of explaining it. You're engaging with the subject multiple times. And if you can write things out by hand, that's about re-engaging with the material too. All of this comes down to doing a little bit more work. I think AI may not necessarily be bad. A bunch of these guys were like, it's also worth mentioning, there's no reason to assume this is just going to be net bad for our brains. That's not how the history of technology works. We find a new technology and we adapt to it. That's how it's always been. There's no real reason to think this is going to be different, though there are specific concerns we should be worried about. But I think the overall takeaway is, your phone and your computer are now asking you, all the time, do you want to skip the work? Do you want me to just do it for you? And if you say yes a little too often, it might start getting you in trouble. So you should be looking for places where it's like, this is a skill I'm worried about. I'm worried about doing this particular kind of thing. Maybe I don't want to let AI do it for me every time, or I'm going to get dumber. That's not what I want, Karen. I don't want to be dumber. I'm worried about it.
Karen Howe
Well, I'm curious, Tom, now that you've learned all of these techniques, has it changed your daily habits around AI use versus working harder for specific tasks?
Thomas Germain
Yeah, you know, I've been worried about this for a while. So all of my efforts to introduce AI into my life as often as I can, I've now kind of rolled back. A while ago I was like, this is making it too easy to engage with information. I want to work through the difficult language. I want to read the hard technical article, even though I could put it in ChatGPT, just ask it to explain it, and get the gist. I've stopped doing that. I haven't stopped using the stuff. I do find it incredibly useful. But I'm looking for places where I want to skip it, and I think that is a really worthwhile exercise.
Karen Howe
You're on a diet.
Thomas Germain
I'm on an AI diet. That's pretty good. We should have used that analogy from the start.
Karen Howe
Okay, Tom, one other thing that I wanted to talk with you about this week is Waymo.
Thomas Germain
The self-driving car company.
Karen Howe
The self-driving car company that spun out of Google. I took a Waymo once, because I was curious to see how this self-driving car revolution is going to happen. I specifically took it in San Francisco with a friend, and it was just really slow. Like, I was surprised that people found these experiences enjoyable.
Thomas Germain
Yeah. And it's not like a Tesla, where you can sit in the driver's seat and it'll take over. With Waymo, there's no driver in the car. Right? You just get in, and there's no one in the front seat.
Karen Howe
You know, it's a cool experience, you could say, but not after that first ride.
Thomas Germain
You didn't love it.
Karen Howe
I didn't ever take a Waymo again, because I have places to go.
Thomas Germain
Yeah, right. So it was slower than a normal car.
Karen Howe
A really leisurely joyride. It did not feel efficient. And, you know, it's not just the fact that they're slow that causes issues. There have been reports about people leaving the door ajar, accidentally or maybe intentionally, and then the Waymo just stops moving.
Thomas Germain
Yeah, it's the weirdest thing. Right? These completely driverless cars aren't in most places in the world, but in San Francisco, they're everywhere. In Los Angeles, they're everywhere. They're supposed to be coming to London. They're not here in New York, where I live, yet, but the companies are really vying to get here. They're slowly creeping across the world. And it's weird: if you leave the door open, like you're saying, the car can't drive. Waymo was hiring people on TaskRabbit, where you can pay someone to go do something for you, to go close the door, because there's no one in the car. The car can't close the door, so it can't take off. They forgot to make the door robotic too. It's kind of hilarious. I mean, we've just got these robots out driving around, and they're all using the exact same software and the same technology. There have been a bunch of stories over the years about how the Waymos, when they don't have a passenger, have to go someplace and park. So there are these stories about the cars congregating in a certain neighborhood. There was a story where this family was like, the Waymos gather outside our house like a gang. And it's because they're programmed to go find an appropriate place to park, and they're all using the same reasoning. So sometimes they just pick a neighborhood or a parking lot, and it looks like all the cars are hanging out together. It's very strange.
Karen Hao
That's pretty hilarious. But jokes aside, unfortunately, some of these issues have actually caused serious problems, specifically when it comes to emergency responders. Last month there was a private meeting between emergency responder leaders in San Francisco and federal regulators, in which they said they felt self-driving cars were beginning to get worse, and that it was affecting their ability to actually take care of victims, get to them quickly, and then get those victims to hospitals or other types of care that they need. And one of the specific things they noticed is that when an emergency responder vehicle needed to pass rapidly through traffic, the Waymos had this tendency to just stop. This is a new tendency that the emergency responders have noticed. The cars were originally programmed to recognize an emergency responder vehicle and do exactly what human drivers are supposed to do, which is pull over to the side of the road. And for whatever reason, they're not doing that anymore. It is clogging up traffic, it is creating this serious issue, and now emergency responders have to essentially negotiate, or plead, or give feedback to Waymo, the company, to try to figure out what is going on and unclog the roads again. And I think this gets at one of the issues that you've already raised, Tom, which is that these cars are all programmed by the same company and they all operate with the same software. People have mentioned before that the number one argument for self-driving cars is that, generally speaking, they have had a better safety record than human drivers. But there's a flip side to that argument, which is that humans have different failure modes. When there are a lot of humans on the road, of course it can get chaotic, and we can be a little bit dangerous to each other, but we are dangerous in different ways.
Whereas for cars that are all networked together and all programmed by the exact same company, they all have the same failure modes. And so whatever is happening that led the Waymos to all do this exact thing, just stop in the road when an emergency responder vehicle arrives, or to congregate in packs when they're looking for a parking spot, is really driven by the centralization of the operation of all these vehicles in the hands of a single corporation.
Thomas Germain
I actually saw something like this in person a couple of months ago. I was visiting some friends in LA, we were at a dinner party, and you could see the street. This Waymo was making a turn onto the street, which is kind of narrow, and it got stuck. It perceived that there was some obstacle, and there was a person in the back of this car, and we watched it go forward and back and forward and back for like 20 minutes, at which point I would have gotten out of the car. But that's the thing here, right? For fire trucks and cops and ambulance drivers, there's always going to be some idiot who doesn't know what to do, who blocks the road, who causes a problem. But here it's like there's a thousand versions of the same idiot. The roads have been clogged by robots that are all run by the same company, running the same software. Now we're handing the roads to these private companies. There are going to be a couple of businesses with thousands, or maybe tens of thousands, of cars all running the same software, and they're dictating what's happening in traffic, because there are so many of them and they all behave the same way. It's like they're terraforming the earth, right? They're changing the way that cities function. And there are lots of different ways this issue is cropping up. There was this problem where the Waymos were having trouble navigating around school buses; they weren't stopping the way that you're supposed to. And, you know, the company's been working with emergency responders, right? They've been talking with fire departments and local hospitals and police departments and things like that. And apparently they've told them, oh, what you need to do is give the car a hand signal. I don't know what hand signal you give a robot to tell it to get out of the way. Apparently that's not working either. But we reached out to Waymo, right? What did they tell us about all this?
Karen Hao
Yeah. Waymo said, we deeply value our partnership with first responders and our shared commitment to safety. Their ongoing feedback has been instrumental in driving impactful improvements to the Waymo Service.
Thomas Germain
Waymo also told us, and I'm reading from my notes here, that they've conducted in-person training for over 35,000 emergency responders. And they have a special phone number: if you're in a fire truck and you're trying to get somewhere and there's a car blocking your way, you can call it. So that's good. But it is an experiment that we're all living through in real time, on the real roads. So, you know, you tell me.
Karen Hao
Yeah, and going back to this point, the systemic issue here is that one software update can change the behavior of all of the cars on the road. This time the issue is for first responders. Last time it was school buses. The next time, we don't know what it's going to be. And it does feel a little bit haphazard to centralize the control of so many different vehicles on the road to a single corporation and their software updates, and then just watch what happens in the real world with every single update and the potential consequences that come from it.
Thomas Germain
Waymos are expanding to more and more cities, and there are other competitors too. This is an issue that is going to start hitting you at home, in your neighborhood, in the near future, especially if you live in a major city. It appears that they are coming.
Karen Hao
You know what I think? I think we should just get rid of cars.
Thomas Germain
No cars. I kind of like that.
Karen Hao
Let's just all do public transit for the rest of our lives.
Thomas Germain
Yeah. Or we could walk. All right, Karen, if we can extend the metaphor here, I think it's time for us to call a cab and get out of this podcast. But join us next week. If you're in the UK, you can listen to our show on BBC Sounds. If you're outside the UK, everywhere else, you can listen wherever you get your podcasts. Or you can even search for the Interface podcast on YouTube. If you want to get in touch, you can email us@theinterfacebc.com or hit us on WhatsApp. The number is 443-33207-2472. Or you can follow us all on social media. The links to our handles are right down there in the show notes. And a quick thank you to everyone who's gotten in touch with us already. We've been getting some really lovely messages and thoughts from people about the things that we're talking about. We love hearing from you.
Date: May 7, 2026
Hosts: Thomas Germain & Karen Hao (Nicky Woolf absent)
This episode delves into the covert campaign by AI industry interests to fund influencers, aiming to sway public perception in favor of "American AI" and stoke fear of "Chinese AI." The hosts also discuss whether using AI is harming individual and societal cognition, and whether self-driving cars—specifically Waymo's—are getting worse in real-world conditions.
The tone is conversational, insightful, occasionally wry, and geared toward demystifying tech trends without jargon.
"[The briefing document] says, ‘talk about how you use AI and how it’s great that it’s made in the USA while feeding your kids or something like that.’ Like, really specific."
— Karen Hao [04:14]
"When you’re funding influencers, influencers don’t actually have to say any of that."
— Karen Hao [08:16]
“You should try to look for this content with influencers that usually don’t, in fact, talk about technology.”
— Karen Hao [13:23]
"If just anytime you’re brainstorming, if AI is coming up with the idea for you, you’re going to get worse at coming up with ideas."
— Thomas Germain [21:29]
"People trusted AI over their own intuition and their own thought process... even when it was wrong."
— Thomas Germain [25:57]
"Notice how painful it is if there’s a mental task you need to do... If that’s starting to feel like too much work, like a thing you used to do all the time, and now it’s like, I just don’t want to—that’s when you need to freak out."
— Thomas Germain [23:54]
“It does feel a little bit haphazard to centralize the control of so many different vehicles on the road to a single corporation and their software updates. And then just watch what happens in the real world.”
— Karen Hao [38:39]
“I’m on an AI diet.”
— Thomas Germain [30:19]
“Let’s just all do public transit for the rest of our lives.”
— Karen Hao [39:40]
This episode skillfully exposes hidden tech industry influence tactics, explains cutting-edge science on AI and cognition, and brings the listener inside the everyday (and existential) glitches of self-driving cars, all with the hosts’ signature blend of sharp analysis and good-humored banter.