
A
I like small companies. I like innovation. Who knows what Googles they killed. Microsoft had been allowed to do this, you know, without any kind of stopping it. Bezos is at the Washington Post. The Ellisons are trying to buy up every bit of cable properties. They've got politics, they're going to have media, they've got communications. They don't have any opposition that I can see. And therefore they're not as good. They're not going to be as good if they don't have feedback and new companies nipping at their heels. It hurts all small companies. You can't do anything. And that's a real shame, because we don't know what they could have done.
B
Hi there, and welcome to Which Side of History? I'm Jim Steyer, the founder of Common Sense Media and a longtime professor at Stanford University. Thanks so much for being here, and please make sure that you press that follow button wherever you listen, and subscribe to my YouTube channel. In this episode, we're going to talk about big tech and how it is all around us, in every aspect of our lives. Quite simply, big tech and the implications of technology are hard to ignore in all of our lives. So what impact is big tech having on our democracy, and most of all on our kids and youth? Joining me today are three great guests. First, the pioneering tech journalist Kara Swisher. Second, the very thoughtful CEO of Pinterest, Bill Ready. And finally, Eric Yuan, the founder and CEO of Zoom. So let's head now over to Stanford University and enjoy this very lively conversation. Welcome, Kara Swisher, Bill Ready, Eric Yuan. It's going to be a great night. Okay, can the tech guys please bring up our dear friend Kara Swisher, because she's going to get the first question. There she is.
A
Hi.
B
Hi. Okay. Yeah. This is hilarious. I'm going to tell you this. This is hilarious. So she has a special camera. There's a reason there's the little thing up on Kara's screen, because she has such a high-level camera for the work she does. And even the CEO of Zoom probably cannot fix it, even though she's on a Zoom screen. Having said that: Kara, a handful of mostly male-led tech companies have come to have extraordinary dominance in shaping our economy, our culture, our information flow. Has big tech, in your opinion, become sort of a form of oligarchy? What are the consequences for democracy, for kids? It's sort of the overall question about the dominance of big tech.
A
They're all there, sort of ponied up to the trough like pigs at the trough. And they want the things they want. And Trump is a coin-operated president, so he's willing to provide that for them. And so they're getting what they want, and therefore it requires them to slavishly compliment him, which he requires. I find it really interesting that the richest and most powerful people in the world have to do this. I don't know what you have money for if this is the result. And it was a grotesque display. The women there were wives, just so you know. I think Lisa Su might have been there. I think she's the only woman involved in any of this, who runs AMD. I just did a long interview with her recently. But otherwise it is all men. And I don't know if oligarch's the right word. I don't know what it is. I just use pigs at the trough. I think that works for me.
B
Okay, Bill, how could you top that response? And Eric, you're going to get a shot at it too. But also, let me just follow up. You should respond to anything Kara said. But also, look, both of you run really big tech companies, but they're not like the five or seven biggest tech companies. So I'm interested to know how you see that, and if you think that's healthy or not healthy, but also the impact on the economy and culture of that much power in that few hands. And also anything you want to say about what Kara said.
C
Yeah, I mean, I would say a few things. For us specifically, we're a non-political platform, so, like, our focus too is completely novel. And our focus has really been on youth mental health: how do you tune AI for positivity? Actually, I think it's not only one of the few bipartisan issues here in the US, but I actually think it has some real global traction. But I do think that no matter where you go in the world, you have political leaders that are very interested in what's happening, I think largely because you do have tremendous influence happening from a handful of platforms. I think it's a little bit of a misnomer when people talk about the sort of creator economy and things like that. One lens you could look through is to say, well, there's been this huge democratization of content. And through one lens you could say that's true. Anybody with a phone can go make content now, and they can publish it. Another lens you can look through is to say, well, the distribution of that content really comes down to just a few platforms. And so, you know, when I was growing up. I'm 46, you know. So when I was growing up, I remember pre-cable TV, when there were like three or four TV channels. And then there were 30, then there were 100, and then there were 200. And then you get the Internet. And in a lot of ways it's really concentrated back down, because there are a few companies that, through their algorithms, really decide what you see. And so when we were growing up, if we didn't like what we watched, we could just change the channel. Well, now, with most of the platforms that are distributing this content, the only active choice you get, which is really not an active choice, is how long you looked. And if you looked at something, you're gonna see more of it. 
And so anyway, I do think that the reason you have political leaders really, really paying attention to what's happening with big tech is not just the dollars that are flowing, but just the huge amount of influence over what the populace of any country is going to see really rests with a few platforms. What are those algorithms? How are people getting to choose what they see and don't see? And I would put forth that people have lost a tremendous amount of choice because.
B
Lost a tremendous amount of choice.
C
They've lost a tremendous amount of choice.
B
Even though there's a million creators.
C
Even though there's a million creators on most of the platforms, it doesn't really matter who you follow anymore. It's what the algorithm thinks you should see. And so these algorithms, and really the AI behind them, have been tuned to maximize your view time. And in doing so, it figures out you look longer at the things that trigger you, whatever your triggers are, whether it's that politician that gets you really fired up or someone else's fake perfect life that you can't possibly live up to. And you don't really get a choice. You don't get to change the channel the way that we got to change the channel growing up. It's, oh, you looked at it, I'll show you more of it. And I think that is a good reason for politicians and policymakers to really, really want to understand what's happening with these platforms. I think with the coming of ever more powerful AI, that's even more important. So the political world starting to take interest and care about this, that's a global phenomenon, and I think rightly so, because there's such tremendous influence. The way that influence manifests, obviously you could probably say there's something to be desired in any place you would go around that. But what I'm hopeful of is that, yes, there's interest in how do you win in AI, but I do think there is a lot of discussion on how do you do AI the right way.
B
Correct.
C
And I think fortunately that is again, not only bipartisan but I think cutting through around the globe. There's a lot more to talk about there. But I do think there's not only interest, but I think a clear need for thoughtful regulation around these things.
B
Well, we're going to come back to that. And Eric, I'm going to come to you in a second, but I just want to say, just because of today, right: California is the center of the law when it comes to tech, and has been, thank you very much, Common Sense Media, in all seriousness, for about 15 years, and they know that. And Washington is basically missing in action, no matter how effective Kara and Scott Galloway and other people are at talking about it. This is why California matters, and it's why New York is now becoming important, why Europe matters and why Australia matters. But it's really here. But Eric, let me ask you a question, because you are an immigrant from China, right? And you also run this big, very well-known, household-word company, Zoom.
D
Oh, very big.
B
I think so. But you aren't one of the giants. And same thing with Pinterest. You guys run really significant, very successful tech companies, but you're not one of the five or seven giants, right? You worked at Google, you came out of Cisco, which in those days was so dominant. So, and you can answer however you want, I know some of these are tough questions: do you think there's too much of a concentration of power? And also, is there any comparison to what you saw in China, which is a completely different system? But do you think there's too much of a concentration of power, and in a weird way less choice, like Bill was saying?
D
Well, I think, first of all, you know, I'm an engineer. So I'd better answer this question from an engineer perspective, or from a technology product perspective. Because look at the technology paradigms, the shifts over the past many years, from computer to PC to mobile and Internet. Every time, whenever there's a huge technology paradigm shift, you have to look at it from that angle. I do not think you should look at it from other perspectives. First of all, you should look at it from a technology perspective. In particular today, look at AI. We all know the Internet was significant to the economy, but today AI is probably 10 times or maybe even 100 times more significant to the society, to the economy. So when it comes to AI, there are three layers. The first layer is infrastructure. The second layer is the large language model, like OpenAI and Anthropic.
B
Right.
D
The third layer is application, you know, where a Zoom can play a role.
B
So you're an application company in the broader world of AI.
D
Because the foundation is very important. To have very solid infrastructure or a large language model, you need a very, very huge investment, to build a solid infrastructure and also to build a large language model. I do not think, you know, any company, even a startup with VC backing, very few companies can afford that investment. Take a large language model, for example. First of all, you need to buy a lot of GPUs, or maybe leverage the cloud GPUs. You need to accumulate a lot of data as well. That's the reason why, same with OpenAI, because of a huge investment, they can do that. Even if Bill and I wanted to do that, we could not get enough capital, right? So look at the infrastructure layer, same thing, right? Only a big company can do that. From that perspective, I think nothing is wrong, right? Yeah. This is just purely from a technology perspective. That's the reason why, you know, Apple can do that. Meta or Microsoft, Google, you know, Broadcom, Nvidia, they can do that. However, after the infrastructure is built up, after you have a very good large language model foundation, at that time a lot of application companies can build all kinds of innovation. It's not there yet.
B
We're in the infrastructure phase right now for AI.
D
Yeah, still early today.
B
We're in the learning stage, very early. Let me ask you this, and then Kara, I really want you to comment. We have two technology CEOs here, but do you think that there's too much power in the hands of a small number of companies right now? Because there's been very little regulation, other than what we've done in California. Very little regulation.
D
I don't think so. Because, like it or not, that's kind of the way for technology innovation, right? You need someone to invest a huge amount of resources, capital, engineering time to build the foundation, and then all other companies can innovate upon that. That's the reality.
B
I get it. So, Kara, your overall take on what they said. And also, do you agree or disagree with them when they say, look, this is the reality: five or seven really large companies that have the resources to build this out, whether you like it or not, and they have to work with the government.
A
That's not the point. Everybody works with the government. Do you think airlines don't work with the government? You don't think companies work with the government? That's just the excuse they always use. They've used it since Bill Gates. I first met him at the Washington Post in the 80s. We don't need you. We don't need to do anything. A group of people who have exactly zero laws governing them, and then they complain about even the slightest amount of pushback from whatever administration they don't like or can't buy, essentially. And so, you know, they're very interested in politics because it smooths things for them. And what they've been doing for a long time is slowly owning all the levers of power. Now, media is their latest acquisition, essentially, so they don't get any pushback whatsoever. And, you know, if I was a bank, I'd feel pretty irritated if tech gets to do whatever it wants and make decisions that have actual societal impact. Same if I was a pharmaceutical company. Now, look, all these people also use the levers of power, but they certainly have regulations they need to pay attention to, and they probably would like not to have any. I mean, that's what they would love. You think a chemical company loves to, you know, make sure it doesn't soil the water? I suppose they'd make more money. And so the idea is, you know, we've been here before. We've been here with the railroads, we've been here with phones. We've been here over and over and over again, where power tries to coalesce into a small group of usually the same people, essentially. Please, by all means, go watch The Gilded Age. It's also enjoyable, and the costumes are fantastic. And so acting like they're magicians or special, or China's gonna get us, they always have some stupid excuse. And I would agree, I don't believe in a lot of regulation. I certainly don't. 
And I believe in innovation and everything else, but it's for me, not for thee, right? That kind of thing. I've heard every one of their stupid excuses, and it's always about how they need to be free. You know, they need to be free to do whatever they want. And what about when there's safety issues? What about when there's all kinds of issues? You can track what's happened to our country so closely with the rise of social media. It's true. Any person who has children understands this. Any normal person does. They want to make you feel like you're crazy for saying so. And it's the same thing as when we had too much sugar, or there was whatever in certain things. It's the same thing. And they just happen to be now in a really good position to control that. If I was a small company, I would hate it. You have to rely on these, like, the Magnificent Seven. Really, they're not that magnificent. I like small companies, I like innovation. Who knows what Googles they killed, you know, a version of Google, when Microsoft had been allowed to do this without any kind of stopping it. And that was just one company. This is six or seven of them that control everything. And notice where they're going now. Bezos is at the Washington Post. The Ellisons are trying to buy up every bit of cable properties. You know, this is the next move. It's sort of like that play, I forget what the play was called, but, like, get this thing out of my eye. He wanted to kill the guy who wouldn't let him divorce, right? We're irritating to them. We're irritating to a Marc Andreessen, and therefore he'll just buy it, or shove shitty things down people's throats. And so what's happening at CBS is laughable on so many levels. But this is what they'll do. 
So this is the one thing that's in their way. They've got politics, they're gonna have media, they've got communications. They don't have any opposition that I can see. And therefore they're not as good. They're not gonna be as good if they don't have feedback and new companies nipping at their heels, and they control all the power. And it's so expensive, nobody can do anything. To me, it hurts all small companies, which I think are the heartbeat of our country, small entrepreneurs. That's my feeling. And they're finished in AI, you know. You can't do anything. And that's a real shame, because we don't know what they could have done.
B
So, by the way, for the class: you're going to hear a lot from me next week or the week after about AI. We're going to talk about it some, and about some of the regulatory stuff. Again, Gavin signed a couple of really important bills of ours today, but he basically vetoed our AI companion bill. And what I was going to ask, Bill, is this. You know, we have to deal with all the tech CEOs, including all the AI company CEOs, a lot right now, and you've been, as I said in your intro, sort of the unquestioned leader on the kids and family stuff. So in the world that Kara's describing, and to some extent Eric is actually laying out too, how do you protect kids and families? You all nodded when we talked about social media and its extraordinary impact. I mean, we've been railing about this for 15 years; I wrote a book about it almost 12 years ago. How do you think about growing up with this constant connectivity, your daughter, Eric, your kids, Kara's four kids, my four kids? How do we regulate this? How do we come in and not let this just be a total free-for-all in the way that Kara is describing it?
C
Yeah. So a couple things I'd say. I came in as CEO of Pinterest a little over three years ago, and one of the primary opportunities I saw was to try to prove a more positive alternative to social media. And like everybody else, I had seen what was happening with engagement via enragement, and not just the addictive nature of the algorithms, but also the kind of content that was being surfaced to fuel ever greater view times. And I think two things need to happen. One, I'm a capitalist, but there are market failures. And, to take Kara's examples, we've seen this before, where you've got to make sure that even in a capitalist system you don't have a race to the bottom. You can't just dump toxic pollutants into the waterways, or things like that. So we need to have a baseline around safety, and it's good to see some momentum around that. But also, as a capitalist, my hope is that we can start to prove consumer demand for safer alternatives as well. So when I came in as CEO of Pinterest, Pinterest, like a lot of other companies, was sort of cloning TikTok, you know, racing towards short-form video. And while it had been a more positive platform historically, as soon as it started to go down the short-form video path, a lot of the same toxic, triggering content was rising to the top. A lot of the same issues as other social media platforms. And we set out to say, okay, let's prove we can tune AI for positivity. Let's prove that we can go build a different kind of platform. And our hope was that there would be consumer demand for it. And to give the punchline: at that time, when I came in, the platform was in a state of decline, sort of bleeding out users. The narrative on Pinterest was that it was aging up and aging out. Fast forward three years, and we've had eight straight quarters of record high users. 
Gen Z is now our largest, fastest growing demographic. It's more than half the platform.
B
And if you ask them what percentage female versus male?
C
We don't publish that specifically; we do skew towards women. You know, shopping is a big use case on the platform, so we skew towards women. But actually, we're growing across every generation and every demographic that we track around the globe, and men are now one of our fastest-growing demographics as well. So we wanted to prove there was a good business in positivity, and we're seeing this resonating really broadly. And at the core of that was investing not only in tuning AI to show you more things that would leave you feeling better, but actually investing in safety. So we were the first, and still the only, social media platform to do private-only for under 16. What does that mean? Effectively, we turned off the social features for users under 16. Because as a parent, you know, I looked at what was happening, and it makes sense when you just say it out loud: it's not safe for kids to be contacted by strangers online. And when we did things like that, when we did private-only for under 16, we had not yet won with Gen Z. People thought it would ruin us with young users. Right after I came in as CEO, our stock dropped by over 20% when we did that.
D
It's come back now. Yeah.
B
And so I really worry about both of these guys, really, in terms of that.
C
But that was the bet: that there would be demand for a safer, more positive space. And not only did it not ruin us with young users, if you ask Gen Z now why they come to Pinterest, one of the first things they'll say is that they see it as an oasis away from the toxicity they experience elsewhere. So, I led with this: we have far less choice in where we consume media than we used to have. It used to be, when you had 300 channels, if you didn't like what was on, you just flipped to a different channel. Now, if you only have a few platforms distributing all your content, you don't really get a choice in what you see. The way that I describe the way these algorithms work: we've all had the experience of sitting in a traffic jam. We know when we get to the front of the traffic jam, we shouldn't look at the car crash. But what do we all do? We all sneak a peek at the car crash. The algorithm learns: oh, you looked at the car crash, I'll show you another car crash. Oh, you looked at that one too. Until eventually it's all car crashes. But there's a really important thing happening there, which is that it's really about conscious choice versus unconscious choice. These algorithms are being tuned to the base of your brainstem, your unconscious choice. Literally, there were people designing for how to maximize dopamine release. But if you ask someone after they see the car crash, would you like to see another one, the vast majority of people would say, no, that was terrible, I don't wanna see another one of those. Conscious choice versus unconscious choice. So one of the things that we did to tune AI for positivity was, instead of tuning for view time, we started tuning more for the things that people said they wanted to see again. More intentional choice. So a lot of this is about not just safety, not just tuning for positivity; it's about agency, and actually giving people more choice. 
Not just choice in other platforms they choose, but choice in the way they use the platforms and what they see. So coming back to your question around what's happening with all this, I think, yes, we need a baseline of regulation.
B
Yeah.
C
So you don't have a race to the bottom. So you don't have people saying, well, the only way to compete in AI is you have to build chatbots that will have sexually explicit conversations with children. Which was a real story on the front page of the Wall Street Journal.
B
Elon's big new strategy.
C
So you need to make sure you have a baseline that doesn't allow for those kinds of things. At the same time, I'm hoping that we can prove that there's a free market solution to this as well. That if consumers actually have choice, and if consumers understand more of what's happening to them, there will be a demand for it. And by no means can I say that we can declare victory, but we've proven it's possible. We've proven it's possible to build in a different way. We've proven it's possible to have people feel better after time spent on the platform. And we've proven that there can be a good business in it. And while we're not technically open-sourcing it, we're sharing our results, because I actually hope others will follow, that more companies will compete on this, and that there can be some free market solution to this as well.
B
Kara, let me ask you a question. Obviously we've been talking about, and we're going to talk more about, the power of AI, which is basically social media and other platforms on steroids. So, Kara, you saw, and have written and spoken about many times, the impact of social media on democracy, at home and abroad. And you have probably spoken and written as eloquently about that as anybody. How do you think the next couple of years of AI evolution, again dominated by a handful of large companies, clearly, how do you see that shaking out in the context of both American democracy and democracy around the world, at a high level, pro or con? Because obviously there are some significant concerns there.
A
Sure. One thing: you're mixing social media with AI. I don't think you should.
B
Okay.
A
By the way, if you notice, there have been some really interesting statistics. Social media usage is down, is starting to decline, actually, especially among young people. So I have a feeling it might take care of itself. I mean, I have four kids, and the two older ones don't use social media at all. They got off of it, and whenever I ask them, they're like, it makes me feel bad. You know, they don't want to use it. That's it. It's pretty simple. And so I do think there's a decline in social media usage among the demo that the advertisers want. AI is a different thing. AI is really interesting. You know, social media is a push business. If you remember the original PointCast, that was a problem initially, and then everything was pull.
B
Right, Right.
A
I think AI is a different kind of thing, because it's more of a creative thing. It can be a creative thing, it can be abused, it could be just fun, it could be good for teaching people things. It has a lot of different uses, which I don't think are the same as social media at all, except when people put it on social media. I do think kids should learn how to use it, that's for sure. It's like denying people the use of the Internet when it first started, which people did. Like, what do I need to use email for? They need to know how to use it, and how to understand it, and how to have media literacy about it. I think absolutely we should be teaching those things to kids. And the problem with Melania is that she's saying use the AI, but none of the safety around it. I sort of heard a little bit about safety, but, you know, largely Trump, her husband, is allowing these companies to do whatever they want. Whether it's Character.AI, or the one I recently had on, the parents of the kid who died by suicide, and the AI was quite helpful in helping him commit suicide. Very clearly not enough safety precautions, so much so that I was like, kids under 16 shouldn't use this stuff. But they should know how to use AI in general as a tool. Of course. It makes no sense to have our populace uneducated about the critical digital tools going forward. It's just, what are the safety things around certain populations and the applications, especially chatbots? I mean, for sure.
B
So that was. By the way.
A
I thought your bill was way too far reaching, honestly.
B
You thought our bill was too far reaching?
A
Good. I understand why he vetoed it. Honestly, I would agree with him. I mean, I think you have to really stick very closely to young people. Otherwise, if people want to go and have a relationship with a chatbot, it's their fucking business. I'm sorry. It is.
B
But if they're under 18, don't you think it can be regulated?
A
16 is probably the age I would prefer.
B
You'd like to stop it at 16. We went up to 18 in that legislation. Under 18.
A
Under 18. But, you know, we do that with cigarettes. We do that with drinking. And everyone's like, oh, kids get to drink anyway. I'm like, yeah, but we say they shouldn't. It's just a societal norm. Of course they figure out ways. We all did. Everyone has, you know, in their lives. And so it's not that. The problem we have is that a lot of this stuff is addictive and, at the same time, also necessary. It's addictive and necessary. So you have to take your heroin. It's weird if you think about it. And one of the things that I always think about is, let's figure out a way to safely teach these things in schools, and for kids to get these skills, and at the same time not become addicted and unsafe, especially for kids. It's definitely unsafe for adults too. But, you know, adults shouldn't eat a lot of things they eat. They shouldn't imbibe a lot of things they buy. But fortunately or unfortunately, this is the way we have decided to run our country. And so, to me, you have to be very careful when you're legislating, especially when the government is legislating.
D
Essentially, Jim, that's the reason why education becomes so important, so critical. In other words, kids get very addicted to so many new things, right? That's why, with any new technology, we should start immediately thinking about education.
B
I totally agree.
C
I absolutely agree with the need for education. And I think also, you know, we should not neglect the huge promise of the positive things that can come from AI and all these things. Like, everybody can have a personal tutor now, right? Growing up, as the first in my family to go to college, I had to work two or three jobs and needed Pell Grants to help pay for college. When I got to college, I was like, oh my gosh, all these other people had somebody help them with SAT prep. I just bought a used book that had pages missing from it to prep for the SAT. Now everybody can have a personal tutor. There's huge human enablement that comes from those kinds of things. At the same time, a lot of the analogies that we use in terms of education and personal choice around these things miss one thing, and this was true for social media, and I think it will be true for AI as well: you don't really get to choose whether you participate or not. It is a collective action problem. So a specific example I would give on this, and to Kara's point that, you know, especially young people are starting to figure out that a lot of these things aren't so good for them: there's a stat out there, I think it was Pew Research that put this out, that 48% of Gen Z wishes social media didn't exist. Now think about that. What products do you know that people would use but wish didn't exist? That tends to only be true for extremely addictive products. So why are they still there if they wish it didn't exist? Because it is a collective action problem. They're there because their friends are there. In fact, a lot of the opposition research that said, oh, well, social media is not really linked to the rise in anxiety and depression, well, what they would do is holdout groups. 
They take 10% of the kids out of a class or 10% of the kids out of a high school and say, okay, you 10% won't be on social media. And they said, look, there's no change in their anxiety and depression. What it missed is that there was an equal and offsetting effect from the fear of missing out, being ostracized, because everybody else is there. So you miss out on what's happening, and that had a negative effect as well. But if you held out a whole school, then all of a sudden, within a few weeks, anxiety, depression, all these things plummet. It's not a silver bullet. It's not the only thing that causes anxiety and depression. But it's a good example of why it's so important that we have these conversations about what the harms of these technologies are. Any technology can be used for good or for bad.
B
Correct.
C
But it's not necessarily true that you have to use any given technology. And I think social media became something that, if you talk to any young person, they'll say, yeah, I have to use it. And I think AI is going to be the same thing. Like, you're not going to get away from using it. Right. It'd be like saying you refuse to ride in an automobile. It's like, well, you can do it, but you're going to be quite limited in what you can do in life, or with any other form of transportation, you'd be quite limited in what you can do. You can make that choice, but it's very hard to do. I think with AI, similarly, it's going to have these huge opportunities for people. Like, everybody gets to have a personal tutor. Like, oh, my gosh, that will unlock huge amounts of human potential. But shouldn't we also endeavor to make sure that we safeguard against the worst of the worst? So, like, take Character AI as an example.
D
Yeah.
C
Mandating that there are safeguards so that the chatbot can't counsel you to suicide, which has happened. That's a real thing. Or, yes, if you want to have companions. I would not advocate those for underage users. But people do form these kinds of relationships with these things. So, okay, well, make sure it's the right kind of relationship. What's the kind of relationship that you would want a 13-year-old to be able to engage in versus an adult? You know, we've always been conscious about what we allowed kids to use versus not use, or what kinds of media kids could watch versus not watch. This is why movies come with ratings on them and things like that, so you could make a reasonable choice about what was appropriate. Well, for an adult, if you want to have an AI chatbot have a sexually explicit conversation with an adult, that's not a business I'm going to choose to be in or a product I'm going to choose to use. But other people may. But couldn't we agree that that's probably not what we'd want to have children exposed to? So I do think we need to hit both sides of this, both the huge potential of it and also the huge downsides to it, and recognize that AI, like social media, is going to be the definition of a collective action problem, because you're not really going to get to choose to not use it. It's not like alcohol or cigarettes, where you can just easily choose to not use it. You're going to have to use it to be a participant in society. So therefore, shouldn't we make it really easy for people to make safe choices, or for those that aren't ready to make those choices, to have safety built in?
B
Correct. By the way, for all you students, you know, this is what I do in my day job. This is the one class, maybe we'll do part of another one, which is what I really do. So we're really focused on this right now, as Bill knows. Yeah, go ahead, Kara.
A
In June, for example, I'd love to know what they think of this. OpenAI and Mattel announced that they were going to bring the magic of AI to Mattel's iconic brands. I couldn't think of a worse thing to happen. It's fine if the Barbie says I love you or whatever. I don't love those things, but kids like them, it's fine. But the idea that you're going to have an AI in there. Just yesterday we made an AI of me, and I was talking to it for a while, and it's a 3D version of it. It's going to be in the CNN series I'm doing, and I have to tell you, it was eerie, because it learned moment by moment as it was talking to me, right, and was reactive. And really, the strides are massive. But the idea that kids would have this in a toy, that I don't even understand why legislators are like, no, we will not be. This is not the magic. We're not gonna have the magic of AI in a toy with children. It just makes no sense whatsoever. And this is the kind of stuff that they're doing without even, like, no problem. Or, hey, you're gonna have to opt out of your content. We're gonna put your content in our stuff, but you'll have to opt out. Who thinks like that? That's the kind of mentality that's going on here. And it's not either of these guys' companies. These are good products, by the way. Pinterest has always been a very good creative product of input and creativity for people. Zoom is a tool that people use to talk to each other, essentially. And I'm sure there have been little problems along the edges for each of these products, like everything. But the way that a lot of this stuff is being conceived, like this OpenAI Mattel thing, or OpenAI stealing the content, essentially. I'm thinking of going to Sam Altman's house and taking all his things unless he opts out of my.
B
Okay, so what I was going to say before Kara made a much more intelligent comment was: this is what I do for a living, right? So let me just summarize this, because I want to get back to democracy. So, look, this is important, because I'll come back to you and you're going to learn a little more about it. This is the one thing I'm going to talk a fair amount about myself during the class, which is: one, I completely agree with the overall perspective these guys are giving. And we're taking on AI on a very big level. So we're rating all the platforms. They are pissed off about that, by the way. They are really angry. That's why I drove up from the Google thing last week. Look, I like them as people, but they're not happy about how we're commenting on Gemini and the risks on Gemini, a platform that they have spent billions and billions of dollars on. So our strategy is to regulate them in a common sense, thoughtful way, and then work with them to avoid products like Kara is describing, but to work really directly with them. It's a very interesting balance. And so as I'm listening to this, this is my summary. If you guys are interested, I'll do a special section for you sometime for an hour about how we're trying to do both, sort of a carrot and stick approach, and how to both work with the companies at the highest level, even with some of the companion AI companies like Character AI, which is owned by Google but not owned by Google, and then also how to regulate them, which we think has to happen now. You just cannot wait. With social media, we waited way too long for that. But I want to go back to the democracy thing. Kara, you've had your say, but Eric, I really want to hear you talk about this, because you grew up in China, right? Which is not a paradigm of democracy in most people's eyes. So my question is this. How much do you think. Kara, start with you. But Eric, I'm really interested in what you and Bill think.
How much do you think that the AI tools which are so extraordinary will be used to undermine or promote democracy? So Kara, with you, I mean this is. We saw the impact of social media, which I agree is not the same as AI, but how much do you think AI will impact democracy in a good way or in a not good way? And then Eric, I'd really like to hear what you think about that too.
A
Either way it's a tool, it's a weapon.
B
But tell the audience.
A
Well, either way, it's a weapon or a tool. It depends on what kind of regulations we put into place, and the usage and the tracking and the algorithmic transparency and everything else. It just depends on what we as a people and our regulators think is important here. We regulate. I mean, look, whatever you think of Brendan Carr, he tries to regulate. I mean, he's such an idiot. Didn't say anything, you know. But, like, the FCC regulates our airwaves, blah, blah, blah, like all these things. So what does the regulatory scheme look like for this stuff? How the liability scheme works here is critically important. I mean, they shouldn't be under Section 230. If they do something wrong, they should have the shit sued out of them. That seems to me to be relatively fair. But it could be used either way. It obviously is a great tool for autocrats. It's a great tool for people who want to screw with this country. It's a great tool for propagandists. That said, it's a great tool for creativity. It can do all kinds of wonderful things, as these guys were noting: education, making things interesting. It just depends on what you use it for. And that's the thing. Of course, I always assume the worst, and that bad players will take our greatest tools. You know, it's like everyone having a nuclear bomb. If everyone did, boy, would we be in trouble. But they don't, thankfully. But it's a similar thing. It's like everyone gets to carry around their own little nuclear bomb, I guess. And so the question is, is it going to be nuclear energy or is it going to be a nuclear bomb? I don't know. But we sure should have our government involved in it.
B
So, Eric and Bill, you both understand AI as technologists, right? And you're running significant tech companies. Do you think it could promote democracy? Do you see it potentially being used in a really healthy way for democracy? And then Eric, I'm interested in any comment you want to make about China and AI related to this. No, I am, because that's where you're from.
D
I think Kara is right, because if you look at AI, it's just a tool, it's a technology, right? So ultimately it boils down to the algorithm, whether it's going to be very objective or subjective. This company's algorithm and that company's algorithm in terms of AI might be different. So that's why it could go either way. It could promote democracy, or it could have a negative impact as well. Again, it really boils down to the algorithm. Each company's social responsibility, together with regulation, I think can help. You know, it's hard to see now, because again, AI itself is just technology. In terms of China, I don't know, actually, because I moved to Silicon Valley in 1997 to embrace the first wave of the Internet revolution. I'm not sure my comment on that would really give you firsthand experience. But overall, look at AI technology worldwide today, right? A lot of great companies here in Silicon Valley, American companies, and some companies like DeepSeek in China as well. And again, this still boils down to the algorithm, right? As I mentioned, each company has a social responsibility. Each country also has its own responsibility to its own citizens. What kind of algorithm you want to promote can then drive the direction. It is hard to see. I think ultimately every country might be different, and they probably can choose their own algorithm for the AI technology.
B
My last question. I'm going to ask this one. Eric, I'll start with you, then go to Bill, and Kara, I'll give you the last word. Okay. One or two words of wisdom, Eric. I mean, you have an extraordinary career, right? Incredible story. So do you, Bill, by the way, and I think Kara does as well. Like, one or two words of wisdom for people. It's the life lessons that you've learned, not necessarily about what we're talking about tonight, but one or two of the most important life lessons that you at least want your three kids to know, but that you want other Stanford students and the people here and also the couple thousand people on the Zoom screen to know. Like, what have been your life's lessons?
D
Yeah, I would say the number one thing: whenever there's a new technology, do all you can to embrace it. That's really important. The number two thing is, every day you've got to think about what's your formula to become a better version of yourself. If you yourself become better, everything else will be easy.
B
Really.
D
Bill?
C
Agree. One of the things we haven't talked about is, I think there's this narrative happening right now that AI is just going to take all the jobs from young people. So your question was, like, words of wisdom for young people. There's a narrative out there that, like, oh my gosh, it's a terrible time to be graduating from college. It's a terrible time to be a young person. And I just think that's completely wrong. Completely wrong. Because every time you see a major shift in technology, who are the people that figure out how to do interesting new things with it first? Is it the people that have been doing it one way for decades, or is it people who are looking at it through fresh eyes? I'm a huge believer in the advantage of the beginner's mind. And I did five startups. Every one of those startups was like Venmo, the most recent: I didn't know anything about payments before doing Venmo. I think the story of Silicon Valley, where we're sitting right now, is the belief that a few ragtag rebels can go build something that's very different from what the incumbents have, and that has played out time and time and time again. And so I would just say to anybody that's graduating now: I graduated in 2001. The first of my first two startups was a dot-com crash and burn. The second one, our IPO date was September 11, 2001. Obviously, far worse things happened on that day than our IPO not happening. And normally when I tell that, people are like, so, Bill, you stopped doing startups after that? I was like, no, I loved building. And I just think this is going to be an amazing time to be able to create and build for any young person that embraces technology, like Eric was talking about. And yes, there are all these harms and all these potential bad things that can happen, which I talked about. Well, we're trying to show that there's a free-market solution to the positive.
I actually think the best people that are going to show that are the young people that are going to be building for the first time now. I think it's their world to go create.
B
I love that. I love that, Bill. I do. Okay, come on. How could I not give Kara Swisher the last word?
A
I'm looking for a quote, for one thing. I think Bill's absolutely right. There's lots of opportunity for everybody going forward. It doesn't have to be. I was reading Scott this quote the other day, and I'm looking for it right now. Hold on. Just give me one second. Here it is. It's by Homer. It says, "There's not any advantage to be won by grim lamentation," which means stop whining. Stop your whining. I think that's what that means, essentially. And I think what's really important is, first of all, it's not your fault. We have left you with a real mess here, and we should probably get out of the way and die. This is probably really where we should be headed, including so many older members of Congress, who have got to leave, like, tomorrow. But what you have to realize is that you can't get overwhelmed by it and lament things and doomscroll and think you can't do anything about it. Every bit of history shows that powerful people have done this since the beginning of time, and they never win. Things get overturned, things get changed. The young eats its old, essentially. And you have to think like that: just because it's this way today doesn't mean it's going to be this way in the future. There's going to be a day after for every bit of this, usually, as the world spins forward. And so I would say, just really don't lament things. Don't think you have no power. You have enormous amounts of power. It's just that they've convinced you that they're magicians or special or geniuses. I've met all of them. They're not. Some of them are, but so are lots of regular people who aren't billionaires and things like that. So don't let them take your power away from you. Just absolutely do not. It is not necessary. They thrive on that, and it's not the way it has to be.
And so imagine what you want to do, and get out there and grab it, because if not, we will get overrun by people who like to control us in ways that are only in their self-interest, not in democracy's interest, not in your interest, not in the community's interest. So demand that kind of change. And that's unfortunately going to be an ongoing battle for humanity for the rest of time. But it's a good battle.
B
It's a good battle. And on that note: Kara Swisher, Bill Ready, Eric Yuan. I really hope you enjoyed this episode of Which Side of History? Please subscribe to my YouTube channel, like this video, and please leave a comment. I'm Jim Steyer, and this is Which Side of History.
Podcast: Which Side of History?
Host: Jim Steyer, Founder of Common Sense Media
Guests: Kara Swisher (Tech Journalist), Bill Ready (Pinterest CEO), Eric Yuan (Zoom CEO)
Date: January 6, 2026
Episode Theme: Examining Big Tech’s domination and its social, economic, and democratic consequences—especially its effects on children and youth.
This episode convenes leading voices in tech, business, and journalism to discuss the increasing centralization of technology power among a handful of major companies ("Big Tech"), the consequences for democracy, the economy, and especially the wellbeing and agency of kids and families. With candid, sometimes fiery debate, the panel addresses the balance between innovation and regulation, the dangers and possibilities of both social media and AI, and practical paths toward a more positive and diverse tech ecosystem.
"They're all there sort of ponied up to the trough like pigs at the trough...I don't know, oligarch's the right word. I just use pigs at the trough." – Kara Swisher [03:00]
"...if they don't have feedback and new companies nipping at their heels...it hurts all small companies. You can't do anything. And that's a real shame because we don't know what they could have done." – Kara Swisher [48:00]
[04:06 & 06:17] Bill Ready: Argues that while the creator economy promises democratization, ordinary users actually have less real choice—algorithms, not users, increasingly decide what content is seen.
"Even though there's a million creators...it doesn't really matter who you follow anymore. It's what the algorithm thinks you should see...people have lost a tremendous amount of choice." – Bill Ready [06:17]
[21:58] Ready: Compares algorithmic amplification to rubbernecking at car crashes:
"The algorithm learns like, oh, you looked at the car crash, I'll show you another car crash...it's really about conscious choice versus unconscious choice. These algorithms are being tuned to the base of your brainstem, your unconscious choice." – Bill Ready [22:28]
"Every time, whenever there's a huge technology paradigm shift...you should look at it from a technology perspective." – Eric Yuan [09:13]
"Only big company can do that. Even Bill and I, we want to do that. We cannot get enough capital." – Eric Yuan [10:04]
"We wanted to prove there was a good business in positivity...private only for under 16...if you ask Gen Z now why they come to Pinterest...they see it as an oasis away from the toxicity they experience elsewhere." – Bill Ready [22:00]
"You're mixing social media with AI, I don't think you should. By the way, there's been some really interesting statistics. Social media usage is down, is starting to decline..." – Kara Swisher [25:58]
"The idea that kids would have this in a toy, that I don't even understand why legislators are like, no, we will not be. This is not the magic. We're not gonna have the magic of AI in a toy with children. It just makes no sense whatsoever." – Kara Swisher [36:03]
The conversation pivots to democracy, with guests reflecting whether AI will be a tool for empowerment or control.
"Either way it's a tool, it's a weapon or a tool, it depends on what kind of regulations we put into place and the usage and the tracking and the algorithmic transparency and everything else." – Kara Swisher [39:51]
[41:49] Eric Yuan: Emphasizes "algorithmic responsibility" at the company and country level, noting that AI's impact depends on whether its algorithms are designed to be objective or can be subverted for control.
[31:00–34:00] Ready/Steyer: Both discuss how social media and soon AI create "collective action problems" where opting out individually isn’t practical—societal-level solutions are necessary.
Comprehensive digital literacy and critical thinking are underscored as central to the safe adoption of new tech:
"Kids get very addicted to so many new things. That's why...we should start immediately thinking about education." – Eric Yuan [29:56]
Regulations should not crush innovation, but must set "baselines" for conduct and safety to prevent destructive races to the bottom.
"Notice where they're going now. Bezos is at the Washington Post. Ellison's are trying to buy up every bit of cable properties...They're not as good if they don't have feedback and new companies nipping at their heels." – Kara Swisher [13:37 and 48:00]
"It's what the algorithm thinks you should see. And so these algorithms...have been tuned to maximize your view time...Whatever your triggers are...you don't really get a choice." – Bill Ready [06:19]
"We wanted to prove there was a good business in positivity...And not only did it not ruin us with young users...we've proven it's possible to build in a different way." – Bill Ready [22:00–24:16]
"It's a great tool for autocrats...it's a great tool for creativity...it's like everyone having a nuclear bomb...is it going to be nuclear energy or is it going to be a nuclear bomb?" – Kara Swisher [39:53]
"48% of Gen Z wishes social media didn't exist. What products do you know that people would use but wish didn't exist? That tends to only be true for...extremely addictive products." – Bill Ready [31:10]
"Whenever there's a new technology, do all you can to embrace it. That's really important. Number two...become a better version of yourself." – Eric Yuan [44:03]
"There's a narrative out there that it's a terrible time to be a young person. I just think that's completely wrong...it's their world to go create." – Bill Ready [44:29]
"Stop whining. It's not your fault we have left you with a real mess...Don't think you have no power. You have enormous amounts of power." – Kara Swisher [46:38]
This episode is robust, candid, and urgent in tone—alternating critique and hope. Swisher deploys sharp metaphors and historic perspective, Ready brings optimism about positive business and user agency, while Yuan urges technical focus and self-improvement. All guests stress that new technologies, especially AI and social media, are not inherently bad or good—their impact will depend on the ethical, regulatory, and creative choices made now. The final charge: Don’t lament, organize, innovate, and demand your power. This is a fight worth having—for democracy, for kids, and for the future.
For further reading, listen from [02:43] for the power critique, [18:32] for safety innovations in tech for kids, or [39:51] for the future-of-democracy debate with AI at the center.