Transcript
B (0:30)
Welcome to another episode of Conversations with Coleman. If you're hearing this, then you're on the public feed, which means you'll get episodes a week after they come out and you'll hear advertisements. You can get access to the subscriber feed by going to ColemanHughes.org and becoming a supporter. This means you'll have access to episodes a week early, you'll never hear ads, and you'll get access to bonus Q&A episodes. You can also support me by liking and subscribing on YouTube and sharing the show with friends and family. As always, thank you so much for your support. Welcome to another episode of Conversations with Coleman. My guest today is Rob Reich. Not Robert Reich, as I mistakenly call him at the beginning of the podcast. Rob is a political science and philosophy professor at Stanford University. He's the director of Stanford's McCoy Family Center for Ethics in Society and associate director of Stanford's Institute for Human-Centered Artificial Intelligence. We talk about the culture of Silicon Valley and the problem with optimization. We talk about the externalities caused by big tech and the problem of censorship by big tech. We discuss artificial intelligence. We talk about the famous experience machine thought experiment, and much more. So without further ado, Rob Reich. All right, Rob Reich, thanks so much for coming on my show.
A (1:49)
Thanks so much for having me. Real pleasure.
B (1:52)
So the topic of our conversation today is a book that you've co-written called System Error: Where Big Tech Went Wrong and How We Can Reboot. So if I'm correct, this is a book that's co-written by three authors: you, a philosopher; a computer scientist; and a political scientist. Correct? Is that right?
A (2:12)
That's right, exactly. We each bring a different kind of framework to the question of big tech companies and the revolution in computer science over the past 50 years.
B (2:22)
So how is it that the three of you came together to want to write this book?
A (2:26)
Yeah, we each have different motivations, so I'll report mine and then just give you a hint about the motivation for Jeremy, who's the policy expert, and Mehran, who's the computer scientist. So for me, as the philosopher, I have been at Stanford now for 25 years, and I've witnessed what I think is fair to call the great migration of undergraduate students away from the humanities and social sciences to major in computer science at Stanford in record numbers, a trend that's also happening at many other universities. As the technical skills that a computer science major can provide have become ever more valuable in the marketplace, the number of people majoring in computer science has just gone through the roof, and at Stanford it's very high in part because of the really terrific teaching that goes on in the computer science department. And then what happens is that students hop on what I think is a conveyor belt, where you get these technical skills, then you get heavily recruited by big tech companies or startup companies, and off you go on your merry way. And about five years ago, as I had witnessed all this happening over the past decade, some of the great concerns about tech became publicly visible: misinformation and disinformation, automation that displaces people from the workforce, privacy abuses by tech companies, algorithmic bias or discrimination. These were all becoming more apparent. And I thought it would be worthwhile to try to undertake a collaborative enterprise that would amount to a cultural intervention on campus: a way in which the young technical students, the computer science students, took on board policy frameworks and ethical frameworks, and simultaneously the people who wanted to major in public policy, go to law school, or maybe end up doing work in public agencies got some technical skills, so they understood better, from a regulatory standpoint, the computer science revolution.
So we collaborated in order to generate a new course, out of which grew the book. And the interesting thing, I think, from my point of view at least as an educational undertaking, is that it's the only course I'm aware of in which there are technical assignments, policy memos, and philosophy papers all in the same class. We didn't want to do something in which it's like, take an ethics course: one single class you can take once over your four years as an undergraduate student and check that box sooner rather than later. The whole idea is to embed this within the technical enterprise. Mehran, the computer scientist, had similar motivations. He'd spent 10 years at Google as an early employee and saw what he would describe as Google taking a bit of a wrong turn from the early days, in which it was more motivated by "Don't be evil," as the motto went. And Jeremy had high-level service in the Obama administration and saw, as various high-level policy debates played out, how technically unaware so many people in government were, and he wanted to bring the policy mindset and the technical mindset together. So that was the origin of the course, and then some of this filtered into the book.
