A
Why do you care about philosophy? Why are answering these big questions important?
B
You know, one of the things that I sometimes tell MBA schools is that a background in philosophy is more important for entrepreneurship than an MBA. Philosophy is very important to this stuff because it's understanding how to think very crisply about what the possibilities are, and what the theories of human nature are as they're manifest today and as they may be modified by new products and services, new technology, et cetera.
A
Usually on this show we talk about, like, actionable ways that people use ChatGPT, but a more interesting question is how AI in general might change what it means to be human. These are really deep, big philosophical questions, and I thought you might have a unique perspective on this intersection. Reid, welcome to the show.
B
It's great to be here.
A
Great to have you. So I'm sure that everyone listening or watching knows this, but you are a renowned entrepreneur, a venture capitalist, an author. You're best known as the co-founder of LinkedIn, you're a partner at Greylock, and you're a board member and an early backer of OpenAI. You also have an incredible podcast, Masters of Scale. But perhaps most relevant to this conversation, you also studied philosophy at Stanford and Oxford, and you almost became a philosophy professor, which I didn't know before researching this interview. It's really cool.
B
Yeah, part of it is I've always been interested in human thought and language. I started at Stanford with a major called Symbolic Systems. I was the eighth person to declare that as a major at Stanford. And then I kind of thought, we don't really know what thought and language fully are. Maybe philosophers do. And so I took some classes at Stanford, but then also trundled off to Oxford to see if philosophers had a better understanding of it.
A
I love it. It's funny, I feel like since then Symbolic Systems has become the go-to Stanford major for curious, analytical people who end up doing startups, so it's pretty funny to know that you were one of the first. So usually on this show we talk about actionable ways that people use ChatGPT. That's the big question, that's what I think people come here for. But underneath that, I think a more interesting question is: how might AI in general, and ChatGPT in particular, change what it means to be human? How might it change how we see ourselves and how we see the world? How might it enhance our creativity, our intelligence, all that kind of stuff? These are really deep, big philosophical questions. And as someone who rigorously studied philosophy and probably still thinks about those questions, I thought you might have a unique perspective on this intersection, because I think people tend to be either in the philosophy camp or in the language models camp, and someone who's sort of in the middle is kind of interesting. And what I wanted to start with, because there are probably people listening or watching who are thinking, why? I just want Reid's actionable tips, is to ask: tell me more about why you care about philosophy. I think you got into that a little bit in talking about how you got into it, but why do you care about philosophy? Why is answering these big questions important?
B
So, you know, one of the things that I sometimes tell MBA schools when I give talks there is that a background in philosophy is more important for entrepreneurship than an MBA, which of course is startling and contrarian. And part of that is to get people to think crisply about this stuff. Because part of what you're doing as an entrepreneur is thinking about what is the way the world could be, what could it possibly be, what is, if you wanted to use analytic philosophy language, logical possibility or something like that. And then, partially because these are human activities, what are your underlying theories of human nature, about how human beings are now, how they are kind of quasi-eternally, and how they are as circumstances change, as the environment, the ecosystems we live in, change, which is technology and political power and institutions and a bunch of other things. And philosophy is very important to this stuff because it's understanding how to think very crisply about what the possibilities are, what the theories of human nature are, as they're manifest today and as they may be modified by new products and services, new technologies, et cetera. And so, obviously, people tend to say, oh, that's a philosophical question, because it's an unanswerable question, you know, the nature of truth, or, while we all speak and understand languages, we don't really know how that works. And that's part of the reason why there was the linguistic turn in philosophy that Wittgenstein and others were so known for, which is: well, maybe these problems in philosophy are problems in language, and if we understand language, we'll understand philosophy.
And there's this question around these unanswerable questions. But actually, in fact, science itself is full of a lot of unanswerable questions. It's the working theory as we dynamically improve, and that's part of what the human condition is, and that's part of what in-depth philosophy is. It isn't to say that some of the same questions today in philosophy aren't the same questions that Plato and Aristotle and even the pre-Socratics and other folks were grappling with, truth, knowledge, et cetera. But some of the questions are also new questions, and the questions evolve. And part of how science evolved from philosophy was this question of, as we get to more specific theories and kind of develop the new questions that we get to, those are outgrowths. And the same thing is true in building technology, in building products and services, in entrepreneurship. And that's why philosophy is actually, in fact, robust and important as applied to serious questions. You know, one of the things I wrote my thesis on at Oxford was the uses and abuses of thought experiments. The most classic one is trolley problems, and there are both uses and abuses within the methodology of trolley problems. The most entertaining, if people haven't watched it, is a TV series called The Good Place, which embodied the trolley problem in a TV episode in an absolutely hilarious way.
A
That's really interesting. Yeah, what is the way that people tend to misuse that? Because I feel like trolley problems are so common in, like, EA discourse, and people run into them a lot online.
B
The fundamental problem is they try to frame it to drive an intuition, a principle, et cetera, by framing an artificially different environment. So it's like, no, no, it's a trolley, and the trolley will either hit the five criminals or the one human baby, and it's default set to hit the human baby, and do you throw the switch or not? And then when you start attacking the problem, you say, well, how do I know that I can't break the trolley? I could just make it not continue to run. And it's like, oh, so you're positing in your thought experiment that I have perfect knowledge that breaking the trolley is impossible. So to make your thought experiment work, you're positing something we never have, and when we encounter people who claim it, we generally think they're crazy. Right? Like, how in fact do I know that I have perfect knowledge that I can't break the trolley? Because what is the right human response to this trolley problem? It's: I'm going to try to break the trolley so it doesn't hit either of them. Right.
A
That's really interesting.
B
Right. And you might even say the problem is that even if you say, well, you have perfect knowledge that you can't break it, you're like, well, okay, A, I don't have perfect knowledge, and B, even if I did, maybe it's still the right response. You're trying to get me to say: do I do nothing and run over the baby, or do something and run over the five criminals, like those are my only two options. And you're like, well, no, I could say even if I think I can't break the trolley, that's what I'm going to try to do, because that's the moral thing to do.
A
I've heard a lot of trolley problems, and I've never heard anyone posit the third option. I love that. That's great. And there's something about that where certain thought experiments sort of hijack your instincts and you don't quite reason through all these hidden assumptions, which honestly reminds me of certain doomer arguments, though I don't want to go into the full thing. If I had to summarize what you just said, the value of philosophy to you is thinking crisply: thinking crisply about possibilities, thinking about human nature and reality. All those things are really, really important for business people. I want to take it another step, which is: some of those questions that philosophers, or philosophy students, or philosophy nerds sharpen their skills on are the big perennial questions, like what is truth? What is reality? What can we know? All that kind of stuff. As we start to get into talking about AI, I'm curious if you have a sense of which of those questions AI and large language models are going to give us a new lens on, or where we'll find new questions to ask that are better than previous ones, even if they maybe don't answer them. Do you have a sense for that?
B
Well, I mean, historically, it's, for example, the questions that have led to a bunch of the various science disciplines, right? Everything from things in the physical world to things in the biological world, like germ theory and all the rest. I think it's actually even true, and it's one of the reasons why philosophy is kind of the root discipline for many other disciplines, when you get to questions around, okay, how do you think about economics and game theory? Or how do you think about political science and realpolitik and the conflict of nations and interests? And it's also one of the reasons why probably one of my deepest critiques of the non-reinvention of the university is the intensity of disciplinarianism. You know, it's just the discipline of political science, or just the discipline of even philosophy, as opposed to multidisciplinary work. Part of the thing that I tend to think is kind of interesting is how much the academic disciplines tend to become more and more disciplinary, versus, hey, maybe every 25 years we should think about blowing them all up and reconstituting them in various ways, and that would actually be a better way of thinking. It's why some of the most interesting people are the people who are actually blending across disciplines within academia. And I think that part of it is extremely important. And part of the question in philosophy is: how do we evolve the question of what do we know? And obviously you evolve the question of what you know through, for example, instrumentation; a lot of the history of science is instrumentation, new measurement devices that help with kind of proving out theories.
But also, and this is one of the reasons why people frequently don't think enough about how technology helps change the definition of a human: we have this kind of Cartesian imagination that we are this pure thinking creature. And you're like, well, if we've learned anything, that's not really the way it works, right? That doesn't mean that we don't think that way, having abstractions to generate logic and theories of the world and all the rest.
A
But.
B
Put your philosopher on some LSD and you'll get some different outputs.
A
That makes sense. So I guess along those lines, if I step back and squint, I can kind of divide the history of philosophy into essentialism and nominalism, for a certain part of philosophy, right? Essentialists believe that there's a fundamental objective reality out there that's knowable, and that there's a way to kind of carve nature at its joints. And nominalists, where we would include Wittgenstein, who I know you studied pretty deeply, and pragmatists, think that truth is more or less relative, or it's about social convention, or it's about what works; there are a lot of different formulations of it. And there's this sort of ongoing debate between people who think one thing or the other. Do you think language models change or add any weight to either side of that debate?
B
I think they add perspective and color. I don't think they resolve the debate. And there's certainly some question about, since they function more like later Wittgenstein, more kind of nominalist, does that weigh in on the side of the nominalists, because of the way they actually function? And then you say, well, if you look at how we're trying to develop the large language models, we're actually trying to get them to embody more essentialist characteristics as they do it. Like, how do you ground them in truth, have less hallucination, et cetera. And, to gesture at a different, earlier German philosopher, Hegel: one of the things that I think is kind of part of the human condition is thesis, antithesis, synthesis. You could say, hey, we have an essentialist thesis, we have a nominalist antithesis, and the synthesis is how we're putting them together in various ways. Because, look, I don't even think later Wittgenstein would have said that the world is only language. What the deconstructionists and Derrida went to was: it is only the veil of language, and you have no contact with the world, so you're not grounded in the world at all. I think he would think that's kind of absurd. Right? But his point was to say that in how we live, as forms of life, the way language operates is not a simple denoting. He understood it wasn't just denoting "the cat is on the mat," and the possibilities, the cat is on the mat and the possibility that the cat is on the mat, but actually possible configurations of the universe.
And there was this kind of notion of logical possibility that was described as one language of possibility, and his later point was that being essentialist about a language of possibility is actually incorrect to how we actually discover truth and how we operationalize truth. And you still have a robust theory of truth, which is not essentially what the deconstructionists do, but the robust theory of truth is partially grounded in this notion of language games and a biological form of life by which you do that. And then obviously he goes into this deeply, saying, well, okay, how is mathematics, as a classic language of truth, a language game, as a way of trying to understand that? That's part of where you get what philosophers refer to as Kripkenstein, Saul Kripke's excellent lens on reading part of what Wittgenstein was about. And you then apply all of that, and everyone's going, where is this going?, to large language models. And you say, well, actually, in fact, language is this playing out of a language game, and large language models are playing out this language game in various ways. But part of what is revealed is that we don't just go, truth is what is expressed in language. Truth is a dynamic process. And human discourse, which could be thesis, antithesis, synthesis or other things, is coming out of this dialogic process, this truth discovery, this reasoning, whether it's induction, abduction, or deduction, these reasoning processes that get us to what we think are theories of truth that are always, to some degree, works in progress.
A
That's really fascinating. I want to try to summarize that in case it was a little bit difficult to follow, to be honest. There's a point in there where I think I missed something, so you tell me what I missed. But some of the things I heard in there that I thought were really interesting: when you think about how we built AI, which is predicting the next token, that's a very late-Wittgenstein-compatible idea, or pragmatism-compatible idea, where it's really about the relationship between different words in a sentence, and we're not finding anything out about the world. There were other AI approaches, in the '80s or '70s, where it was literally, let's list out every single object in the world, and those didn't really work; that would be something along the lines of a more essentialist approach to AI. And the one that works is a more pragmatic, more late-Wittgensteinian one. But what's quite interesting is that now that we have that pragmatic base that we've bootstrapped, we're in this process of trying to make it more grounded in reality, or more reduced down to being able to talk about the essential ground truth. And I think what's really interesting about Wittgenstein is that he's sort of famous for saying the limits of my language are the limits of my world, and I don't remember if that's late or early, but more or less, I think what you're saying is that Wittgenstein doesn't think there's nothing outside of language. He does think that the way we talk about the world, or the way that we use language, is part of this sort of social discourse where we're all going back and forth to co-invent language and structures and language games together.
And you kind of see that happening with language models, where when you do something like RLHF, that's sort of us playing with a language model, playing a language game, to be like, no, no, you don't talk like that. Is that generally what you're getting at?
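The RLHF idea gestured at here can be sketched as a toy. To be clear, real RLHF trains a reward model on human preference data and then updates the policy with an algorithm like PPO; the sketch below only illustrates the reranking intuition, and every string and number in it is made up for illustration:

```python
# Toy sketch of the RLHF intuition: human preference scores reweight which
# continuations a model favors. This is NOT the actual algorithm, just the
# "language game correction" idea: "no, you don't talk like that."

# Hypothetical base-model log-probabilities for two candidate replies.
candidates = {
    "Sure, here's a rude answer.": -1.0,   # more likely under the base model
    "Happy to help politely.":     -2.5,   # less likely under the base model
}

# Hypothetical human-feedback rewards for the same replies.
reward = {
    "Sure, here's a rude answer.": -3.0,
    "Happy to help politely.":     +2.0,
}

def rerank(candidates, reward, beta=1.0):
    """Score each reply as base log-prob + beta * human reward; pick the best."""
    scored = {c: lp + beta * reward[c] for c, lp in candidates.items()}
    return max(scored, key=scored.get)

best = rerank(candidates, reward)  # the polite reply wins once feedback counts
```

With `beta=0` the human feedback is ignored and the base model's favorite (the rude reply) wins; with `beta=1` the preference signal flips the choice.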
B
Yes, everything you said. But then there's the additional thing which later Wittgenstein was really trying to explore in various ways, because he wasn't trying to do a completely just social construction of truth. I'm actually a fan of the view that you have to be a Wittgenstein scholar to understand how both early and late Wittgenstein are actually part of the same project, and that late Wittgenstein wasn't "early Wittgenstein was an idiot, and now I've religiously converted to this different point of view." But there is a particular thing, which is: how do you get to the notion of understanding truth, where truth is the dynamic of discovery through language, and it has to have some explicit external conditions, so it isn't my truth, your truth; there is, to some degree, only our truth, or the truth, in various ways. And how do you get to that, having truth conditions? In early Wittgenstein, the truth condition was that it cashes out into a state of possibilities and actualities in this logical space of possibilities, which includes physical space but is broader than that. And then later Wittgenstein said, well, actually, this modeling of logical possibility is not in fact the way this works, right? We're not actually grounding it that way. The way that we're grounding it is in the notion of how we play language games, make moves in language. And the way that's grounded is, to some degree, sharing a certain biological form of life by which we recognize that's a valid move in the language game, this is not a valid move in the language game. Now, this is what's interesting when it gets to large language models, because you go, well, large language models, are they the same biological form of life as us, or are they different? And how does that play out?
And I think Wittgenstein would have found that question utterly fascinating and really would have gone very deep trying to figure it out. And by the way, the answer might be some and some, not 100% yes or 100% no. Because the argument in favor is that large language models are trained on the corpus of human knowledge and language and everything else, and they're doing language patterns on that. Some might even argue that some of their patterns are very similar to the patterns of human learning and brains; others would argue that they're not. But then you'd say, well, it's also not a biological entity, and it actually learns very differently than human beings learn. And so maybe its language game, which looks like the human language game, is actually different in significant ways, and therefore the truth functions are actually very different. And in a sense, what we're trying to do when we are modifying and making progress with how we build these LLMs is to make them much more reliable on a truth basis. We love the creativity and the generativity, but for a huge amount of the really useful cases in terms of amplifying humanity, we want it to have a better truth sense. Right? I mean, the paradoxes in current GPT are when you can tease it out with very simple questions around prime numbers, and you go, well, you got that answer wrong. Oh yeah, I got it wrong, here's the answer. Well, that answer is wrong too. Oh, I got that one wrong too, here's the answer. A human being would understand: I'm just getting these things wrong, I get it, I'm wrong. As opposed to, oh, I'm sorry, you're right, I got it wrong, and here's another wrong answer.
And we're trying to get that truth sense into it, because we do have some notion of, oh, right, this is what's characteristic. Like, mathematics gets us into very pure definitions of certain kinds of language games. It's one of the reasons why, centuries ago, people thought math was maybe the language of the universe, or the language of God, et cetera. Because, okay, some of the purest truths that we know, two plus two equals four, are kind of embedded in it. And we're still working that out as we play with how we create these language tools, these language devices. And it's part of the reason why I think this question is really interesting, because you can actually map it to some of the actual, as it were, technological physics that we're trying to create when we're doing the next version. Like, how do we get these things to be good reasoning machines, not just good generativity machines? They have some reasoning from their generativity, but part of the classic way of showing where they break is showing where their reasoning stops working in ways that we value and aspire to in terms of what we try to do as human beings at our best.
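The prime-number example above points at something concrete: for mathematical claims there is a cheap, deterministic external check, a "truth condition" a generated answer can be tested against instead of accepting the model's apology-and-retry loop. A minimal sketch of such a checker:

```python
# A deterministic primality check by trial division: the kind of external
# ground truth one could use to verify a model's claim about a prime,
# rather than trusting a generated guess.
def is_prime(n: int) -> bool:
    """Return True iff n is prime, by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

# A claim like "57 is prime" can be checked directly: 57 = 3 * 19.
claim_is_true = is_prime(57)  # False
```

The point is not this particular function but the asymmetry: generation is fallible, while verification here is exact, which is one reason math makes model errors so easy to tease out.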
A
Here's something exhausting: the export-import dance. Let me know if this sounds familiar. You design something in Figma, you export it, you paste it somewhere else, and you pray that nothing breaks. And usually something does. It's almost 2026, which means it's definitely time to stop doing this. Framer already built the fastest way to publish beautiful, production-ready websites, and now it's redefining how we design for the web with the recent launch of Design Pages, a free canvas-based tool. Framer is more than a site builder; it's a true all-in-one design platform. From social assets to campaign visuals, to vectors and icons, all the way to a live site, Framer is where ideas go live from start to finish. Framer's design tool is different from the old-school website builders you might see advertised on other podcasts. It offers vector editing, 3D transforms, gradients, animations, all for free. It has unlimited projects, unlimited pages, and unlimited collaborators. But what really changed how I think about Framer is that there's no handoff. What you design is the website. No developer interpretation, no "can you make it match the mockup" conversation. You design it, you publish it, and it's live. Are you ready to design, iterate, and publish all in one tool? Start creating for free at framer.com/design and use the code DAN for a month of Framer Pro. That's framer.com/design, promo code DAN. Rules and restrictions may apply. And now back to the episode.

That's really fascinating. You said a lot there. I really want to get into the reasoning thing in a second, but I want to go back to the way that you talked about late Wittgenstein versus early Wittgenstein, because I haven't really heard it said that way. The usual thing people say is that he just disagreed with everything when he was older, or whatever.
And what I hear you saying now is, more or less, that in both cases he's saying some of the same things, he has some of the same views, but the real difference is how he cashes out what it means for something to be true. In his first period, he's talking about truth in terms of a logical space of possibilities that can be broken down into what he calls atomic facts, which are never really defined, but you can kind of build up truth from there, mapping those possibilities onto actualities, what's actually in the world. And in later Wittgenstein, it's all about the language games, the social relationships, the use of that word or that phrase in the context of people. And one of the things that I really wanted to ask you about is that first version of Wittgenstein, the logical space of possibilities. What that reminds me of is embeddings, which are one of the key underlying technologies that gave rise to AI, right? In traditional NLP, they allow you to represent words or tokens in a high-dimensional space. And then the language model innovation is kind of: it's not just words, it's words in their particular context. Each word in a particular context has its own part of the space. So in a language model, the word king, if it's tokenized that way, you know, there's a king in chess, there's an actual king, there's the king of England, there's King Lear. They're all kind of kings, but they occupy different parts of the space, and language models are able to represent all of those different meanings; when we say king, we mean many different things, and they're able to represent all of that.
And that actually reminds me a lot of atomic facts, of Wittgenstein's early work. So I'm curious, because I think you said that language models, because of next-token prediction, are sort of late Wittgensteinian, how you factor in the fact that embeddings work and that they're a core part of this.
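The "king in different contexts" point can be made concrete with a toy. In a real model, contextual vectors come out of attention layers; in this sketch, a "contextual" vector is just the average of hand-made static vectors for the word and its context words, which is enough to show the same surface word landing in different regions of the space. All the vectors below are invented for illustration:

```python
# Toy illustration of context-dependent word vectors. Hypothetical 3-d
# static vectors with made-up axes: [royalty-ness, game-ness, drama-ness].
import math

static = {
    "king":    [0.5, 0.5, 0.5],
    "crown":   [1.0, 0.0, 0.1],
    "pawn":    [0.0, 1.0, 0.0],
    "chess":   [0.0, 1.0, 0.1],
    "england": [0.9, 0.0, 0.2],
}

def contextual(word, context):
    """Crude 'contextualization': average the word's vector with its context."""
    vecs = [static[word]] + [static[c] for c in context]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

king_chess = contextual("king", ["chess", "pawn"])     # "king" near the board
king_realm = contextual("king", ["england", "crown"])  # "king" on the throne

# The same token "king" now sits closer to game words in one context
# and closer to royalty words in the other.
```

Even this crude averaging separates the two "kings"; transformer attention does something far richer, but the geometric picture, one word occupying different parts of the space per context, is the same.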
B
Well, this is actually part of why late Wittgenstein is not "early Wittgenstein was an idiot," because yes, I do think that the notion of, call it, a probabilistic bet over what the set of different applicable tokens is, is kind of there. Now, the reason why I would slant current practice more late Wittgenstein than early Wittgenstein is because early Wittgenstein thought that once you had the grasp on the logic of it, then, almost by speaking correctly, you couldn't make truth mistakes, because the logic was embedded in it. And even though the token embeddings are part of a very broad quasi-symbolic network, and the reason it's quasi-symbolic is because it's still activations and so forth, and isn't purely reasoning around a token of king, or 15 different tokens of king, or 23 different partial tokens of king, there are kind of conceptual spaces in that tokenization, as mapped from a very large use of language. But part of language isn't just the historical language; it's the reapplication of it. Like if you say, this is the king of podcasts, right? Or this is the king of microphones.
A
Not yet, but maybe.
B
Yes, yes, as instances. That's part of why later Wittgenstein went to: well, it's how we're playing these language games and how we're reapplying them. When we say, for example, on this podcast, this could become the king of podcasts, we all have a sense of what we're doing. It's like, well, what would be the cases where that would be true, and what would be the cases where it would be false? What prediction is it making, and how is it that that's a useful thing? I'm sure someone has said "king of podcasts" before, but I've never heard it before. Right? And it especially gets developed and elaborated a lot in discussion. And if you suddenly had another terabyte of information about discussions of kings and kingdoms and all the rest, all of a sudden that token space that it's learning from would change, right? And then the generalizations off of it would change. That's part of the reason I would say it's kind of more later Wittgenstein, even though not completely disconnected from those early embeddings. And it's one of the reasons why, actually, later Wittgenstein is not "truth is just what language says." It's: no, there are ways in which it's embedded in the world by how we navigate as biological beings, and that's part of how the world comes in and impacts it. So it's not just language by itself, free-floating like the Cartesian consciousness; it's embedded in some ways. And part of what he was trying to do was figure out, from a philosophy standpoint, how do we understand those embeddings, and how do we derive our truth discourse in language based upon that biological embedding?
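The "another terabyte of text would change the token space" point can be shown with the simplest possible language model, a bigram/trigram counter. Real LLMs are vastly richer, but the mechanism is the same: the learned distribution tracks the data it was trained on, so new usage ("king of podcasts") shifts what the model predicts. The corpus below is invented for illustration:

```python
# Toy demonstration: next-word probabilities after the prefix "king of",
# and how they shift when new text is added to the training corpus.
from collections import Counter

def next_word_probs(corpus, prefix=("king", "of")):
    """Estimate P(next word | prefix) by counting continuations of the prefix."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for i in range(len(words) - 2):
            if (words[i], words[i + 1]) == prefix:
                counts[words[i + 2]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

corpus = [
    "the king of england ruled",
    "a king of england was crowned",
]
before = next_word_probs(corpus)   # "england" is the only continuation seen

corpus += [
    "the king of podcasts spoke",
    "the king of podcasts laughed",
    "a true king of podcasts arrived",
]
after = next_word_probs(corpus)    # "podcasts" now outweighs "england"
```

Before the new data, the model can only continue "king of" with "england"; after it, "podcasts" dominates. The generalizations follow the corpus, which is the sense in which the language game played by the model is reshaped by new moves in the human one.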
A
That makes sense. So I think what I hear you saying is: even though embeddings map words into this high-dimensional space, sort of a space of atomic facts or logical possibilities, the way that space is constructed, and what makes something land in one part of it or another, is more late Wittgensteinian, because it's very much about how language is used in practice and whether it's useful for humans in the world, rather than about some deep underlying logical ordering where, once you've created that ordering, you can't say anything wrong because you're only using words from that space. Is that kind of on target?
B
Yes, exactly. And part of it is we know there are cases where a perfectly coherent use of language is still a falsity. So part of what we're trying to figure out is: how do we get more of those truths, and truth telling, and reasoning, because reasoning is about finding truth, into how these LLMs work?
A
And just to move into that point a little bit: what is most promising to you in terms of ways we're getting reasoning into these language models? And do you think there are ideas from philosophy, whether Wittgenstein or otherwise, that are relevant to that project?
B
Well, the answer is certainly yes on the relevant ideas. Currently, I think we're doing a couple of things. We're taking, call it, human knowledge and figuring out how to get that into what's trained. So one of the earliest discoveries was that if you trained on computer code, these models learn patterns of reasoning much broader than just computer code. And so all the models doing this are now also training on computer code, even if they don't have a target of being a Microsoft Copilot-style code-generation product, because code, just like math, carries a crisp kind of modeling of reasoning. Another one currently happening is: what do you do with textbooks? The notion is that if you take the same kind of training discipline we use for human beings, encapsulated in textbooks, you can, for example, build much smaller but still very effective models based on textbooks. So textbooks are another one. Now, there's probably some interesting, as it were, computational philosophy if you began to say: well, how do we cash out different theories of science, like Lakatos as a development on Popper, thinking about Kuhnian models of scientific paradigms, and build those into how you make predictions on those kinds of bases? And some of the in-depth work in logic, and maybe Bayesian logic, as ways of possibly looking at this. I'm quite certain there are some very useful things to elaborate beyond that.
Now, of course, part of the notion of these things is that they're learning machines, so you have to give them a fairly substantive corpus of data to learn from. There's synthetic data too, and maybe there's philosophy in what patterns we use to create synthetic data, off of the current data, that is still useful to learn from. Anyway, there are a bunch of different areas, and I'm certain they're there, even though I'm making gestures rather than offering specific theories as to how it all cashes out.
A
That's really interesting. So it seems like the way we're trying to get reasoning into models is basically to find sources of data that contain really crisp reasoning, so the models learn the reasoning from that.
B
Yep.
A
I'm sort of curious: if that's the case, aren't there only a certain number of moves you can make in logic? You can do induction, you can do deduction; there aren't infinitely many moves. If we have a really crisp set of data teaching them these moves, what's stopping them from being able to apply those moves more broadly? And maybe that question is not well formed.
B
Well, first, a correction of the question, because actually, in logic there are infinite moves. One of the interesting things in various logics is the different orders of infinity as people think it through. Now, what you did remind me of is something I've been rereading recently, because I've been thinking of Gödel's theorem as a classic instance of human meta-thinking. So Gödel, Escher, Bach, which I read as a high school student, I've been rereading recently.
A
That's great. What do you think?
B
Well, it's this tangle of amazing observations, and I'm trying to think about it from the viewpoint of modern LLMs. So there's this question of Gödel self-reference, which is, roughly speaking: in any sufficiently robust language system, there are truths that cannot be expressed within the language system. Right? And that's mind-boggling, and what exactly it means, and so forth. It comes from this classic diagonalization proof: if you're enumerating all the truths, there's at least one truth that's not captured in your enumeration of all the truths, hence one version of infinity. You get that in the recursion patterns you see within Escher and within Bach; it's another recursion pattern, the pattern of showing the shadow of at least one truth that's not captured within your enumeration of all the truths. So you go, okay, what does this mean for thinking about truth discovery, whether it's human truth discovery or LLM truth discovery, and what are the things outside the boundaries of logic? I would have been very curious to have Gödel and Wittgenstein, two folks very focused on logic, talk about Gödel's theorem. I was asked recently, if I had a time machine, would I want to go forward or back? Me, I'd rather go forward; I'm just curious about how you shape the future. But one of the historical back trips I would love is to put Gödel and Wittgenstein in a room and say: Gödel's theorem, discuss. I would do a lot to be able to hear that conversation.
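The diagonalization move Hoffman gestures at can be demonstrated in a few lines. To be clear, this is Cantor's diagonal argument rather than Gödel's actual construction, and the enumeration below is a toy example I've chosen for illustration, but it is the same self-referential trick: given any enumeration of binary sequences, build a sequence guaranteed to differ from the i-th one at position i, so it appears nowhere in the list.

```python
def diagonal(enumeration, n):
    """First n bits of a sequence that differs from the i-th enumerated
    sequence at position i, so it cannot equal any enumerated sequence."""
    return [1 - enumeration(i)(i) for i in range(n)]

# A toy enumeration: sequence i is the binary expansion of i,
# i.e. bit j of sequence i is (i >> j) & 1.
def enum(i):
    return lambda j: (i >> j) & 1

d = diagonal(enum, 8)
# d escapes the enumeration: it disagrees with sequence i at position i.
for i in range(8):
    assert d[i] != enum(i)(i)
```

Swap in any other enumeration and the construction still escapes it, which is exactly the "at least one truth not captured" shape of the argument.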
A
We need some GPTs in here with Gödel and Wittgenstein. Maybe Gödel doesn't have enough writing to make that happen, but maybe eventually.
B
The twistiness of the thinking is one of the things that made Gödel so spectacular in this. Another one, by the way, in the category of historical walks: Einstein and Gödel used to take walks together. You wish you had digital recorders. Please, record the conversation. We would really like to listen to that.
A
No, I love that. That's really interesting, because I read Gödel, Escher, Bach in college and I loved it. The thing that's so good about it is it's such an interdisciplinary book: it's got math and music and art and all this stuff. And you're like, wow, that's the kind of mind that's going to invent new minds. And then you see Hofstadter today, and he's definitely not in the LLM conversation; he's a little bit freaked out by them. I'm kind of curious what you make of that. What did he get right, and what do you think he got wrong?
B
Well, I think a central thing he got right, at least in how I operationalize it, and it was the reason I was gesturing at Hegel with thesis, antithesis, synthesis, is that it's a dynamic process that's ongoing, and you can't necessarily predict the future synthesis. Even though obviously in philosophy you try to articulate the truths, you know, Descartes' "I think, therefore I am," or Wittgenstein saying, well, the world has to be a certain way for there to be truth statements in language like "I think, therefore I am." So you can be broader than just the disembodied mind as a way of thinking about that, because you think about what the truth conditions must be in a language. If you're saying, in a way that's coherent to your current self and your future self, "I think, therefore I am," what are the truth conditions in the language? But that's a dynamic process by which we are making new discoveries, and that's the synthesis. That's the thing I take from the Gödel, Escher, Bach interweaving of these different dynamics and its showing of the patterns across them. Now, frequently when you go across a lot of areas, people say, hey, we have this language system and all we know is through our language, and then they go: and the world is unknowable to us, because the only thing knowable to us is our language. You say, well, that's presuming there's no relationship between how the language engages with the world and how we engage with the world through the language. It's one of the reasons you get interested in biologists like Varela and Maturana, and it's why you get to different patterns of self-referential logic. It gets very interesting.
And so I myself don't get freaked out by LLMs. Part of this is I think: wow, new things we can discover, right? How does that make the discourse much richer, much more valuable, much more compelling, and in some ways more on-target in its discoveries of the truth? I gave a speech in Bologna last year, along with the book I published last year, Impromptu, whose last chapter is "Homo Techne": one of the things is that we think of ourselves as human beings as static, and actually we're not static, because we are constituted by the technology that we engage and bring into our being. For example, you and I are looking at each other on this podcast through glasses. Think about the world with glasses and without glasses, right? The world is a very, very different place in how you can perceive it. Most of our theories of truth are fundamentally based on perception; "seeing is believing" is a classic idiom. Well, if you don't have glasses, how you see is very different, right? So technology changes our landscape in the perception of truth. That's why microscopes and telescopes and all the rest change that landscape. And that's part of what we're doing with technology, and we're doing it in particularly interesting ways with these LLMs.
A
Yeah, that makes a lot of sense. And I love that point about how technology changes us, and really how flexible humans are. It reminds me a lot, actually, because I read your book to prepare for this, and I read your Atlantic article, and you have some podcasts on this. Have you read the book The Weirdest People in the World by Joseph Henrich?
B
No. I probably should.
A
It's really great. He's a psychologist at Harvard, and the point of the book is that most of what we take to be the psychology literature is wrong. It's not wrong because of p-hacking and all that other stuff; it's wrong because the psychology literature is based on studies of Western college students, and Western college students have a completely different psychology than people everywhere else in the world, now and in history. One of the key differences is that Western college students can read, and reading changes your brain in all of these different ways. It enlarges parts of your brain and shrinks others. For example, if you can read, you're more likely to pick out objects in a landscape rather than see the holistic scene, and there are a bunch of other significant differences between humans who can read and humans who can't. So reading, as a technology, created all of this. One of the things he argues is that it allowed us to create a society where churches created rules and principles that people would follow even though they weren't being watched, like, I'm not supposed to steal or whatever. It's really hard to get a big organized society without reading, basically, is one big point of the book, and it's because reading changes our actual biology. And I think that's the thing people miss about language models. Not to say we should ignore the dangers of language models; there are a lot of really interesting and really important problems to solve. But when you think about what language models might replace versus augment, it's also really important to know that we've been replacing and augmenting ourselves for many generations.
And if you took a human from five or ten generations ago and put them in the present, it would be really hard for them to interact in our society. Same thing if you took one of us and pushed us back in time. That's because we grow and change in response to our environment and our culture, which is this collective memory that gets loaded up so that we're a modern human instead of a pre-evolutionary human or whatever. And the same thing is going to happen with language models. You can put them on this timeline from the invention of language to reading to the printing press; it's all the same kind of cultural transmission technology, as I've heard some researchers call it. I think that's exactly what it is. Curious what you think about that.
B
Well, I definitely think in the progress of cultural knowledge, and I don't know if it's the same author, but The Secret of Our Success is, I think, a very good book. It's partially because how we make progress is by updating our cultural knowledge. That's part of the reason it's not surprising that when we generate interesting learning algorithms and apply them to the human corpus of knowledge, interesting things come out of that, because that corpus is essentially a partial index of cultural knowledge. It's not the complete index, because, as The Secret of Our Success goes through, it's things like: how do you identify which things to eat, which things not to eat, when to do that, and all the rest? That's part of how you make progress, and I think it's an essential part of how we actually evolve. Everyone tends to think evolution for human beings means: do we evolve to be faster, longer, stronger, genetics? Actually, a major clock of our evolution is that we shifted. You could say there's geological evolution, which is super slow; then there's biological evolution, which is slow; and then there's cultural evolution, or knowledge, digital, et cetera, which is much, much faster. Part of the secret of our success is that we got into cultural evolution, that progress of the digital, and part of what we're doing with AI and LLMs is building tools to help accelerate that cultural slash digital evolution. Which can include: why is everyone going to have a personal assistant? Because the personal assistant will have read all the texts and can bring them to you as you're talking and trying to solve problems.
So for example, on the question of what people should be using ChatGPT for: obviously, an immediate, on-demand personal research assistant, one that today hallucinates sometimes, and you have to be aware of that and understand it. But an immediate research assistant is one of the things that is obviously here already today. And if you don't think you need a research assistant, it's because you just haven't thought about it enough.
A
Yeah, I mean, it's incredible. It takes everything that humanity knows and gives it to you in the right context at the right time, when you ask for it. And that's exactly the bottleneck of cultural evolution: getting the right information out to the edges, to the people who need it, instead of having it locked up on the Internet or in a library or wherever, where you have to go expend resources to get it. All of those are better than having to transmit knowledge orally, for example, but language models are a profound next step. So we're getting close to time. We had a whole final section about science, but we may not be able to get to it. We'll have to maybe do a part two.
B
Yep, that'd be great. I'd be up for that. I love these topics.
A
But I want to ask you a couple more things on the philosophy-and-AI front. Why do you think philosophers didn't come up with AI? I guess it came out of a computer science tradition, but really a sort of engineer-y tradition, people who just were making stuff. Talk to me about why it didn't come from philosophers.
B
Well, I do think this is a little bit like what I was gesturing at earlier, which is that being disciplinary, and obviously people are not idiots in doing this, has some strengths but also some weaknesses. Part of it is to think about how technology is going to change our conceptions of how we use language, how we discern truth, how we argue about it, and all the rest, which I think is pretty central. How is technology important as a way of knowing, or perceiving, or communicating, or reasoning? And philosophers will say, you don't need any of that: I sit down and I cogitate, canonically, Descartes. Look, I think there's a role for sitting down and cogitating, but I think there's also a role for discourse. That doesn't necessarily mean you have to be an externalist, or one of the physical materialists; I don't know who the current advocates are, but the Churchlands and others were among those who were very vocal on that back in the days when I was a philosophy student. The point is that this notion of how we engage technology in our work is a very good thing to do. If philosophers had done that, maybe they would have come up with it, or would have been able to participate more in it, versus the computer scientists who are like, okay, I'm working on the technology side of it: what can I make with this technology? And obviously the "what can I make with this technology" question goes well earlier than computer science, right?
I mean, you go all the way back to Frankenstein, to imaginations about what could be constructed here, or the Golem, or Talos in Greece. So there's the notion that things could be constructed; now, could they be constructed with silicon, with computer science? That's modern artificial intelligence. But that notion is one of the reasons I want philosophy to be broader in its instantiation, not just a question around, and this is obviously a bit of a deliberate rhetorical slam, trolley problems.
A
Yeah, that makes sense. Maybe a way to frame it is that it's better to be asking deep philosophical questions and be a philosopher out in the world, to some degree, than it is to just be a philosopher. I don't know if you'd agree with that, but something like that.
B
I voted on that with my own feet.
A
Yeah, there you go. I definitely agree with that. So we have a minute left. The last thing I want to ask you: I assume there are a lot of people listening to this who maybe have not been philosophically inclined in the past, and they're either like, wow, I could not follow any of that and I want to figure out what they said, or they're like, oh my God, I want to learn how to think like that. For the first group, I would totally recommend just using ChatGPT: talk to ChatGPT about this stuff and it will tell you, for sure. But for people who want to get that thinking-crisply-about-possibilities thing you talked about so well at the beginning: where would they start? What are your favorite kinds of philosophers or books to dive into?
B
Well, I think the best way is to get interactive. That's part of the reason to study philosophy, and even for the second part of the question, some use of ChatGPT is also very helpful there, because the interactivity is what does it. For example, one of the things I use ChatGPT for is this: I have something I'm arguing for, or thinking about arguing for, and I put in my argument and say, okay, ChatGPT, give me more arguments for this. How would you argue for this differently or more? And then also, how would you argue against it? What would your counterarguments be? And I use that as, again, the thesis and antithesis, trying to get the synthesis. So I think that dynamic process is really important. The way people traditionally try to get to this is they go through some of the real instances of great human thought and try to understand how to think that way. One of the things that was too much text prompting to go into Impromptu, but that I think is very useful as another use of ChatGPT, is: I'm a non-mathematical college graduate, explain Gödel's theorem to me. I'm a non-physicist, explain Einstein's thought experiments around relativity to me, et cetera. That dynamic process of getting to understand those things is part of how you learn to think this way. And it's one of the reasons why what has helped us accelerate our cultural evolution, the secret of our success, is having things like books and universities, because it's that dynamic process of engaging that's so important. So there's not necessarily one specific book.
Although, by the way, if you really want to have your mind boggled, go read or reread Gödel, Escher, Bach. It's great, right? But the idea is: what are the instances of these canonical, amazing pieces of thinking? In that dynamic engagement process, you're internalizing similar ways of thinking.
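Hoffman's for/against/synthesis pattern can be captured as a reusable prompt template. Everything here, the function name, the exact wording, is an illustrative assumption rather than anything from the episode; the output is plain text you would paste into ChatGPT (or send through an API client).

```python
def steelman_prompt(claim: str, my_argument: str) -> str:
    """Build a thesis/antithesis/synthesis prompt: ask for stronger
    arguments FOR a claim, the best arguments AGAINST it, and a
    synthesis that takes both sides seriously."""
    return (
        f"Claim: {claim}\n"
        f"My argument: {my_argument}\n\n"
        "1. Give me stronger or additional arguments for this claim.\n"
        "2. Give me the best counterarguments against it.\n"
        "3. Propose a synthesis that takes both sides seriously."
    )

prompt = steelman_prompt(
    "A philosophy background helps entrepreneurs",
    "Philosophy trains you to think crisply about possibilities.",
)
# `prompt` is now ready to paste into a ChatGPT conversation.
```

The template is deliberately symmetric: asking only for supporting arguments invites confirmation; the counterargument and synthesis steps are what make the exchange dialectical.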
A
Yeah: be curious about great ideas and engage with them. This was a great conversation. I really appreciate you coming on. I feel like I learned a lot. Thank you so much.
B
My pleasure. Awesome.
A
Oh my gosh, folks, you absolutely, positively have to smash that like button and subscribe to AI and I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show; it's a journey into the future, with Dan Shipper as the captain of the spaceship. So do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.
Host: Dan Shipper
Guest: Reid Hoffman
Date: December 24, 2025
In this episode, Dan Shipper interviews Reid Hoffman—renowned entrepreneur, philosopher, and co-founder of LinkedIn—about the profound philosophical implications of artificial intelligence, especially large language models like ChatGPT. The conversation dives deep into how AI intersects with age-old philosophical questions about truth, knowledge, reality, and human nature. Rather than focusing solely on practical tips, Hoffman and Shipper probe how AI is reshaping not only business and creativity but also the very frameworks through which we understand humanity.
"Philosophy is very important to this stuff because it's understanding how to think about very crisply, what are possibilities, what are theories of human nature?"
— Reid Hoffman [03:32]
"The fundamental problem is they try to frame it to get, to get an intuition... but you're positing something we never [encounter]—like, perfect knowledge. That's not real."
— Reid Hoffman [07:19]
"People don't think enough about how technology helps us change what is the definition of a human... we have this kind of imagination—that we are this pure thinking creature. And... that's not really the way it works."
— Reid Hoffman [10:34]
"Truth is the dynamic of discovery through language... and the way that's grounded is to some degree sharing a certain biological kind of form of life..."
— Reid Hoffman [20:13]
"Later Wittgenstein went to: well, it's how we're playing these language games and how we're reapplying them... It's embedded in the world by how we navigate as biological beings."
— Reid Hoffman [31:30]
"In any sufficiently robust language system, there are truths that cannot be expressed within the language system. And... what exactly it means... is mind boggling."
— Reid Hoffman [39:23]
"A major clock of our evolution is cultural evolution or knowledge—digital, etc.—which is much, much faster. And part of what we're doing with AI and LLMs is tools to help accelerate that."
— Reid Hoffman [49:34]
On the value of philosophy for business:
"A background in philosophy is more important for entrepreneurship than an MBA." — Reid Hoffman [03:32]
On the reality of thought experiments (re: trolley problem):
"The right human response to the trolley problem is: I'm going to try to break the trolley so it doesn't hit either of them." — Reid Hoffman [08:31]
On how AI changes us:
"One of the things that we think of ourselves as human beings is static. And actually, we're not static, because we are constituted by the technology that we engage and bring into our being." — Reid Hoffman [42:35]
On the power of conversational AI for learning:
"If you really want to have your mind boggled, go read or reread Gödel, Escher, Bach. It's great... But like—what are the instances of these canonical amazing pieces of thinking, and then in that dynamic engagement process, you're internalizing similar [ways of thinking]." — Reid Hoffman [57:29]
Reid Hoffman’s message:
Philosophy and technology are inseparable in shaping the future of humanity and knowledge. By combining philosophical rigor (asking deep questions, exploring possibilities) with the scalability and pattern-finding of AI, we can accelerate cultural evolution, redefine what it means to be human, and consistently sharpen how we reason, create, and connect.