
Okay, guys, got Travis here. Today we're going to talk AI with one of the pioneers in the space. Thanks for hopping on today.
A
Absolutely. Great to be here, Sean.
B
Yeah. The space is evolving so fast. Does it concern you at all?
A
Yeah, it concerns me for a number of reasons, but probably not the same reasons other people think. There's a lot of things happening quickly, and a lot of people trying to make sense of it quickly, even though there's not a lot of understanding of how it actually works.
B
Right.
A
And so there's a lot of uncertainty that can lead to confusion. That probably concerns me more than anything: uncertainty leading to rapid action, not thoughtful action.
B
Yeah. What are the biggest concerns and red flags you're seeing right now?
A
Kind of overreaction by governments is one that concerns me. You know, people trying to pass laws and make regulations where they don't really understand what the implications are, ending up with rules and patterns that don't really fit what emerges. So that concerns me. The other thing that concerns me is a lot of closed source companies trying to own the space. It's a lot like real estate, a land grab: oh, here's this AI space, let's grab all the attention. Whereas I'm a really big proponent of people learning from AI and making it part of their toolbox.
B
Yeah.
A
You know, ultimately letting us become better agents for ourselves by having AI as a tool that we all can use. So there's this land grab going on, where a lot of the information flow is happening to a few companies. That concerns me too. I want to see AI knowledge diffuse and disperse, and have lots of people use it effectively. But you know, there's a lot of money in advertising and promoting, and it's amazing how quickly people can be informed by narratives. We're driven by narratives. We seek out narratives and worldviews and ways to think. And without critical thinking, without background, you can easily be persuaded by something that just isn't true.
B
Especially with social media these days.
A
Yeah, exactly. And AI could be used to actually amplify that capability. People are good at it, but what if you had AI be even better at it? That's in one sense why social media has been challenging: even stupidly, with no intent at all, just trying to get eyeballs, the algorithms have already been feeding people information that they want to hear. So it reinforces cognitive biases, confirmation bias, and the cognitive dissonance that happens all the time. You're basically being fed what you want to hear. And it's creating polarization in our society.
B
Right.
A
People are creating enemies out of who could be friends. That's one of the things I don't want AI to amplify. I want to see how we can use AI to understand each other better, maybe show a little more empathy to each other, and understand, hey, we're not that different. We have our differences, and that could be beautiful, but let's not emphasize them, because that can lead to conflict.
B
Yeah. You said closed source earlier. Could you explain what that is and which companies are closed source?
A
Absolutely. So closed source has actually been the norm for a long, long time. If you go way, way back, when software first came around, the software came with the hardware, because people competed on, here's new hardware, here's the new machine to run your business. And the software was effectively all open. They didn't have the term then; you could just use the software. Then, through the 80s and early 90s, people said, oh wait, this is valuable. Companies like Microsoft and Apple emerged and went, hey, there's value in the software. We can't give it away, so we can't show the code, because if you show the code, people can potentially take it, derive from it, build from it. So they would close the code, and the application would run as closed source. A lot of software still gets built closed source, and that's fine. It's not some kind of moral evil to close the source, but it does create challenges for innovation. Open source was a movement that started around the time Linux came around. You've heard of Linux? It's an operating system that is essentially why we have cloud computing today.
B
Wow.
A
It's this massive operating system that now runs all the servers. It's a pretty impressive movement, and it's the reason AWS exists, the reason GCP exists. It's hugely impactful. So open source has been an extremely impactful social movement. That's probably the way to describe it. I started participating in open source in the late 90s, when I was a graduate student. I'm kind of a geek at heart, a science geek who loves physics and loves math and loves to make things, and I needed software to do it. So I got wrapped into this open source movement because I liked how, when I did the work, I could share it with others. And that's essentially how a lot of us, millions of people, have been pulled into this open source ecosystem, sharing code with each other. It's this interesting world that's emerged over the past 30 years where people share code, there are places that code can be seen, and people can build from it. There are lots of movements around that code. So open source is just this phenomenon of sharing your code, and everyone can use it. With closed source, you've got to license the code to use it. But with open source, there are lots and lots of perils; we could have long conversations about what open source means, how it drives value, how you make money from it. In fact, my story and what I'm doing now really starts there.
B
Okay.
A
I loved open source. I loved the engagement that it created. I loved the fact that I could share, people could comment, people could work with me, and I could build a community. Love that. That's cool. Because all of us need community and tribe. In fact, I think that's a critical thing to understand about human behavior: you want to have your tribe, you want to have your community. Open source gave people a place to have community.
B
Yeah. So is ChatGPT open source?
A
No.
B
So, so is that the whole dilemma with them and Elon?
A
Yes, that's part of it. I mean, some part of it is just egos, right? But a big part of it is the fact that Elon gave them money to build open source AI.
B
Got it.
A
That's why they started: Elon was concerned about Google having all the knowledge of AI. Some of the same concerns I'm expressing, Elon expressed years ago. He was saying, look, we need to make sure that AI, as it emerges, isn't just controlled by a few hands. We have to have lots of people aware of how to use this. And he was worried that Google was consolidating all the AI experts, and with DeepMind they were advancing very, very rapidly. So OpenAI was basically the initial tranche of, hey, let's go give some money, create a foundation, and have open source AI. But then things changed. There were some different opinions, and I don't know Sam well enough to know quite what drove those decisions. I can understand there are probably some good reasons, and then reasons I wouldn't agree with. But he pushed for kind of closed AI, and then came the release of ChatGPT, which had this phenomenal explosion in the world, with people going, oh, these models that scientists have been working on for decades can do interesting things: predict words reliably, predict phrases that sound realistic, and then go beyond that, from words to music and audio, to video and images.
B
Now the Google one.
A
Yes, exactly right. And it can actually produce a podcast.
B
And it sounds decent too.
A
It does. Yeah.
B
It's scary.
A
It is. No, there's a company I've been consulting with called Zyfra. They're out in Palo Alto, and they have a mechanism to produce speech from text with realistic voice models. It sounds just like somebody; you can clone yourself.
B
Yeah. And that model is cool to me because I'm an audio learner. Like, I love podcasts and audiobooks. So when you can do that, you can learn really fast.
A
Absolutely. So I'm more of a visual guy, but I love audiobooks and I love podcasts too.
B
Yeah.
A
So I understand. I like to listen at 2x speed.
B
Sometimes 2.5x.
A
Sometimes 2.5x. I know the recent ones can go up to 3x. Some people I can listen to at 3x speed. Same.
B
Yeah, he talks too fast. Even 2x with Ben is tough. It's true.
A
You're standing there going, wait, I got to process all this information quickly. Can I?
B
Yeah, but sometimes when it's a 7-hour Rogan episode, I'll do like 2x for sure.
A
Yes. Right. And that's. Oh, that was only three hours.
B
Yeah, he's had some long ones lately, man. So true. So there's AI companies coming out of China now. Who do you think has the most advanced one? We're filming this in March 2025.
A
Yeah, well, right now China seems to have some really cool advanced models; DeepSeek showed that. But Gemini, from Google, is showing some advanced models, Anthropic is showing advanced models, and actually some of the open source models are getting to where they're comparable. So fortunately, it's no longer just a matter of who has the best model. You really have to start asking: for what purpose? It's no longer that there's one best model. It's, okay, what are you trying to do with this? What's your goal? Do you want to summarize text? Do you want to clone a voice? Do you want to run a podcast? That's the future I'm excited about: we're getting away from this race toward the god AI. Now, that's still there, and there's still a lot of messaging about that, but I'm definitely in the camp that we're not going to incrementally get to artificial general intelligence, the human-like intelligence. What we have is definitely a clearly intelligent system that may be a part of how the human mind works, but it's not the complete thing. And that's cool. But really, any value coming out of it comes from a system that's produced: you take the model, you take some other hardware, some computing capability, and you stitch it together into a system. Right now, as we're speaking, Manus is all the rage. It just came out this past week, and everybody's going, whoa, this is amazing, because it can run my business, it can do my research report, it can run a stock report, it can file my taxes, they think. It's making games. Grok has a great model too, actually. Grok 3 was just released, and it's beating a lot of the other models on a lot of measurements.
B
Wow.
A
So Grok is also really a fantastic base model, and they have a deep search and additional modules around the model that they're starting to release as well, that people are going to experiment with. But honestly, Sean, it's really early. It's easy to have these F1-race concepts, but that's not really the model that works, because everyone has to ask the question: what am I trying to use this for? What, for me, is going to be a valuable tool? That's going to be the most productive question.
B
Like, for me, on the side, I'm a chess player, and AI has revolutionized the chess space. It's caused players to become a lot better. For example, I played Andrew Tate in chess yesterday and I beat him. Think about this: he played chess his whole childhood, but there was no AI or computers back then, so getting better at chess was really hard.
A
Yeah.
B
Now when I play on Chess.com on my phone, AI analyzes every single game, and I can see where I messed up, so I can get better way quicker.
A
So I love that. I think that's a fantastic use case of AI, and an important one too: it's about helping humans get better. I'm a big advocate for natural intelligence. We have not optimized how humans learn. In fact, I think our education system, at least in the United States, is really, really bad. Terrible. A lot of it is systemic, and schools are banning AI, which is completely a mistake.
B
Yeah.
A
Because AI needs to be used to help exactly this. It can make personalized education more possible. It can help you take an interest you have and, in that moment of interest, amplify your capability and your ability to iterate and learn. Powerful.
B
So good.
A
Actually, there's a guy, Gerald Chan. He might be a little annoyed that I talk about him on this podcast. He's an investor, somebody that invested. He gave a talk at Berkeley just a few weeks ago about the role AI can have in improving education. It was actually quite inspiring.
B
I need to watch that.
A
Yeah. I don't think the video's out there, but I can send you the paper; for anybody interested, I know he's willing to let the paper be spread. It's a phenomenal discussion of something I think is a critical question. Because one of the things people are appropriately worried about is how AI will disrupt their work. People are worried about unemployment, they're worried about jobs: what if AI takes away my job?
I'm not a fear-based person. I think that kind of commentary is useful, but it can be paralyzing. It's normally better to turn it into: okay, what do I need to use AI for to help? We can use AI to help improve our job prospects. Like, you learned chess really well. What if you can learn to trade really well? What if you can learn anything really well? That's exciting to me. But to do that, what we need are millions of professionals. Millions of people, tens of millions of people, hundreds of millions of people, billions of people, all using AI for their purposes. So do you see how we need to convert AI from being a thing somebody else does to us into a tool that we all use to better ourselves and improve our lives? That's what I'm about. That's what this open source AI foundation that I recently started working with and joined is all about. It's recognizing, like I just said, that Linux as an open source operating system gave rise to cloud. That same phenomenon with open source AI will give rise to a future we can't expect, if we keep a lot of people in charge of it: not just one or two people, not just a few thousands of people, but millions and billions of people having access to similar tools, to level the playing field and help people engage with each other. Now, a lot of people will go, wait, that's going to change everything. Yeah, it could. And I'm not all for rapid disruption. How do we do this in a measured way, where people are accountable, people have ways to work together, and people do it in their communities, in their families, in their tribes, in their virtual groups? That's how we're already organized as humans, in all these different little governance groups.
AI can help us each organize better, help us relate to each other better, and it can bring about this incredible world. So that's what I'm about. That's what I love to try to promote. Open source is how we got here. I've been involved in open source for a long, long time. I started as a scientist.
B
Wow.
A
Really. During my master's degree, I used satellite images to measure backscatter off the earth.
B
Holy crap.
A
Yeah, it was intense. But it's also, you know, math. And I realize math is not for everybody.
B
Yeah.
A
But I love math, and I loved learning as much math as I could. To me, math is just a tool. It's a tool that lets you get insight from data. And we did that with the satellite backscatter data. You basically have electromagnetic radiation: you beam a radar down to the earth, you measure what comes back, and then you try to infer what that means about the ice field, about the wind speed and direction over the ocean, about the plant vegetation. That was my first experience with large-scale data processing. Then I went to the medical area to try to do the same with images, with MRI, with ultrasound. That industry could progress faster, but it's a little more regulated, so progress is slower.
B
Yeah.
A
That's another topic we could go into, but probably for a different day. But go ahead. Yeah.
B
Have you heard of Prula? It's a full body MRI, and they use AI to analyze the MRI. The problem is it's expensive, so most people can't afford it. But yeah, I got it. They used AI to analyze my results, and I learned a lot about my body. And that's where I hope the future of medicine goes.
A
Like, that is amazing.
B
And the same with my dentist. There's holistic dentists now that will take photos of your teeth and throw them into AI. It was finding my cavities and my gum infections.
A
Sean, I love that. That's actually why I went to get a PhD: to make instruments better. Because I think that's possible. And when you look into why things are so expensive, there are some reasons for it, but these can be made less expensive. We could easily have MRI technology at least as pervasive as dental imaging, at least as pervasive as your local doctor having one. I hope we get there too. Yeah.
B
It was like $2,500, which is a lot, for a full body MRI.
A
You know, it is. And some of that's the magnet; the magnet is expensive. But some of it's the processing. And some of it is, you can actually save money if you don't put as much effort into building a very homogeneous magnetic field.
B
Right.
A
But that requires better data processing. And that's your point: if AI can help us process data better, then we can have MRI be more ubiquitous.
B
Yeah.
A
For less.
B
Yeah. Before that, you had to get a doctor to manually review every result.
A
It is. And there's also this: it was expensive to make the field homogeneous so that the processing was simple. That's the big thing. Right now, that's how MRIs work: the processing is relatively simple from a mathematical point of view. If the field is slightly inhomogeneous, the processing is a lot harder, but potentially still possible. And with AI, hey, maybe we can get there. I'm also excited about AI helping scientists iterate faster. Just like you said with chess, you learn quickly. What if scientists could learn more quickly? What does this mean? What if I make this change? What does that mean? There's a saying I've come to repeat all the time: innovation is iteration. The speed of iteration determines your speed of innovation.
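[Editor's aside: the "simple processing" Travis describes can be sketched concretely. Under the idealized homogeneous-field model, the scanner samples k-space, the Fourier transform of the image, so reconstruction is just an inverse FFT. This is an illustrative toy in Python with NumPy, not real scanner code.]

```python
import numpy as np

# Toy "phantom" image standing in for patient anatomy.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0  # a bright square

# Under the idealized homogeneous-field model, the scanner measures
# k-space: the 2D Fourier transform of the image.
k_space = np.fft.fft2(image)

# Reconstruction is then "relatively simple from a mathematical point
# of view": just the inverse Fourier transform.
recon = np.fft.ifft2(k_space).real

print(np.allclose(recon, image))  # prints True
```

[With an inhomogeneous field, the measurement is no longer a plain Fourier transform of the image, which is exactly why the processing gets harder, and where better data processing could help.]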
B
Yeah.
A
Yes, you need creativity. Yes, you need people who pull it off. But iterating is really the key to progress.
B
Yeah, yeah. I'm also a big poker fan, and AI has revolutionized poker. They're called solvers, and they show you how to play: the best strategy, the best hand, and when and what to bet.
A
Two-card hands? I mean Texas Hold'em, the Texas poker.
B
Yeah, well, it has all the different pokers. But that's the thing: people have gotten so much better at poker now.
A
I agree. It's actually a corollary of something I always say, which is: for your job, it's not about being replaced by AI; it's about being replaced by someone that knows how to use AI better.
B
Exactly.
A
Right. So if you're worried about your job and AI, just turn that into motivation to learn to use AI.
B
Yeah, same with my video editors. A lot of them are using AI now to find clips, and it's like, I love that. I don't want to replace you.
A
Right.
B
I want you to be able to use it. Like, give me a ton of clips. Right.
A
We're still gonna need the human connection. I really am a promoter of accountability with people. You're not gonna have AI be accountable; in fact, that's kind of the root of it. Even, you know, Tesla cars can drive you now, but you still have to sit in the driver's seat. I know there are self-driving cars going around cities.
B
The waymo.
A
The Waymos are showing up. But a big part of that is actually liability. Who's liable if something goes wrong? What if it crashes into something? What if there's a problem? Ultimately, that's a real question that has to be resolved, and it will be resolved through accountability layers. So my answer is: accountability is with individuals. Even when you have a tool that's AI, you're still accountable. Like all my developers. At the few companies I've worked at, I have developers that work with me, and I tell them, look, use AI all you want, but the code you commit to a repository and ship to a customer, you're accountable for that code. That's your responsibility. You can't say the AI made me do it. It's fine if the AI helped you; I'm totally behind that. Do that all day long. But you're still accountable for it.
B
Because you can't sue AI yet.
A
Right? Yes. And that's a different question, but we're not even close to having that conversation. Let's give that about 10 years. Yeah.
B
We're not full Terminator yet. Right, Right. Do you have fears that some models can go haywire without proper regulation?
A
I think yes, models could definitely go haywire. In a way, I think they already have, in terms of how they've disrupted our social contract with each other, our social connectivity. On the regulation question: I'm all for governance. I'm all for people learning the principles of governance. Every community has governance; you don't have community without some amount of governance. I'd rather have it be at that level rather than at a huge scale.
B
So you don't want the federal government?
A
I don't want the federal government. I'm not saying they shouldn't be involved at all; I want them to be restricted to the things they need to care about. Right. Not just laying out all AI policy. That would be, I think, an ill-suited idea right now.
B
I think because it evolves so fast. By the time they lay out policies, so much could change. Then they got to keep updating it.
A
But individual departments could have policies about how they use AI in their department.
B
Right.
A
For example, Health and Human Services could have a strategy for AI adoption and how they use it. I think that's true. But just having some law about AI in general? Anyway, what do we mean by AI? At the end of the day, AI is just a math program. It really is just arrays: multiplying numbers together and then summing them up, with a little nonlinearity in the middle. It's just math. So we're going to regulate math? Okay. How are we going to do that, exactly?
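[Editor's aside: the "just math" description maps directly onto code. A neural network layer multiplies numbers together, sums them up, and applies a small nonlinearity. A minimal Python sketch with NumPy and arbitrary illustrative weights:]

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, W, b):
    # Multiply numbers together and sum them up (a matrix product),
    # then apply a little nonlinearity in the middle (ReLU).
    return np.maximum(0.0, x @ W + b)

# A tiny two-layer "AI" with random weights, for illustration only.
x = rng.standard_normal(4)                         # input vector
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)  # first layer weights
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)  # second layer weights

h = layer(x, W1, b1)  # hidden activations, non-negative after ReLU
y = layer(h, W2, b2)  # two output numbers
print(y.shape)  # prints (2,)
```

[Everything a large model does is this operation repeated at enormous scale, which is the point: legislating "AI" as such means legislating arithmetic.]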
B
I agree. SEC's been trying to regulate crypto for years and it just.
A
And they messed it up. It's a mess. I think they actually do more harm than good. So I tend to be... when I was younger, I was quite libertarian, quite open: get rid of all regulation. And in some future world, I might imagine that experiment being interesting. But ultimately I recognize that a value of regulation is to avoid suing each other.
B
Yeah.
A
That's the thing you're trying to avoid. You're avoiding the problem of, hey, we're debating this, and I'm mad at you because you did this. You'd just have lawsuits everywhere. You could slow down the whole economy because you've got people suing each other in a very inefficient judicial system, or a very unjust one too, where sometimes it doesn't even go through a judicial system, it goes into arbitration. Anyway, there are real problems there. So I can see the value of regulation. I see the value of good rules. But what are the good rules? How do we know what those rules are? We can know, but only if we have enough context, enough understanding, enough experience. Right now, AI is just so new, and the rules might be different for this industry versus that industry versus another industry. So I think we ought to just let people... I'm not saying we throw caution to the wind and let people hurt each other. Not saying that. If you have a claim against somebody because they used AI, you have that claim. Those rules already exist.
B
Yeah. It's going to be interesting to see how it plays out. Because let's say you ask AI for stock advice, and it just gives financial advice. Can you go after them for that if you lose money?
A
Well, I think most of the general AI systems have a lot of terms that say, you know, you can't sue us for stuff you did with this, and that's pretty fair. But if somebody came out with a financial advisor and offered stock advice (you already have to register to do that), then yeah, you potentially could. But most of those financial advisors have all kinds of things you sign saying, I don't take this as advice and I'm responsible. So again, if we just focus on the systems we've already got, those systems can be improved, and maybe AI can help us improve them. But let's not panic over AI. The thing to be concerned about is: is AI open, is AI available, and can people actually use it for their accountability? We want to make AI as distributed as possible.
B
Agreed. I'm hearing a lot about quantum computing. As someone in the crypto space, that seems to be advancing rapidly. They're actually saying it's going to be so advanced it could hack into wallets in a few years.
A
That's what they're saying. I tend to be skeptical of those statements.
B
Yeah.
A
I've been on record as being a quantum skeptic.
B
Okay.
A
For a long time. Not that there isn't something there. There is; there are some really cool things that happen. But we have a really hard time organizing a bunch of quantum bits together and understanding what even happened, what it even means. Quantum is one of those areas where we're still trying to figure out what it means; nobody knows what it means. Quantum mechanics is a description of nature that just gives us a way to predict what nature will do. But what does it mean? We don't know. So it's easy to get hyped up. The other metaphor: I'm an electromagnetics guy, and we had optical computers back in the day. Optical computers can do really fast things, like take the Fourier transform really, really quickly, just by propagating light. But we don't have optical computers today. They could be useful for some things, like MRI image reconstruction; we could do that in an optical computer very fast. But the infrastructure of optical computers, actually building them and the whole ecosystem around them, is really expensive. So I understand why quantum computers are exciting, but quite often they're overhyped. I'm not saying they're not an interesting research topic, and I'm not saying nobody should ever invest in them. I think they're worth investing in. But for most of us it's going to be a non-issue. We'll not even realize what's happening with them.
B
I remember when I was in high school, everyone said 3D printing was the future. You're going to print houses with 3D print.
A
Exactly.
B
Then it flopped.
A
And then it flopped. Right.
B
Yeah. I wonder if quantum's gonna be like that.
A
Quantum's kind of like that, in the sense that it's really cool tech and really cool science. And honestly, the main answer is: okay, just make your cryptographic keys longer.
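[Editor's aside: the "make your keys longer" defense has a standard back-of-the-envelope form for symmetric keys. Grover's algorithm gives at most a quadratic speedup on brute-force key search, roughly halving the effective bit strength, so doubling the key length restores the original margin. A sketch of that arithmetic:]

```python
def effective_bits(key_bits: int, quantum: bool = False) -> int:
    """Rough effective security of a symmetric key against brute force.

    Grover's algorithm searches N possibilities in about sqrt(N) steps,
    which halves the effective bit strength; classically it's unchanged.
    """
    return key_bits // 2 if quantum else key_bits

print(effective_bits(128))                # prints 128 (classical)
print(effective_bits(128, quantum=True))  # prints 64  (Grover-halved)
print(effective_bits(256, quantum=True))  # prints 128 (doubled key restores margin)
```

[Caveat: this covers symmetric ciphers and hashes. Public-key signatures, which crypto wallets actually use, face Shor's algorithm instead, where simply lengthening the same kind of key does not help in the same way.]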
B
That's why the seed phrase is 16 words now. Maybe they should make it longer.
A
Exactly. So with quantum, I think it's worth thinking about. But most of what I've seen is people getting a little bit overhyped about it, because they believe all the rest of the hype. Again, I still love science, and I like the work that people are doing. I don't want to dismiss the great work of the scientists; I think they're doing amazing things. I just think commercially it's not something on our horizon in the next 15 years.
B
Makes sense. Travis, anything else you're working on or want to close off with here?
A
Yeah, well, I'm working on helping to make sure open source AI exists, through the open source AI foundation I mentioned. We have a phrase, actually: make AI open source again. The whole institution behind AI can be better, can be awesome, if we make it open and help people own their own AI. That's a big one. People need to own their own AI, rather than send all their data to somebody else and use a closed model. Own your own AI, have the model serve you and your data, and keep your data your own.
B
I love it. Well, we'll link all your companies below and your social media handles. Thanks for coming on, Travis.
A
Great to be with you. Thanks.
B
Check them out, guys, and I'll see you next time. All right. Take care.
Digital Social Hour with Sean Kelly
Episode #1333: AI's Future: Open Source or Closed Control?
Guest: Dr. Travis Oliphant
Release Date: April 17, 2025
This episode features Dr. Travis Oliphant, scientist, open-source advocate, and AI pioneer, in a candid conversation with host Sean Kelly. Together, they explore the rapid evolution of AI, the tension between open source and closed control, societal implications, policy debates, and Dr. Oliphant's personal motivations and projects. The dialogue dives deep into how these choices could shape the accessibility, risks, and empowerment potential of artificial intelligence for billions of people worldwide.
Concerns about Overreaction and Misunderstanding
Government Regulation Fears
Closed Source Dominance and Its Risks
Open Source as a Community Builder
AI’s Role in Social Media and Polarization
Potential for Empathy and Understanding
Transformative for Learning and Professions
Jobs: From Fear to Empowerment
Individual Accountability over Blanket Regulation
Cautions Against Regulatory Overreach
On AI polarization:
"AI could be used to actually amplify that capability. ...It's creating polarization in our society."
— Dr. Travis Oliphant (02:21)
On open source's spirit:
“Open source has been an extremely impactful social movement.”
— Dr. Travis Oliphant (04:45)
On AI in education:
"Because AI needs to be used to help exactly this. It can make personalized education more possible."
— Dr. Travis Oliphant (11:47)
On being replaced by AI:
“It’s not about being replaced; it’s about being replaced with someone that knows how to use AI better.”
— Dr. Travis Oliphant (18:12)
On government regulation:
"I'm all for governance... I'd rather have it be at that level rather than kind of a huge scale."
— Dr. Travis Oliphant (20:31)
On quantum computing hype:
"I've been on record as being a quantum skeptic."
— Dr. Travis Oliphant (24:03)
The episode maintains an engaging, conversational, and occasionally passionate tone. Dr. Oliphant is frank yet optimistic, offering both warnings and hopes for the AI future. Sean Kelly adds relatable, everyday use cases and keeps the conversation accessible.
Dr. Travis Oliphant urges a future where AI is a decentralized tool—open, accessible, and empowering for all, not just a privileged few. He highlights both the tremendous upsides (education, healthcare, productivity) and the social challenges (polarization, regulatory pitfalls) AI presents. The key, he argues, is community-driven, open-source innovation and a balanced, intelligent approach to governance.
“Make AI open source again. ...People need to own their own AI rather than send all your data to somebody else and use a closed model.” (26:08)