
Marc Maron
Hey folks, it's Marc Maron from WTF. It's spring, a time of renewal, of rebirth, of reintroducing yourself to your fitness goals. And Peloton has what you need to get started. You can take a variety of on-demand and live classes that last anywhere from 10 minutes to an hour. There are thousands of Peloton members whose lives were changed by taking charge of their fitness routines. Now you can be one of them. Spring into action right now. Find your push. Find your power with Peloton at 1.
Progressive Insurance
This episode is brought to you by Progressive Insurance. Do you ever think about switching insurance companies to see if you could save some cash? Progressive makes it easy to see if you could save when you bundle your home and auto policies. Try it at Progressive.com. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states.
Elise Hu
This episode is sponsored by SimpliSafe. I want to tell you about SimpliSafe, a company that's revolutionizing home security. What's fascinating about SimpliSafe is their approach to protection. Their Active Guard outdoor protection doesn't just react to break-ins, it aims to prevent them. With AI-powered cameras and live monitoring agents, they can spot suspicious activity and intervene in real time. It's like having a vigilant security team watching over your home 24/7. SimpliSafe offers peace of mind without the hassle of long-term contracts or cancellation fees. Plus, their monitoring plans start at just about a dollar a day, making advanced security accessible to more people. They offer a 60-day satisfaction guarantee or your money back. Interested in learning more? Listeners can get 50% off their new SimpliSafe system with professional monitoring and their first month free at SimpliSafe.com/TedTalksDaily. That's S-I-M-P-L-I-safe.com/TedTalksDaily. There's no safe like SimpliSafe.

You're listening to TED Talks Daily, where we bring you new ideas and conversations to spark your curiosity every day. I'm your host, Elise Hu. The transformative power of artificial intelligence is a topic we talk a lot about here on this show, and for good reason. At TED 2025, Sam Altman, the CEO of OpenAI, sat down with Head of TED Chris Anderson for a conversation about the fast-growing field, its global consequences and where it's going next. That's coming up.
Chris Anderson
Sam, welcome to TED. Thank you so much for coming.
Sam Altman
Thank you. It's an honor.
Chris Anderson
Your company has been releasing crazy, insane new models pretty much every other week, it feels like.
Sam Altman
Yeah. The new image generation model is part of GPT-4o, so it's got all of the intelligence in there. And I think that's one of the reasons it's been able to do these things that people really love.
Chris Anderson
I mean, if I'm a management consultant and I'm playing with some of this stuff, I'm thinking, oh, what does my future look like?
Sam Altman
I mean, I think there's sort of two views you can take. You can say, oh man, it's doing everything I do, what's going to happen to me? Or you can say, like through every other technological revolution in history, okay, now there's this new tool. I can do a lot more. What am I going to be able to do? It is true that the expectation of what we'll have for someone in a particular job increases, but the capabilities will increase so dramatically that I think it'll be easy to rise to that occasion.
Chris Anderson
I mean, the writing quality of some of the new models, not just here, but in detail, is really going to a new level.
Sam Altman
I mean, this is an incredible meta answer, but there's really no way to know if it is thinking that or it just saw that a lot of times in the training set. And of course, if you can't tell the difference, how much do you care?
Chris Anderson
So that's really interesting. We don't know. At first glance, though, this looks like IP theft.
Sam Altman
I will say that I think the creative spirit of humanity is an incredibly important thing, and we want to build tools that lift that up, that make it so that new people can create better art, better content, write better novels that we all enjoy. I believe very deeply that humans will be at the center of that. I also believe that we probably do need to figure out some sort of new model around the economics of creative output. People have been building on the creativity of others for a long time. People take inspiration for a long time. But as access to creativity gets incredibly democratized and people are building off of each other's ideas all the time, I think there are incredible new business models that we and others are excited to explore. Exactly what that's going to look like, I'm not sure. Clearly there's some cut-and-dry stuff: you can't copy someone else's work. But how much inspiration can you take? If you say, I want to generate art in the style of these seven people, all of whom have consented to that, how do you divvy up how much money goes to each one? These are big questions. But every time throughout history that we have put better and more powerful technology in the hands of creators, I think we collectively get better creative output and people do just more amazing stuff.
Chris Anderson
I mean, an even bigger question is when they haven't consented to it. In our opening session, Carole Cadwalladr showed ChatGPT giving a talk in the style of Carole Cadwalladr. And sure enough, it gave a talk that wasn't quite as good as the talk she gave, but it was pretty impressive. And she said, okay, it's great, but I did not consent to this. How are we going to navigate this? Should it just be people who have consented, or shouldn't there be a model that somehow says that any named individual in a prompt whose work is then used should get something for that?
Sam Altman
So right now, if you use our image gen thing and say, you know, I want something in the style of a living artist, it won't do that. But if you say, I want it in the style of this particular kind of vibe, or this studio, or this art movement or whatever, it will. And obviously, if you say, output a song that is a copy of this song, it won't do that. The question of where that line should be, and how people say this is too much: we sorted that out before with copyright law and kind of what fair use looks like. Again, I think in the world of AI, there will be a new model that we figure out. But from the point of view...
Chris Anderson
I mean, creative people are some of the angriest people right now, or the most scared, about AI. And the difference between feeling your work is being stolen from you and your future is being stolen from you, and feeling your work is being amplified and can be amplified: those are such different feelings. And if we could shift to the second one, I think that really changes how much humanity as a whole embraces all this.
Sam Altman
Well, again, I would say some creative people are very upset. Some creatives are like, this is the most amazing tool ever, I'm doing incredible new work. But it's definitely a change. And I have a lot of empathy for people who are just like, I wish this change weren't happening. I liked the way things were before.
Chris Anderson
But in principle, for any given prompt, there should be some way of calculating what percentage of subscription revenue, or whatever, goes towards each answer. In principle, it should be possible, if one could get the rest of the rules figured out. It's obviously complicated, but you could calculate some kind of revenue share.
Sam Altman
Now, if you're a musician and you've spent your whole life, your whole childhood, whatever, listening to music, and then you get an idea and you go compose a song that is inspired by what you've heard before but in a new direction, it'd be very hard for you to say, like, this much was from this song I heard when I was 11, this much from when I saw...
Chris Anderson
That's right. But we're talking here about the situation where someone specifically in a prompt names someone.
Sam Altman
Yeah. Well, again, right now, if you try to generate an image in the style of a named living artist, we just don't do it. But I think it would be cool to figure out a new model where, if you say, I want to do it in the style of this artist, and they opt in, there's a revenue model there. I think that's a good thing to explore.
Chris Anderson
So I think the world should help you figure out that model quickly, and I think it'll make a huge difference. Actually, I want to switch topics quickly: the battle between your model and open source. How much were you shaken up by the arrival of DeepSeek?
Sam Altman
I think open source has an important place. We actually just last night hosted our first community session to decide the parameters of our open source model and how we want to shape it. We're going to do a very powerful open source model. I think this is important. We're going to do something near the frontier, I think better than any current open source model out there. There will be people who use this in ways that some people in this room, maybe you or I, don't like. But there is going to be an important place for open source models as part of the constellation here. And I think we were late to act on that, but we're going to do it really well now.
Chris Anderson
I mean, you're spending, it seems like, an order or even orders of magnitude more than DeepSeek allegedly spent, although I know there's controversy around that. Are you confident that the actually better model is going to be recognized? Or isn't this in some ways life-threatening to the notion that, by going to massive scale, tens of billions of dollars of investment, you can maintain an incredible lead?
Sam Altman
All day long, I call people and beg them to give us their GPUs. We are so incredibly constrained. Our growth is going like this. The DeepSeek launch didn't seem to impact it. There's other stuff that's happening.
Chris Anderson
Tell us about the growth. Actually, you gave me a shocking number backstage there.
Sam Altman
I have never seen growth in any company, one that I've been involved with or not, like this, like the growth of ChatGPT. It's really fun. I feel great, deeply honored, but it is crazy to live through, and our teams are exhausted and stressed, and we're trying to keep things up.
Chris Anderson
How many users do you have now?
Sam Altman
I think the last time we said was 500 million weekly actives and it is growing very rapidly.
Chris Anderson
I mean, you told me that it doubled in just a few weeks. In terms of compute, or in terms of...
Sam Altman
That was privately, but I guess...
Chris Anderson
Oh, I misremembered, Sam. I'm sorry, I'm sorry. We can edit that out of the thing if you really want to. And no one here would tweet it.
Sam Altman
It's growing very fast.
Chris Anderson
So you're confident. You're seeing it grow, take off like a rocket ship. You're releasing incredible new models all the time. What are you seeing in your best internal models right now that you haven't yet shared with the world but that we would love to hear about on this stage?
Sam Altman
So, first of all, you asked about whether we're worried about this model or that model. There will be a lot of intelligent models in the world. Very smart models will be commoditized to some degree. I think we'll have the best, and for some uses you'll want that. But honestly, the models are now so smart that for most of the things most people want to do, they're good enough. I hope that'll change over time, because people will raise their expectations. But if you're kind of using ChatGPT as a standard user, the model capability is very smart. But we have to build a great product, not just a great model. And so there will be a lot of people with great models, and we will try to build the best product. People want their image gen (you saw some Sora examples for video earlier), they want to integrate it with all their stuff. We just launched a new feature called, well, still called memory, but it's way better than the memory before, where this model will get to know you over the course of your lifetime. And we have a lot more stuff to come to build this great integrated product. I think people will stick with that. So there'll be many models, but I think we will, I hope, continue to focus on building the best defining product in the space.
Chris Anderson
I mean, after I saw your announcement yesterday that ChatGPT will now know all of your query history, I entered: tell me about me, ChatGPT, from all you know. And my jaw dropped. It was shocking. It knew who I was and all these sorts of interests, which hopefully were mostly appropriate and shareable. But it was astonishing, and I felt a sense of real excitement. A little bit queasy, but mainly excitement, actually, at how much more that would allow it to be useful to me.
Sam Altman
One of our researchers tweeted, kind of like yesterday or this morning, that the upload happens bit by bit. It's not that you plug your brain in one day, but you will talk to ChatGPT over the course of your life. Some day, maybe, if you want, it'll be listening to you throughout the day and sort of observing what you're doing, and it'll get to know you, and it'll become this extension of yourself, this companion, this thing that just tries to help you be the best, do the best you can.
Chris Anderson
In the movie Her, the AI basically announces that she's read all of his emails and decided he's a great writer, and persuades a publisher to publish him. That might be coming sooner than we think.
Sam Altman
I don't think it'll happen exactly like that. But yeah, I think something in the direction where, with AI, you don't have to just go to ChatGPT or whatever and say, I have a question, give me an answer. Instead, you're getting proactively pushed things that help you, that make you better, whatever. That does seem like it's coming soon.
Chris Anderson
So what have you seen that's coming up internally that you think is going to blow people's minds, give us at least a hint of what the next big jaw dropper is?
Sam Altman
The thing that I'm personally most excited about is AI for science. At this point, I am a big believer that the most important driver of the world and people's lives getting better and better is new scientific discovery. We can do more things with less. We sort of push back the frontier of what's possible. We're starting to hear a lot from scientists with our latest models that they're actually just more productive than they were before, and that's actually mattering to what they can discover.
Chris Anderson
What's a plausible near-term discovery? Like room-temperature superconductors, say. Is that possible?
Sam Altman
Yeah, I don't think that's prevented by the laws of physics, so it should be possible, but we don't know for sure. I think you'll start to see some meaningful progress against disease with AI-assisted tools. Physics maybe takes a little bit longer, but I hope for it. So that's one direction. Another that I think is big is starting pretty soon, like in the coming months. Software development has already been pretty transformed; it's quite amazing how different the process of creating software is now than it was two years ago. But I expect another move that big in the coming months, as agentic software engineering really starts to happen.
Chris Anderson
I've heard engineers say that they've had almost religious-like moments with some of the new models, where suddenly they can do in an afternoon what would have taken them two years.
Sam Altman
Yeah, it's like that for me. It really is. That's been one of my big "feel the AGI" moments.
Chris Anderson
But talk about the scariest thing that you've seen. Because outside, a lot of people picture that, you know, you have access to this stuff, and we hear all these rumors coming out of AI, and it's like, oh my God, they've seen consciousness, or they've seen AGI, or they've seen some kind of apocalypse coming. Has there been a scary moment when you've seen something internally and thought, oh, we need to pay attention to this?
Sam Altman
There have been moments of awe, and with that, I think, there is always the question of how far is this going to go, what is this going to be. But we're not secretly sitting on a conscious model, or something that's capable of self-improvement, or anything like that. People have very different views of what the big AI risks are going to be, and I myself have evolved in thinking about where we're going to see those. But I continue to believe there will come very powerful models that people can misuse in big ways. People talk a lot about the potential for new kinds of bioterror, models that can present a real cybersecurity challenge, models that are capable of self-improvement in a way that leads to some sort of loss of control. So I think there are big risks there. And then there's a lot of other stuff, which honestly is kind of what I think many people mean when they talk about disinformation, or models saying things that they don't like, or things like that.
Chris Anderson
Sticking with the first of those: do you check for that internally before release?
Sam Altman
Of course, yeah. So we have this preparedness framework that outlines how we do that.
Chris Anderson
You've had some departures from your safety team. How many people have departed? Why have they left?
Sam Altman
We have. I don't know the exact number, but there are clearly different views about AI safety systems. I would really point to our track record. There are people who will say all sorts of things. Something like 10% of the world uses our systems now, a lot. And we are very proud of the safety track record.
Chris Anderson
But track record isn't the issue in a way, because we're talking about an exponentially growing power, where we fear that we may wake up one day and the world is ending. So it's really not about track record. It's about plausibly saying that the pieces are in place to shut things down quickly if we see a danger.
Sam Altman
Oh yeah, no, of course that's important. You don't wake up one day and say, hey, we didn't have any safety process in place, but now we think the model is really smart, so now we have to care about safety. You have to care about it all along this exponential curve. Of course, the stakes increase and there are big challenges, but the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about, like, hey, this is something we have to address. And I think as we move into these agentic systems, there's a whole big category of new things we have to learn to address.
Chris Anderson
So let's talk about agentic systems and the relationship between that and AGI. I think there's confusion out there. I'm confused. So artificial general intelligence. It feels like ChatGPT is already a general intelligence. I can ask it about anything and it comes back with an intelligent answer. Why isn't that AGI?
Sam Altman
It doesn't. First of all, you can't ask it anything. That's very nice of you to say, but there's a lot of things it's still embarrassingly bad at. But even if we fix those, which hopefully we will, it doesn't continuously learn and improve. It can't go get better at something that it's currently weak at. It can't go discover new science and update its understanding. And even if we lower the bar, it can't just do any knowledge work you could do in front of a computer. Actually, even without the ability to get better at something it doesn't know yet, I might accept that as a definition of AGI. But with the current systems, you can't say, hey, go do this task for my job, and have it go off and click around the Internet and call someone and look at your files and do it. And without that, it feels definitely short of it.
Chris Anderson
Do you guys have internally, a clear definition of what AGI is? And when do you think that we may be there?
Sam Altman
It's one of these things. It's like the joke: if you got 10 OpenAI researchers in a room and asked them to define AGI, you'd get 14 definitions.
Chris Anderson
That's worrying, though, isn't it? Because that has been the mission initially: we're going to be the first to get to AGI, we'll do so safely. But we don't have a clear definition of what it is.
Sam Altman
I was going to finish the answer. What I think matters, though, and what people want to know, is not where is this one magic moment of "we finished." Given what looks like it's going to happen, the models are just going to get smarter and more capable, and smarter and more capable, on this long exponential. Different people will call it AGI at different points, but we all agree it's going to go way, way past that, to whatever you want to call these systems that get much more capable than we are. The thing that matters is how we talk about a system that is safe through all of these steps and beyond, as the system gets more capable than we are, as the system can do things that we don't totally understand. More important than "when is AGI coming, and what's the definition of it" is recognizing that we are on this unbelievable exponential curve. You can say, this is what I think AGI is; someone else can say, superintelligence is out here. But we're going to have to contend with, and get wonderful benefits from, this incredible system. And so I think we should shift the conversation away from "what's the AGI moment" to a recognition that this thing is not going to stop. It's going to go way beyond what any of us would call AGI, and we have to build a society to get the tremendous benefits of this and figure out how to make it safe.
Chris Anderson
Well, one of the conversations this week has been that the real change moment (AGI is a fuzzy thing, but what is clear is agentic AI) is when AI is set free to pursue projects on its own and to put the pieces together. You've actually got a thing called Operator which starts to do this, and I tried it out. You know, I wanted to book a restaurant, and it's kind of incredible: it can go ahead and do it. But it was an intriguing process: you know, give me your credit card and everything else. And I declined in this case to go forward. But I think this is the challenge that people are going to have. It's kind of an incredible superpower, and it's a little bit scary. And Yoshua Bengio, when he spoke here, said that agentic AI is the thing to pay attention to. This is when everything could go wrong, as we give power to AI to go out onto the Internet to do stuff. Going out onto the Internet was always, in the sci-fi stories, the moment where escape happened and things could go horribly wrong. How do you both release agentic AI and have guardrails in place so that it doesn't go too far?
Sam Altman
First of all, obviously you can choose not to do this and say, I don't want this. I'm going to call the restaurant and read them my credit card over the phone.
Chris Anderson
I could choose, but someone else might say, oh, go out, ChatGPT, onto the Internet at large and rewrite the Internet to make it better for humans, or whatever.
Sam Altman
I mean, the point I was going to make is just that with any new technology, it takes a while for people to get comfortable. I remember when I wouldn't put my credit card on the Internet because my parents had convinced me someone was going to read the number, and you had to fill out the form and then call them. And then we all kind of said, okay, we'll build anti-fraud systems and we can get comfortable with this. I think people are going to be slow to get comfortable with agentic AI in many ways. But I also really agree with what you said, which is that even if some people are comfortable with it and some aren't, we are going to have AI systems clicking around the Internet. And this is, I think, the most interesting and consequential safety challenge we have yet faced. Because with AI that you give access to your systems, your information, the ability to click around on your computer, when the AI makes a mistake, it's much higher stakes. So, we talked earlier about safety and capability; I kind of think they're increasingly becoming one dimension. A good product is a safe product. You will not use our agents if you do not trust that they're not going to empty your bank account or delete your data or who knows what else. And so people want to use agents that they can really trust, that are really safe. And I think we are gated in our ability to make progress by our ability to do that. But it's a fundamental part of the product.
Chris Anderson
In a world where agents are out there, and say that maybe open models are widely distributed, and someone says, okay, AGI, I want you to go out onto the Internet and spread a meme however you can, that X people are evil or whatever it is. It doesn't have to be an individual choice. A single person could let that agent out there, and the agent could decide, well, in order to execute on that function, I've got to copy myself everywhere. Are there red lines that you have clearly drawn internally, where you know what the danger moments are and that you cannot put out something that could go beyond this?
Sam Altman
Yeah. So this is the purpose of our preparedness framework, and we'll update that over time. But we've tried to outline where we think the most important danger moments are, or what the categories are, how we measure that, and how we would mitigate something before releasing it. I can tell from the conversation you wish AI... you're not a big AI fan.
Chris Anderson
No, actually, on the contrary. I use it every day. I'm awed by it. I think this is an incredible time to be alive; I wouldn't want to be alive at any other time, and I cannot wait to see where it goes. But I think it's essential to hold both. You can't divide people into those camps. You have to hold a passionate belief in the possibility but not be over-seduced by it, because things could go horribly wrong.
Sam Altman
No, no, what I was going to say is, I totally understand that. I totally understand looking at this and saying, this is an unbelievable change coming to the world, and maybe I don't want this. Or maybe I love parts of it: maybe I love talking to ChatGPT, but I worry about what's going to happen to art, and I worry about the pace of change, and I worry about these agents clicking around the Internet. And maybe on balance, I wish this weren't happening, or maybe I wish it were happening a little slower, or maybe I wish it were happening in a way where I could pick and choose what parts of progress were going to happen. I think the fear is totally rational, the anxiety is totally rational. We all have a lot of it too. But A, there will be tremendous upside; obviously you use it every day, you like it. B, I really believe that society figures out over time, with some big mistakes along the way, how to get technology right. And C, this is going to happen. This is like a discovery of fundamental physics that the world now knows about, and it's going to be part of our world. And I think this conversation is really important. I think talking about these areas of danger is really important. Talking about new economic models is really important. But we have to embrace this with caution but not fear, or we will get run over by other people that use AI to be better.
Chris Anderson
You've actually been one of the most eloquent proponents of safety. You testified in the Senate. I think you said basically that we should form a new safety agency that licenses any big effort, that is, it will refuse to license certain efforts. Do you still believe in that policy proposal?
Sam Altman
I have learned more about how the government works. I don't think this is quite the right policy proposal.
Chris Anderson
What is the right policy proposal?
Sam Altman
I do think that as these systems get more advanced and have legitimate global impact, we need some way, maybe the companies themselves put together the right framework or the right model for this, but we need some way for very advanced models to get external safety testing, and for us to understand when we get close to some of these danger zones. I very much still believe in that.
Chris Anderson
It struck me as ironic that a safety agency might be what we want, and yet agency is the very thing that is unsafe. There's something odd about the language there. But anyway, I asked...
Sam Altman
Can I say one more thing? Yes, please. Yeah. I do think we need to define rigorous testing for models, understand which threats we as a society collectively most want to focus on, and make sure that as models are getting more capable, we have a system where we all get to understand what's being released into the world. I think this is really important, and I think we're not far away from models that are going to be of great public interest in that sense.
Chris Anderson
So, Sam, I asked your o1 Pro reasoning model, which is incredibly...
Sam Altman
Thank you for the $200.
Chris Anderson
$200 a month. It's a bargain at the price. I asked it: what is the single most penetrating question I could ask you? It thought about it for two minutes. Two minutes. You want to see the question?
Sam Altman
I do.
Chris Anderson
Sam, given that you're helping create technology that could reshape the destiny of our entire species, who granted you or anyone the moral authority to do that? And how are you personally responsible and accountable if you're wrong? It was good. That was impressive.
Sam Altman
You've been asking me versions of this for the last half hour. What do you think?
Chris Anderson
What I would say is this: here's my version of that question, but no answer. What was your question for me? Yeah.
Sam Altman
How would you answer that one in my shoes? Or what do you think, as an outsider?
Chris Anderson
I don't know. I am puzzled by you. I'm kind of awed by you, because you built one of the most astonishing things out there. There are two narratives about you out there. One is that you are this incredible visionary who's done the impossible and shocked the world: with far fewer people than Google, you came out with something that was much more powerful than anything they'd done. It is amazing what you've built. But the other narrative is that you have shifted ground, that you've shifted from being OpenAI, this open thing, to the allure of building something super powerful. And, you know, you've lost some of your key people. There's a narrative out there, which some people believe, that you're not to be trusted in this space. I would love to know who you are. What is your narrative about yourself? What are your core values, Sam, that can give us, the world, confidence that someone with so much power here is entitled to it?
Sam Altman
Look, I think, like anyone else, I'm a nuanced character that doesn't reduce, well, to one dimension here. So, you know, probably some of the good things are true and probably some of the criticism is true. I. In terms of OpenAI, our goal is to make AGI and distribute it, make it safe for the broad benefit of humanity. I think, by all accounts, we have done a lot in that direction. Clearly, our tactics have shifted over time. I think we didn't really know what we were going to be when we grew up. We didn't think we would have to build a company around this. We learned a lot about how it goes and the realities of what these systems were going to take from capital. But I think we've been in terms of putting incredibly capable AI with a high degree of safety in the hands of a lot of people and giving them tools to sort of do whatever amazing things they're going to do. I think it'd be hard to give us a bad grade on that. I do think it's fair that we should be open sourcing more. I think it was reasonable for all of the reasons that you asked earlier, as we weren't sure about the impact these systems were going to have and how to make them safe, that we acted with Precaution. I think a lot of your questions earlier would suggest at least some sympathy to the fact that we've operated that way. But now I think we have a better understanding as a world and it is time for us to put very capable open systems out into the world. If you invite me back next year, you will probably yell at me for somebody who has misused these open source systems and say, why did you do that? That was bad. You should have not gone back to your open roots. But we're not going to get. There's trade offs in everything we do. And we are one player in this, one voice in this AI revolution trying to do the best we can and kind of steward this technology into the world in a responsible way. But we've definitely made mistakes. 
We'll definitely make more in the future. On the whole, I think that over the last almost decade, and it's been a long time now, we have mostly done the thing we set out to do. We have a long way to go in front of us. Our tactics will shift more in the future. But adherence to this mission and what we're trying to do, I think, is very strong.
Chris Anderson
You posted this. Well, okay, so here's the Ring of Power from The Lord of the Rings. Your rival, I will say, not your best friend at the moment, Elon Musk claimed that he thought you'd been corrupted by the Ring of Power. An allegation that, by the way...
Sam Altman
Hi Steve.
Chris Anderson
An allegation that could be applied to Elon as well, you know, to be fair. But I'm curious. People, you have...
Sam Altman
I might respond, I'm thinking about it, I might say something.
Chris Anderson
What is in everyone's mind as we see technology CEOs get more powerful, get richer, is: can they handle it, or does it become irresistible? Does the power and the wealth make it impossible to sometimes do the right thing, and you just have to cling too tightly to that ring? What do you think? I mean, do you feel that ring sometimes?
Sam Altman
How do you think I'm doing relative to other CEOs that have gotten a lot of power and changed how they act or done a bunch of stuff in the world? How do you think...
Chris Anderson
You have a beautiful... You are not a rude, angry person who comes out and says aggressive things to other people. You do not do that.
Sam Altman
That's my single vice.
Chris Anderson
No, I think in the way that you personally conduct yourself, it's impressive. I mean, the question some people ask is: is that the real you, or is there something else going on? But I'm just taking the feedback. You have seen this, you put up the...
Sam Altman
Sauron ring of power or whatever that thing is. So I'll take the feedback. What is like something I have done where you think I've been corrupted by power?
Chris Anderson
I think the fear is just that the transition of OpenAI to a for-profit model is, you know, some people say, well, there you go, you got corrupted by the desire for wealth. You know, at one point there was going to be no equity in it. It'll make you fabulously wealthy, by the way. I don't think that is your motivation personally. I think you want to build stuff that is insanely cool. And what I worry about is the competitive feeling, that you see other people doing it and it makes it impossible to develop at the right pace. But you tell me if you don't feel that. So few people in the world have the kind of capability and potential you have; we don't know what it feels like. What does it feel like?
Sam Altman
Shockingly, the same as before. I think you can get used to anything step by step. I think if I were transported from 10 years ago to right now all at once, it would feel very disorienting. But anything does become sort of the new normal. So it doesn't feel any different. And it's strange to be sitting here talking about this, but, like, you know, the monotony of day-to-day life, which I mean in the best possible way, feels exactly the same.
Chris Anderson
You're the same person.
Sam Altman
I mean, I'm sure I'm not in all sorts of ways, but I don't feel any different.
Chris Anderson
This was a beautiful thing you posted about your son. I mean, that last thing you said, that you've never felt love like this. I think any parent in the room knows that feeling, that wild biological feeling that humans have and AIs never will, when you're holding your kid. And I'm wondering whether that's changed how you think about things. Like, say here's a black box with a red button on it. You can press that button and give your son likely the most unbelievable life, but you also inject a 10 percent chance that he gets destroyed. Do you press that button?
Sam Altman
In the literal case, no. If the question is, do I feel like I'm doing that with my work, the answer is I also don't feel like that. Having a kid changed a lot of things, and it's by far the most amazing thing that has ever happened to me. Like, everything everybody says is true. The thing my co-founder Ilya said once, and this is a paraphrase, is something like: I don't know what the meaning of life is, but for sure it has something to do with babies. And it's, like, unbelievably accurate. It changed how much I'm willing to spend time on certain things, and the cost of not being with my kid is just crazily high. But I really cared about not destroying the world before, and I really care about it now. I didn't need a kid for that part. I mean, I definitely think more about what the future will be like for him in particular. But I feel a responsibility to do the best thing I can for the future, for everybody.
Chris Anderson
Tristan Harris gave a very powerful talk here this week in which he said that the key problem, in his view, was that you and your peers building these other models all feel basically that the development of advanced AI is inevitable, that the race is on, and that there is no choice but to try and win that race and to do so as responsibly as you can. And maybe there's a scenario where your superintelligent AI can act as a brake on everyone else's, or something like that. But the very fact that everyone believes it is inevitable means that that is a pathway to serious risk and instability. Do you think that you and your peers do feel that it's inevitable? And can you see any pathway out of that where we could collectively agree to just slow things down a bit, have society as a whole weigh in and say, no, we don't want this to happen quite as fast? It's too disruptive.
Sam Altman
First of all, I think people slow things down all the time because the technology is not ready, because something's not safe enough, because something doesn't work. All of the efforts, I think, hold on things, pause on things, delay on things, don't release certain capabilities. So this happens. And again, this is where I think the track record does matter. If we were rushing things out and there were all sorts of problems, either the product didn't work the way people wanted it to, or there were real safety issues or other things, you could say that. And I will come back to a change we made. There is communication between most of the efforts, with one exception. I think all of the efforts care a lot about AI safety, and obviously people are not just going to say that; I think there's really deep care to get this right. I think the caricature of this as just a crazy race or sprint or whatever misses the nuance: people are trying to put out models quickly and make great products for people, but people feel the impact of this so incredibly that I think if you could go sit in a meeting at OpenAI or other companies, you'd be like, oh, these people are really caring about this. Now, we did make a change recently to how we think about one part of what's traditionally been understood as safety, which is that with our new image model, we've given users much more freedom on what we would traditionally think of as speech harms. If you try to get offended by the model, will the model let you be offended? In the past, we've had much tighter guardrails on this. But I think part of model alignment is following what the user of a model wants it to do, within the very broad bounds of what society decides. So if you ask the model to depict a bunch of violence or something like that, or to sort of reinforce some stereotype, there's a question of whether or not it should do that. And we're taking a much more permissive stance now.
There's a place where that starts to interact with real-world harms that we have to figure out how to draw the line on. But I think there will be cases where a company says, okay, we've heard the feedback from society: people really don't want models to censor them in ways that they don't think make sense. That's a fair part of the safety negotiation.
Chris Anderson
But to the extent that this is a problem of collective belief, the solution to those kinds of problems is to bring people together, to meet at some point and make a different agreement. If there was a group of people, say, here or out there in the world, who were willing to host a summit of the best ethicists and technologists, but not too many people, small, but with you and your peers, to try to crack what agreed safety lines could be across the world, would you be willing to attend? Would you urge your colleagues to...
Sam Altman
I'm much more interested in what our hundreds of millions of users want as a whole. I think a lot of the room has historically been decided in small elite summits. One of the cool new things about AI is that our AI can talk to everybody on Earth, and we can learn the collective value preference of what everybody wants, rather than have a bunch of people who are blessed by society sit in the room and make these decisions. I think that's very cool, and I think you will see us do more in that direction. And when we have gotten things wrong, it's because the elites in the room had a different opinion about what people wanted for the guardrails on ImageGen than what people actually wanted, and we couldn't point to real-world harm, so we made that change. I'm proud of that.
Chris Anderson
I mean, there is a long track record of unintended consequences coming out of the actions of hundreds of millions of people.
Sam Altman
And also from 100 people in a room making decisions.
Chris Anderson
And the hundreds of millions of people don't have that control. They don't necessarily see what the next step could lead to.
Sam Altman
I think that's totally accurate and totally right. I am hopeful that AI can help us be wiser, make better decisions, can talk to us. And if we say, hey, I want thing X, rather than just spinning that up, AI can say, hey, totally understand that's what you want. If that's what you want at the end of this conversation, you're in control, you pick. But have you considered it from this person's perspective, or the impact it'll have on these people? I think AI can help us be wiser and make better collective governance decisions than we could before.
Chris Anderson
Well out of time, Sam. I'll give you the last word. What kind of world do you believe, all things considered, your son will grow up into?
Sam Altman
I remember, it's so long ago now, I don't know when the first iPad came out. Is it like 15 years, something like that? I remember watching a YouTube video at the time of a little toddler sitting in a doctor's office waiting room or something. And there was a magazine, one of those old glossy-cover magazines, and the toddler had her hand on it and was going like this, and kind of angry. To that toddler, it was like a broken iPad. She had never thought of a world that didn't have touchscreens in it. And to all the adults watching, it was this amazing thing, because the touchscreen was so new, so amazing, a miracle, while, of course, magazines were just the way the world works. My kids, hopefully, will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable. They will never grow up in a world where computers don't just kind of understand you and do, for some definition, whatever you can imagine. It'll be a world of incredible material abundance. It'll be a world where the rate of change is incredibly fast and amazing new things are happening. And it'll be a world where, like, individual ability, impact, whatever, is just so far beyond what a person can do today. I hope that my kids and all of your kids will look back at us with some pity and nostalgia and be like, they lived such horrible lives, they were so limited, the world sucked so much. I think that's great.
Chris Anderson
It's incredible what you've built. It really is. It's unbelievable. I think over the next few years you're going to have some of the biggest opportunities, the biggest moral challenges, the biggest decisions to make of perhaps any human in history, pretty much. You should know that everyone here will be cheering you on to do the right thing.
Sam Altman
We will do our best. Thank you very much.
Chris Anderson
Thank you for coming to TED. Thank you, Sam.
Sam Altman
Thank you.
Chris Anderson
Thank you.
Sam Altman
Thank you very much. Enjoyed it.
Chris Anderson
Thanks.
Elise Hu
That was Sam Altman in conversation with Chris Anderson at TED 2025. If you're curious about TED's curation, find out more at ted.com/curationguidelines. And that's it for today's show. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team: Martha Estefanos, Oliver Friedman, Brian Greene, Lucy Little, Alejandra Salazar, and Tansica Sunkamaneevongse. It was mixed by Christopher Faizi-Bogan. Additional support from Emma Taubner and Daniela Balarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
Podcast Summary: OpenAI's Sam Altman on the Future of AI, Safety, and Power | TED Talks Daily
Episode Title: OpenAI's Sam Altman talks the future of AI, safety and power — live at TED2025
Host/Author: TED
Release Date: April 15, 2025
In this compelling episode of TED Talks Daily, Sam Altman, CEO of OpenAI, engages in an in-depth conversation with Head of TED Chris Anderson at TED2025. The discussion delves into the rapid advancements in artificial intelligence, the ethical considerations surrounding AI development, the balance between innovation and safety, and the broader societal implications of increasingly powerful AI systems. This summary captures the essence of their dialogue, highlighting key insights, debates, and forward-looking statements made by Altman.
Rapid Evolution of AI Models Sam Altman begins by addressing the swift pace at which OpenAI has been releasing new AI models. He emphasizes that the latest image generation model is integrated within GPT-4o, leveraging its extensive intelligence to deliver impressive results.
Sam Altman [02:54]: “The new image generation model is part of GPT-4o. So it's got all of the intelligence in there. And I think that's one of the reasons it's been able to do these things that people really love.”
Impact on Various Professions Altman discusses the dichotomy in perceptions among professionals, such as management consultants, regarding AI's role in their future.
Sam Altman [03:14]: “Through every other technological revolution in history, okay, now there's this new tool. I can do a lot more. It is true that the expectation of what we'll have for someone in a particular job increases, but the capabilities will increase so dramatically that I think it'll be easy to rise to that occasion.”
Balancing Creativity with Ethical Use A significant portion of the conversation revolves around AI's role in creative industries and the tension between inspiration and intellectual property (IP) theft. Altman underscores the importance of enhancing human creativity rather than replacing it.
Sam Altman [04:13]: “I believe very deeply that humans will be at the center of that. I also believe that we probably do need to figure out some sort of new model around the economics of creative output.”
Revenue Sharing and Consent Altman acknowledges the complexities in compensating original creators when their styles or works inspire AI-generated content. He proposes potential revenue-sharing models contingent upon artists' consent.
Sam Altman [08:22]: “If you say, I want to generate art in the style of these seven people, all of whom have consented to that, how do you like divvy up how much money goes to each one?”
Open Source's Role in AI Advancement When questioned about OpenAI’s stance on open-source models amidst competitors like DeepSeek, Altman reveals OpenAI's commitment to releasing powerful open-source models while acknowledging the challenges of ensuring their safe and ethical use.
Sam Altman [08:57]: “We're going to do a very powerful open source model. I think this is important. We're going to do something near the frontier, I think better than any current open source model out there.”
Competitive Edge and Investment Addressing concerns about maintaining a lead in the AI race, Altman discusses the substantial investments OpenAI is making to stay ahead and the inherent challenges of competing with open-source initiatives.
Sam Altman [10:26]: “I have never seen growth in any company...like the growth of ChatGPT. It's really fun. I feel like, great, deeply honored, but it is crazy to live through. And our teams are exhausted and stressed and we're trying to keep things up.”
Exponential User Growth Altman shares astonishing figures regarding ChatGPT's user base, highlighting unprecedented growth rates and the platform’s widespread adoption.
Sam Altman [10:47]: “I think the last time we said was 500 million weekly actives and it is growing very rapidly.”
Enhanced Features and User Experience He elaborates on new features like advanced memory capabilities that personalize user interactions, aiming to create a more integrated and intuitive AI experience.
Sam Altman [11:30]: “We've just launched a new feature called...memory, but it's way better than the memory before, where this model will get to know you over the course of your lifetime.”
AI as a Catalyst for Scientific Discovery Altman expresses enthusiasm for AI's potential to drive significant scientific breakthroughs, anticipating advancements in areas like room-temperature superconductors and disease research.
Sam Altman [14:34]: “The thing that I'm personally most excited about is AI for science...progress against disease with AI assisted tools.”
Transformation in Software Development He predicts a paradigm shift in software engineering, where AI agents can autonomously handle complex tasks, significantly accelerating development processes.
Sam Altman [16:01]: “Software development has already been pretty transformed...another move that big in the coming months as agentic software engineering really starts to happen.”
Potential Dangers of Advanced AI Altman does not shy away from discussing the inherent risks associated with powerful AI systems, including bioterrorism, cybersecurity threats, and the loss of human control over AI.
Sam Altman [16:41]: “There are big risks...models that are capable of self improvement in a way that leads to some sort of loss of control.”
Safety Measures and Preparedness Framework He outlines OpenAI’s commitment to a preparedness framework to evaluate and mitigate potential risks before releasing new models, emphasizing iterative deployment and real-world feedback.
Sam Altman [17:53]: “We have this preparedness framework that outlines how we do that.”
Clarifying AGI vs. Current AI Capabilities When discussing Artificial General Intelligence (AGI), Altman clarifies that current AI models, including ChatGPT, do not qualify as AGI. He delineates AGI as systems capable of continuous learning, self-improvement, and autonomous task execution beyond current capabilities.
Sam Altman [19:46]: “It doesn't continuously learn and improve. It can't go get better at something that it's currently weak at... it can't just sort of do any knowledge work you could do in front of a computer.”
Undefined Nature of AGI Altman admits the lack of a unified definition of AGI within OpenAI, highlighting the diverse perspectives among researchers and the ongoing debate about its exact nature.
Sam Altman [20:49]: “If you got 10 OpenAI researchers in a room and asked to define AGI, you'd get 14 definitions.”
Iterative Safety Approach Emphasizing an iterative approach, Altman discusses how OpenAI continuously improves safety measures based on deployment feedback, ensuring that safety evolves alongside AI capabilities.
Sam Altman [18:00]: “The way we learn how to build safe systems is this iterative process of deploying them to the world...”
Agentic AI Safety Concerns The conversation shifts to agentic AI—AI systems with the ability to perform actions autonomously. Altman acknowledges the profound safety challenges posed by such systems, likening them to granting AI unchecked autonomy.
Sam Altman [24:21]: “AI that you give access to your systems, your information, the ability to click around on your computer...”
Balancing Innovation with Parental Perspectives Altman shares a personal anecdote about his son, reflecting on how parenthood influences his perspective on AI’s future. He emphasizes the responsibility to ensure that advancements benefit future generations.
Sam Altman [38:52]: “Having a kid changed a lot of things...I really care about not destroying the world now.”
Maintaining Humanity Amidst Power In response to concerns about the ethical implications of immense power, Altman assures that his personal values remain unchanged despite OpenAI's significant influence.
Sam Altman [37:33]: “Shockingly the same as before...I don't feel any different.”
Evolving Views on Safety Regulation While initially advocating for a new safety agency to oversee AI development, Altman now believes that a more nuanced approach is necessary, involving external safety testing and broader societal input.
Sam Altman [29:20]: “I have learned more about how the government works. I don't think this is quite the right policy proposal.”
Public-Driven Safety Standards Altman champions the idea of leveraging AI to gauge collective societal preferences rather than relying solely on elite summits. He envisions AI facilitating more inclusive and representative decision-making processes.
Sam Altman [44:51]: “Our AI can talk to everybody on earth and we can learn the collective value preference of what everybody wants...”
Vision for the Future Altman concludes with an optimistic vision of a future where AI fosters unprecedented material abundance, rapid innovation, and enhanced human capabilities. He hopes future generations will view current limitations with nostalgia.
Sam Altman [46:10]: “It'll be a world of incredible material abundance... I hope that my kids and all of your kids will look back at us with some pity and nostalgia and be like, they lived such horrible lives.”
Commitment to Responsible Stewardship He reaffirms OpenAI’s dedication to responsibly stewarding AI technology, balancing enthusiasm for innovation with a deep commitment to safety and ethical considerations.
Sam Altman [48:22]: “We will do our best.”
On Creativity and AI:
“Humans will be at the center of that. I also believe that we probably do need to figure out some sort of new model around the economics of creative output.”
[04:13]
On Safety Risks:
“There are big risks...models that are capable of self improvement in a way that leads to some sort of loss of control.”
[16:41]
On AGI:
“If you got 10 OpenAI researchers in a room and asked to define AGI, you'd get 14 definitions.”
[20:49]
On Personal Responsibility:
“Having a kid changed a lot of things...I really care about not destroying the world now.”
[38:52]
On Future Vision:
“It'll be a world of incredible material abundance... I hope that my kids and all of your kids will look back at us with some pity and nostalgia and be like, they lived such horrible lives.”
[46:10]
Sam Altman's conversation at TED2025 offers a nuanced perspective on the trajectory of AI development. Balancing excitement for AI's potential with a clear-eyed awareness of its risks, Altman underscores OpenAI’s commitment to fostering innovation responsibly. The dialogue highlights the complexities of integrating AI into creative fields, the challenges of defining and achieving AGI, and the imperative of establishing robust safety frameworks. As AI continues to evolve, this discussion serves as a critical reminder of the collective responsibility to guide its development toward benefiting humanity while mitigating inherent risks.