
Host
OpenAI President and Co-founder Greg Brockman joins us to discuss OpenAI's newest model, Spud, aka GPT 5.5, and where it leaves OpenAI competitively. That's coming up right after this.
Host
This week, I'm live at Knowledge 2026, ServiceNow's annual conference in Las Vegas, where enterprise AI moves from promise to production. I'm sitting down with ServiceNow's president and CPO Amit Zavery on the platform strategy, the people and technology leaders powering it all on what AI means for the workforce, the engineering team behind the ServiceNow-Nvidia partnership on what it really takes to ship AI at scale, and Ulta Beauty on deploying AI across 1,300 stores. These are the conversations you won't hear anywhere else, and new episodes are dropping this week on my YouTube page.
We've all heard the stat: 95% of AI initiatives fail. It's not because the technology isn't ready. It's because you don't have the right process or the right partner. Meet Aboard. Aboard is your partner for AI transformation, which means they listen, use their very own powerful software tools, and deliver exactly what your company needs to thrive in the age of AI. Working with big and small clients, Aboard always delivers in weeks, not months. Your AI revolution is just beginning. Visit aboard.com to get your AI rollout right.
Welcome to Big Technology Podcast. Today we have an emergency episode with OpenAI President and Co-founder Greg Brockman, all about GPT 5.5, the famous Spud model.
Looking at what it does and what it means for OpenAI. Greg, great to see you. Welcome back to the show.
Greg Brockman
Thank you for having me. Hope it's not too much of an emergency.
Interviewer
Well, I am definitely recording in a Vegas hotel room, so more of an emergency than our last conversation. But we had some time to prepare, so it's great to be on with you.
Interviewer
So let's just start with this. Can you confirm GPT 5.5 is Spud?
Greg Brockman
Yes.
Interviewer
Okay. What is GPT 5.5?
Greg Brockman
Well, it's an amazing model. I think in many ways it is a step towards a new way of getting work done with a computer. It's a new class of intelligence. It's extremely useful at things like programming and all the different aspects of debugging and solving very hard, gnarly problems. It's very proactive and really able to solve problems end to end with little instruction. But the thing that's, to me, most remarkable is not necessarily the fact that it got better at coding. That, I think, is what everyone kind of expects. It's the fact that it's now really crossed the threshold of usefulness for general kinds of applications. It's much better at creating slides and spreadsheets, much better at computer use, using your browser, being able to click through applications that are otherwise hard for an AI to operate. And so I think we're really seeing the emergence of this new way of using a computer. And it starts with this kind of intelligence at the core.
Interviewer
When we spoke last, you mentioned that this was effectively the culmination of a two year research process. So was this planned two years ago? Is that how far back OpenAI plans?
Greg Brockman
I would say that yes, we do have very long horizons for how we plan now. One note is that we stack together many research ideas and bets on a variety of timescales. So the way to think about it is that we are making constant progress across every single part of the stack. What GPT 5.5 represents is not an endpoint; in many ways, it's a beginning point. It's really a step towards the kinds of models that we see coming over even just the upcoming months. And I think you should expect even larger improvements in capability across a wide variety of these aspects of what the model can do. That's something that I think will be very exciting. And we're just always thinking about how we can make what we're producing more useful for real-world use, for real users and real applications.
Interviewer
Can you share specifically what those aspects are that we should be looking out for over the next few months? If this is the beginning, what is it the beginning of?
Greg Brockman
Well, I think the big vision we have, and you can see it reflected in many things, not just the models: you can think about the models as the brain, and you can think about the systems and harnesses like Codex, and the applications like the super app, as almost the body around it that makes it into a useful AI. What's happening is a shift from language models being the thing that is produced by labs like ourselves to an AI that's actually useful, that's actually an assistant that's out there trying to solve your goal, that's really operating according to your instruction. You can see right now that Codex is becoming this app that's not just for the coders, it's really for anyone using a computer. And it's not perfect, right? There are still some tasks where it should be able to do something and it doesn't quite get it right. Sometimes the personality isn't quite what you wanted. It's extremely powerful and out there doing a lot of really amazing things, but in the way it communicates back to you, you still have to spend some time really reading through, okay, exactly how did it solve this problem? These are aspects we know exactly how to make much better. I think we've already had a pretty remarkable improvement from 5.4 to 5.5, and I think we're going to have even more remarkable improvements across every single aspect of what makes these models useful. And one thing to know is that internally, we think a lot about the end application. That is one thing that changed for us over the past 12 to 18 months. We used to really just be focused on, let's improve on the benchmarks, let's make these models more cerebrally capable. But we are now really focused on, let's bring them to real-world application. Let's think about finance, sales, marketing, every single function where someone uses a computer.
How can we help with their computer work? How can we make the model have not just the theoretical capability to help, but actual experience with those kinds of tasks, so it has actually been able to see what good looks like? I think the place we're going is one where you, as a person doing work, are the overseer, the CEO of almost this autonomous corporation, or this fleet of agents, perhaps, is the way to say it. They are operating according to your goals. You are still accountable, right? You're still in the driver's seat. You're still the person who thinks about, well, is this what I actually wanted? Was this work up to standard? But the details of exactly what buttons were clicked, exactly the kind of code that was written, exactly how the formula in the spreadsheet works, you can abstract yourself from those if they're not important to evaluating whether or not something was what you wanted. And so I think it's increasing leverage for every worker.
Interviewer
Okay, let me take my best guess as to what's happening, and you tell me how close I am. Like you mentioned, this is a culmination of two years of work. There are, not to tell you something you know, but for our audience, two different types of AI training, or at least two that have been pertinent for these models. There's pre-training, where you just make the model generally smart by having it predict the next word. And then there's reinforcement learning, where you have it go out and actually try to accomplish different tasks, and you reward it when it does a good job with those tasks, and effectively it learns how to do them. Is what you're saying basically that this is the first result we're seeing where OpenAI has loaded a ton of reinforcement learning on task-specific stuff into this model, and that's what's producing the results you're talking about?
Greg Brockman
Well, I would actually say it a little differently. I would say there are many steps to the pipeline, right? There's pre-training, mid-training, reinforcement learning, data collection, a lot of different things that all come together to produce the end result, and the way in which it's connected to the world is also very key to making it useful. The thing I'm really saying is that we have been investing in every single one of these, and we have a repeatable process. We have a team, right? It's not just individuals working on these pieces, but a team that really comes together and looks across the whole stack to ask, how do we make this more useful for real-world applications? So it's not really any one thing that we do. If you're building a car, it's not just about having a better engine. You could build a great engine, but if the rest of the car is not up to the quality level of the engine, it's not going to matter. And so I think that is the real innovation: the end-to-end co-design, all coming together in a repeatable fashion, to make these models better and better for our users.
Interviewer
You were on a media call earlier today with myself and a number of members of the press, and one of the interesting things you said, basically right off the bat, is that the model more intuitively knows what you want, and you don't have to spell it out exactly as you would in the past. Here's a tweet from Roon: "There are early signs of 5.5 being a competent AI research partner. Several researchers let 5.5 run variations of experiments overnight, given only a high-level algorithmic idea, waking up to find a completed sweep, dashboards, and samples, never having touched the code or terminal at all." If you can answer briefly on this, a two-parter: how do you do that? And does that mean prompt engineering is dead?
Greg Brockman
Number one, I think it really comes down to what we mean when we say there's a new class of capability, a new class of intelligence. The models are becoming much more intuitive to use because they have a deeper understanding of what it is you're asking of them. They really look at the context and try to puzzle out, what am I being asked to do? And to the second part, is prompt engineering dead? I actually think prompt engineering may in some ways be even more vibrant than before. You spend so much time right now trying to explain to your computer what you even want, trying to pack in this context: well, here's what's going on, here's the situation, here's the thing I want from you. And you're just like, why do I have to explain this to my computer? The whole point is that the computer should be doing the work to help me. I don't want to have to break down the task and explain to it step by step how to do things. I want to point it in a direction and have it take care of the details and get me the result, again in a way that I can observe and provide feedback along the way. But I want it to be the driver of that low-level execution. So where I think prompt engineering is going is this: you can get so much more out of these models with so much less effort, but with the same amount of effort, you still have a multiplier. Think about how much more you could even get. I think we're just at the leading edge of seeing the ceiling of what even today's models are capable of.
Interviewer
Okay, let me briefly talk with you about the economics of building a model like this. Now, you're not saying how much money or compute you used to train it, but I think we can safely assume it was a lot. And there's been this pattern where these massive models come out, they get distilled by open-source model makers, and then open source is just a couple months behind the leading foundation models. When the investment was smaller, being a couple months ahead mattered a lot. But I'm curious: now that the investment is so big and the capabilities are increasing fairly dramatically as you go, how is this defensible in the long term if that pattern is just going to repeat over and over?
Greg Brockman
Well, I look at it a little differently. I think the real investment we are making is that end-to-end co-design: having a system of people who are producing this technology, a way of working together. And some of this is about how you leverage these massive supercomputers to produce these models. Now, it is also the case that it's not as simple as taking the outputs of these models and distilling them, and you have a model of exactly the same capability, just smaller and faster to run. If that were the case, we would just do that, and then we would also have a model that would be much easier to serve in many ways. Of course, there's a lot of art behind distillation; there are a lot of great things there. But the point I'm getting at is that the real thing we are investing in is the machine that makes the machine. Now, on the deployment side, we think a lot about safeguards and mitigations. We do that for many, many different aspects of how these models could be misused in real situations, and that's something we have been investing in for many years. We think about that across areas like cyber and areas like bio; we have a longstanding effort, which you can see in our preparedness framework, which is public, about how we approach these kinds of uses of the model and how we try to maximize the benefits and mitigate the risks. So it's a real motion where every piece of what we do needs to connect to the question of how we continue to make progress, but also how we make these models broadly available. Because that's something we really believe in: we believe this technology empowers people, and we want it to benefit people and lift everyone up.
Interviewer
Yeah, but just to go back on that: the pricing on this model is, I think, double the last model, GPT 5.4. So from an economics or business standpoint, the question would be, let's say you keep on progressing, but given all the infrastructure that's been put towards training the models, if open source can deliver not-as-good but almost-as-good performance and do it cheaper, how do you handle that threat?
Greg Brockman
Well, again, I look at it a little differently. First of all, if you look at our history, which really is not driven by competition, just by our own progress and desire, we have dropped prices on the same level of intelligence year over year, sometimes by literally a factor of 100. It's at least an order of magnitude year over year, sometimes literally a hundred. But the thing that keeps happening, it's a real Jevons paradox: you lower the cost of something, and way more activity happens. And I think what we keep seeing is that there are returns to intelligence. For the kinds of tasks these models are now capable of doing, a little bit more intelligence goes a long way. I think that is the story of 5.5. In some ways, you can almost look at it as just an incremental improvement in intelligence, but I think there's going to be a massive improvement in terms of what people use it for. And by the way, I actually think "incremental" is very much an understatement for this model relative to 5.4. It's a 0.1 improvement in some ways, but that really undersells the magic that we see within this model, and that our early testers have really seen in their practical work.
Interviewer
So if people see these numbers and say there's IPO pressure on OpenAI, and therefore we've been getting a great deal on intelligence and the free ride is over, you would argue against that.
Greg Brockman
Yeah, look, the way I think about this is that we have a very simple business in some ways. We rent, build, or buy compute, and we resell it with some positive margin. As long as it's at a positive operating margin, and as long as there's scalable demand for intelligence, which I think is true, as long as there are problems to solve, and no one's going to run out of problems to solve, and we've seen at every step that demand outstrips our supply, then we can scale that compute all day. In my mind, that's the main directive I give the team: we need to add value on top of the raw compute and make sure we are at a positive operating margin on it. It's actually not even about the competition in the marketplace. It's just a question of whether you can have compute that gets turned into intelligence, at a slightly improved value coming out relative to the cost going in. We're always trying to make more efficient models, but then we just want more of them, and then we want the more intelligent models, and regardless of where they're coming from, it's kind of all the same compute going in. So competition in this marketplace has been great for innovation, but it's also driving more usage and more overall spend in the ecosystem. You can see that in the revenue numbers of us and others in this industry.
Interviewer
Okay, I want to take a quick break and come back to talk with you about cybersecurity, trust, and whatever else we can get to in our time on this emergency show. We'll be back right after this.
Host
Most leaders know how work is supposed to happen. But when it comes to how it actually gets done day to day, across tools, teams, and handoffs, they're mostly guessing. That's exactly the problem Scribe Optimize was built to solve. Trusted by over 80,000 enterprises, including nearly half of the Fortune 500, it gives leaders a live view into how work is really happening across approved business apps, without interviews, manual process mapping, or extra effort from the team. And because it's continuously analyzing real workflow activity, the insights stay current instead of going stale the moment a process changes. You can see which workflows are happening, where time is going, and which tools are involved. It automatically surfaces top issues, explains why they're happening, and even recommends ways to fix them, with estimated time savings. And importantly, it's built with privacy in mind, so activity is only captured in admin-approved business apps, and user-level data is anonymized by default. The kind of visibility that used to take months is now just always on. If you're ready to stop guessing and start seeing, visit scribehow.com/bigtech. That's S-C-R-I-B-E-H-O-W dot com, slash BigTech.
Look, if you have a kid in school right now, you know the drill. What should take 20 minutes of homework ends up taking two hours, and usually ends in tears. And every good tutor? Well, they're fully booked for months. This episode is brought to you by Brainly. Brainly is an AI-powered personal tutor built by educators, not a general-purpose chatbot. It doesn't just give your kid the answer; it walks them through step-by-step explanations so they actually understand the material. It learns how your child learns, diagnoses when they're struggling, and builds a personalized learning path in under three minutes. Available 24/7, there are no scheduling headaches, and it's just a fraction of the cost of a private tutor.
Finals are coming. Build your teen's study plan now. It only takes minutes. Go to brainly.com/bigtech to get 50% off your first Brainly subscription with my code Big Tech. That's B-R-A-I-N-L-Y dot com slash BigTech.
Ad Voiceover
Insurance isn't one-size-fits-all, and shopping for it shouldn't feel like squeezing into something that just doesn't fit. That's why drivers have enjoyed Progressive's Name Your Price tool for years. With the Name Your Price tool, you tell them what you want to pay, and they show you options that fit your budget. Enough hunting for discounts, trying to calculate rates, and tinkering with coverages. Maybe you're picking out your very first policy, or maybe you're just looking for something that works better for you and your family. Either way, they make it simple to see your options. No guesswork, no surprises. Ready to see how easy and fun shopping for car insurance can be? Visit progressive.com and give the Name Your Price tool a try. Take the stress out of shopping and find coverage that fits your life, on your terms. Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law.
Interviewer
And we're back here on Big Technology Podcast with OpenAI President and Co-founder Greg Brockman. Greg, let me ask you about the cybersecurity implications here. There are two very different approaches between OpenAI and Anthropic: Anthropic's latest massive model, Mythos, is not released to the public. This one, Spud, or 5.5, is released to the public. Let me just ask you straight up: is there a chance that releasing this powerful model into the public, without that step-by-step practice, could lead to some major cyber attacks?
Greg Brockman
Well, I actually have a different view on the premise of the question. The thing to understand is that we have been investing in cyber safeguards and cybersecurity as part of our preparedness framework for years. This is something we invested in far ahead of having the kinds of capabilities we see coming. So we have been taking a very deliberate, step-by-step approach. You can see, even just over the past couple weeks, that we've expanded our trusted access for cyber program. And in general, we believe in ecosystem resilience. We think you do want to go step by step: these models are getting continuously better, we have line of sight to even more capable ones, and you want to be able to put these models in the hands of defenders to make sure you're able to protect critical infrastructure. We believe in the resilience that comes from bringing these models into people's hands, so that they're able to explore in ways they would not be able to without that kind of access. So you want this graduated approach, moving down that pipeline as you bring in additional safeguards, in order to maximize the benefits and mitigate the risks. We've really taken a deliberate approach, and I think our team has been working incredibly hard to think through the cyber implications of this model. We also believe in iterative deployment; that's part of really bringing the models out as they continuously get better. And we believe in democratic access: we believe that ultimately the goal of creating this technology is to empower people and to ensure that it does benefit all of humanity. So we are constantly trying to solve for how we safely and responsibly bring this technology to bear in the world in a broad way.
Interviewer
Right. And I think suffice it to say that your team hasn't been fans of the way Anthropic has deployed Mythos. Here's a quote from Sam: "It's clearly incredible marketing to say we have built a bomb, we're about to drop it on your head, and we will sell you a bomb shelter for $100 million to run across all your stuff, but only if we pick you as a customer." Let me talk through the other case and then get your response. The other case would be: you can't account for everything, and there are clearly going to be some vulnerabilities that will only be found by people or entities deploying this and looking for them. So maybe it makes sense to start with a trusted group of testers before you deploy it broadly. What do you think?
Greg Brockman
Well, I believe the correct answer here is subtle, and I think it is rooted in the technical specifics of what you have in front of you, and in many, many factors. You need to think about how the models are progressing, not just your own capabilities but others' in the ecosystem. You need to think about what kind of benefit you get from having a small group with access: are they able to have high leverage by finding and producing patches? But then how do you actually coordinate the disclosure of those across an industry? There are a lot of factors that go into it. I think the true answer is that either extreme is not quite right; there are tools that can be applied to a specific situation. This is not the first time we've had to think about this problem, and it's not the last time we will have to think about it. But one thing to note is that we have had our model in the hands of defenders for some time, we've been building up our trusted access program, and the model we're releasing is actually not cyber-permissive. It has a number of safeguards built into it, and you can then have a gap between what you're privately sharing and testing and what you release broadly. So my short answer is: there are definitely these different schools of thought in terms of values. Is the value that you want to get these models into people's hands and empower them, or is the value that you want them centralized and controlled, and you don't want them in people's hands? That is maybe an underlying tension in some of these debates. But the tactics almost flow from the details, and they can be informed by those values. Either extreme, applied reflexively, I don't think will yield the best outcome for the world.
Interviewer
Okay, I want to ask you about agents, back to agents, if we could. These agents work best if you let them have a high degree of autonomy, which sort of makes sense. So I'm just kind of curious to hear your perspective: as we get more agents that can do more things, access more files, and work across programs, what is the proper amount of trust to put into agents right now?
Greg Brockman
So I think that right now, agents actually tend to be quite reliable. Even with things like prompt injections, there are still holes there, but we're patching them, and the models are becoming much more resilient. But the flip side is that as these models are given increasing responsibility and access to more important context, you need to have some answer for oversight. It's like having employees: if you have a team of five employees who are all trustworthy, fine. But if you somehow have 500,000 of the same employee, the law of large numbers kicks in, and you start to worry about, okay, how do I have good governance and oversight?
Right.
And so, as we're investing in these capabilities and making the super app more accessible, not just to coders but to any person doing work with a computer, we're also investing in governance and oversight. You can see this very concretely in Workspace Agents, which we released recently. Within your enterprise, you can now define agents: you get a hosted Codex harness in the cloud, you can hook up tools, you can hook it up to your Slack, and it's doing work. It's awesome; a lot of people use it, and it's been very cool to see how it goes viral within an organization. When you see someone else using an agent, you're like, wait, I could build one of these too, and you can just fork it and do your own thing. And that's an opportunity for great governance, baked into the product: your IT organization can see all the agents that have been created, and for each agent, you can see the conversations it's had and think about exactly what the guardrails around it are. So the short answer is that you want to ramp the responsibility entrusted to the agent, and the diversity of things agents are doing, together with security, safety, observability, and oversight. If you're not doing those hand in hand, then that's a little bit out of balance, and I think it's important to think about both sides.
Interviewer
Yeah, basically: go ahead, but be careful.
Greg Brockman
But also really lean in, right? As you scale, you can prototype, but it's just the nature of scale that it starts to raise the question: do you still have the ability to oversee what's going on? So at each step, you need to make sure you feel calibrated, that you understand what your teams are up to.
Interviewer
Greg, let's end with this. You've called this a compute-powered economy. What does that mean?
Greg Brockman
Well, I think we are heading to a world where the more compute is poured into a problem, the faster that problem will be solved, and where the ceiling of the problems that can be solved depends on how much compute is available. Think about things like drug discovery. Solving complex diseases like Alzheimer's is kind of outside of humanity's reach right now; we've never really done it. But imagine a world where you can take a gigawatt data center and have it just think about how to solve Alzheimer's for a month, for a year, however long it takes. It may not be literally just cerebrally solving the problem; it may have to consult with world experts, or maybe suggest experiments that get run in a wet lab. But if you can actually solve such a problem, that would be such a transformatively positive thing for humanity. I think we're heading to a world where that is how important problems get solved, and it's also how tasks in your daily life can be solved, whether that's having an agent that knows you, that has your personal context, that is trustworthy, that you can ask for advice on health and get back trustworthy information. And that's just a thing in the smartphone in your pocket that you can talk to, and it'll be out there doing things, proactively knowing what your goals and interests are and how it can help you. Big and small, I think compute is going to be the resource that determines how much computers can be used to help people, to do work on behalf of people. And I think we're heading to that world, and it's one that we're all building collectively.
Interviewer
Yeah. And that, I think, would explain the massive infrastructure investments that you've led. And these big infrastructure bets are still not enough?
Greg Brockman
We're going to feel the scarcity. We're going to feel it; we're feeling it already. You can sense it right now with people who are trying to use these agents and simply cannot, you know, hitting the rate limits. So we're working on behalf of our customers, on behalf of everyone who wants to use these agents, to ensure that there is enough. And I don't think we're going to get there; we're going to do our best. But I think we are headed to a world of compute scarcity. Again, though, this is something where we can all contribute to trying to help there be more availability of this in the world.
Interviewer
Greg, busy day, Always appreciate your time. Always great to speak with you. Thanks again for coming on.
Greg Brockman
Likewise. Great chatting.
Bloomberg Promo Voiceover
Some follow the noise; Bloomberg follows the money. Because behind every headline is a bottom line, whether it's the funds fueling AI or crypto's trillion-dollar swings. There's a money side to every story. And when you see the money side, you understand what others miss. Get the money side of the story. Subscribe now at bloomberg.com.
Episode Date: April 23, 2026
Host: Alex Kantrowitz
Guest: Greg Brockman, President & Co-founder, OpenAI
Main Theme: The rollout and implications of OpenAI’s new “Spud” (GPT-5.5), including its technological leap, business model, competitive positioning, cybersecurity considerations, and the future of agentic AI.
This emergency episode, recorded live from Knowledge 2026 in Las Vegas, features a deeply insightful conversation with OpenAI President Greg Brockman. The discussion centers on the release of OpenAI’s latest AI model, GPT-5.5 (“Spud”), exploring its significance in the AI landscape, how it pushes the boundaries of productivity, questions of competitive advantage, pricing, cybersecurity, and the future economy powered by compute. Brockman provides candid responses to both technical and strategic challenges, paints a vision for agent-based work, and articulates OpenAI’s stance on trust, safety, and society-wide impact.
[01:54 - 02:56]
“I think that we're really seeing the emergence of this new way of using a computer. And it starts with this kind of intelligence at the core.”
—Greg Brockman [02:39]
[03:07 - 06:51]
“We now are really focused on, let's bring [the models] to real world application... How can we actually make the model have not just the theoretical capability to help, but has actually experienced those kinds of tasks, that's actually been able to see what good looks like.”
—Greg Brockman [05:09]
[06:51 - 10:57]
“I want to point it in a direction and I want it to be able to take care of the details and to get me the result again in a way that I can observe and kind of provide feedback along the way… Prompt engineering in some ways may be even more vibrant than before.”
—Greg Brockman [09:36]
[10:57 - 15:09]
“We have dropped prices on the same level of intelligence year over year, sometimes by literally a factor of 100… But what keeps happening, it's real Jevons paradox…lower the cost of something, way more activity happens.”
—Greg Brockman [13:57]
[15:09 - 16:56], [27:13 - 29:27]
“We're going to feel the scarcity. We're going to feel it, we're feeling it already… trying to help there be more availability of this in the world.”
—Greg Brockman [29:00]
[20:00 - 24:38]
“We have been taking a very deliberate step-by-step approach… We believe in that resilience: as you bring these models into people's hands, they're able to explore in ways that you would not be able to without that kind of access.”
—Greg Brockman [20:37]
“I believe the correct answer here is subtle and it is rooted in the technical specifics of what you have in front of you and many, many factors.”
—Greg Brockman [22:59]
[24:38 - 26:56]
“You want to ramp the responsibility entrusted to the agent, and the diversity of things that agents are doing, together with security, safety, observability, oversight. And if you're not doing those hand in hand, then I think that's a little bit out of balance.”
—Greg Brockman [26:52]
On the paradigm shift:
“We're really seeing the emergence of this new way of using a computer. And it starts with this kind of intelligence at the core.”
—Greg Brockman [02:39]
On prompt engineering’s future:
“I want to point it in a direction, and I want it to be able to take care of the details and to get me the result… so much more you could even get. … we're just at the leading edge...”
—Greg Brockman [09:36]
On economics and investment:
“The real thing that we are investing in is the machine that makes the machine.”
—Greg Brockman [12:25]
On iterative model release and resilience:
“We have been taking a very deliberate step by step approach… we believe in democratic access, that the goal of creating this technology is to empower people to ensure that it does benefit all of humanity.”
—Greg Brockman [21:41]
On compute as the new fuel of progress:
“The more compute is poured into a problem, the faster that problem will be solved. The ceiling…depends on how much compute is available.”
—Greg Brockman [27:19]
| Timestamp | Segment/Topic |
|---------------|---------------------------------------------------------------------|
| 01:54–02:56 | What is GPT-5.5 “Spud”? |
| 03:07–06:51 | OpenAI’s shift to real-world applications & agentic vision |
| 07:42–10:57 | Technical pipeline and evolution of prompt engineering |
| 11:49–16:56 | Competitive positioning, open source discussion, pricing/economics |
| 20:00–24:38 | Cybersecurity, ethical deployment, OpenAI vs. Anthropic approaches |
| 24:38–26:56 | Scaling agents, trust, oversight, enterprise governance |
| 27:13–29:27 | The compute-powered economy and infrastructure scarcity |
The conversation is candid and technical, marked by both urgency (an emergency episode) and optimism. Greg Brockman is methodical but open about challenges, skeptical of easy answers, and focused on empowerment and practical utility. The tone is energetic, future-focused, and deeply engaged with both the technical and ethical complexities of advanced AI deployment.
This episode is essential listening for anyone tracking the frontier of artificial intelligence. Brockman’s openness about OpenAI’s methods, vision, competitive strategy, and concerns about cybersecurity and trust provides a nuanced, insider view on the present—and possible futures—of AI as a general-purpose work assistant, and the infrastructure race shaping the next era of technology.