B
Hello and welcome to the Nvidia AI Podcast. I'm your host, Noah Kravitz. Today we're looking back on the year in AI 2025. But before we begin, if you're enjoying the AI Podcast, please take a moment to follow us on Apple, Spotify, or wherever you're listening. Thanks.
Our year began with Nvidia's Ming-Yu Liu talking about the importance of world foundation models to advancing physical AI in episode 240. Forty conversations later, Jacob Lieberman introduced us to the future of enterprise storage AI data platforms in episode 281. Along the way were advances in AI models and the infrastructure they run on, like the rise of agentic AI and the AI factory. We heard firsthand from pioneers in healthcare, higher education, life sciences, marketing and other industries about how they're using AI to advance their fields and make work better for the people doing it. And we talked to everyone from researchers to roboticists about the dawn of physical AI, where intelligence moves from our screens into the robots building our cars, assisting our surgeons and walking among us. 2025 was quite the ride. Let's dive in.
This year in AI began as last year ended, with lots of talk about agents and agentic AI. So what exactly is an AI agent? It's an evolution in the way people use generative AI: a move away from simple call and response style chatbots toward systems that have true agency.
Chris Covert from Inworld AI breaks this evolution down into phases in episode 243, moving from simple conversation to an adaptive partner and finally to full autonomy.
A
We have this, you know, first again is that conversational AI phase, and I'll use a gaming analogy, right? The conversational AI phase gives avatars, gives agents (I'll use them interchangeably today) extremely little agency in doing anything other than speaking, right? It may be able to respond to my input if I ask it to do something, but it's not going to physically change the state of something other than the dialogue it's going to tell me back. Next is an adaptive partner phase, where the AI is observing and responding to changes on its own. It's not micromanaging every decision, but it feels like you're collaborating with an agent or a unit that has just enough context to make smart decisions on its own. Like an evolution of a recommendation engine being driven by, you know, a cognition engine here. So it's not just learning, but it feels like it's learning what we need even before we ask it. Again, I think that's phase three. I think there's still a phase four, and I think that's a fully autonomous agent. And that stage, you know, again continuing our analogy, is a player two, right? Where phase three is, it's adapting to us, stage four is, hey, this thing is an agent all on its own. It feels like I'm playing against another human. It is making decisions that feel optimal to its own objectives, aligned to mine or not.
B
The immediate payoff of this capability is freeing human workers from repetitive, error prone, non creative tasks, what we often refer to as toil. But here's the key: we don't need the agents to be perfect to be valuable. In fact, we don't even need them to do all of the work for us. As Nvidia's Bartley Richardson points out in episode 258:
A
If it gets you 75, 80% of the way there, that's fantastic.
B
That's great.
A
Because, you know, I'm sure you do your fair share of writing, right? Like, the hardest part for me about writing is that blank page.
B
That blank page, totally, right.
A
And if I can get something that's 80% of the way there, it's great.
B
AI runs on data, and agentic AI is no different, which is good considering the sheer velocity of information creation in today's enterprise. Data growth is creating a widening gap between the data we have and the insights we can actually extract. CytoReason's Shai Shen-Orr describes this challenge in the life sciences in episode 276, comparing the struggle to keep up with data growth to the Red Queen effect from Alice in Wonderland.
A
You can think about it like: data is exponential, insight is linear; every day the percent of data utilized to give insight is lower. Right? The analytical side of this and the AI solutions for this have been missing. The field is still largely a manual field where you give people some data, they sit in front of their computer, you know, they try to figure it out, they make some value and insight from this. And I figured that's not a sustainable solution. This field needs to move to ultimately build much larger integrative solutions that bring in many different angles of machine learning, AI, statistics and so forth, to ultimately bridge this, because the data insight gap keeps growing. So you basically are constantly in a game in which you need to make it faster. It's actually what's called coevolution. And then remember Alice in Wonderland? The Red Queen, sure, right? Where she said to Alice, you have to run just to stay in place. That's the Red Queen effect. So this need for us to continuously run is a huge driver for automation, acceleration, and I would even say the cognitive meta analysis that we as humans need to do to somehow describe to a machine how we make decisions so that we can automate them, right?
B
To handle this exponential growth, we need massive compute resources. AI factories provide the infrastructure to support enterprise scale systems. But traditional ways of approaching storage had a major flaw: data gravity. The data was heavy, and moving it created security risks. Here's Nvidia's Jacob Lieberman in episode 281.
A
So far, in order to do AI, you've had to send your data out to some kind of AI factory with a GPU, do all your processing and copy it back, right? So the data has gravity. And it turns out that instead of sending all your data to the GPU, you can actually send your GPU to the data. And what that looks like is actually putting a GPU into your traditional storage system on that same storage network and letting it operate on the data in place, where it lives, without copying it out. And the advantage of generating these AI representations with the source of truth data is that if the source of truth changes, you can immediately propagate those changes to the representations.
B
This approach reverses the old model. Instead of shipping data out, we bring the GPU compute to the data. This evolves the AI factory from a distant processing plant into a unified, efficient pipeline. Sarah Laszlo from Visa explains what this modern factory approach looks like in episode 256.
A
What that means to me is a single pipeline that goes from a data scientist with an idea about a model they want to build all the way to the model running in production. It's interesting because I hadn't really thought much about this AI factory terminology. Like, I had heard it, but I hadn't really thought it was what I was doing until I came here to GTC and I started hearing other people talking about it. And then I realized, oh, that's what my platform does. So my platform, recently we've adopted what we call the Ray Everywhere strategy. So we use Anyscale's Ray ecosystem to do the whole thing, the whole shebang. So data conditioning, model training, and model serving, we do it all in Ray. And it is intentionally trying to be more of this factory concept, where there's not a whole bunch of distinct parts or distinct tools that are living in different places that work differently. It's just one unified, consistent pipeline from start to finish.
B
This shift isn't just about efficiency. It is critical for data sovereignty. Countries and companies need to ensure their sensitive intelligence stays in their buildings and on their own soil. In episode 247, Kaaren Hilsen from Norwegian telecom operator Telenor explains why they built a sovereign AI factory in Oslo.
A
So, like Hive Autonomy, for example, I mean, they work with logistics, robotics, so they are actually innovating in, I'd say, a lot of industries, whether it's ports, as I said, or factories, in their operations, and they have efficiency cases. So they have very specific customer needs that they are trying to solve. But the reason, sort of, why they were very interested in coming to the AI factory is that they're sitting with sensitive data. So they were very keen: they wanted it to be really on Norwegian soil.
B
Right.
A
The Telenor brand sort of represents security, you know, so that's sort of a gain that really helps them. And then the sustainability part is super key. So it was the combination of these three. Capgemini is also a customer of ours. They are developing products doing voice to voice translation. And we can say, yes, that can be done, but these are for sensitive dialogues. Not all dialogues can go out in the cloud somewhere. These are very sort of sensitive dialogues, if you think, you know, within the health sector, within the police. So not so much on prem, but again, in sort of a safe, secure environment, and that's sort of really key. And another customer is working a lot with the municipalities in Norway, and again with sort of sensitive cases where they sort of really would like their data to be secured.
B
To build trust in these factories, openness is essential. Jonathan Cohen from Nvidia explains in episode 278 how open models like Nvidia's Nemotron family allow for the customization required by sovereign projects.
A
If you say Nvidia trains a model, a Nemotron model, and it's great, then since you've disclosed all your training data, we can look at your training data and say, for whatever reason, we have some policies where this data we can't use. And we can say, that's fine. Everything you need to reproduce what we did is there. You can train your own model excluding that data. Or you say, well, I like the data, but the mix is wrong. I don't know, I'm a sovereign project and it really needs to be very good at speaking this language and understanding this culture, and that data wasn't as represented in your training set as I want it to be. Everything that we did is transparent, and so you can make these modifications yourself.
B
Since our first episode back in 2016, the AI Podcast has told stories about the real world impact of artificial intelligence. 2025 was no different. A big area of impact this year was once again in healthcare, where AI is helping with everything from drug discovery to reducing physician burnout. Here's Anne Osdoit of Moon Surgical from episode 272, explaining how their Maestro system supports surgeons.
A
Physician fatigue is absolutely real. You know, it's interesting. We did our first in human study in Brussels in Belgium with a surgeon and he used the system over 50 cases. And he told us after a few weeks, hey, when I get back home in the evening, my wife tells me that I'm a lot nicer than before. So like, what's going on? And you know, I mean, he attributed that to his own fatigue level. Right. He's like, you know, I end my day in a way that is a lot more relaxed. It's about both the physical and the mental load.
B
With AI in healthcare, safety is the number one priority. Hippocratic AI has tackled this by building a constellation architecture using multiple AI models that constantly double check each other. CEO Munjal Shah describes how it works in episode 262.
A
We literally have multiple models double checking each other.
B
Right.
A
And what people don't realize is, a lot of the models now, they say you can give a lot of input tokens to them. Now just put it all in there, it'll figure it out. And with Gemini it's, like, what, a million, I think it is now, a million tokens. So it's like, oh, okay, no problem. But it can't reason across it all.
B
Yeah.
A
They'll show you examples of what are called needle in a haystack tests, where it'll be like, okay, it'll find that one thing. Yeah, I mean, grepping for a word is not that hard in computer science. Like, we can find a word, but what you're really trying to do is reason across it. So I'll give an example. If you ask your care manager, can I have ibuprofen? And they say, sure, you can have ibuprofen, but don't take too much, that's fine. Right. Because it's an over the counter medication. Unless you have chronic kidney disease stage three or four, then it'll kill you.
B
Right.
A
Well, if you put the rules for ibuprofen and CKD into GPT-4 and then ask it, it'll do great. If you put in all the rules for all condition specific over the counter medications and ask, it'll still do pretty good. It'll start missing some sometimes, which is still not okay, because you could kill people. Fine. Now put in the patient's medical history, the patient's last 10 conversations with you, all of those rules for over the counter medication disallowance, and the current checklist for what you're supposed to follow with that patient, and maybe a few other things, and then ask it. Yeah, good luck. And what it is, is we have an attention span problem. But if you have multiple models, we have these other models only focused on checking one thing at a time. So there's an overdose engine and it listens to every turn of the conversation. It's like, are we talking about drugs? Are we talking about drugs? Yes, we're talking about drugs. Okay. And then it's like, well, okay, did somebody just say a number that's an overdose relative to their prescription, or relative to max toxicity of what you can have of that drug? Okay, it did. And it may not seem that hard, four pills versus two pills. But when you're talking about creams and injectables, it gets quite hard. I took a whole bunch of my testosterone cream and I rubbed it on my hand. Was that an overdose? Right. I don't know how much cream was in your hand.
B
Right. What's a little bit? What's in your hand?
A
What's a little bit? Was it a pea size? Was it a cherry tomato size? Was it an apple size? The LLM knows how to ask all these questions and knows how to navigate assessing whether it's actually an overdose. And you cannot miss that. If a patient shares overdose information with a care manager in a clinical setting, you need to do something.
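Shah's constellation idea, many narrow guard models each checking one thing on every conversational turn, can be sketched in a few lines. This is a hypothetical illustration, not Hippocratic AI's implementation: the names (`overdose_check`, `respond`) and the regex-based dose rule are invented stand-ins for what would really be separate specialist models.

```python
import re

# Hypothetical sketch of the "constellation" guard-model pattern: a primary
# assistant's draft reply is released only if every narrow, single-purpose
# checker approves the turn. In a real system each checker would be a
# dedicated model, not a regex rule.

MAX_DAILY_IBUPROFEN_MG = 1200  # illustrative over-the-counter ceiling

def overdose_check(turn: str) -> bool:
    """Toy stand-in for a dedicated overdose-detection model.
    Returns True when the turn looks safe."""
    text = turn.lower()
    if "ibuprofen" not in text:
        return True  # this checker only cares about one drug
    doses = [int(m) for m in re.findall(r"(\d+)\s*mg", text)]
    return all(d <= MAX_DAILY_IBUPROFEN_MG for d in doses)

def respond(user_turn: str, draft_reply: str, checkers=(overdose_check,)) -> str:
    """Run every checker over both sides of the turn; escalate on any failure."""
    for check in checkers:
        if not (check(user_turn) and check(draft_reply)):
            return "ESCALATE: flagged for human review"
    return draft_reply

print(respond("Can I take 400 mg of ibuprofen?", "Sure, 400 mg is a standard dose."))
print(respond("I took 2400 mg of ibuprofen today.", "Tell me about your day."))
```

Even in this toy, the point Shah makes survives: each checker reasons over one narrow question per turn, sidestepping the long-context attention problem of asking a single large model to track everything at once.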
B
AI is also changing health care from a totally different angle by transforming agriculture. Paul Mikesell of Carbon Robotics explains in episode 270 why his company's approach to weed control swaps chemical herbicides for AI guided lasers.
A
I've also learned a lot about the quality of our food system, and I know that there's lots of discussion about this now. We are becoming more aware of it, that different herbicides are being banned in Europe, the United States, et cetera. We are learning about more of the long term negative health effects. Again, the ones who really suffer from it over a lifetime are the farmers, who get exposed to this stuff in much higher doses than the consumer. But even the consumer, you know, even you right now, are participating in some form of a multi decade, maybe multi generational science experiment. We all have glyphosate in our system at any given time. And so if you take everybody listening to this podcast right now, if we all went and did a urine sample, you would find about 90% of us would have glyphosate in our system right now. What's glyphosate? It's the active ingredient in Roundup.
B
Right.
A
We know that it's carcinogenic. Like any carcinogen, it's only a question of exposure over time. So we should be able to, with the kinds of technology that are available today, with the things that AI can do, we should be able to take a step back and say, do we really need to be spraying this stuff on our food in order to grow it and survive as a population?
B
Yeah.
A
My answer to that question, I think, is no, we don't need to do that. And we should be able to do things like laser weeding.
B
Yeah. Beyond the health benefits, Carbon Robotics' robots help farmers operate more efficiently and sustainably. And they look really cool, too. Speaking of cool, in the world of marketing and media, agents are fundamentally changing the relationship between brands and consumers. Firsthand's John Heller joined episode 242 to describe a shift where AI agents curate the web specifically for the user's intent.
A
I'd been working in the gaming world and on some of the generative AI abilities for gaming assets when language models really took off. And something struck us, something very powerful, which is, and this is a metaphor for the math inside, but AI now understands the ideas, intents, and needs you may have from what you're reading, what you're watching, what you might ask it outright, and it can go find the right response or take the right responding action, and everything is presented to you in a very natural, human way. And if you back up a step and think of that happening all the way through a consumer's use of the digital world, from when they're searching and becoming aware of things they might need, when they do some investigation and read up on products or services, when they go to browse or shop, when they buy, all of those modes change pretty fundamentally. They're not replaced. We think they get enhanced. Because instead of the world of the past, where I maybe did a search, got some directions and a link, went to a place, read up on something, browsed for something, saw an ad, maybe went to another place to try to find the version I want, those are all sort of separate hops, right, on the Internet where it's, you know, the same content everybody sees. AI instead is going to understand and learn at each moment what it is you need. And as with most things AI, data is the core. The people who have the most and best data about a product or service are the brands, the retailers, and the people who sell it, so they can create brand agents. Which means your experience on the Internet, at all of those moments in the journey from first learning about it to figuring out what the right configuration is and comparing and browsing and buying, is going to adapt on the fly through these agents.
So it doesn't replace the web, but it changes things from you looking at stuff someone wrote to something that's partially adapting to what you actually need, understanding your needs. But the agents that are doing that for you are from the retailers and brands themselves, because it's their data that is what you need. And that sort of changes the Internet into kind of your Internet for both parties.
B
While software agents are transforming the digital world, a massive shift is happening in the physical world as well. This is the dawn of physical AI, where AI models don't just generate text or images, they control things that move, like the aforementioned farm machinery. According to Sanja Fidler, VP of AI Research at Nvidia, the scale of this opportunity is staggering. Here's Sanja in episode 249.
A
At the end of the day, robots need to operate in the physical world, in our world. And this world is three dimensional and conforms to the laws of physics. And there's humans inside, right, that we need to interact with. You know, we typically refer to such AI that operates in the real physical world as physical AI. So I'll maybe use that term quite a lot. Right?
B
Yeah.
A
Physical AI is really kind of the upcoming big industry, very likely larger than generative and agentic AI, you know? Jensen typically says everything that moves, all devices that move, will be autonomous. Right. So that's kind of the vision. So a robot, to operate in the real world, obviously needs to understand the world. What am I seeing? What is everything I'm seeing doing? How is it going to react to my action? Right. So, beyond understanding, it needs to act.
B
But there is a catch. You can't train a physical robot the same way you train a chatbot. If a chatbot makes a mistake, you might get a typo. If a robot makes a mistake, it'll probably break something. To solve this, researchers like Ming-Yu Liu are building world foundation models, AI that understands physics and spacetime, allowing robots to simulate thousands of futures before they take a single step in reality, as Ming-Yu explains in episode 240.
A
So I think world foundation models are important to physical AI developers. You know, physical AI systems are AI deployed in the real world. Right. And different to digital AI, these physical AI systems that interact with the environment can create damage. Right. So this could be real harm. Right, right, right.
B
So a physical AI system might be controlling a robotic arm or some other piece of equipment changing the physical world.
A
Yeah, I think there are three major use cases for physical AI. Okay. It's all around simulation. The first one is, you know, when you train a physical AI system, you train a deep learning model, you have a thousand checkpoints. Do you know which one you want to deploy? Right. And if you deploy each one individually, it's going to be very time consuming. And if it's bad, it's going to damage your kitchen. Right. So with a world model, you can do verification in the simulation, so you can quickly test out these checkpoints in many, many different kitchens before you deploy in the real kitchen. And after this verification step, you may be narrowed down to three checkpoints, and then you do the real deployment, so you can have an easier life deploying your physical AI.
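The verification workflow described here, scoring many trained checkpoints in simulation before any real-world deployment, looks roughly like this in outline. A hypothetical sketch only: `simulated_rollout` is a toy stand-in for an actual world-model evaluation, and all names and numbers are invented.

```python
import random

def simulated_rollout(checkpoint_id: int, kitchen_seed: int) -> float:
    """Stand-in for rolling out a policy checkpoint inside a world-model
    simulation of one kitchen; returns a task success score in [0, 1]."""
    rng = random.Random(checkpoint_id * 1_000_003 + kitchen_seed)
    return rng.random()

def shortlist_checkpoints(num_checkpoints: int = 1000,
                          num_kitchens: int = 20,
                          keep: int = 3) -> list:
    """Average each checkpoint's score across many simulated kitchens,
    then keep only the top few candidates for real-world deployment."""
    scores = {
        ckpt: sum(simulated_rollout(ckpt, k) for k in range(num_kitchens)) / num_kitchens
        for ckpt in range(num_checkpoints)
    }
    return sorted(scores, key=scores.get, reverse=True)[:keep]

# A thousand checkpoints go in; three come out for careful real deployment.
best = shortlist_checkpoints()
print(best)
```

The design point is the funnel: cheap, repeatable simulated rollouts filter a large checkpoint population down to a handful that justify the cost and risk of physical trials.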
B
Once these brains are trained safely in digital worlds, they need bodies. And while we may see many form factors in factories, there is a massive surge in humanoid robotics going on outside those factory walls. Yashraj Narang from Nvidia's Seattle robotics lab explains in episode 274 how this isn't just an aesthetic choice. It's a practical requirement for robots that need to work alongside us.
A
You know, there's a group of people, you know, forward thinking people, Jensen very much included, this is near and dear to his heart, that felt that the time is right for this dream of humanoid robotics to finally be realized. Right. You know, let's actually go for it. And this begs the question of why, why humanoids at all? You know, why have people been so interested in humanoids? Why do people believe in humanoids? And I think that the most common answer you'll get to this, which I believe makes a lot of sense, is that the world has been designed for humans. Right. We have built everything for us, for our form factors, for our hands. And if we want robots to operate alongside us in places that we go to every day, you know, in our home, in the office and so on, we want these robots to have our form. And in doing so, they can ideally do a lot of the things that we can. You know, we can go up and down stairs that were really built for the dimensions of our legs. We can open and close doors that are located at a certain height and have a certain geometry because they're easy for us to grab. Humanoids could manipulate tools like hammers and scissors and screwdrivers and pipettes, if you're in a lab. These sorts of things which were built for our hands.
B
As AI moves from the screen to the physical world, it is also fundamentally changing our creative and professional lives. In episode 265, Canva's Danny Wu talks about AI and creative superpowers.
A
You kind of see, like, the magic of Canva is integrating all the different steps and different parts of design into a simple page, as I like to call it. And so we really invested in our content library, in millions of templates that make it easy to start. And what we saw and got really excited about with AI was that, firstly, we had on offer all this amazing high quality content for people to use. But the user might want something, they might have an idea, that didn't necessarily exist, maybe has actually never been created in the world. AI just gives us this superpower and ability to actually create things on demand, specifically for what someone has in mind or in mission, and just kind of turn that idea, turn that search term or prompt, into something they can use to express themselves.
B
But as these systems become more widespread, we must focus on inclusivity. We need to ensure that the data feeding these models represents everyone. Angle Bush, founder of Black Women in AI, reminds us of the goal of true equity in episode 250.
A
One of the things that I've always said to people is, I want Black Women in Artificial Intelligence to be so successful that it no longer has to exist. We're really not looking for members. We're looking for people to be a part of a movement and really understand and trust the vision of the movement: that we're going to make sure that you have all the tools you need in order to be a part of the AI economy, in order to pivot into your career.
B
And in education, leaders like Dr. Cynthia Teniente-Matson at San José State University are teaching students that no matter how powerful the tool, the human element remains essential. Here's Dr. Teniente-Matson in episode 275.
A
There are some students I've talked to who are using the tools for study guides. There are some students that are using the tools for first drafts. I think, however we use the tools, it's important, if we're going to be writing about things or communicating, that we're citing references and saying, you know, this was co-developed based on whatever sort of information they might have retrieved from the instrument, and also to validate it, because, you know, these hallucinations exist. But as time goes on, the hallucinations are diminishing, especially if you're building your own custom GPTs. It doesn't mean mistakes aren't going to happen.
B
Sure.
A
But, as I say to students regularly, Noah, and faculty and staff: you are still the human in the loop. We're not trying to replace the human in the loop. Have the tool be your copilot or your assistant that you're directing.
B
So looking back on 2025, what's our best piece of guest given advice for the year to come? It's simple. Start now. As Derek Slager of Amperity puts it in episode 271, if you're still on the sidelines when it comes to artificial intelligence, it's high time to get in the game.
A
I would say the one piece of advice, and I give this advice a lot, is start now. It's so important. It's so important because, like, it's early, right? We're still figuring out the patterns and the practices. You know, like, as an industry we're learning a lot about kind of how to, you know, put these incredible new technologies together in ways that really, you know, move the needle. And, you know, right now you just have a choice, right? You can be a doer who's in that learning loop, or you can be an observer and kind of, you know, wait and see. And I think, you know, we talk a lot about this here. Like, you know, speed's the only thing that matters. And so I don't think it's viable in the current market to be outside that learning loop, right? And the good news is it's early, right? And so you're not too late. But it's getting to the point where pretty soon you will be too late. And again, this is something that's changed in the last six months: we're past the point where people are like, well, we'll see if this AI thing plays out or not. Like, it's just overwhelmingly obvious where things are going. And so, yeah, get off the sidelines, get in there, try stuff, learn. It's easier than ever, you know, to do that. There's more information out there. And of course, you know, AI feeds itself, right? AI can also help you figure out where to start and how to get through. And so, yeah, start now and go really fast. That's the path to success.
B
We are moving toward a future of collaboration where human creativity is amplified by silicon capability. Nvidia's Jacob Lieberman leaves us with this final thought on the partnership between people and agents in episode 249.
A
There will be teams composed of carbon people and silicon agents, and they're collaborating on tasks, and at various times the humans will be conducting the orchestra, and at other times the orchestra will be conducting itself, and that might be the most efficient way to get the work done. Human judgment is critical, human strategizing is critical, and there's always room for that. So it's a way to complement the things that we're very good at with some of the things where we could use some help.
B
Yeah.
2025 was an incredible year for AI, and all signs point to 2026 being full of more breakthroughs and transformations in artificial intelligence and how we use it to change the ways we live and work. Follow the Nvidia AI Podcast wherever you get your podcasts to stay up with the latest in the industry as told by the people creating it, and browse the complete archive of episodes on the NVIDIA AI Podcast website. Thanks for listening.
Host: Noah Kravitz
Published: December 10, 2025
This episode of the NVIDIA AI Podcast is a sweeping retrospective on AI developments in 2025. Host Noah Kravitz curates the year’s most significant themes, expert insights, and impactful stories, ranging from the evolution of agentic AI to the rise of AI factories, and the advance of AI-powered robotics. The episode distills wisdom from dozens of guests working at the frontier of healthcare, agriculture, enterprise infrastructure, and education, highlighting how AI is reshaping both digital and physical realities.
Notable Quote:
“If it gets you 75, 80% of the way there, that’s fantastic.”
— Bartley Richardson (NVIDIA, Ep. 258, 03:28)
Notable Quote:
“Data is exponential, insight is linear; every day the percent of data utilized to give insight is lower… you have to run just to stay in place.”
— Shai Shen-Orr (CytoReason, Ep. 276, 04:12)
Notable Quotes:
“Instead of sending all your data to the GPU, you can actually send your GPU to the data.”
— Jacob Lieberman (NVIDIA, Ep. 281, 05:50)
“My platform... is intentionally trying to be more of this factory concept... one unified, consistent pipeline from start to finish.”
— Sarah Laszlo (Visa, Ep. 256, 06:51)
“These are very sort of sensitive dialogues... Not all dialogues can go out in the cloud somewhere.”
— Kaaren Hilsen (Telenor, Ep. 247, 08:44)
Notable Quote:
“Everything we did is transparent and so you can make these modifications yourself.”
— Jonathan Cohen (NVIDIA, Ep. 278, 09:53)
Notable Quotes:
“My wife tells me I’m a lot nicer than before... I end my day in a way that is a lot more relaxed.”
— Anne Osdoit (Moon Surgical, Ep. 272, 10:53)
“If you were... to do a urine sample, you would find about 90% of us would have glyphosate in our system right now... We should be able to take a step back and say, do we really need to be spraying this stuff on our food?”
— Paul Mikesell (Carbon Robotics, Ep. 270, 14:22)
Notable Quote:
“It changes things from you looking at stuff someone wrote to something that’s partially adapting to what you actually need, understanding your needs.”
— John Heller (Firsthand, Ep. 242, 16:10)
Notable Quotes:
“Physical AI is really kind of the upcoming big industry, very likely larger than generative agentic AI.”
— Sanja Fidler (NVIDIA, Ep. 249, 18:49)
“Why humanoids?... The world has been designed for humans.”
— Yashraj Narang (NVIDIA, Ep. 274, 21:56)
Notable Quotes:
“AI... gives us this superpower and ability to actually create things on demand.”
— Danny Wu (Canva, Ep. 265, 23:26)
“I want black women in artificial intelligence to be so successful that it no longer has to exist.”
— Angle Bush (Black Women in AI, Ep. 250, 24:31)
“You are still the human in the loop. We’re not trying to replace the human in the loop.”
— Dr. Cynthia Teniente-Matson (San José State, Ep. 275, 25:55)
Notable Quote:
“Start now... Get off the sidelines, get in there, try stuff, learn. It’s easier than ever... AI feeds itself, right?”
— Derek Slager (Amperity, Ep. 271, 26:32)
Notable Quote:
“There will be teams composed of carbon people and silicon agents... humans will be conducting the orchestra, and at other times the orchestra will be conducting itself... Human judgment is critical.”
— Jacob Lieberman (NVIDIA, Ep. 249, 28:07)
Timestamps:
On Simplifying the Blank Page:
“If I can get something that’s 80% of the way there, it’s great.” — Bartley Richardson, 03:41
On the Red Queen Effect in Data:
“You have to run just to stay in place.” — Shai Shen-Orr, 04:12
On Robots in the Human World:
“The world has been designed for humans.” — Yashraj Narang, 21:56
On Urgency:
“Start now...try stuff, learn. It's easier than ever...AI can also help you figure out where to start.” — Derek Slager, 26:32
2025 marked a year of profound transformation as AI expanded beyond chat interfaces into autonomous agents, unified AI-powered factories, and the very machinery of our physical world. Key insights revolve around the importance of collaboration—across humans, machines, sectors, and nations. The stakes (and opportunities) are high, but so is the call to action: start experimenting, stay human, and prepare for a future where our creativity and judgment are augmented, not replaced, by AI partnership.
For more stories, expert interviews, and future insights, follow the NVIDIA AI Podcast—wherever you listen.