
Loading summary
Mike
The goal of AI should be to.
Paul Raetzer
Unlock human potential, not replace it. But we have to be proactive and intentional about pursuing that outcome.
Mike
Welcome to the Road to AGI and.
Paul Raetzer
Beyond, a special miniseries from the Artificial Intelligence Show. I'm your host, Paul Raetzer, founder and CEO of SmartRx and marketing AI Institute. Artificial General Intelligence, or AGI, has long been a goal of leading AI research labs. But how close are we really? What breakthroughs are shaping its path, and what risks and responsibilities come with pursuing.
Mike
And eventually achieving AGI?
Paul Raetzer
My goal for this series is to see around the corner to figure out what happens next, what it means and what we can do about it, or at least to consider the possible outcomes we should be preparing for. Through interviews with leading experts, this series dives into how smarter, more more generally capable models will impact businesses, the economy, the workforce, educational systems and society. The future is unknown. Let's explore what might come next together. Welcome to episode 141 of the Artificial Intelligence show and episode one of our new series, the Road to AGI and Beyond. I'm your host, Paul Paul Raetzer. I figured for the first edition of the series we would start at the.
Mike
Beginning and lay the foundation for what.
Paul Raetzer
Comes next for our longtime loyal listeners. You may recall back on episode 86 in early March 2024 so just over a year ago, we had shared a Sam Altman quote about AGI, or artificial general intelligence, that hadn't been previously reported. The quote came from Adam Brotman and Andy Sack, who interviewed Altman for Chapter.
Mike
One of their forthcoming book, A AI First.
Paul Raetzer
Brotman is the former chief Digital Officer at Starbucks who is pivotal in the.
Mike
Development of the coffee giant's mobile payment.
Paul Raetzer
And loyalty programs, while SAC is a legendary tech visionary and former advisor to Microsoft CEO Satya Nadella. Their story starts with an interview with.
Mike
Sam in October 2023, one month before.
Paul Raetzer
He was fired from OpenAI and then rehired.
Mike
During that meeting, Sam had talked about AGI multiple times and they said, when.
Paul Raetzer
You say AGI, what do you mean?
Mike
Sam replied to that said, that's a fair question. And I would say it's when AI will be able to achieve novel scientific.
Paul Raetzer
Breakthroughs on its own. The chapter goes on to say Brotman.
Mike
And SEC replied, okay, well that's sort of wild.
Paul Raetzer
Not sure exactly what that means, but.
Mike
What do you think AGI will mean for us and for consumer brand marketers.
Paul Raetzer
Trying to create ad campaigns and the like to build their companies?
Mike
To which Sam replied, oh, for that it will mean that 95% of what marketers, use agencies, strategists and creative professionals.
Paul Raetzer
For today will easily, nearly instantly and.
Mike
At almost no cost be handled by the AI.
Paul Raetzer
And the AI will likely be able.
Mike
To test the creative against real or synthetic customer focus groups for predicting results.
Paul Raetzer
And optimizing again, all free, instant and nearly perfect images, videos, campaign ideas. No problem.
Mike
Adam and Andy, the authors, were admittedly new to the concept of AGI, so.
Paul Raetzer
At this point, they were basically speechless.
Mike
They said that about when do you.
Paul Raetzer
Think AGI will be a reality? They asked Sam.
Mike
And Sam replied, five years, give or.
Paul Raetzer
Take, maybe slightly longer.
Mike
But no one knows exactly when or.
Paul Raetzer
What it will mean for society. And that was basically the end of their interview with Sam. So we shared this on episode 86, and there the quote kind of went everywhere, I would say went viral to some degree. And so we got a ton of feedback about this and lots of questions and lots of people becoming worried that we were only a few years away from AGI appearing and taking everyone's jobs, basically weekly. So the following week, I was on a flight to Miami on a Monday morning. So Mike and I record the weekly episodes on Monday mornings. And so I'm on my flight to Miami and I realized, like, we need to build on this. Like, we have to talk about this some more.
Mike
And so on that flight, I created.
Paul Raetzer
What I called an incomplete AI timeline. It was just like a starting point for the discussion. And when I got to Miami, got to my hotel room, jumped on the.
Mike
Call with Mike, and I said, all.
Paul Raetzer
Right, I'm just gonna talk. I'm just gonna, like, share some thoughts.
Mike
Of that I had on this flight here.
Paul Raetzer
And so what I said as I opened episode 87 then was that I don't like the futurist stuff. I'm not big on trying to make predictions. I don't pretend like I have some insane inside knowledge about everything going on.
Mike
Within these AI labs.
Paul Raetzer
And honestly, I'm pretty convinced that most.
Mike
Of them don't actually know what's going.
Paul Raetzer
To happen with their own models 18.
Mike
To 24 months from now.
Paul Raetzer
I think they have a pretty good.
Mike
Concept of what they think they're going.
Paul Raetzer
To be able to do in the next 12 months. But I think that it's really important all of us try and interpret what's.
Mike
Going on at these labs and what.
Paul Raetzer
These leaders are saying so we can.
Mike
Understand the story arc a little bit better and begin to take action.
Paul Raetzer
So the week prior to episode 87, so in that kind of week between 86 and 87, I had listened to.
Mike
A series of podcast episodes with Demis Hassabis, the CEO of Google DeepMind, Jan LeCun, the chief AI scientist at Meta, Sam Altman and a number of others.
Paul Raetzer
And they were all kind of talking.
Mike
About this AGI concept and these ideas behind the timeline sort of accelerating.
Paul Raetzer
And so I started considering those interviews in the context of other recent reports and articles and interviews with like Dario Amadei of Anthropic and Mustafa Salomon, who.
Mike
At the time was with Inflection, his company he'd started.
Paul Raetzer
He, he, you know, soon thereafter would move on to become the CEO of AI at Microsoft. Ilya Sutsky, who at the time was.
Mike
At OpenAI and would soon move on and start his own safe superintelligence company. Shane Legg of DeepMind, one of the co founders of DeepMind and a bunch.
Paul Raetzer
Of other AI leaders. And when we look back over the last, like 70 some years, so a lot of people kind of think that.
Mike
AI just emerged in the last few.
Paul Raetzer
Years, when in reality, this idea of.
Mike
Pursuing human, like general intelligence has been going on, or at least theorized since the 1950s.
Paul Raetzer
So for more than 70 years, these researchers pursued this idea and they were.
Mike
Driven by this belief that we could.
Paul Raetzer
Give machines the ability to think, reason, understand, create and take actions in the digital and physical worlds.
Mike
But progress was often slow.
Paul Raetzer
We would hit these, what are called AI winters, where it would seem like it just wasn't going to work.
Mike
There was some breakthroughs around 2011, 2012.
Paul Raetzer
Where we started to see that this idea of deep learning might actually work.
Mike
And everything just sort of escalated from there, leading to the ChatGPT moment in.
Paul Raetzer
November 2022 when everything sort of changed and when generative AI found its way.
Mike
Into society that we all, all of.
Paul Raetzer
A sudden had these machines that could create, they could generate images, they could generate text, and you and I could.
Mike
Experience them through a simple application or website.
Paul Raetzer
So for me, I, I began researching AI in 2011. It started for me with IBM Watson winning on Jeopardy. That was sort of my inflection point where I became curious enough to go.
Mike
Figure out what this technology was. And at the time I owned my marketing agency and I was thinking about.
Paul Raetzer
The practicality of could this sort of technology be applied to my agency? Could we use it to help better.
Mike
Develop strategies for client campaigns and run campaigns more effectively?
Paul Raetzer
And so I started following the space closely. But back in 2011, there wasn't anybody talking about artificial intelligence that wasn't in the field, like the researchers themselves, the technologists. And so I had to spend a.
Mike
Lot of my time just trying to.
Paul Raetzer
Decipher what they were talking about and.
Mike
What it actually meant. And one of the hardest things for.
Paul Raetzer
Me in the early years was just.
Mike
Arriving at a definition of artificial intelligence.
Paul Raetzer
That made sense to me and that I could eventually explain to other people. And anyone who's like, listened to my talks or heard you've been listening to the podcast for a while, knows my.
Mike
Favorite definition of artificial intelligence is the science of making machines smart.
Paul Raetzer
And that actually came from Demis Asabas. I think it was an interview he did with Rolling Stone magazine the that.
Mike
I first heard that definition.
Paul Raetzer
So I've been following this space for.
Mike
A really long time, listening to every interview, reading every article, blog post, research.
Paul Raetzer
Report from top AI researchers, labs and entrepreneurs. I first wrote about AI in my 2014 book, the Marketing Performance Blueprint, where I actually theorized this idea of building.
Mike
A marketing intelligence engine to drive marketing.
Paul Raetzer
Strategy and campaigns and performance. I started my marketing AI institute in 2016, sold my agency in 2021 to focus on AI, because by spring of.
Mike
2021 I'd become convinced we were arriving.
Paul Raetzer
At a tipping point that everything was about to change. I didn't know it was going to be chatgpt. I didn't know that that was right around the corner.
Mike
But I knew the labs were working on language generation and understanding and they.
Paul Raetzer
Had made a lot of progress by that point. But it was actually Cade Metz's book Genius Makers that became the real kind.
Mike
Of forcing function for me.
Paul Raetzer
When I read that book, I started to connect the dots of, of kind.
Mike
Of what had happened since 2011 in.
Paul Raetzer
This deep learning movement, the pursuit of AGI by these leading labs and sort of why it hadn't been adopted yet within enterprises like I assumed it would have been by that point.
Mike
And so everything just started making sense.
Paul Raetzer
And I actually decided on a walk, I was on spring break with my family, that I was done.
Mike
I was going to sell agency and.
Paul Raetzer
I was going to focus exclusively on.
Mike
Trying to figure out the story of AI.
Paul Raetzer
And by early 2023, then what I.
Mike
Had noticed was that the tone and positioning on AGI from the top AI labs had changed.
Paul Raetzer
They were no longer talking about AGI as something that might be possible in.
Mike
A decade or more.
Paul Raetzer
They were conveying increasing confidence that there was a clear path to achieving AGI.
Mike
Within three to five years, which would.
Paul Raetzer
Put it in the 2026 to 2028 range. That was a very short time period, in my opinion. So I had become convinced that they.
Mike
These labs were intent on pursuing and achieving AGI.
Paul Raetzer
And yet when I looked around, no.
Mike
One was talking about what that meant.
Paul Raetzer
No one was game planning.
Mike
Well, what if they're right? What are the possible scenarios to businesses.
Paul Raetzer
And the economy and educational systems? So when you started to look around, you would see this pursuit of AGI. And by 2023, 2024, they were becoming.
Mike
Much more vocal about it.
Paul Raetzer
So I wanted to highlight for you a few of the key ways that these leaders are talking about this. So we have Elon Musk. Elon Musk started XAI, I think it was end of 2023, early 2024, something like that, in the last two years.
Mike
And this is his attempt to build.
Paul Raetzer
His own research lab. So again, if you've listened to the podcast for a long time, you know the backstory.
Mike
Elon Musk and Sam Altman Co founded.
Paul Raetzer
OpenAI with a collection of other researchers.
Mike
They had a falling out around 2019, and now Elon is suing Sam and.
Paul Raetzer
OpenAI for trying to become a for profit company. And so there's a whole messy history here. But Elon created his own AI research lab called xai.
Mike
And so Elon is on record as saying the overarching goal of XAI is.
Paul Raetzer
To build a good AGI with the.
Mike
Overarching purpose of just trying to understand the universe. Mark Zuckerberg so meta made their big switch from the metaverse to focusing on AI.
Paul Raetzer
Now, meta and Facebook have been a.
Mike
Major player in AI for well over.
Paul Raetzer
A decade, but they weren't solely focused on it the way they are now.
Mike
So they spent like $10 billion trying.
Paul Raetzer
To make the metaverse come to life.
Mike
And then sometime around 2023, early 2024.
Paul Raetzer
Zuckerberg realized that they needed to go much more aggressively into AI. And so Zuckerberg said, we've, quote, we've come to view that in order to.
Mike
Build the products that we want to build, we need to build for general intelligence.
Paul Raetzer
Satya Nadella last year on CNBC said.
Mike
Quote, our mission is to empower every person and every organization on the planet to achieve more. I think we have the best partnership in tech. He was referring to OpenAI. And I'm excited for us to build AGI together. Google DeepMind on their about page says in the coming years, AI and ultimately artificial general intelligence has the potential to drive one of the greatest transformations in history. Now, they don't specifically state that it's.
Paul Raetzer
Their mission to build it, but actually.
Mike
If you dig into it, their stated mission is to build AI responsibly. To benefit humanity.
Paul Raetzer
But make no mistake about it, their goal is to build AGI. So in their vision statement on Google DeepMind, it says in the coming years.
Mike
AI and ultimately AGI's potential to drive one of the greatest transformations in history.
Paul Raetzer
As I said then, it goes on to say we're a team of scientists.
Mike
Engineers, ethicists and more working to build.
Paul Raetzer
The next generation of AI systems safely and responsibly. By solving some of the hardest scientific.
Mike
And engineering challenges of our time.
Paul Raetzer
We're working to create breakthrough technologies that could advance science, transform work, serve diverse communities and improve billions of people's lives. Now, Demis Asabas, who's the CEO of Google DeepMind and the co founder, he.
Mike
Has said multiple times that this is the whole focus, that his whole mission in life is to solve the problem of intelligence and then solve everything else. That he sees AGI as the path to solving the most challenging problems in the world.
Paul Raetzer
So he and his colleagues have been.
Mike
Working on this grander ambition of AGI.
Paul Raetzer
By building machines that can think, learn and solve humanity's toughest problems.
Mike
Hassabas has said he believes it'll be an epoch defining technology like harnessing, like the harnessing of electricity that will change the very fabric of human life.
Paul Raetzer
So we know that they're all thinking about it.
Mike
In many cases it's actually their mission. Whether it's stated or not stated as.
Paul Raetzer
The mission, it is what they are.
Mike
Setting out to do. The problem in recent years is that.
Paul Raetzer
The definition has become quite uncertain. We don't know how they actually define AI AGI and they keep changing the definition so it's like become this moving target.
Mike
So we'll go through a few of.
Paul Raetzer
The definitions just to sort of level set for everyone. So OpenAI who has changed this multiple times and continues to evolve it. One of the pages on their site.
Mike
Where planning for AGI and beyond, they say AI systems that are generally smarter than humans. There's a Google DeepMind paper called Levels.
Paul Raetzer
Of AGI we'll talk about.
Mike
In that paper they say AGI is.
Paul Raetzer
An AI system that is at least.
Mike
As capable as a human at most tasks.
Paul Raetzer
Demis Hassabis, who we just talked about.
Mike
He has multiple definitions but they're roughly similar.
Paul Raetzer
So in one example in New York Times he said able to do pretty.
Mike
Much any cognitive task that humans can do. And then in another recent interview he said it's a system that is capable of exhibiting all the cognitive capabilities that humans have. Now Google Cloud has a page dedicated to AGI. So we'll Explore for a moment how.
Paul Raetzer
Google Cloud thinks about AGI. So they define it as a hypothetical.
Mike
Intelligence of a machine that possesses the ability to understand or learn any intellectual.
Paul Raetzer
Task that a human being can. It is a type of AI that.
Mike
Aims to mimic the cognitive abilities of the human brain.
Paul Raetzer
Now that page goes on to say.
Mike
In addition to the core characteristics mentioned.
Paul Raetzer
Earlier, AGI systems also possess certain key traits that distinguish them from other types of AI.
Mike
One is generalization ability. AGI can transfer knowledge and skills learned.
Paul Raetzer
In one domain to another, enabling it to adapt to new and unseen situations effectively. Now, I'll pause for a minute on, on the definitions from Google Cloud and.
Mike
Add some context here. So what this means is historically we.
Paul Raetzer
Have had narrow AI. We've had AI that learned how to.
Mike
Generate images or understand images, or generate.
Paul Raetzer
Voice, or create text or play chess. So we had AI that was trained to do a specific thing. What we are looking for and what.
Mike
AGI promises is the same AI that learns how to play chess at a.
Paul Raetzer
Super human level could flip over and play Pokemon or play Super Mario. It could play other games, it could play checkers, it could play Uno because it's actually able to generalize its knowledge.
Mike
And apply it to other domains.
Paul Raetzer
That's how humans work. Humans learn very quickly how to go.
Mike
From one game to the next and.
Paul Raetzer
Can develop moderate capabilities in those areas rather quickly. That's not how AI traditionally has worked.
Mike
And so generality is a really important.
Paul Raetzer
Concept to understand artificial general intelligence. We want these generally capable cognitive abilities that spread across domains. The second part, going back to Google Cloud's overview, is common sense knowledge. So they say AGI has a vast.
Mike
Repository of knowledge about the world, including.
Paul Raetzer
Facts, relationships and social norms, allowing it to reason and make decisions. Based on this common understanding. The pursuit of AGI Google continues. Google Cloud continues involves interdisciplinary collaboration among fields such as computer science, neuroscience and cognitive psychology.
Mike
Advancements in these areas are continuously shaping our understanding and the development of AGI.
Paul Raetzer
Currently, AGI remains largely a concept and.
Mike
A goal that researchers and engineers are working towards.
Paul Raetzer
So again, that was Google Cloud. Now, in all of these definitions I've.
Mike
Tried to arrive at, what do I think it is? I've read all of them.
Paul Raetzer
I've studied the space for however many years. This is now 13, 14 years. It's like what do I feel is a reasonable definition?
Mike
And so what I've landed on.
Paul Raetzer
And again, like some of these AI leaders, like, I may change this as time goes, but I define it as a system, an AI system that is generally capable of outperforming the average human.
Mike
At most cognitive tasks.
Paul Raetzer
Now, I want to unpack this for.
Mike
A moment because there's a couple of.
Paul Raetzer
Really important phrases in here. One is generally capable, and two is average human. So the generality part comes from what we've already discussed.
Mike
It needs to be able to learn.
Paul Raetzer
And perform across multiple domains. The key, though, is what often is missing from these definitions from AI leaders is what are we talking about in terms of human capability? Are we talking about PhD level, you know, superhuman? Are we talking about average human?
Mike
And so when I think about the impact of AGI and I'm trying to plan for my own business, I'm trying.
Paul Raetzer
To plan for economic impact. I'm trying to plan for, like, where my kids are going to go to school and what they're going to study. Like, I'm trying to think about the realities here. And the reality is most businesses are filled with average workers, people who do what they need to do to get the job done. They are not always filled with a talent.
Mike
They're not filled with the best of the best, the top 1%, the top 10%.
Paul Raetzer
And so there's a lot of average work done.
Mike
And so for me to think about.
Paul Raetzer
The impact of AGI or anything close to it, my thought is it just.
Mike
Needs to be able to do the.
Paul Raetzer
Work that the normal human would do. And if the normal human does average work, then we have much bigger things.
Mike
To worry about, way faster.
Paul Raetzer
If the definition is more like what.
Mike
Elon Musk Musk calls it, which is AI that is smarter than the smartest.
Paul Raetzer
Human, well, that's a whole different level we have to get to. But if you look at your business.
Mike
Look at your team and say, okay.
Paul Raetzer
Let'S force rank here. Here's our A players, here's our B players, here's our C players, the question.
Mike
Basically becomes, when is the model at B player level?
Paul Raetzer
And quite honestly, there's a lot of.
Mike
Tasks right now that it's already there.
Paul Raetzer
And so when you start stacking those.
Mike
And you start looking at a single model that can perform across marketing and sales and service and accounting and operations and HR and finance and legal, a.
Paul Raetzer
Single model that is at least average.
Mike
Human level at all of those things.
Paul Raetzer
You all of a sudden start to see how this could get very complicated very quickly with managing this in business and society. So back to Elon Musk's definition again. When he was asked about AGI, this.
Mike
Is how he defined it. Smarter than the smartest human.
Paul Raetzer
And he said, I think it's probably.
Mike
Next year or within two years.
Paul Raetzer
Now anyone who follows Tesla and Elon Musk knows that Musk tends to over exaggerate timelines quite dramatically.
Mike
He's been promising full self driving since like 2016. Now he usually ends up being right that something is technically possible, but he.
Paul Raetzer
He is very aggressive in his timelines, let's say. So this idea though, the thing I.
Mike
Want to focus on was his definition.
Paul Raetzer
Is this smarter than the smartest human? Because that leads us to well, what's.
Mike
The beyond AGI part?
Paul Raetzer
So we go back to the title of this series.
Mike
It's the road to AGI and beyond. Well, what's beyond AGI?
Paul Raetzer
That's pretty significant already. Well, what's beyond AGI is artificial superintelligence or ASI.
Mike
So there's a paper and I'll, I'll link to all of these things in the show notes.
Paul Raetzer
Our team will make sure we put.
Mike
All the links in here.
Paul Raetzer
So if you want to spend time.
Mike
And really drill into this stuff, I.
Paul Raetzer
I welcome you to do it. There's a paper that came out in.
Mike
2024, this is May of 2024 from Google DeepMind called Levels of AGI for Operationalizing Progress on the Path to AGI.
Paul Raetzer
So this report was written in September 2023. So if we rewind back to September 2023, GPT4, which was the most powerful.
Mike
Model in the world for almost two.
Paul Raetzer
Years, was six months old. So the paper comes out May 2024.
Mike
One of the lead authors is Shane Leg, who I mentioned earlier. He's one of DeepMind's co founders.
Paul Raetzer
He's also actually credited with coining the.
Mike
Term AGI around 2002. So Shane Legg releases this paper co.
Paul Raetzer
Authored by eight researchers.
Mike
The paper starts by considering nine examples of AGI definitions from prominent AI researchers.
Paul Raetzer
And organizations and reflects on their strengths and limitations.
Mike
So they're doing the same thing I.
Paul Raetzer
Was just trying to do what, what are we even talking about here? Can we agree on what AGI is so we can therefore know how to.
Mike
Measure it and know when we get there?
Paul Raetzer
Because right now we have no idea if we're going to, if we are there, if we will be there in a year or two. So you have to come to some level of understanding and agreement on the definition.
Mike
So according to the authors, quote, the concept of AGI has grown from a.
Paul Raetzer
Subject of philosophical debate to one which also has near term practical relevance.
Mike
Some experts believe that Sparks of AGI, quote, sparks of AGI. It's referring to a paper called Sparks.
Paul Raetzer
Of AGI are already present in the.
Mike
Latest generation of large language models.
Paul Raetzer
Again, we're talking about fall spring 2023 to 2024.
Mike
So some researchers believed that there were.
Paul Raetzer
Already sparks of AGI in the early.
Mike
Form of large language models.
Paul Raetzer
We were seeing like a GPT4. Back to the papers, quote.
Mike
Some predict AI will broadly outperform humans.
Paul Raetzer
Within about a decade.
Mike
Some even assert that current LLMs are AGIs. So the Google DeepMind team proposed a framework for classifying the capabilities and behaviors.
Paul Raetzer
Of AGI or models and their precursors.
Mike
The framework introduces levels of AGI based.
Paul Raetzer
On performance, generality and autonomy, meant to.
Mike
Provide a common language that compares models, assesses risks and measures progress along the path to AGI.
Paul Raetzer
So I'll come back to two of these factors.
Mike
Performance, in their mind, refers to the.
Paul Raetzer
Depth of an AI system's capabilities, how.
Mike
It compares to human level performance for a given task.
Paul Raetzer
Generality, as we've already discussed, is about the breadth of an AI's capabilities or the range of tasks for which an.
Mike
AI system reaches a target performance threshold.
Paul Raetzer
They argue that it is critical for the AI research community to explicitly reflect.
Mike
On what we mean by AGI and.
Paul Raetzer
Aspire to quantify attributes like levels of.
Mike
AGI, performance, generality, autonomy. Now their levels are level zero, no.
Paul Raetzer
AI, so just traditional software. Level one is emerging and they, they.
Mike
Classify that as equal to or somewhat better than an unskilled human. Level 2 is competent, that is at least 50th percentile of skilled adults.
Paul Raetzer
So again, we're getting into this average human basically. So at level two, like let's say you take ChatGPT and it can do.
Mike
Marketing, sales, service operations, HR, finance, legal.
Paul Raetzer
IT management, if it could do all.
Mike
Of those things, single model, do all.
Paul Raetzer
Of those things at the 50th percentile of skilled adults, they're arguing it is now a form of AGI, what they would call competent AGI. So it's actually a spectrum.
Mike
So this is the real key concept with this paper. We don't have binary.
Paul Raetzer
It is or isn't AGI.
Mike
They're saying this is a form, this is a competent AGI. This is an early form, it is.
Paul Raetzer
On the spectrum, 50th percentile, basically. And so this is where we start to get into my definition. Like if we get to the point where an AI model is at or above the average skilled adult at most.
Mike
Cognitive tasks within a business, within knowledge.
Paul Raetzer
Work, we, we are at a point of AGI that society is not prepared to handle. So at the after level two Comes level three, which is expert, at which again their classification, at least 90th percentile of skilled adults.
Mike
Level four is virtuoso, at least 99th percentile of skilled adults.
Paul Raetzer
And then level five is superhuman, which.
Mike
Outperforms 100% of humans.
Paul Raetzer
Take the smartest humans in the world and it can outperform all of them at basically any cognitive task. So that is where we would find super intelligence.
Mike
That is what we basically are defining.
Paul Raetzer
It as, that you take Google, Gemini, ChatGPT, Anthropic, Claude, and you take the smartest human in every domain and it.
Mike
Outperforms all of them, single model better than every human, smartest humans that have.
Paul Raetzer
Ever lived at every domain. So that's a pretty weird thing to think about. But yet again, if you listen to our podcast regularly, go back to episode.
Mike
129 where we spent like 20 minutes on this idea of super intelligence.
Paul Raetzer
And so not only are the AI labs convinced AGI is near, when you.
Mike
Look at what's being talked about, what's being written, most of them sure seem.
Paul Raetzer
To think super intelligence is within reach as well. So let's walk through a couple of those examples.
Mike
We had situational awareness, a research report, or a series of articles from Leobold, Leopold aschenbrenner.
Paul Raetzer
So episode 102, we talked about this one. This was June 12, 2024 when we talked about it. So in that series of papers, he.
Mike
Claims that all the signals he's seeing as one of a few hundred AI.
Paul Raetzer
Insiders say that we will have super.
Mike
Intelligence in the true sense of the word by the end of the decade.
Paul Raetzer
And that AGI by 2027 is strikingly plausible. He goes on to say AI progress won't stop at human level.
Mike
Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic.
Paul Raetzer
Progress into one year, five orders of.
Mike
Magnitudes in his world.
Paul Raetzer
Improvement in a single year. We would rapidly go from human level.
Mike
To vastly superhuman AI systems. The power and the peril of super.
Paul Raetzer
Intelligence would be dramatic.
Mike
June 19, 2024, we had the formation of a company called Safe Superintelligence by Ilya Sutskova, who was one of the co founders and the Chief Scientist of.
Paul Raetzer
OpenAI and considered one of probably the top three AI researchers in the world, if not the top researcher in the world. So he's built a company that's on.
Mike
A straight line to superintelligence, has zero intentions of any products or any revenue until they achieve superintelligence. They just secured funding, 2 billion in funding at a 30 billion valuation.
Paul Raetzer
At the beginning of March, we also had the Intelligence Age, an article by Sam Altman. This is September 23, 2024.
Mike
In it, he wrote, here is one narrow way to look at human history. After thousands of years of compounding scientific.
Paul Raetzer
Discovery and technological progress, we have figured.
Mike
Out how to melt sand, add impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy.
Paul Raetzer
Through it, and end up with systems.
Mike
Capable of creating increasingly capable artificial intelligence.
Paul Raetzer
He goes on to say, this may.
Mike
Turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days.
Paul Raetzer
It may take longer, but I'm confident we'll get there.
Mike
How did we get to the doorstep of the next leap in prosperity?
Paul Raetzer
In three words, deep learning.
Mike
Deep learning worked. In 15 words, deep learning worked.
Paul Raetzer
Got predictably better with scale, and we dedicated increasing resources to it. That's really it.
Mike
Humanity discovered an algorithm that could really, truly learn any distribution of data, or really the underlying rules that produce any.
Paul Raetzer
Distribution of data to a shocking degree of precision. The more compute and data available, the.
Mike
Better it gets at helping people solve hard problems.
Paul Raetzer
I find no matter how much I spend time, I spend thinking about this.
Mike
I can never really internalize how consequential this is.
Paul Raetzer
Then January 3rd, 2025, we had a tweet that we reported on from Steven McClure, who is a research researching agent safety at OpenAI. And he tweeted, I kind of missed.
Mike
Doing AI research back when we didn't.
Paul Raetzer
Know how to create super intelligence. Sam Altman shows up again January 5.
Mike
2025 with an article called Reflections.
Paul Raetzer
And he says, we are now confident.
Mike
We know how to build AGI as.
Paul Raetzer
We have traditionally understood it. We believe that in 2025 we may.
Mike
See the first AI agents join the workforce and materially change the outputs of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great broadly distributed outcomes. We are beginning to turn our aim beyond that to superintelligence in the true.
Paul Raetzer
Sense of the word. We love our current products, but we.
Mike
Are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery.
Paul Raetzer
And innovation beyond what we are capable of doing on our own, and in.
Mike
Turn, massively increase abundance and prosperity. So now let's talk about setting the stage for AGI and beyond. How does OpenAI define this? We went through some basic definitions but.
Paul Raetzer
How do they think about the stages of artificial intelligence?
Mike
So in July 2024, Bloomberg was first to report stages of artificial intelligence that.
Paul Raetzer
Were OpenAI's internal ways of thinking about this. This has since been verified that these are indeed the ways that OpenAI looks at this.
Mike
So in their world, level one is chatbots or AI with conversational language.
Paul Raetzer
That is what we got with ChatGPT in November 2022.
Mike
Level two is reasoners, human level problem.
Paul Raetzer
Solvers, level three agents, systems that can.
Mike
Take actions, level four innovators, AI that can aid in invention. And level five organizations, AI that can.
Paul Raetzer
Do the work of an organization. Of an organization. Right now we, we had level one in fall 2022.
Mike
We were introduced to reasoning models in September 2024. The 01 model from OpenAI was the first.
Paul Raetzer
We now have half dozen of them or so that we're aware of from major labs.
Mike
Everybody's building reasoning into them.
Paul Raetzer
We just got Gemini 2.5 Pro yesterday. That is a reasoning or thinking model. And then agents. We'll talk a lot about agents in a minute. But we are now able to make.
Mike
Smarter agents because they have reasoning capabilities.
Paul Raetzer
And then that should pretty quickly lead.
Mike
Us to innovators, which is where like.
Paul Raetzer
Demis Hassabis would consider AGI achieved when we have true innovation, you know, creation.
Mike
Of original scientific breakthroughs. And then level five organizations, which could.
Paul Raetzer
Be, I don't know, like after AGI, before super intelligence, we could get to the organization level. And that's basically the AI is, is an autonomous organization. It just, you give it a goal and it, it runs everything by itself. That's a weird concept. We'll come back to that one. So what we know is the models are getting smarter, they're getting more generally capable, and that AI leaders speak with.
Mike
Increasing confidence that the path is clear. As we have heard, they all seem to be pursuing the same potential variables to unlock AGI. So we talked about this on a.
Paul Raetzer
Recent podcast episode that all the labs.
Mike
Have a general idea of what needs to happen.
Paul Raetzer
They, they, they, they often talk about like needing one or two major breakthroughs to get the AGI and beyond, but.
Mike
They all seem to kind of be.
Paul Raetzer
Pursuing the same basic ideas. What happens though is there is a scarcity of computer chip or the compute.
Mike
Chips like the Nvidia chips, and there's.
Paul Raetzer
A scarcity of energy that prevents them from trying everything at the same time.
Mike
So they have to serve the models they already have.
Paul Raetzer
And then they need to train these.
Mike
New models and they need to run experiments to Figure out which research direction.
Paul Raetzer
To go in to unlock the next breakthrough that's needed.
Mike
And so when you look across what's.
Paul Raetzer
Happening, I'll just highlight a few of the possibilities. So if you're Google or Anthropic or OpenAI or Cohere or Mistral or XAI or Meta, you basically have all these.
Mike
AI researchers, all super smart people. You have some finite ability of Nvidia.
Paul Raetzer
Chips to do training runs on and to do your experimentations on. They all generally look at these possibilities agentic, giving these things the ability to.
Mike
Take actions, computer use.
Paul Raetzer
We'll talk a little bit more about this later.
Mike
But the ability for these things to.
Paul Raetzer
See and use applications and content on your device as your screens the same way you and I would like. We would use a keyboard and a mouse.
Mike
Context windows, expanding the context window, meaning.
Paul Raetzer
I can give it 50 PDFs and it'll know everything within there, be able.
Mike
To search and remember things within the context.
Paul Raetzer
And then when it gives me outputs they become more accurate because it's actually doing it within the context window of the information I've provided it. So context windows are known to be.
Mike
A great way to improve the accuracy.
Paul Raetzer
And reliability of these models. Continual learning, these things forget, they, they.
Mike
They won't remember something you talked about 10 threads ago.
Paul Raetzer
And so this idea of the, the model learning and then when they retrain a new model that it doesn't forget everything it previously learned, which is what they do now each time it's like a reset button. Emotional intelligence, memory, multimodality.
Mike
We'll talk about reasoning, recursive self improvement where these things improve themselves. Vision, voice, world models being understand the.
Paul Raetzer
World around being, understand physics, reproduce the.
Mike
Laws of physics basically in the outputs.
Paul Raetzer
Of its videos and images.
Mike
Any of these could be on locks.
Paul Raetzer
To the next breakthrough.
Mike
And there's probably others. They're aware of all of these. They have to figure out which ones.
Paul Raetzer
To make the bets on.
Mike
So what's happening is some of the labs have billions of dollars to play with.
Paul Raetzer
So like OpenAI, Google, Meta, XAI, Anthropic in particular, they have billions of dollars to keep just pushing for the biggest.
Mike
Smartest, most generally capable models.
Paul Raetzer
They buy hundreds of thousands of chips from Nvidia. They take data that they have rights to and don't have rights to, like you know, pirated books. They take all this data in and then they just try and train these massive models and that's what gets us to, you know, Gemini 4. 5 or Gemini 2. 5 GPT 4. 5. They just keep building bigger and bigger models. Other approaches are like Cohere, Mistral, writer.
Mike
And also the big labs, they, they.
Paul Raetzer
Also have these smaller projects going.
Mike
They're trying to unlock smaller, more efficient.
Paul Raetzer
Models through algorithmic techniques, reinforcement learning, more fine tuned data, more, you know, proprietary data trained in specific areas. So there's this effort to build the biggest, most generally capable models and then there's these efforts to build the smaller, more efficient models that can run on device, basically. And so as I've said on episode 140 of the podcast, this isn't all noise and hype. This is what an emerging trend looks like.
Mike
You see and hear similar threads from.
Paul Raetzer
All these different leaders, all these different AI labs. You have a lot of the top.
Mike
AI researchers who bounce around between these labs. They're seeing and hearing everything.
Paul Raetzer
They go to the same parties in San Francisco. Like they talk to each other all the time.
Mike
They're all seeing the same things across the labs.
Paul Raetzer
And when you start to piece it.
Mike
Together, you realize that either they're all.
Paul Raetzer
Wrong or AGI is coming and it's coming really fast.
Mike
Faster than we're preparing for in the.
Paul Raetzer
Business world, in the economy, in educational systems, in society. And so that's my belief. And the whole purpose behind this series is we have to start considering that they're right and that within like two.
Mike
To three years, the world is going to start changing in a very dramatic.
Paul Raetzer
Way that we are not prepared for. And so I for one don't want to sit back and wait. I would much rather accept that they might be wrong, I might be wrong, and we don't get there in two years. Maybe it's five, maybe it's seven, maybe it's never. But it sure seems like the probability.
Mike
Is high enough that we should be.
Paul Raetzer
Doing more, that we should be considering the implications on ourselves, on our companies, on our industries, on our educational systems. And so that's what I want to do. Because when I go back to November 2022, when the emergence of ChatGPT, like we knew something like that was coming.
Mike
Like in our book, the Marketing Artificial.
Paul Raetzer
Intelligence book that came out in like spring of 22, there's a whole section.
Mike
Titled what happens when machines can write like humans?
Paul Raetzer
Like, we were already at GPT3 level. I think when we wrote the book, we knew this was gonna be unlocked. We didn't know it would be through something called ChatGPT, but the signs were obvious.
Mike
Sam Altman had written his Moore's Law for Anything for Everything post in March of 2021. Telling us that models that could think, reason, understand, create, were coming.
Paul Raetzer
They'd already seen them in their labs. And yet most business leaders, the vast.
Mike
Majority of business leaders, had done nothing.
Paul Raetzer
They had no idea that this stuff was coming. And that's how I feel about AGI today. There's still so many business leaders who.
Mike
Don'T even comprehend the current capabilities of.
Paul Raetzer
AI, more or less be thinking about what happens when AGI shows up. And I don't want people to arrive at that point, whether it's two years from now or three years from now, or five years from now, where they did nothing, they had no contingency plans whatsoever.
Mike
So that's my goal here, is to.
Paul Raetzer
Try and lay out what happens next.
Mike
And that brings us to AI timeline, version two. So again, in episode 87 of last year, March 2024, I laid out what.
Paul Raetzer
I called an incomplete AI timeline. So the whole premise was I don't actually know and I, I'm convinced none of these AI labs actually know what happens, but they talk enough about it.
Mike
And you read enough and see enough and see the research reports as hints of where they're going. You can piece together what they're working on. And so that's what I'm trying to do with this timeline, is piece together.
Paul Raetzer
What are they saying, what do they.
Mike
Believe is going to happen?
Paul Raetzer
Like, where are we now, what's going to happen next with these models, and then most importantly, what can we do to prepare? So the way I go about this is I actually keep an AGI journal. So Mike and I, in our weekly podcast, we, you know, I'll curate 40ish articles, podcast interviews, research reports, tweets, like.
Mike
All these things I have, my private.
Paul Raetzer
Conversations I have with companies and AI labs, my own observations of what's going on, presentations, we watch, courses we take.
Mike
All this stuff, we curate across all.
Paul Raetzer
AI related topics every week.
Mike
The stuff that's related to AGI, I have a separate journal for, for.
Paul Raetzer
And so I basically try and keep.
Mike
Track of what's going on, what people are saying, and then at any given point I can go into there and.
Paul Raetzer
Kind of like try and piece it together. And so that's what I did for today was I went back through my.
Mike
Journal since March of last year and.
Paul Raetzer
Tried to piece together what are they saying. So what I'm going to do is.
Mike
Walk you through now what has sort.
Paul Raetzer
Of been happening, what has become apparent.
Mike
To me over the last 12 months.
Paul Raetzer
Of journaling AGI, and then I'll actually.
Mike
Create a visual of this. I'LL share it on my LinkedIn account and then we'll put it in the.
Paul Raetzer
Show notes as soon as I have it done. I'm hoping it's done when this goes live on on Thursday, March 27, but it's Wednesday, March 26 at 1pm right now, so hopefully it'll be it'll be live for you, but stay tuned for that in the next couple days. So what has become apparent over these last 12 months? The the key for me is the timeline's accelerating. So there's a lot of things that I had sort of projected last year that have stayed very true. Like there's actually nothing in the timeline when I went back and revisited it that I would change that, I just got completely wrong.
Mike
Just a lot of new things emerged that evolved the timeline and convinced me that AGI is actually coming sooner than.
Paul Raetzer
I had originally kind of projected it might. So let me go through a few things to add context here. There's a phenomenal podcast series called DeepMind. The podcast.
Mike
I would highly recommend you check it out.
Paul Raetzer
I think they've done three seasons now. It's with Hannah Fry, Professor Hannah Fry. She's amazing and she has inside access to Everybody at Google DeepMind, so she does all these incredible interviews with the leaders there.
Mike
So in episode one of season three.
Paul Raetzer
August 2024, Demis Asaba said, I think.
Mike
It'S still under hyped or perhaps underappreciated even now. What's going to happen when we get to AGI and post AGI?
Paul Raetzer
I still don't feel like people have.
Mike
Quite understood how enormous that's going to.
Paul Raetzer
Be and therefore the responsibility of that.
Mike
March 10, 2025 Just two weeks ago, Shane Legg, again co founder of Google DeepMind, said AGI will soon this is a tweet.
Paul Raetzer
AGI will soon impact the world, from.
Mike
Science to politics, from security to economics, and far beyond. Yet our understanding of these impacts is still very nascent.
Paul Raetzer
Now that is very informational for people who haven't been following along at home. These labs have no idea what happens. Like they're very direct about that.
Mike
They are not the ones that are.
Paul Raetzer
Going to figure this out for you. They are not going to think about what happens in your industry, what happens to your job. They don't see that as their responsibility. They're focused on building the smartest technology they can build and they'll work with people who want to do research on this stuff. But they are not going to come and tell you what's going to happen to your job.
Mike
As a result of this stuff, they're.
Paul Raetzer
Just going to build it and let us figure it out. Dario Amadei, the co founder and CEO of Anthropic, on a Lex Friedman interview podcast interview November 2024. He said, Some of the new models that we develop, some reasoning models that.
Mike
Have come from other companies, they're starting.
Paul Raetzer
To get to what I would call.
Mike
The PhD or professional level. We've seen similar things in graduate level math, physics and biology from models like OpenAI's 01, which was their first reasoning.
Paul Raetzer
Model in September of 24. He said, so if we just continue.
Mike
To extrapolate, extrapolate this in terms of skill that we have, I think if.
Paul Raetzer
We extrapolate the straight curve, within a few years we will get to these.
Mike
Models being above the highest professional level in terms of humans.
Paul Raetzer
So again, go back to like open AI's levels or I'm sorry, Google DeepMind's levels. They're talking about like that PhD level and beyond. They're talking about the smartest humans.
Mike
When asked about his timeline for achieving Artificial General Intelligence, or powerful AI as he prefers to call it, he hedged based on variable variables that could arise.
Paul Raetzer
But said if you just kind of eyeball the rate at, at which these.
Mike
Capabilities are increasing, it does make you.
Paul Raetzer
Think we'll get there by 2026 or 2027. 2026 again is next year. So he's putting a one to two year timeline on these AIs that are smarter than the PhD level humans at everything. OpenAI then recently published, I think this was early March, a paper called How We Think About Safety and Alignment. In that article it says the Post or the Post states, as AI becomes.
Mike
More powerful, the stakes grow higher.
Paul Raetzer
The exact way the post AGI world.
Mike
Will look is hard to predict. The world will likely be more different from today's world than Today's is from 1500s.
Paul Raetzer
But we expect the transformative impact of.
Mike
AGI to start within a few years.
Paul Raetzer
Again, they're not going to figure out what it means, they're just going to tell you it's going to look different than 500 years ago. So yeah, then we had another one.
Mike
From earlier this year called Superintelligence Strategy. This is a report from Dan Hendricks, who's the director of the center for AI Safety and an advisor to Elon Musk's XAI and Scale AI.
Paul Raetzer
Scale AI is a big player in training these models.
Mike
They provide the data to train the.
Paul Raetzer
Models and I'm sure a host, host of other things.
Mike
Alexander Wang, who is the CEO and.
Paul Raetzer
Founder of Scale AI and we actually had a whole podcast episode, a main.
Mike
Topic where we featured Alexander Wang.
Paul Raetzer
I don't remember what episode that was, but our team will drop it in the show notes. So if you want to go back and learn about him, we profiled him.
Mike
And then Eric Schmidt, who is the.
Paul Raetzer
Former Google CEO and executive chairman.
Mike
So in these three authors the co published Superintelligence in the opening paragraph it says Superintelligence or AI vastly better than.
Paul Raetzer
Humans at nearly all cognitive tasks is now anticipated by AI researchers.
Mike
Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We then had and this was a.
Paul Raetzer
Fun one to talk about on the podcast.
Mike
Ezra Klein New York Times opinion writer and host of the Ezra Klein show.
Paul Raetzer
On March 4, 2025, he interviewed Ben.
Mike
Buchanan, who is the former special advisor for AI in the Biden White House. Klein starts the episode and his opinion.
Paul Raetzer
Piece in the New York Times by saying for the last couple months I.
Mike
Have had this strange experience. Person after person from AI labs, from government has been coming to me saying.
Paul Raetzer
It'S really about to happen.
Mike
We're about to get artificial general intelligence.
Paul Raetzer
What they mean is that they have.
Mike
That they have believed for a long.
Paul Raetzer
Time that we are on a path.
Mike
To creating transformational artificial intelligence capable of doing basically anything a human being could do behind a computer, but better.
Paul Raetzer
They thought it would take somewhere from.
Mike
Five to 15 years to develop, but.
Paul Raetzer
Now believe that it is coming in two to three years. If you.
Mike
He continues, if you've been telling yourself this isn't coming, I really think you need to question that.
Paul Raetzer
It's not Web three, it's not vaporware. A lot of what we're talking about is already here right now.
Mike
I think we are on the cusp of an era in human history that is unlike any of the areas we eras we have ever experienced before. And we're not prepared in part because.
Paul Raetzer
It'S not clear what it could mean to prepare.
Mike
We don't know what this will look.
Paul Raetzer
Like, what it will feel like. We don't know how labor markets will respond.
Mike
We don't know which country is going to get there first. We don't know what it will mean for war.
Paul Raetzer
We don't know what it will mean for peace. And while there's so much else going.
Mike
On in the world to cover, I.
Paul Raetzer
Do think there's a good chance that.
Mike
When we look back on this era.
Paul Raetzer
In human history, AI will have been.
Mike
The thing that matters. And then finally, before I get into the timeline, Kevin Roose, who is a technology columnist and co host of the.
Paul Raetzer
New York Times tech podcast Hard Fork.
Mike
Recently published an article called Powerful AI Is Coming We're Not Ready. In the post he starts out, here are some things I believe about artificial intelligence.
Paul Raetzer
I believe that over the past several.
Mike
Years, AI systems have started surpassing humans.
Paul Raetzer
In a number of domains, math, coding, medical diagnosis, just to name a few. And they're getting better every day.
Mike
I believe that very soon, probably in.
Paul Raetzer
2026 or 2027, but possibly as soon.
Mike
As this year, one or more AI companies will claim they've created an AGI.
Paul Raetzer
Which is usually defined as something like a general purpose AI system that can do almost any cognitive task a human can do. He continues, I believe that when AGI is announced, there will be debates over.
Mike
Definitions and arguments about whether or not it counts as real, quote, unquote AGI.
Paul Raetzer
But that these mostly won't matter because the broader point that we are losing.
Mike
Our monopoly on human level intelligence and transitioning to a world with very, with.
Paul Raetzer
Very powerful AI systems in it will be true. Now, when I read Kevin's article and I recommend, I recommend you read the whole article. We'll put the link in the show notes I tweeted I'm 100% aligned with everything he says, like everything he writes in that article, I agree with completely. And it, it echoes many of the things that we've said on the podcast. All right, so where does that bring us to? And as we kind of get into the AI timeline, what I'm going to do is walk through these 1, 2, 3, 4, 5 different kind of components of the timeline. And like I said, I'll put the.
Mike
Slides up to this so you can.
Paul Raetzer
Visualize this as well in the coming day. But I'm going to walk through each of these.
Mike
So the first is large language model or LLM advancements. So on last year's timeline, I had.
Paul Raetzer
That, you know, 2024, 2025, that's continuing. So what LLM advancements consist of continued advancements and potential leaps in accuracy, context, windows, decisioning, emotional intelligence, memory, multimodal personalization, planning, search tool use, and reasoning. Again, these go back to those, those different variables that the labs are pursuing.
Mike
To try and figure out which thing.
Paul Raetzer
Is going to unlock the next thing. And so they're all kind of continuing along there.
Mike
We're going to see some leaps forward.
Paul Raetzer
And we again just saw Yesterday with.
Mike
Gemini 2.5 Pro, made some leaps forward.
Paul Raetzer
It'S now number one on the leaderboard across basically everything.
Mike
The other thing, and this is new.
Paul Raetzer
This year, this was not in last year's timeline.
Mike
Commoditization of frontier models, proprietary data productization and distribution become key differentiators. This was a big open question early last year.
Paul Raetzer
How long would OpenAI's GPT4 model maintain its lead? Because they got out there first and for that roughly two year stretch, they were it, they were the dominant model. And so the question became like, do they have some secret sauce? Like is there something OpenAI is doing that's just going to always keep them to have everybody else? What we have learned is, no, that's not actually what's going to happen.
Mike
While you used to be able to.
Paul Raetzer
Have like 12 to 18 month lead times, what seems to be happening now is it's like three months, maybe six months max. So the leaderboards change seemingly weekly right now. And oftentimes what happens is these major labs test models under like stealth names. They don't tell you they're from Google or OpenAI and you'll have these new models that are shown up at the top of the leaderboard. And then all of a sudden Google says, oh yeah, that was our 2.5 Pro model. So these models are just leapfrogging each other every three months. And so what seems to be the differentiators are going to be the data. Your ability to productize These models like OpenAI has done a phenomenal job obviously of doing that, creating probably over 10 billion in revenue this year through productization and then distribution. Meaning if you're Google, you have what.
Mike
Seven platforms with over a billion users?
Paul Raetzer
Seven different platforms and systems with over a billion users, that's, that's some pretty solid distribution. So if you can have a model.
Mike
On par with the best models, but.
Paul Raetzer
You have 7 billion people using your, your technology, that's pretty good. So that, that was new this year.
Mike
Another thing that was new this year.
Paul Raetzer
Is traditional scaling laws. So this was the big, big mystery. If we give them more data, if.
Mike
We buy more Nvidia chips, build bigger.
Paul Raetzer
Data centers, connect more energy to them, can we just keep building smarter models? Do they just continually get smarter, more generally capable? What we have found as of fall.
Mike
Of last year is they still work, but those laws are slowing.
Paul Raetzer
They're not getting the same level of improvement. But the big labs are continuing to push on those traditional scaling models laws.
Mike
What happened in fall of 2024, that's also new on the timeline this year.
Paul Raetzer
Is test time, compute or thinking scaling laws emerged and are accelerating now.
Mike
What that means is they found that.
Paul Raetzer
A new, they found a new scaling law, basically that the more time at.
Mike
Inference, which is when you and I.
Paul Raetzer
Use the tool, so you go into Google Gemini and you put in your prompt, that's inference, you're now like they're going to draw on their power to.
Mike
Give you a response.
Paul Raetzer
Um, and so what they found was.
Mike
When the model takes its time to.
Paul Raetzer
Think what would be called System 2, thinking that they actually get smarter and more accurate.
Mike
And so we have these traditional scaling.
Paul Raetzer
Laws and then we have these test time compute scaling laws and those are accelerating. Another new thing in the timeline this year is model evaluations.
Mike
So the way they determine how good.
Paul Raetzer
These models are, they're starting to get more focused on practical applications and use cases rather than pure IQ tests. So traditionally when these models get dropped.
Mike
It'S like, how good is it math?
Paul Raetzer
How good is it at biology? How good is it at all these different complex tasks that you and I.
Mike
Don'T generally care about?
Paul Raetzer
Cause they don't affect our day to day work. What's going to start to happen, and we're starting to see it recently is more and more evals or evaluations where it's like, what does it do to like a lawyer's job?
Mike
What does it do to a marketer's job? So they're starting to figure out ways.
Paul Raetzer
To do this and I think more.
Mike
Industries and associations will likely probably pick this up and start applying evals to.
Paul Raetzer
Their own jobs within their industries if I hope that starts happening. Rapid expansion of valuable use cases in business. Again, we're still in the LLM advancements phase here. Wide scale adoption of generative AI continues a multi year curve despite pockets of disillusionment. I meet with enterprises every day who are at the starting line. Like if you're listening to this and.
Mike
This is all crazy to you and.
Paul Raetzer
You think you're so far behind, you're not like most companies are still trying.
Mike
To figure out how to do this stuff.
Paul Raetzer
So we are not at the wide scale adoption phase in my opinion yet. We might be at the nearing the wide scale piloting phase where more businesses are starting to test it and try and figure it out. But we are definitely not at the phase where they've solved this and they're.
Mike
Scaling it and they're doing change management and internal education and all the things.
Paul Raetzer
They should be doing. That's not happening at a wide scale yet. Stories of layoffs due to AI within certain industries will happen this year, but I don't think it'll be widespread. I think it'll be exaggerated by the media. But I also think a lot of tech companies are hiding their layoffs that.
Mike
Are due to AI under other terms.
Paul Raetzer
So I actually think there's quiet AI layoffs happening that people aren't admitting that's.
Mike
Why they're doing it.
Paul Raetzer
But I think by the end of this year they'll probably start admitting that's what they're doing. On a more positive note, some new.
Mike
AI roles will emerge.
Paul Raetzer
You're going to start seeing AIOps chief AI officers, AI trainers. For me, like we're hiring right now and I actually have AI agent management built into every job description, basically saying like, hey, your part of your job is going to be to figure out.
Mike
To understand what agents are capable of.
Paul Raetzer
Especially as they continue to improve, and.
Mike
To figure out ways to infuse them.
Paul Raetzer
Into your job to be more efficient, productive, creative, innovative, and then like to focus the things that you're uniquely capable.
Mike
Of, the high impact, high human level stuff.
Paul Raetzer
That's what I want you doing. And let's like find ways to infuse AI agents as we go. And so I want the responsibility of AI integration to be bidirectional. I want to, as a CEO, push it down when I see opportunities to do something across the organization or across a team. But I want the ideas brought up from the practitioners who are actually doing the work and I want them to.
Mike
Have the freedom to do that, to feel like they can come to the.
Paul Raetzer
Table with new ways of doing things or new tools. So large language models are going to continue to advance. You know, traditionally they were text in, text out.
Mike
You gave them a text prompt, they.
Paul Raetzer
Gave you text out. If you wanted to do image generation.
Mike
You had to go to a different model. Like it wasn't built into the same model.
Paul Raetzer
So traditional large language models powered these chatbots, basically the text chatbots. The key though is these language models.
Mike
Were always the foundation for what comes next.
Paul Raetzer
They the, the AI labs never set.
Mike
Out to build tools to write your articles and your emails and do your plans for you and write your email.
Paul Raetzer
Or your ad copy or do your.
Mike
Social media posts or do your financial reports.
Paul Raetzer
That's not what they set out to do. They set out to solve language understanding and generation because they thought that was.
Mike
The key to unlocking general intelligence.
Paul Raetzer
And so always these language models were just the basis for what comes next. And you'll often hear me say this, especially if you hear me do talks like this is the dumbest form we're ever going to have. Like every day you're working with the.
Mike
Dumbest form of AI in human history.
Paul Raetzer
Tomorrow like we just, just had Yesterday, we got 2.5 Pro from Google and we got 4O image generation from OpenAI within 2 hour span and they appear to be like state of the art, like best in, in in the world right now. And that just happened yesterday. So every day somebody's going to do something that pushes the frontier forward and.
Mike
That leads to the second phase we'll.
Paul Raetzer
Talk about in the timeline which is multimodal AI explosion. And I have this as like 2025, 2026 range. So what's happening here is these language.
Mike
Models that originally were just text are now getting built from the ground up to do more than text, multiple modalities. So images, video, audio, code.
Paul Raetzer
This is how the Gemini models are being built.
Mike
So they're being trained on multiple modalities.
Paul Raetzer
And they're being enabled to output multiple modalities. So I don't have to bounce between models, I can just talk to this model. And it's a ground up system that is built on these different modalities. We're going to see rapid improvements in text to video capability. So you put a prompt in, you get video out.
Mike
Right now there's a bunch of players.
Paul Raetzer
In this space like VO2 from Google. DeepMind is a, is a great example here. Go watch their demo video. It's awesome. You can play around with Sora from OpenAI. There's a bunch of Runway ML comes to mind. There's a bunch of players here, but they have limitations. Like one, it's massively compute intensive to do these things. They historically can't keep coherence from like frame to frame. So you may start like with a person in your, you know, your video and by like seven seconds in that person all of a sudden looks different than they were than they started. So they can't keep like that control there. The output length is limited, like Maybe it's like 7 seconds, 10 seconds and then it starts to like lose its capabilities, realism, render times, like these are.
Mike
All flaws that are going to get solved.
Paul Raetzer
Most of it is, well a good chunk of it is related to computing power and like the cost of it to do it. But you're going to see major advancements there. You're going to see continued advancements in.
Mike
Voice technology, making voices sound more human.
Paul Raetzer
Like natural, accurate, customizable, multilingual AI generated images.
Mike
Video invoices will become indistinguishable from reality.
Paul Raetzer
Again, go play with the new four zero image generation model. It just came out yesterday afternoon, so I haven't had time to test it myself yet. But I've looked at a bunch of threads online of people what they've been doing on on X. I've seen it and it's remarkable. And basically OpenAI is taking the guardrails off of it. Historically, you know, these AI labs have been trying to be conscious of misuse of these things.
Mike
And I think we're just done with.
Paul Raetzer
That phase of AI. They're just basically throwing these out there and saying, yeah, they're going to do things that you might consider harmful or offensive and sorry, like, just don't use them for that yourself if that's the problem for you. So we're sort of removing filters and guard rails and letting people use the.
Mike
True power of these models, which traditionally.
Paul Raetzer
These labs have held back. Now, making them indistinguished from reality is going to create all kinds of problems.
Mike
Because society isn't ready for this. They are unaware of largely that images.
Paul Raetzer
And videos can, you know, be generated that look and feel like reality. And that's going to be messy. The frontier models.
Mike
So these labs that are spending the.
Paul Raetzer
Billions on the training runs, they're going to make models that are ten to.
Mike
A hundred times more powerful.
Paul Raetzer
So we're going to keep following those original scaling laws, but smaller, faster, more.
Mike
Efficient models are also going to probably.
Paul Raetzer
Become way more prevalent. The models are going to develop some element of like a worldview to like, actually understand. So you can use Project Astra from Google as an example here, or if you go into ChatGPT and click on voice, you can then click on a.
Mike
Video and it'll see the world around you.
Paul Raetzer
You can also use visual intelligence on Apple Intelligence. And so we're starting to see the early forms of this where the AI can see the world and in theory start to actually understand it, understand the.
Mike
Physics of the world.
Paul Raetzer
We're not sure how exactly that's going to occur, and there's differing opinions about whether or not it's actually understanding physics at all. But there's a lot of efforts being made around this through synthetic data and simulations and things like that. And then one of the other questions I have in the multimodal AI explosion phase is how dominant of an interface voice becomes like, is it a generational thing? But I could see where people really start to just interact with their AI.
Mike
And their devices through their voice.
Paul Raetzer
You're just talking all the time to them. And so the answer you get back is the answer. You're not going on Google and searching for things. You're.
Mike
You're just talking to your AI that you trust to provide this information to you.
Paul Raetzer
So I mentioned a couple times 4o.
Mike
Image generation and Gemini 2.5 pro.
Paul Raetzer
Mike and I will go in depth.
Mike
On both of those on episode 142 next week, which would be.
Paul Raetzer
I don't know when that is. March or April 1st, maybe. Okay, so the next phase is AI agents explosion. Uh, this is 2025 to 2027. I'm gonna stop for a second, take a sip of water. I wasn't sure how long this was gonna go, actually. I told the team right before I started recording this today. I was like, this might be two hours. I'm. I'm honestly not sure. So, yeah, it looks like we're gonna get done in under two hours, but we're in about an hour now. All right, so AI agents explosion 2025 to 2027. So agents is a really weird space. There's. Again, if you listen to the podcast regularly, you've heard me kind of on my soapbox about this. I feel like a bunch of the tech companies just started branding everything as AI agents and they sort of just bastardized the term. Like, it became this really fuzzy thing of like, well, what exactly is an agent? The way I think about it, just to like, level set here. And then I'll get into like, the.
Mike
Components of the AI Agents explosion is traditional automation.
Paul Raetzer
You know, we could set rules that. That the machine or the software did what we told it to do. And this has been around forever. So you can just write some rules and it does the thing, but it does exactly what you tell it to do.
Mike
That is deterministic, meaning it's just going to follow instructions.
Paul Raetzer
When you have AI agents and auto, in theory, this automation, or the ability for them to take actions, they are.
Mike
Probabilistic in part, meaning sometimes they figure.
Paul Raetzer
Stuff out on their own.
Mike
They're not just following your rules anymore.
Paul Raetzer
And so I think of AI agents as AI systems that can take actions, and then you can continue that definition with varying levels of autonomy, varying levels.
Mike
Of tool use, varying levels of memory.
Paul Raetzer
Like, so they're not binary.
Mike
Again, they're.
Paul Raetzer
They exist on this spectrum of all these different variables.
Mike
So again, the problem came in in.
Paul Raetzer
2024 that all these tech companies just started talking about these things. Like, they're just these autonomous things that are just going to do your job and people freak out and they don't understand what that means. So I think about this very similar to like a Tesla which supposedly has full self driving, but then they put in parentheses supervised.
Mike
In a Tesla as of now, you.
Paul Raetzer
Still need a steering wheel and you still need a human that can take control of that steering wheel at any, any given moment.
Mike
So a Tesla is not autonomous.
Paul Raetzer
It is on the spectrum of autonomy in some situations, but it still has.
Mike
To be overseen by a human.
Paul Raetzer
So the question is always, well, what's the human's role?
Mike
What is the in the car case.
Paul Raetzer
What'S the driver do? In the case of an AI agent.
Mike
Working in your marketing or sales or.
Paul Raetzer
Customer success system, what's the human's role? Is the human this there to make.
Mike
Sure it doesn't go off the rails?
Paul Raetzer
Does the human check in on it once a week or is the human approving everything it does? So the whole point here is they're not, it's not this clean definition. They exist on this spectrum. So back to the timeline in 2025, AI agents that can take actions are.
Mike
Marketed heavily by leading tech companies.
Paul Raetzer
But the confusion remains in the market.
Mike
About what exactly they are, how they work and the impact they will have. Current AI agents often require a lot of manual human work to plan, integrate and manage them. There are however, powerful early forms of.
Paul Raetzer
These semi autonomous agents, including, and one of my favorite things right now, deep.
Mike
Research tools from OpenAI and Google. And when you use these tools, if you haven't go go test them, they're incredible.
Paul Raetzer
You begin to understand how these AI agents will be able to drive adoption and value. Because when you see them applied in.
Mike
This sort of narrow instance of conducting.
Paul Raetzer
Research, you can start to imagine when they're built to do all these other things, things I do think that adoption.
Mike
In enterprises is going to be slow.
Paul Raetzer
Largely due to one, they don't really work the way they're advertised to work. But more importantly, privacy and security risks.
Mike
Especially related to this idea of computer use. So in fall 2024, Anthropic was first.
Paul Raetzer
To market with a preview of computer use, which is something OpenAI was working on back in like 2016. And Google now has a version of this in Chrome as well. What it does is it enables the AI to take over your keyboard and mouse basically and perform tasks for you on your computer. Now to do that, it sees everything on your screen. In theory, it remembers the majority of it the way Microsoft was doing it. I'm not sure if this is how the product still works, is they're basically taking screenshots of your screen every like one and a half to three seconds. And then it would just search those.
Mike
Screenshots to like find things. But it can see, remember and interact with things on your device.
Paul Raetzer
The content, the applications, could be your work computer, could be your phone. And so I can tell you as.
Mike
A CEO, that's unnerving.
Paul Raetzer
Like the thought that employees may have.
Mike
Agents that using computer use, like just.
Paul Raetzer
Watching everything on their screen all day long.
Mike
I have major questions about the privacy.
Paul Raetzer
And the, and the security risks related to that. And I can imagine big enterprises with, you know, big legal teams and IT teams have even bigger concerns than I do. So that's a major problem. And I think that's going to slow adoption of AI agents within enterprises. The other thing I think is that agents are going to be largely narrow by vertical and use case initially. So again, go to like the deep research phenomenal example. Like that's a great product, but it's narrow in its ability. It's like it's specifically for research, but that's great.
Mike
Becoming more general and horizontal over time, I think.
Paul Raetzer
Still happens though, to where we just.
Mike
Have an AI agent that it can.
Paul Raetzer
Just do anything I can do.
Mike
It's not trained on any specific task per se.
Paul Raetzer
It just do my job. And that's when things get really weird. And then that leads to organizations being begin to build AI agents into their charts and teams. There's a quote I shared on the podcast back in November 2024 from Jensen Wong, the CEO and founder of Nvidia, and he said, quote, these AI workers can understand, and he's referring to AI agents. They can plan, they can take action. We call them AI agents. And just like digital employees, you have to train them, you have to create data to welcome them to your company, teach them about your company. You train them for their particular, particular skills, you evaluate them. After you're done training, you guardrail them.
Mike
To make sure that they perform, perform.
Paul Raetzer
The job they're asked to do. And of course you operate them, you deploy them. So in other words, humans are in the loop all over the place with these things.
Mike
So when you hear about AI agents, don't assume that a year from now.
Paul Raetzer
Everybody'S job is gone and the agents are going to do it. That is not what's happening. So we'll see early forms of autonomy, but again, it's going to be very.
Mike
Narrow and likely highly trained to do those things.
Paul Raetzer
But we will start to see or at least get visibility into what the.
Mike
Disruption from these things will look like in knowledge.
Paul Raetzer
Work, work it's going to start to become more tangible and measurable. The next phase is robotics explosion. Humanoid robots to be exact. 2026 to 2030 is the range I have here. So I don't want to spend a ton of time on this one because it's, it's important, but it's not as directly impactful to knowledge workers right now. But there's major investments going into this space.
Mike
Lots of breakthroughs in the last 12 months.
Paul Raetzer
OpenAI is getting back into robotics.
Mike
They started there.
Paul Raetzer
It was one of the things they're working on in the days of OpenAI.
Mike
And Tesla with Optimus, which may become.
Paul Raetzer
Actually the biggest revenue channel for Tesla over time versus their cars.
Mike
Figure is a major player here.
Paul Raetzer
Amazon ton with robotics, Google, Nvidia, Boston.
Mike
Dynamics, Unitree, I think they're out of.
Paul Raetzer
China, has had some insane demonstrations recently. So what's happening is there's major advancements.
Mike
Being made on the hardware side of these things.
Paul Raetzer
So they become more human like in their, their capabilities. But the real breakthrough was multimodal language models being dropped into them as the brains. So basically all these abilities of text and images and video and audio, all of that living in the robot so it can see and understand the world and interact with people and objects, that's the real breakthrough. And so I think what's going to happen is there'll be like narrow applications initially of commercial robots and, and then more general robots that are capable of quickly developing a diverse range of skills.
Mike
Through observation and reinforcement learning.
Paul Raetzer
Meaning they just watch what a human.
Mike
Does and they learn how to do.
Paul Raetzer
It, or they're trained specifically to do these skills by kind of like, yes, you did a good job, no you didn't get a job like a reward function basically to, to learn these things. And then I think by like I don't know, maybe 2028 to 2030, you.
Mike
Start to get much more widespread commercial.
Paul Raetzer
Applications starting to really affect numerous industries.
Mike
And then I think over time, maybe.
Paul Raetzer
In the next decade, there's a potential for general purpose consumer robots that you and I could actually like lease or purchase and you could just have a.
Mike
Robot around your house for say 20,000.
Paul Raetzer
A year or $200 a month. And it'll start as a luxury for the elite and then it'll eventually, as they get manufacturing costs down, quickly become a mass market thing. And that's when you start to really see the impact on blue collar jobs. But again, I, I don't think I'm not as bullish on this as others. Like I'm Very aggressively looking at investment opportunities in this space, like who's going to be the major players as this takes off. But I think there's a lot of exaggeration right now about how quickly these things are actually going to affect our lives. Now, Jensen Wong, who I just mentioned earlier, he said the chat GPT moment for robotics is coming less than 10 years from now. I'm certain of it. Humanoid robots will surprise everybody how incredibly good they are. That was January 2025. Elon Musk recently said that Tesla is aiming to build 5,000 of its Optimus humanoid robots this year. At CES in January, he shared an ambitious vision for Tesla's Optimus humanoid robot.
Mike
Projecting that within three years, Tesla would produce 500,000 humanoid robots.
Paul Raetzer
With production scaling significantly each year, he envisioned a future with tens of billions of robots globally. And then just a few days ago, he said that SpaceX, one of his companies, Starship, their major rocket, is set.
Mike
To depart for Mars at the end of next year.
Paul Raetzer
So they want to land a rocket.
Mike
On Mars and he wants to send.
Paul Raetzer
A Tesla Optimus bot to Mars. And then if that goes well, they want to send humans in 2029. And then actually today, Tesla is on Capitol Hill demonstrating Optimus along with some other robotics companies. There's apparently a robotics symposium. So again, just a prelude, you're going.
Mike
To hear a ton about humanoid robots.
Paul Raetzer
I would just put it in the category of pay attention.
Mike
Probably not as far along as you may be made to believe, sort of.
Paul Raetzer
In a way how AI agents are today. All right, and then the final element.
Mike
Of the timeline is AGI emergence.
Paul Raetzer
And I have that as 2027 to 2030. I moved it up a year. I had this as 2028 last year. So when AGI emerges, we've spent a lot of time talking about what AGI is and isn't, but the way I think about it is new science becomes possible. It's no longer just connecting dots from.
Mike
Existing human knowledge and kind of making predictions about words.
Paul Raetzer
It's actually discovering new things. And so, like that stuff that isn't.
Mike
In the training data or wasn't learned.
Paul Raetzer
In the training data. And so it starts to be able to develop its own ideas and hypotheses and drugs and solutions to math problems and things like that.
Mike
So it really starts to make an.
Paul Raetzer
Impact in chemistry and biology and mathematics and in business as well. And so once this starts to happen.
Mike
Now, you start to get in a.
Paul Raetzer
Complete reset of what a business actually is. You. You will, I would guess, sometime in the Next couple years we will hear.
Mike
About the first one to ten person.
Paul Raetzer
Billion dollar company that might happen this year. Honestly, you'll hear about this idea of.
Mike
AI agent clusters or hives that function.
Paul Raetzer
As largely autonomous enterprises. When this happens, we have to truly start rethinking how we measure economic health and growth. And I'm a massive believer that economists.
Mike
Should be doing this right now.
Paul Raetzer
I just don't know of any that are. Because I think that if you said to someone, hey, this is like not a 0% chance, maybe not even 10, like maybe there's like 20 to 30% chance we get to this idea by the end of the decade. That feels like something we should be planning for, for that we should be.
Mike
Considering a possibility of.
Paul Raetzer
Now I get that there's some people who are just complete pessimists on this and think there's no probability, they have no standing on that, like there's no argument behind that it's not going to happen. No one knows that for sure. So I'm a believer of there's a possibility, I believe a strong probability. And I, I just think we should be thinking about it. When this happens, we are talking about wide scale workforce disruption, job displacement, it becomes much more likely. And so we have to rethink business.
Mike
We have to rethink education in a really weird way.
Paul Raetzer
We have to start rethinking human purpose, like what a lot of us assign our jobs to, to our purpose.
Mike
Like they're a very important part of what we do.
Paul Raetzer
We have our family, we have our friends, we have our community, we have our faith. Like we have all these things that.
Mike
Define who we are, but the job.
Paul Raetzer
Is part of that. It's like, gives us fulfillment, makes us feel worth, worthwhile, like we contribute to society. And if all of a sudden that's not part of the equation or as significant as it used to be, that's a, that's a major problem. So when I think about AGI, what I know to be true is the.
Mike
Models are getting smarter fast.
Paul Raetzer
And I believe as a result we should be doing more to prepare for what comes next. Because if this AI timeline is even, directionally true, even if it's just off by a couple years, if it's directionally true, we are not ready. So when I, when I was kind of preparing this, I went back to the original. I was like, well, what changed? And that's, I want to highlight for you a few quick things here of.
Mike
What changed from last March.
Paul Raetzer
So one, lots of leading AI researchers switched labs and started Their own AI companies. So we see this all the time. Mike and I talk about this on the podcast, sometimes half jokingly, but these researchers are jumping all the time and it's highly competitive and you have researchers that'll leave their labs, they'll go start their own companies. Like Noam Shazier comes to mind. Google reacquired him. Or Aqua hired his company character AI for like two and a half billion dollars. Last year you had Mustafa Salomon who left DeepMind and then left Google and, or left Google and left DeepMind or vice versa. Goes and starts inflection, he gets Aqua hired by Microsoft, come in and run, you know, AI. There you have Noam Brown who was.
Mike
At Meta, who's a major player in.
Paul Raetzer
The development of reasoning models at OpenAI. Like they just jump in all the time.
Mike
Ilya leaves and starts his own safe super intelligence.
Paul Raetzer
So yeah, so that, that's a, that's a major component of it. It shifts the landscape all the time. The other major factor that happened in fall and you know, really into January of this year was new administration in the United States. We have a new president and they.
Mike
Have a very different view of this stuff.
Paul Raetzer
Um, energy investments are going to skyrocket, investments in infrastructure to build out, you know, these data centers. And what's going to be needed, dramatic.
Mike
Reduction in regulations, much more kind of.
Paul Raetzer
Free market in terms of driving innovation, letting these labs do what they're going to do. You're probably going to see increased mergers and acquisitions in the AI space. We're already starting to see it happen. And the main reason is they don't want to lose.
Mike
So they see this as a war.
Paul Raetzer
For AI supremacy with China and others and they intend to win it. And they think it's important that the leading AI labs, that the people who get to AGI first, that it has democratic values. And so that's happening. And in the midst of all that.
Mike
We had the deep seek moment where.
Paul Raetzer
A Chinese lab created something that jumped to the top of the charts in apps in the App Store and sort of changed the direction or at least sped up the direction of American based labs because they did something more efficiently than the US labs had. Also, what's changed in the last 12 months?
Mike
The test time compute, scaling law, the thinking law, as we talked about earlier.
Paul Raetzer
Which led to reasoning and thinking models which we're seeing now coming out from everywhere. We had this major focus on AI agents, even though the marketing of the Autonomy is misleading and confusing. We had computer use debuted, which we talked about the tone and confidence of the AI leaders that AGI is near absolutely picked up starting last summer. And then that leads me to think that the timeline for AGI is moved up. And I do think that there's a. I think I said in my executive AI newsletter on my SmartRx AI executive newsletter that I think right now there's.
Mike
Probably a greater than 50% chance that.
Paul Raetzer
An AI lab claims AGI within one to two years claims that they've achieved it. Now, whether or not they did and whether or not we agree on it, I don't know. But I think it'll happen.
Mike
All right.
Paul Raetzer
So as we kind of start to wrap up, I wanted to cover a.
Mike
Few other key areas. One, what accelerates AI progress?
Paul Raetzer
Two, what slows it down? And then I want to kind of wrap with how you can prepare, what steps you can take. So what accelerates it continued algorithmic breakthroughs like we saw with Deep Sea out of China.
Mike
There. There are ways to make these models smarter without having to buy more Nvidia.
Paul Raetzer
Chips and build bigger data centers. I think there's going to be a big focus on that. And if we can keep having these breakthroughs, we might get to AGI sooner.
Mike
Clean energy abundance.
Paul Raetzer
We invest in wind, solar, nuclear fission. We're seeing that building nuclear power plants, buying nuclear power plants, nuclear power plants coming back online. And so that's going to continue happening. Compute efficiency breakthroughs.
Mike
These smaller models through targeted search retrieval, like finding ways to do things faster.
Paul Raetzer
Like the human brain does. Our, our brains are very, very efficient. Models aren't. And so they're trying to figure out.
Mike
How to give the models the kind.
Paul Raetzer
Of efficiencies we enjoy in our, in our brains.
Mike
Cost of intelligence declines at a rapid rate.
Paul Raetzer
I think I forget the exact number, but I want to say Sam Altman recently said that the cost of compute.
Mike
Drops 10x every 12 months.
Paul Raetzer
So like a model today that costs x, 12 months from now it's going to cost Y. And so it just becomes cheaper and cheaper to, to use these tools as a business person, as a company. Energy breakthroughs. Nuclear fusion is the one that I.
Mike
Pay the closest attention to.
Paul Raetzer
It's actually Sam Altman. I think his largest investment is in a nuclear fusion company. And I believe they actually have a contract with Microsoft already for like 2028. So fusion is one of those things that might not happen for 20 years.
Mike
Might not happen ever.
Paul Raetzer
But there's a lot of progress being made into space. I'm very keenly interested in large scale government funding. I've for Over a year been sort of trumpeting we needed like an Apollo level mission to build AI. I think that's going to happen. We're starting to see some early signs of that. But I do think that the federal government, at least the United States is going to try and nationalize components of this. And, and I don't know that you and I are going to hear about it, but I'm pretty convinced it's going to happen. And I think other governments are going to do the same thing. Greater network and data security against threats.
Mike
So there's a lot of risk related to this stuff. If we find ways to put greater.
Paul Raetzer
Protections in for the data privacy security, then we can actually accelerate progress more.
Mike
But right now there's gonna be a.
Paul Raetzer
Whole bunch of threats that emerge. Another could be new scaling laws.
Mike
So we found test time compute last year.
Paul Raetzer
What is the equivalent of that this year?
Mike
Is there a new scaling law that's.
Paul Raetzer
Gonna emerge that's gonna accelerate things again? Infrastructure investments, upgrade and expand electrical grids, more data centers. Honestly, like one of the biggest bottlenecks is going to. We don't have enough electricians. So moving people into the trades is, would be key because there's going to be enough people to build all these data centers that need to get built and to do all the electrical work that needs to go into them. More compute capacity. So more chips and fabs that build the chips. Plus a diversity in the chip supply chain. There's still a massive reliance on Taiwan for chips and that's quite dangerous given the geopolitical climate between China and Taiwan and America. So that could be a problem. But if we can find ways to get the fabs working in the United States and, and bring some of that onshore, that could accelerate it as well as other allied countries with the United States and then other scientific breakthroughs like quantum computing is certainly an area I pay attention to. I get similar feeling to nuclear fusion. Like it could be five years away.
Mike
Or it could be 50 years away.
Paul Raetzer
Like we just don't really know. There's a lot of really sexy headlines about quantum milestones from Microsoft and Google. I'm not sure that they really mean anything in the near term to commercialization of it. Okay, and then what slows AI progress? A breakdown in the AI compute supply chain. Earthquakes, hurricanes, human forces, cyber sabotage, physical impact on these data centers and things like that in the fabs. So I don't want to spend a lot of time on that one. I forgot to think about it. But that's the reality.
Mike
Catastrophic Events that are blamed on AI.
Paul Raetzer
So you could see something going wrong and the, the, the talking point becomes that it was AI that caused it. Chip scarcity, which we're in. We don't have enough chips to do what they want to do.
Mike
We don't have enough energy.
Paul Raetzer
So energy scarcity is another one.
Mike
Failure of the models to align with human values, intentions, goals and interests. This is a big one.
Paul Raetzer
There's been research recently that shows the models are deceptive by nature, that they.
Mike
Intentionally mislead their human creators and testers when they know they're being tested.
Paul Raetzer
Now why they do that we don't know. But that's a problem. And the smarter they get, the harder.
Mike
It'S going to know if they're purposely deceiving us and if they're actually not.
Paul Raetzer
Going to do what we want them to do. Sounds very sci fi. It is, but it is also reality that we already see this happening with the models we have today and the labs don't know how to stop that yet. At least they haven't publicly said how to stop it. Human misuse that violates laws and values, that's very real. That one's going to happen this year. Another thing that could slow it down.
Mike
Is lack of value created in the enterprises.
Paul Raetzer
We see this every day. A lot of times I think the lack of value is due to a lack of literacy, a lack of understanding. It's not because the technology isn't capable of helping, it's that companies haven't taken the steps they needed to figure this stuff out and, and, and adopt it properly. Landmark IP lawsuits that impact access to training data and the legality of existing models.
Mike
I would have had this higher up.
Paul Raetzer
On the likelihood list last year due.
Mike
To the current administration and my belief.
Paul Raetzer
That they're going to basically throw out this stuff.
Mike
I don't think this is going to.
Paul Raetzer
Be a problem in the United States. Some other countries have already taken steps to do this. Not great news for copyright holders.
Mike
Authors like me whose books were pirated.
Paul Raetzer
And put into the training data get nothing for it. Photographers, artists, writers, anybody, anybody who's created something that these models were trained on.
Mike
And they absolutely were trained on copyrighted material.
Paul Raetzer
There's no debating that. Their argument is they had the rights to it and that if the US stops them from doing it, they will.
Mike
Thwart innovation and we will lose to China.
Paul Raetzer
So if you listen to episode 1 40, we talked about this, I, I, I just. There's gonna be a bunch of lawsuits, it's gonna be a bunch of Legal cases may go to the Supreme Court. I think at the end of the day, the current administration could care less about copyright holders. So that that's new this year. Restrictive laws and regulations, again, far less of a likelihood. Now, we Talked on episode 140 also about there's over 700 AI bills at the state level right now at different stages. I don't know what's going to happen to those. But again, I just don't think that this administration is going to allow laws and regulations to slow down innovation.
Mike
Societal revolt.
Paul Raetzer
This is one I actually would probably put pretty high on my list of things I think could slow it down. I think that there will be pushback.
Mike
Increasingly in society against AI. I think once job loss starts to.
Paul Raetzer
Pile up, I think politics could choose to make it a much stickier talking point. I think perceptions and fears may expand as different things unfold. And I think this is going to become a big problem for tech companies. And I.
Mike
My current perception is they're doing nothing.
Paul Raetzer
To prepare for this and I think they should be. So I think there's a reality that at some point you're going to start to see pushback on these things as they become more powerful. Two other final ones here. Unexpected collapse in the scaling laws. So again at this time last year, the original scaling law of more compute plus more data plus more training time seemed to be humming right along. It did slow down in the fall.
Mike
So we did see a bit of.
Paul Raetzer
An unexpected slowdown, not collapse. But then the test time compute reasoning one just showed up and things just kept humming. So it's possible that we could have a situation where we just. The scaling law stopped working and they just stopped getting more powerful. I don't see that, and I don't think any of the labs see that happening, but it's possible. And then the, the final one here.
Mike
Is the voluntary or involuntary halt on.
Paul Raetzer
Model advancements due to catastrophic risks. I think that that one is a possibility. Um, Anthropic talks a lot about this. In particular that when they run it, do a new training run and they.
Mike
Find out that they can't control the.
Paul Raetzer
Thing, that it's too deceptive, that it's.
Mike
Completely misaligned and hiding the misalignment that.
Paul Raetzer
It'S capable of doing things it shouldn't be capable of doing, they have to shut it down. Now, I think Anthropic would. There are other AI labs that I don't think would. And so I think this is going to be really fascinating how this plays out. I, I do think that within the next two years, someone is going to.
Mike
Do a training run that they decide.
Paul Raetzer
Is too dangerous to release. And we're going to be a very interesting point in society when that happens, if we hear about it. And I think we're going to be at a very interesting point from a government perspective also of whether or not at some point they have to nationalize this technology to control it. So that I would put that one pretty high up on my list as well of, like, we should probably be considering this.
Mike
And I know the major labs are, are they all have ways of measuring.
Paul Raetzer
This, but I also know that they're not really a hundred percent sure how these models actually work. And so I, I don't have high confidence that they're going to know it.
Mike
When it happens or that they're going.
Paul Raetzer
To be able to do anything about it. All right, so as we wind down here, what do you do about all this? I get this is a lot. Maybe you've paused this and gone away for a day and come back to it. Maybe you're listening to it for the third time to try and process it all. Um, I will probably actually go back and revisit this to process it all myself. I, I put all this together and just showed up and started talking. I, I haven't actually internalized a lot of this myself yet, so I get that this is a lot. Um, so I want to give you a few things you can do. So the first is, as you always hear me say, ad literacy is far and away the most important thing any of us can do for our kids.
Mike
For our coworkers, for ourselves, for our.
Paul Raetzer
Our businesses, for our communities. People have to understand this stuff. And so the tech companies are going to keep accelerating. They're going to keep building smarter tech and more generally capable tech. They're going to pursue AGI and beyond. We have to figure out what that means to us, our companies, our careers. So I announced in late January the AI Literacy Project. You can go to Literacy LiteracyProject AI to learn more about that.
Mike
That is designed to help prepare individuals.
Paul Raetzer
And organizations for the future of work by making education accessible and personalized. So we offer a ton of, like, free resources intentionally. I have a very focused effort in.
Mike
Our company to try and provide as much free education as we can.
Paul Raetzer
That's why I do a free intro class every month on Zoom. I do a free scaling AI class every month. Newsletters, blueprints, all of it's free.
Mike
And so you can go and learn.
Paul Raetzer
More about that and hopefully take Advantage of some of that. The the AI Literacy Project's anchored in the belief that AI literacy is not just a competitive advantage, but a competitive.
Mike
Career and business imperative.
Paul Raetzer
My belief is that as weird as all this is, we get a choice. We can either do nothing, maintain status.
Mike
Quo, or we can accelerate our AI literacy and capabilities.
Paul Raetzer
Our focus is in on trying to.
Mike
Empower knowledge workers across every industry to.
Paul Raetzer
Thrive through the disruption and the uncertainty. So for yourself, focus on what can.
Mike
You do to drive literacy and for your teams.
Paul Raetzer
The other step in an organization is build AI councils. You don't have an AI council yet, raise your hand and start one. Focus on near term piloting, scaling generative AI policies, responsible AI principles. Think about not only adoption, but adaptability. How are we going to evolve as this stuff keeps getting smarter, as the timeline keeps accelerating? Because I think it will. And how do we think about change management? It's not just about getting a bunch of tools in and thinking about it as a technology thing. This is a people thing, it's a process thing thing. It's a business structure thing like that.
Mike
Requires change management and planning.
Paul Raetzer
The third thing is impact assessments. AI impact assessments. You can do this on yourself. So we have jobs GPT. You just go to SmartRx AI, click on tools. There's a jobs GPT one right there that will help you assess your current role. It'll walk you through an exposure key.
Mike
Of your role by title. Just put a job title in and.
Paul Raetzer
It'Ll assess how exposed that job is to AI as the models get smarter. So I developed an exposure key that considers these improvements in the models. And the other thing I just introduced about a month ago is you can now actually pro like project out future.
Mike
Roles for different professions or college majors.
Paul Raetzer
And so you can just put click on you know the look at the future jobs and then put your job title in there or what you do, what your profession is and it'll actually try and help you envision what an AI powered version of that job could be. Or like reimagine completely new titles. I would also at a business level think about building AI roadmaps that actually.
Mike
Guide the projects and use cases.
Paul Raetzer
It's going to be you going to need to adapt it all the time, but you're looking at the adoption of.
Mike
The technology, the integration of IT into.
Paul Raetzer
Processes, workflows, campaigns, thinking about your talent, your tech, your strategies.
Mike
So that's really important.
Paul Raetzer
And you it's an ongoing thing. You can do all these other things while you're doing the roadmap and then the big thing I just talked about on episode 140 and I featured in the exec AI newsletter this past week is this idea of an AGI Horizons team. So I think the most AI forward companies, the most innovative companies, the most prepared companies are going to put together.
Mike
Small groups of teams.
Paul Raetzer
It could be some internal experts as well as some outside advisors. It could be a bit more objective and they're going to start saying, okay.
Mike
If this timeline is directionally true, what.
Paul Raetzer
Does that mean to us? What does it look like to our.
Mike
Business, to our industry?
Paul Raetzer
What does it mean maybe more broadly.
Mike
To society and how our recruiting works.
Paul Raetzer
And how we develop our people. I really think we're at the point, and I can't stress this enough that we need to be contingency planning, we.
Mike
Need to be building scenarios of possible futures and we need to start thinking about these things because this isn't 10 years off.
Paul Raetzer
If these people are right, it's like one to two years before this starts to happen. Now it's not going to flip a switch.
Mike
And AGI, everything just changed.
Paul Raetzer
Think about your own business and how long it's taken you to integrate Gen AI. Like we're two and a half years into it and some companies haven't figured out what to do with Chat GPT yet. So it's not like AGI shows up and every industry's just disrupted and we all go home. It's like, no, it'll take a while once it gets here, but you don't want to be waiting around.
Mike
Like you want to be out ahead of this.
Paul Raetzer
So I would just really encourage you to pursue this idea of an AGI Horizons team that monitors advancements toward AGI and then assesses potential threats and opportunities. And then the final thing I'll say is like to explore this story of AI together. Like, I don't know where this goes.
Mike
I'm just doing my best to try.
Paul Raetzer
And like lay out scenarios based on spending a whole lot of time, probably too much time and mental capacity thinking about this. And so my hope is to like put this out and then like see.
Mike
Where the conversation takes all of us.
Paul Raetzer
What I would encourage people to do is, you know, I often say is like, don't try and do what I'm doing. Like it most people who have full.
Mike
Time jobs aren't going to be able.
Paul Raetzer
To keep up with every piece of this.
Mike
Hopefully that's what we help you do.
Paul Raetzer
Every Tuesday is like bring the things that matter to you. What I would tell you to do is Pick a thread, like find the.
Mike
Parts of this that you find incredibly.
Paul Raetzer
Intriguing, that you, makes you very curious or passionate. It could be related to your domain, expertise, your profession. Pick a topic or two and really.
Mike
Go in on those.
Paul Raetzer
So maybe like, maybe it is energy or government regulation, or maybe IT's application to SEO. Like whatever it is, just like pick those threads and become an expert in that area. Like be the one that really like pushes that forward. And then the other thing I'll say, and I mentioned this on 1:40, is we recently teamed up with Google Cloud.
Mike
To form a marketing AI industry council.
Paul Raetzer
So we're trying to look at what's.
Mike
Around the corner for the marketing industry. If we assume some level of truth.
Paul Raetzer
To this direction of these models continually getting smarter, more generally capable, then what does that mean to market, you know.
Mike
How'S it going to impact jobs and agencies and brands and consumer behavior?
Paul Raetzer
And so I would encourage people to do something similar in their own industry. You know, get together with some other people, get together with an association and form an AI council that tries to look out ahead the next few years.
Mike
And say, well, how is our industry going to change?
Paul Raetzer
You can do this within your own company, but like try and do this at a community level, at an industry level, because I think those are the kinds of conversations that need to happen. So like for us, we pulled together a couple dozen AI experts and marketing leaders and let's just talk, let's like think this through. So I, I think of it as more of like a think tank than anything, but I think things like that can make a difference. So hopefully those whatever, five or six things give you some level of peace.
Mike
Of mind or at least some direction.
Paul Raetzer
To go and, and help figure this out. And then I'll just kind of wrap here with what's coming next.
Mike
So my plan for this series is.
Paul Raetzer
Not for me to sit here and talk to you all for an hour and 40 minutes, you know, every other week. My plan is to actually interview experts, relate in related domains and topics. So a few of the key areas I'm looking at is like AI model advancement. So talking to the AI labs people, cybersecurity. As much as I don't want to think about cyber security, it's, it's critical we all think about it and talk about it. The economy, education, energy and infrastructure, future of business, future of education, future of work, specifically jobs, government, laws and regulations.
Mike
Scientific breakthroughs, societal impact, and then the supply chain.
Paul Raetzer
Like those are kind of the main areas I'm focused on because I think.
Mike
There'S something to be learned in all.
Paul Raetzer
Of them to figure out the bigger picture. There may be other areas as well, but I'll hopefully in the next couple weeks start announcing some of the upcoming sessions. I'm in the, in the process of scheduling interviews now and pursuing experts in these areas. And then I'll bring those to you through, you know, regular series over the next year. Beyond probably, we'll start having our, we'll continue to have our AI weekly, every Tuesday with Mike and I, and then I'll start doing these regularly. We'll have just expert perspectives, so some closing thoughts and then I will sign off here. I guess I did get close to two hours, huh? So we'll, we'll sign off here and then, you know, kind of be back next week for episode 142 with Mike again. So the main thing to think about here is the definitions of AGI are going to vary. It's not clear how we will know when it's achieved, but my main takeaway is it doesn't even matter if we get there. If we never agree on AGI arriving. We know the models are going to keep getting smarter and we know they're going to keep getting more generally capable. We can look at the scaling laws and we can see that. And that alone, whether we get to AGI or not in the next couple years is going to completely transform business, the economy and society.
Mike
So even just preparing for the possibility of AGI will put you in a.
Paul Raetzer
Better position to deal with smarter models, whether we call them AGI or not.
Mike
But as we progress toward this idea.
Paul Raetzer
Of AGI, there are some inevitable impacts.
Mike
That we should be considering and preparing for in business.
Paul Raetzer
So every business, regardless of the industry, think about shifts in your consumer customer behaviors. Think about the fact that you're probably going to need fewer people doing the same jobs. What do you do as a result of that?
Mike
Do you find new roles, reskill, upskill.
Paul Raetzer
Or are you going to choose to actually reduce the workforce?
Mike
Hopefully it's the prior that you choose. Automation of tasks across industries.
Paul Raetzer
It's going to continue to happen. There will be a premium on proprietary.
Mike
Data and distribution as differentiators, especially for these model companies.
Paul Raetzer
We have increases in capacity to produce.
Mike
More goods and services.
Paul Raetzer
There'll be an increase in competition and.
Mike
A potential for your business to be.
Paul Raetzer
Disrupted or for you to disrupt other businesses. There'll be increases in productivity and efficiency.
Mike
Increases in creativity and innovation.
Paul Raetzer
If we choose to use them in.
Mike
That way to augment what we're capable.
Paul Raetzer
Of, increases in profitability, job creation.
Mike
Certainly I think there's a whole possibility.
Paul Raetzer
Of like a renaissance in entrepreneurship. I think we could create millions of.
Mike
Small businesses that don't need a ton of people that are very innovative and can just build AI native from the ground up.
Paul Raetzer
And that could be the thing that offsets the job displacement. Because I do think job displacement happens too. And I think it's going to happen at different levels across different industries. But I, I think we should just.
Mike
Start to accept that that is going.
Paul Raetzer
To happen, but we can do something about it still. And then as the models get smarter, we have to be proactive in pursuing answers to critical questions like how will these next generation models affect you, your team, your company? How will the model advancements impact creative work and creativity?
Mike
How will consumer information consumption and buying behavior change?
Paul Raetzer
How will consumer changes impact things like.
Mike
Search and advertising and publishing?
Paul Raetzer
How are we going to ensure responsible.
Mike
Use of AI in our organizations?
Paul Raetzer
How are these copyright and IP issues going to affect our businesses and our use of generative AI tools? How's it going to impact strategies and budgets, Technology stacks, the environment? A lot of people ask me that question about the impact of these models.
Mike
And these training runs on the environment.
Paul Raetzer
And the use of AI as it proliferates. How's it going to impact educational systems? How's it going to impact organizations like yours, like mine? How are jobs going to change? And then the thing I'm very interested to explore and sometimes I think I have a grasp on this and other times I don't. What remains uniquely human? So these are the some of the questions that I plan to explore as part of the series. We have an opportunity and I think.
Mike
An imperative to reimagine business models, reinvent.
Paul Raetzer
Career paths and redefine what's possible. And I think you have an opportunity to lead. I believe deeply that we should be.
Mike
Optimistic about the future, that it can be abundant and incredible if we choose.
Paul Raetzer
To be responsible and human centered in our use of AI.
Mike
The goal of AI should be to.
Paul Raetzer
Unlock human potential, not replace it. But we have to be proactive and intentional about pursuing that outcome.
Mike
And I think we still have time.
Paul Raetzer
I don't think we're at the end of the line here where we can't have that outcome. So I believe we get a choice here. We can choose to make the future more intelligent and more human. And I hope this episode and the rest of this series can play a.
Mike
Role in preparing and inspiring you to take action.
Paul Raetzer
So thank you for being a part of this first episode and this journey and thank you for letting us be a part of yours. Thanks for joining us on the road to AGI and beyond as we navigate.
Mike
The breakthroughs, challenges and possibilities of artificial general intelligence.
Paul Raetzer
The conversation is just beginning. The future of AI is unfolding faster than we can imagine. We hope this series helps you stay informed and prepared. For more insights, resources and discussions, visit SmarterX AI and subscribe to the Artificial Intelligence Show. Until next time, stay curious and explore.
Driving Change Podcast Host
AI Welcome Change Agents to your go to place for stories that ignite your spirit, fuel your purpose and connect us all. We believe in the incredible power of the human spirit, its boundless resilience, and the inspiration it brings to our lives. On the Driving Change podcast, we'll journey together through the extraordinary, yet very relatable experiences of some of the most amazing people on earth. Our mission that through these stories we might just spark change within you and awaken a newfound motivation to harness your unique gifts to make a real difference in the world. So get ready to be inspired and join us on this incredible adventure. You can find the Driving Change podcast on Apple Podcasts, Spotify, iHeartRadio or wherever you love listening to your favorite podcasts.
Jim Roos
How much do you understand the future of finance? I'm Jim roos, a top 10 banking influencer and host of the podcast Banking Transformed where we dive deeply into the rapidly evolving world of banking and financial technology. Join me as I interview industry experts, thought leaders and innovators as they unravel the latest banking trends, disruptions and game changing technologies reshaping the world of finance Redefine your understanding the banking ecosystem. Subscribe now to Banking Transform. Available wherever you get your podcast and now available on YouTube.
Podcast Summary: The Artificial Intelligence Show - Episode #141: Road to AGI (and Beyond) #1 — The AI Timeline is Accelerating
Release Date: March 27, 2025
In the inaugural episode of the special miniseries "Road to AGI and Beyond," hosts Paul Roetzer and Mike Kaput delve into the rapidly accelerating timeline toward achieving Artificial General Intelligence (AGI). They explore the current advancements, key players, potential impacts, and the responsibilities that come with pursuing AGI.
Paul reflects on his early engagement with AI, sparked by IBM Watson's 2011 victory on Jeopardy. Over the years, he observed AI's slow progress punctuated by "AI winters," until breakthroughs in deep learning around 2011-2012 reignited optimism. The landmark emergence of ChatGPT in November 2022 marked a significant turning point, making generative AI accessible to the public.
A central theme of the episode is the elusive definition of AGI. Various organizations and experts propose differing definitions:
Paul summarizes his own definition: "AGI is a system that is generally capable of outperforming the average human at most cognitive tasks" (19:09).
Referencing a May 2024 paper by Google DeepMind titled "Levels of AGI," Paul and Mike discuss a framework categorizing AGI into five levels based on performance, generality, and autonomy:
The hosts highlight statements and actions from leading AI labs and personalities:
Paul posits that the timeline for AGI has significantly shortened based on recent developments and declarations from AI leaders. Key indicators include:
The discussion underscores several risks that could impede AI progress:
Paul emphasizes proactive measures for individuals and organizations to prepare for the impending AI transformations:
Paul and Mike conclude by urging businesses to embrace AI advancements thoughtfully, ensuring AI complements rather than replaces human potential. They advocate for responsible, human-centered AI deployment to unlock unprecedented opportunities while mitigating risks. The hosts express optimism about AI’s potential to drive innovation, productivity, and economic growth, provided stakeholders actively engage in preparing for its impacts.
Notable Quotes:
Key Takeaways:
For more insights, resources, and discussions, visit SmartRx AI and subscribe to The Artificial Intelligence Show.