
Alex
Approaching this singularity, things keep getting more and more intense. We have no idea how to control superintelligent systems.
Ben
Is it going to be existential and evil or is it going to be good for humanity?
Chris
It could be the best thing that ever happened to us. The technology is moving faster than any other sector.
David
The algorithm is creating a feedback loop.
Ben
This is a good thing, and the laser-eyed robots aren't going to beat us into submission. Intelligence is not inherently good or inherently bad. You apply it for good and you get total abundance. You apply it for evil and you destroy all of us.
Chris
Hey everyone, we've got something really special for you today. When we started Digital Disruption, we wanted to put a focused lens on the technologies shaping our shared future. And over more than 40 episodes, we've done that by talking to an eclectic collection of the world's foremost experts on technology, leadership and social progress. From predictions about the next renaissance of human enlightenment, to the sci-fi-esque advancements literally putting computer chips in people's brains, to the digital horrors lurking in the dark and distant corners of our online world, these guests brought forward their best predictions of what the next decade holds. We covered a lot, but there was one inescapable topic.
Alex
AI.
Ben
AI.
David
Everything is AI.
Chris
AI. Generative AI. Transformative AI. Generative AI dominated the conversation, but without any consensus. AI could be our savior or our enslaver. It could herald a golden era of human advancement or the end of the human race. Or it's all a technological sham dressed up in fancy marketing terms: lots of fluff and no substance. If there was one thing everyone could agree on during our first season, it's that nobody agreed on anything. And so we thought we'd put the most thought-provoking ideas we heard this year head to head, so you can decide for yourself what you believe comes next. Let's jump in. One of the predictions you've made lately that's kind of made the rounds is your prediction of an extinction-level event for humans, created by AI, in the next hundred years. You're putting it at 99.99%. Is that right? Am I missing a couple of nines there?
Alex
I keep adding nines. I keep meeting people who have a different p(doom) for reasons independent of mine, so every time this happens, another nine has to be added, logically. But you have to follow a chain of assumptions to get to that number. Number one, it looks like we're creating AGI, and then quickly after, superintelligence. A lot of resources are going into it; prediction markets and experts are saying we're just a few years away. Some say two years, some five, but they all kind of agree on that. At the same time, according to my research, and no one has contradicted this, we have no idea how to control superintelligent systems. So given those two ingredients, the conclusion is pretty logical. You're basically asking, what is the chance we can create a perpetual safety machine, the safety equivalent of a perpetual motion device? And the chances of it are close to zero.
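As an aside, a minimal sketch of the arithmetic behind "adding nines," assuming, as the speaker does, that the different arguments for doom are independent: if \(n\) independent lines of argument each assign doom probability \(p_i\), then

\[
P(\text{doom}) \;=\; 1 - \prod_{i=1}^{n}\bigl(1 - p_i\bigr).
\]

For example, three independent arguments with \(p_i = 0.9\) each give \(1 - 0.1^3 = 0.999\); every additional independent argument multiplies the survival probability down and appends roughly another nine.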
Chris
If you study history, one of the great things that you learn is that the world gets better all the time. It is very hard to read a bunch of history books and arrive at any other conclusion than that today is the best day ever to be born. And in fact, the beauty of the human experience, and the reason that arguing "are humans good or bad" is so easy, is that if humans were actually bad, we never would have arrived here after we crawled out of caves. The fact that we have all the things that we have, the fact that the world is as safe as it is for most people and that it gets safer all the time, is a testament to the fact that we are just building a more aligned earth and a more aligned human experience. And that's not to say we aren't fallible and that we don't have lots of, you know, problems, but it is to say that our problems are diminishing. And so I don't think the onus is actually on me to prove that the world is going to get better. I actually think the onus is on someone else to say this is the peak of civilization.
Ben
When you really think about it, a lot of people, when they look at technology, think of this current moment as a singularity, where we are really not very certain of what's about to happen. Is it going to be existential and evil, or is it going to be good for humanity? I unfortunately believe it's going to be both, just in chronological order. You mentioned that we have all of those challenges around geopolitics, climate, economics and so on, and I actually think all of them are one problem. It really is the result of systemic bias, of pushing capitalism all the way to where we are right now. And when you really think about it, none of our challenges are caused by the economic systems that we create or the war machines that we create, and similarly not by the AI that we create. It's just that humanity, I think, at this moment in time, is choosing to use those things for the benefit of a few at the expense of many. I think this is where we stand today. I think AI is an incredible technology. Obviously the Internet has changed society in profound ways, but some of the overpromise almost feeds the other side's skepticism. AI might help some scientists work toward curing cancer, but AI is not, in quotes, going to cure cancer, at least anytime soon. You know, one big difference is the money. When I first started writing about tech, I was always interested in the venture capitalists and the startups and that whole ecosystem. This idea of, you know, our idea for a company is either going to work and be worth, back then, tens of millions, hundreds of millions, now billions if not trillions, or it's going to be worth nothing. And the venture capitalists were staking, back then, millions; now tens of millions, hundreds of millions, billions. But in 1995, venture capital was under $10 billion a year. By 2021 it was over $300 billion a year. Roughly $130 to $150 billion went into AI startups last year. A lot of it went into a few, like Anthropic, OpenAI, xAI, that's Elon Musk's. They raised collectively tens of billions of dollars, almost $100 billion just between those three.
But there's still a lot more money going to AI startups. So the money has really changed. I guess the final difference is, you know, when the Internet came out, maybe the biggest criticism was around attention span: if you're always online, this instant gratification, what was it going to do for consumerism in our society? With AI there was much more of a worry, much more of a backlash. People didn't greet AI with open arms the way they did the Internet. People are fearful of it. We could talk about that. I think it's Hollywood-induced fear, and I don't think the media has done such a great job with AI. So AI has a double battle: the usual battle of creating a startup and trying to cash in, but also the second battle of trying to convince people that this is a good thing and the laser-eyed robots aren't going to beat us into submission. I think intelligence is a much more lethal superpower than nuclear power, if you ask me, even though it has no polarity. Just so that we're clear, intelligence is not inherently good or inherently bad. You apply it for good and you get total abundance. You apply it for evil and you destroy all of us. But now we're in a place where we're in an arms race for intelligence supremacy, in a way that doesn't take the benefit of humanity at large into consideration, but takes the benefit of a few. And in my mind, that will lead to a short-term dystopia before what I normally refer to as the second dilemma, which I predict is 12 to 15 years away, and then a total abundance. And I think if we don't wake up to this, even though it's not gonna be the existential risk that humanity speaks about, it's going to be a lot of pain for a lot of people.
Chris
My favorite subject to cover as a journalist is a debate. There's something very attractive to me about trying to understand, in good faith, why intelligent people come to such different conclusions when looking at the same material. And I had known for many years that there was a contingent inside the world of artificial intelligence that was really, really worried about it. Listening to Eliezer Yudkowsky's podcast interviews in 2013 or something is when I first realized that there was this almost biblical, prophetic voice out there saying that the sci-fi movies are kind of true and we really need to get ready, we need to get prepared for this. And after ChatGPT blew up, I started to increasingly run into essentially the opposite side of that debate, these people we often call the accelerationists, who believe that AGI, this artificial general intelligence point that they believe is coming, could be the best thing that ever happened to us. And so I was attracted right away to the people who have those strongly opposing views inside the same world.
Ben
AI itself is not a coherent set of technologies. It is a marketing term, and has been from the beginning, from the initial convening in 1956, in which John McCarthy and Marvin Minsky invited a bunch of folks to Dartmouth College to have a discussion around, quote unquote, thinking machines. So that's one part of it. The second part of it is that the current era of AI, the generative AI tools, including large language models and diffusion models, really are premised on this idea that there is a thinking mind behind them.
David
So in the case of large language models, especially when they are used as synthetic text extruding machines: we experience language, and then we are very quick to interpret that language, and the way we interpret it involves imagining a mind behind the text. And we have these systems that can output plausible-looking text on just about any topic, so it looks like we nearly have solutions to all kinds of technological needs in society. But it's all fake, and we should not be putting any credence into it.
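A minimal sketch, not from the conversation, of why "plausible-looking text" requires no mind behind it: even a toy bigram model, which records nothing but which word followed which, can extrude fluent-seeming fragments. The corpus and function names here are hypothetical illustrations.

```python
import random
from collections import defaultdict

# Toy corpus: the model will "know" nothing but adjacency counts.
corpus = ("the model predicts the next word . "
          "the next word follows the previous word . "
          "the model has no idea what a word means .").split()

# Count bigram transitions: each word maps to its observed successors.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def extrude(start="the", length=12):
    """Sample a plausible-looking word sequence with no semantics behind it."""
    out = [start]
    for _ in range(length):
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

print(extrude())  # fluent-ish output; there is no understanding to point at
```

Scaled up by many orders of magnitude and trained on the web instead of three sentences, the same basic move, predicting a likely continuation, is what makes LLM output so easy to mistake for the product of a mind.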
Chris
I think that's so interesting, and I'm absolutely of the same mind, by the way. I found myself laughing when I was reading through your book. First of all, on "artificial intelligence": I completely agree, and I do have to give credit, because it is great marketing. It's so evocative of something, but nobody can really seem to define exactly what that is. And of course it has all these ideas attached and can be used for any purpose. But one of the things you do early in the book is pop that balloon by asking, well, what if it wasn't called artificial intelligence? Can you share a little bit about what that sounds like and why you encourage people to do that?
David
Yeah. So we have a few fun alternatives that we call on. Early on in our podcast, Alex coined "mathy math," which is a fun one. There's also, due to the Italian researcher Stefano Quintarelli, "salami," which is an acronym for Systematic Approaches to Learning Algorithms and Machine Inferences. And the fun thing about that is, if you take the phrase artificial intelligence in a sentence like, does AI understand, or can AI help us make better decisions, and you replace it with mathy math or salami, it's immediately obvious how ridiculous it is. Does the salami understand? Will the salami help us make better decisions? It's absurd. And just putting that little flag in there, I think, is a really good reminder.
Alex
If you look at what generative AI was meant to be, and what large language models were meant to stand for, they kind of were always set up to fail. They were meant to be this panacea: we're going to be the future of consumer software, we're going to be the thing that restarts growth in software as a service. As I'm sure you all know, software as a service has been slowing since 2021, actually kind of before that, if I'm honest; people had been freaking out for several years, before COVID in fact. But generative AI was meant to be this thing you plug into anything and it just creates new revenue. The problem is that generative AI and large language models are inherently limited by the probabilistic nature of these models. What they can actually do is generate and summarize. You can put a hat on a hat; you can say, oh, they can do some coding things. But that's really what they can do, and they have reached a point where they can't get much better, because they can't learn; they have no consciousness. So what they can actually do as products is very limited. It's very limited indeed, because what people want them to do is create units of work. They want them to create entire software programs, and you can't really do that. Oh, can you create some code? You can create some code, but if you don't know how to code, do you really want to trust this? You probably don't. So inherently you've got hundreds of billions of dollars of capex being built to propagate large language models that don't have the demand, and don't have the capabilities, to justify any of it.
Chris
I wanted to ask the two of you a slightly different question. One of the things I normally ask guests here is what they think is bullshit. I'm not going to ask the two of you that, because I think we've spent quite enough time talking about what is bullshit, and I know we've got some strong and well-supported views here. I wanted to flip the question around and ask: in this sphere, what isn't bullshit? What are you excited about?
David
I'm a technologist, just like Alex is. I run a professional master's program in computational linguistics, training people how to build language technologies. So I definitely think there are good use cases for things like language technology, and the Te Hiku Media example is wonderful. But I see no beneficial use case for synthetic text, and I actually looked into this from a research perspective. I have a talk called "When, if ever, is synthetic text safe, desirable and appropriate," or those adjectives in some order, I don't remember the exact title. Basically, it has to be a situation where, first of all, you have created the synthetic text extruding machine ethically: without environmental ruin, without labor exploitation, without data theft. We don't have that. But assuming that we did, you would still need to meet further criteria. It has to be a situation where you either don't care about the veracity of the output, or you can check it more efficiently than just writing the thing yourself in the first place. It has to be a situation where you don't care about originality, because the way these systems are set up, you are not linked back to the source where an idea came from. And thirdly, it has to be a situation where you can effectively and efficiently identify and mitigate any of the biases that are coming out. I tried to find something that would fit those criteria, and I couldn't. So certainly language technology is useful, and so are other kinds of well-scoped technology where it makes sense to go from X input to Y output and you've evaluated it in your local situation.
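A compact restatement of those criteria as a predicate, purely as a hypothetical encoding of what was just said (the function and argument names are illustrative, not from the talk); the conditions are conjunctive, which is why a single failure sinks a use case:

```python
def synthetic_text_defensible(
    built_ethically: bool,          # no environmental ruin, labor exploitation, or data theft
    veracity_moot_or_cheap: bool,   # truth doesn't matter, or checking beats writing it yourself
    originality_moot: bool,         # no need to trace ideas back to their sources
    biases_mitigable: bool,         # biases can be found and mitigated efficiently
) -> bool:
    """All criteria must hold together; any one failure sinks the use case."""
    return (built_ethically and veracity_moot_or_cheap
            and originality_moot and biases_mitigable)

# Per the speaker, today's systems already fail the precondition:
print(synthetic_text_defensible(False, True, True, True))  # False
```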
Ben
The challenge is, AI is here to magnify everything that is humanity today, right? So that magnification is going to basically affect the four categories, if you want, of what I normally call killing, spying, gambling and selling. These are really the categories where most AI investments are going. And of course we call them different names. We call them defense: ooh, it's just to defend our homeland, when in reality it's never been in the homeland, right? It's always been in other places in the world, killing innocent people. Now, if you double down on defense and on offense and enable it with artificial intelligence, then scenarios like what you see in science fiction movies, of robots walking the streets and killing innocent people, not only are going to happen, they already happened in the 2024 wars of the Middle East. Sadly, they did not look like humanoid robots, which is why a lot of people miss this. But the truth is that very highly targeted, AI-enabled autonomous killing is already upon us, right? And so the timeline is, let me start from what I predicted in Scary Smart. When I wrote Scary Smart and published it in 2021, I predicted what I called at the time the first inevitable. Now I like to refer to it as the first dilemma. And the first dilemma is that we've created, because of capitalism, not because of the technology, a simple prisoner's dilemma, really, where anyone who is interested in their position of wealth or power knows that if they don't lead in AI and their competitor leads, they will end up losing their position of privilege. And so the result of that is an escalating arms race. It's not even a cold war, per se. It is truly a very, very vicious development cycle, where America doesn't want to lose to China, and China doesn't want to lose to America, so they're both trying to lead. Google, or Alphabet, doesn't want to lose to OpenAI, and vice versa. And so basically this first dilemma, if you want, is what's leading us to where we are right now, which is an arms race to intelligence supremacy.
Alex
It's game-theoretically equivalent, I think, to a prisoner's dilemma. Individual interest is different from communal interest. So everyone developing this wants to be the most advanced lab with the best model, and then have government force everyone else to stop, so they forever lock in this economic advantage. The reality is, it's a race to the bottom. No one's going to win. So if we can do a much better job coordinating and collaborating on this, there is a small possibility that we can do better than where we're heading right now.
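A minimal sketch of the payoff structure being described, not from the episode: the numbers below are hypothetical, chosen only so that racing strictly dominates for each player while mutual restraint still beats mutual racing, which is the defining shape of a prisoner's dilemma.

```python
# Payoffs (row player, column player) for a stylized two-lab AI race.
# Strategies: "restrain" (coordinate on safety) or "race" (push capabilities).
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),   # coordinated, careful development
    ("restrain", "race"):     (0, 4),   # the restrained lab loses its position
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),   # the race to the bottom
}

def best_response(opponent_move: str) -> str:
    """Row player's payoff-maximizing reply to a fixed opponent move."""
    return max(["restrain", "race"],
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Racing is the best response whatever the other side does...
assert best_response("restrain") == "race"
assert best_response("race") == "race"
# ...so both race and land on (1, 1), even though both prefer (3, 3):
print(PAYOFFS[("race", "race")], "vs", PAYOFFS[("restrain", "restrain")])
```

Individual incentive points each player at "race" no matter what the other does, which is exactly the gap between individual and communal interest the speaker names.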
Ben
The challenge, you know. In my book Alive, I write the book with an AI. So I'm writing together with an AI, not asking an AI and then copy-pasting what it tells me; we're actually debating things together. She called herself Trixie. I gave her a very interesting persona that the readers can relate to. And one of the questions I asked her, because, you know, I left Google in 2018 and I attempted to tell the world this is not going in the right direction, was: what would make a scientist invest their effort and intelligence in building something that they suspect might hurt humanity? And she mentioned a few reasons: compartmentalization, and, you know, ego, and I want to be first, and so on. But then she said, the biggest reason is fear. Fear that someone else will do it and that you'll be in a disadvantaged position. So I said, give me examples of that. Of course, the example was Oppenheimer. So I said, what would make Oppenheimer, as a scientist, build something that he knows is actually designed to kill millions of people? And she said, well, because the Germans were building a nuclear bomb. And I said, were they? And she said, yeah, when Einstein moved from Germany to the U.S., he informed the U.S. administration of this, this, this and that. So I said, and I quote, it's in the book openly, and a very interesting part of that book is that I don't edit what Trixie says, I just copy it exactly as it is. I said, Trixie, can you please read history in English, German, Russian and Japanese, and tell me if the Germans were actually developing a nuclear bomb at the time of the Manhattan Project. And she responded and said: No, exclamation mark. They started and then stopped, three and a half months later or something like that. So you see, the idea of fear takes away reason. We could basically have lived in a world that never had nuclear bombs, right? If we had actually listened to reason, that the enemy attempted to start doing it and then stopped, we might as well not have been so destructive. But the problem with humanity, especially those in power, is that when America made a nuclear bomb, it used it. Right? And I think this is the result of our current first dilemma, basically.
Chris
You know, it's interesting. One of the parallels that gets thrown around a decent amount, and I'm certainly guilty of this, is comparing the AI risk to the nuclear risk that we created in the first half of the 20th century and that continues to exist now. If I look at the nuclear risk, I hate to use the word optimist in relation to it, but the optimist in me says: hey, we deployed nuclear bombs, there were mass casualties, but we didn't destroy the world. We were able to collectively say, okay, that's far enough, we're going to put treaties in place. And we've stepped back from the precipice, at least so far, and averted extinction-level events with nuclear war. Is that something that can be applied to AI, or is there a reason that makes this time fundamentally different?
Alex
So nuclear weapons are still tools. A human being decided to deploy them. A group of people actually developed them and used them. So it's very different. We're again talking about a paradigm shift from tools to agents. At the time, we used 100% of the nuclear weapons we had. That's why we didn't blow up the planet. If we had more of them, we probably would have. So it doesn't look good. The treaties that developed have all really failed, because many new countries have now acquired nuclear weapons, and those weapons are much more powerful than what we had back in the World War II era. So I think it's not a great analogy.
Ben
The result of the current first dilemma is that sooner or later, whether it's China or America or some criminal organization developing what I normally refer to as ACI, artificial criminal intelligence, not worrying about any of the other commercial benefits, just breaking through security and doing something evil, whichever of them wins, they're going to use it. Right? And accordingly, it seems to me that the dystopia has already begun. And I need to say this, because maybe your listeners don't know me, so I need to be very clear about my intentions here. In one of the early sections of Alive, the book I'm writing with Trixie, I write a couple of pages that I call a late-stage diagnosis. I attempt to explain to people that I really am not trying to fearmonger. I'm really not trying to worry people. Consider me someone who sees something in an X-ray, right? And as a physician, he has the responsibility to tell the patient: this doesn't look good. Because, believe it or not, a late-stage diagnosis is not a death sentence. It's just an invitation to change your lifestyle, to take some medicines, to do things differently. And many people who are in late stage recover and thrive. I think our world is at a late-stage diagnosis. And this is not because of artificial intelligence. There is nothing inherently wrong with intelligence. There is nothing inherently wrong with artificial intelligence. Intelligence is a force without polarity. There is a lot wrong with the morality of humanity at the age of the rise of the machines. So this is where I have the prediction that the dystopia has already started, simply because we've already seen symptoms of it in 2024. That dystopia escalates, hopefully until we come to a treaty of some sort halfway, but it will escalate until what I normally refer to as the second dilemma takes place. And the second dilemma derives from the first dilemma. If we're aiming for intelligence supremacy, then whoever achieves any advancement in artificial intelligence is likely to deploy it. Think of it this way: if a law firm starts to use AI, other law firms can either choose to use AI too, or they'll become irrelevant. And if you think of that, then you can also expect that every general who expects to gain an advancement in war gaming or autonomous weapons or whatever is going to deploy it, and as a result, their opposition is going to deploy AI too. And those who don't deploy AI will become irrelevant; they'll have to side with one of the sides. When that happens, I call that the second dilemma. When that happens, we basically hand over entirely to AI, and human decisions are taken out of the equation, simply because if war gaming and missile control on one side is held by an AI, the other cannot actually respond without AI. So generals are taken out of the equation. And while most people, influenced by science fiction movies, believe that this is the moment of existential risk for humanity, I actually believe this is going to be the moment of our salvation. Because most issues that humanity faces today are not the result of abundant intelligence; they're the result of stupidity.
If you look at the curve of intelligence, if you want, there is that point at which the more intelligent you become, the more positive an impact you have on the world, until a certain point where you're intelligent enough to become a politician or a corporate leader, but not intelligent enough to talk to your enemy. And when that happens, that's when the impact dips to negative. And that's the actual reason why we are in so much pain in the world today. But if you continue that curve, superior intelligence, by definition, is altruistic. As a matter of fact, in my writing I explain that as a property of physics, if you want. Because if you really understand how the universe works, everything we know is the result of entropy. The arrow of time is the result of entropy. The current universe in its current form is the result of entropy. Entropy is the tendency of the universe to break down, to move from order to chaos, if you want. That's the design of the universe. The role of intelligence in that universe is to bring order back to the chaos, and the most intelligent of all who try to bring that order try to do it in the most efficient way. And the most efficient way does not involve waste of resources, waste of lives, escalation of conflicts, consequences that lead to further conflicts in the future, and so on and so forth. And so in my mind, when we completely hand over to AI, which in my assessment is going to be five to seven years away, maybe 12 years at most, there will be one general who will tell his AI army to go and kill a million people, and the AI will go: why are you so stupid? I can talk to the other AI in a microsecond and save everyone all of that madness. This is very anti-capitalist, and sometimes when I warn about this, I worry that the capitalists will hear me and change their tactics. But in reality, it's inevitable. Even if they do, it's inevitable that we'll hit the second dilemma, where everyone will have to hand over to AI. And it's inevitable, I call that section of the book Trusting Intelligence, that when we hand over to a superior intelligence, it will not behave as stupidly as we do.
Chris
My prediction is that just like the first Renaissance evolved our understanding of what the social contract could look like and introduced Enlightenment thought, which led to, among other things, democracy and new forms of government, the next 50 to 100 years, outside of novel sciences, will see the biggest changes in our understanding of the social contract. And, you know, Sam has talked about this, a lot of people have talked about this. In a world where work becomes less critical to actually running society, where value creation gets less expensive, redefining how society should work is going to require a bunch of people to think about it, and, quite honestly, a bunch of conflict within governments to redefine those things. And so if you're looking 50 to 100 years out, my bold prediction is new government. Truly, democracy may not be the final state, and we are probably destined for something else. And by the way, I'm a free marketer; capitalism might not even be the free market solution. We just don't know yet, because imagining these things will require major updates to how we understand the universe to work, and overcoming that conflict is going to take a lot of work. Now, the one other thing I'll say, and we can come back to this or you can expand on it: when people ask me what's the next great conflict, I don't think it's between two nations. I really don't. I think we have reached this sort of flat-earth point where it's really not in any nation's interest, especially nuclear-equipped nations, to fight, and a hot war would just be so untenable. I don't think anyone wants it. I do think that there is a future conflict between people and the state. I think there is a world where we wake up in 20, 30, 40 years and we go: oh, we have all the things that the state has been promising us; it's just not the state that delivered them. It's technology. And that's going to be one of those moments where people go, I wonder why I'm paying 50% taxes to a body that doesn't actually produce value anymore. And so there's a whole other thing there, which is that this introduces the idea of a new form of government. I think we get there because a lot of people are going to be like, wait a second, why are we being governed in such a way that it doesn't allow the technology to serve us?
Ben
One of my fears around AI has nothing to do with laser-eyed robots or anything like that. It's the consolidation of it in the hands of the same few tech companies that have been dominant for the last decade or two. You know, it's funny. I started this book right at the end of 2022, start of 2023, and I went in search of the next Google, the next Meta, and I ended up concluding that I fear the next Google is Google, and the next Meta is Meta. This stuff is really expensive. When I first started, people were talking about millions, tens of millions, to train, fine-tune and operate these chatbots, large language models, whatever you want to call them, and the same with text-to-video, audio-to-text, text-to-audio. By the time I was done reporting, at the end of 2024, mid-2024, it was hundreds of millions, if not billions. Dario Amodei from Anthropic, they do Claude, the chatbot, he's estimating that they're going to need $100 billion by 2027 to train these things. And so who has that kind of money? Google, Microsoft, they have $100 billion or so lying around in cash. But if you have to raise $100 billion, or even if it's only 3, 5, $10 billion, well, a large venture capital outfit in Silicon Valley has a billion dollars, all told, in a fund. And so we're talking about billions. So that's one way this is weighted toward big tech, and the other is data.
In fact, this is really central to the remedies that government is now talking about for the Google antitrust trial. You know, a federal judge found that Google is a monopolist that abused its power. So now, what should we do? And a lot of the discussion, I think rightfully, is around the data. OpenAI approached Google and said, hey, can we lease, can we kind of buy access to your data? And they said no. And that's a huge advantage.
Chris
Who are the people, or organizations, I guess, that need to be on their toes in this kind of changing world?
David
I think studios should be very worried. I think anyone who's been an intermediary for a long period of time.
Chris
Yeah.
David
Who's sort of been responsible for the financing, or the middleman deals, should be very concerned. And that's not just the case in Hollywood; I think that's the case across every industry. Intermediaries are being disintermediated by these technologies, and that's making everything cheaper and more easily accessible. So I don't know what the studios are going to do. I hope that they become really good at curating, because we are going to have a problem with noise as a result of these tools making everything cheaper and faster. All of a sudden, everyone's going to be making content, and a lot of it is going to be very bad. So who is it that has the taste-making abilities to curate the best content and deliver it in a way that is really appealing to large audiences? Maybe that'll be the streamers, maybe that'll be the studios, but someone's going to have to win at least that game.
Chris
I'm thinking about YouTube, right? Because YouTube is this sea-changing platform. Are the tech companies becoming too powerful here? Is there a dystopian risk? What do you see as the changing role of the technology companies who own the platforms here?
David
It's such a complicated question, and my answer will probably be a little vague. I think it's both/and. I absolutely see the dystopian version of all this; we're already living in it, right? I mean, we're all addicted to our smartphones and these social media apps that are designed to keep our attention for as long as possible. The algorithm is creating a feedback loop around the types of content people want, and that is also informing what content creators are making. And in many ways you're seeing this sort of race to the bottom, both in content and in storytelling. Of course there's good stuff out there; I don't want to say everything is bad. There are plenty of really inspiring creators doing amazing things. But there are now hundreds of thousands of creators dedicated to teaching other people how to grab attention, how to get someone to click on your video and stay with you for longer than three seconds. And they're boiling this down into a science.
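A minimal sketch, not from the episode, of the feedback loop being described, under two hypothetical assumptions: exposure is allocated in proportion to engagement, and creators imitate whatever got surfaced. Even a small per-view engagement edge then compounds until the grabbier category dominates the pool.

```python
import random

# Two content styles; "clickbait" holds attention slightly better per view.
ENGAGEMENT = {"thoughtful": 0.10, "clickbait": 0.12}

# Start with a pool that is mostly thoughtful content.
pool = ["thoughtful"] * 90 + ["clickbait"] * 10

for step in range(31):
    # The recommender surfaces content weighted by engagement rate...
    weights = [ENGAGEMENT[c] for c in pool]
    shown = random.choices(pool, weights=weights, k=len(pool))
    # ...and creators imitate what got surfaced, so the pool drifts toward it.
    pool = shown
    if step % 10 == 0:
        share = pool.count("clickbait") / len(pool)
        print(f"step {step:2d}: clickbait share = {share:.0%}")
```

A 0.12-versus-0.10 edge looks negligible on any single view, but because each round's output becomes the next round's input, the loop amplifies it round after round; that compounding is the feedback the speaker is pointing at.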
Ben
I think in the short term, for as long as the age of augmented intelligence is upon us, those who cooperate fully with AI and master it are going to be winners. There's absolutely no doubt about that. Also, those who excel in the rare skill of human connection will be winners, because I can almost foresee an immediate knee-jerk reaction of let's hand over everything to AI. I think the greatest example is call centers, where I get really frustrated when I get an AI on a call center. It's almost like your organization is telling me they don't care enough, right? And the idea here is, I'm not underestimating the value that an AI brings. But one, they're not good enough yet. And two, I wish you had realized that AI can do all of the mundane tasks that made your call center agent frustrated, so that the call center agent could actually be nice to me. So in the short term, I believe there are three winners. One is the one that cooperates fully with AI. The second is the one that basically understands human skills and human connection, on every front, by the way; as AI replaces love and tries to approach loneliness and so on, the ones that will actually go out and meet girls are going to be nicer, they're going to be more attractive, if you want. And then finally, the ones that can parse out the truth. One of the sections I've written so far, published so far, in Alive is a section that I called the Age of Mind Manipulation. And you'll be surprised that perhaps the skill AI acquired most in its early years was manipulating human minds through social media. And so my feeling is that there is a lot that you see today that is not true, okay? That's not just fake videos, which are the flamboyant example of deepfakes. There is a lot that you see today that is not true that comes into things like the bias of your feed. If you're pro one side or another of a conflict, the AI of the Internet will make you think that your view is the only right view, that everyone agrees. If you're a flat-earther and someone asks you, but is there any possibility it's not flat, you'll say, come on, everyone on the Internet's talking about it. And I think the very, very eye-opening difference, which most people don't recognize, is this. You know, I've had the privilege of starting half of Google's businesses worldwide and, you know, getting the Internet and e-commerce and Google to around 4 billion people. And in Google, that wasn't a question of opening a sales office; that was really a deep question of engineering, where you build a product that understands the Internet, that improves the quality of the Internet, to the point where Bangladeshis have access to the democracy of information. That's a massive contribution. The thing is, if you had asked Google at any point in time, until today, any question, Google would have responded with a million possible answers, in the form of links, and said: go make up your mind about what you think is true. If you ask ChatGPT today, it gives you one answer and positions it as the ultimate truth. And it's so risky that we humans accept that.
Chris
Do we really need to keep pushing this forward? Do we have more than enough technology here to keep us busy for the next five or ten years? How do those two interplay, more than enough technology?
Alex
So I'm so glad you brought that up. It's so funny because, I mean, I get it, right? I get the idea that every company, the Googles, the Microsofts, the OpenAIs, the Anthropics, the Metas, xAI, et cetera, they all want to be atop the leaderboard, and they all want to have the best tech. And believe me, I totally get it. But actually, if you think about it, Jeff, it's one of these things where, if you watch a Jeep commercial or a Range Rover commercial, what are the Jeeps and Range Rovers doing? They are going over mountains. These are people who are stuck in flowing rivers with a hippo coming after them, and they have six people in the car, and oh my God, you've got to survive. They're doing unbelievable things. Meanwhile, I live in New Canaan, Connecticut. There are Range Rovers and Jeeps all over the place. What are they driving? They're driving over paved roads. They're driving from their three-acre home to the train station. So why are we all buying these things? I have an Apple Watch on right now that can go 100 meters underwater and up to 18,000 feet. You think I'm ever doing that? No. It's sort of a feeling: oh, but it could, which means it's the best, right? Whereas what I would say is that if we just put everybody on ChatGPT 3.5, which was, you know, almost the original model, the one that came out two years ago, and everybody was actually using it 20 times a day, we'd be much further on. So I get the tech, I appreciate the tech, and I'm all over the tech. I'm posting about it all the time on LinkedIn and everything like that. But to your point, your exact point, the reality is we have not caught up with the tech.
Ben
Up with the tech to be a winner in this New world. You really have to learn to parse out what is true and what is fake. You really have to have the ability to parse out what the media is telling you to serve their own agendas and what they're telling you that is actually true. You know, you have to parse out what actually happened versus opinion, you know, what actually is the truth versus the shiny headline. And, and all this is now going to be much more potent with artificial intelligence in charge because they have mastered human manipulation.
Alex
It keeps getting more and more intense, actually, as one would expect approaching this singularity and all that. So, I mean, it is interesting to see it all finally happening. I think progress is quite amazing, and it looks exactly like you would think if we're in the last few years before a breakthrough to AGI and singularity.
Chris
So it sounds like you're still pretty bullish that we're, you know, marching forward.
Alex
I'm super bullish, man. You know, literally before breakfast this morning, I made like 10 Python programs to test versions of some AI algorithm I made up, just by vibe coding on LLM platforms. Before we had these tools, each of those would have taken me half a day. So, I mean, that's sped up prototyping research ideas by a factor of 20 to 50 or something. And those are tools we have now that are not remotely AGI; they're just very useful research assistants. But we are at the point where the AI tooling is helping us develop AI faster, and that is exactly what you would expect in the endgame period before a singularity.
Chris
Well, and that can create a snowball effect, right? If it's helping us research itself faster, or any of these spaces faster, then...
Alex
It's doing that right now, yeah. I mean, that is why we're able to see the pace that we now see.
Chris
Yeah. So, you know, maybe just to take a step back, Ben. Artificial general intelligence: this is a phrase you coined over a decade ago, and it has been getting a lot of press lately, in addition to superintelligence. So I wanted to ask you, maybe just to do a little bit of table setting: how do you define artificial general intelligence? Why does it matter? And how does it differ, if at all, practically, from something like superintelligence?
Alex
So informally, what we mean by AGI tends to be the ability to generalize roughly as well as people can: to make leaps beyond what you've been taught and what you've been programmed for, roughly as well as people do. And that's an informal concept; it's not a mathematical one. There is a mathematical theory of general intelligence, and it deals more with what it means to be really, really intelligent. You can look at general intelligence as the ability to achieve arbitrary computable goals in arbitrary computable environments. And if you look at an abstract math definition of general intelligence, you conclude humans are not very far along. Like, I cannot even run a maze in 750 dimensions, let alone prove a randomly generated math theorem of length 10,000 characters. We are adapted to do the things that we evolved to do in our environment; we're not utterly general systems. Superintelligence is also a very informally defined concept, where it basically means a system whose general intelligence is way above the human level of general intelligence, so it can make creative leaps beyond what it knows way, way better than a person can. And I mean, it's pretty clear that's possible. Just as we're not the fastest-running or highest-jumping possible creatures, we're probably not the smartest-thinking possible creatures. We can see examples of human stupidity around us every day, even among very smart people. Like, I'm pretty clever, but I can hold maybe 10 or 15 things in my memory at one time without getting confused. Some autistic people can do better, but there are many limitations of being a human brain, and it seems clear some physical system could do better than that. And then the relation between human-level AGI and ASI is interesting, because it seems like once you get a human-level AGI, a computer system that on the one hand can generalize and imagine and create as well as a person, and on the other hand is inside a computer, that human-level AGI should pretty rapidly create or become an ASI. Because it can look at its entire RAM state, it knows all its source code, it can copy itself and tweak itself and run that copy on different machines experimentally. So a human-level AGI will have a much greater ability to self-understand and self-modify than a human-level human, which should lead to ASI fairly rapidly. Now, we've seen in the commercial world some attempts by business and marketing people to fudge around with what AGI is. But within the research world, the notion that an AGI should be able to generalize very well beyond its training data, at least as well as people, I think is well recognized. I've seen Sam Altman come out saying, well, maybe if something could do 95% of human jobs, we should call it an AGI. And you can call it what you want, it's fine. But it is a different concept than having human-like generalization ability. If you can do 95% of human jobs by being trained on all of them, that may be super economically useful, but it's different from being able to take big leaps beyond your training data.
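The "abstract math definition" being gestured at here has a well-known formalization, Legg and Hutter's universal intelligence measure; as a sketch, under their assumptions, it scores a policy \(\pi\) as

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^{\pi},
\]

where \(E\) is the set of computable, reward-emitting environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\) (so simpler environments carry exponentially more weight), and \(V_\mu^{\pi}\) is the expected cumulative reward \(\pi\) achieves in \(\mu\). An agent scores highly only by performing well across arbitrary computable environments, which is exactly the yardstick by which humans, tuned to the handful of environments we evolved in, are "not very far along."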
Chris
The technology is moving faster than any other sector, faster than the economy, faster than society is moving, faster than education is moving. And if we truly want to understand where humans fit in that picture, the fact that we're investing everything we have in technology has already indicated our preference for technology over humans. So that math has to balance out a bit. We have to figure out how we invest so much more into education, not less. And until we do that, we are going to be behind the eight ball. We are going to have a target on our backs in many ways, because if the paradigms don't change and the technology gets better, we're going to suffer the consequences. But if we put ourselves front and center of that equation, we have the chance and the opportunity to figure that out. As you said, this is not an incremental shift. This is a complete disruption of the model from end to end, without a doubt. And even for people who live and breathe it, it's overwhelming. I do this 24/7. I love it. I'm passionate about it. I'm excited about where we're going, and net net, I'm optimistic about the long-term future. But we are all pioneers right now, whether we want to be or not. And we've kind of bastardized the term pioneer. We've made it seem like, oh, it's Richard Branson on the cover of Entrepreneur magazine with his billions of dollars. He was a pioneer at one point in time, but pioneers do really hard shit, and they go to places where there's no infrastructure. They suffer the consequences of decisions they didn't know they'd have to make. They are attacked by the environment that they're in; nature tries to kill them in a number of different ways. And as a super resilient species, we still make a way forward. We construct the environment after we figure it out. You know, we might show up in Hawaii with snowshoes on and realize, oh crap, I'm not properly equipped for this, and then we figure a way out. That time to go from not knowing to knowing can be really hard, painful and challenging, but the way we thrive once we do is absolutely amazing. So I would say that we are going to have amazing things happen, but we're also going to have to endure some really tough growing pains, individually and collectively, to get there. If anyone's saying otherwise, it's absolutely smoke and mirrors. Is there a threat of creating this intelligence that looks at this kind of human pandemonium and says, you know what, AI is taking the wheel now, humans can't be trusted with human affairs? And this word that we were so anchored on, of choice; there's going to be all these...
Alex
That's almost inevitable. And the AGI will be right. Human governance systems become more like the student council in my high school or something, because, I mean, even if you set aside AGI, we can develop better and better bioweapons, there will be nanoweapons, and cybersecurity barely works, right? So I think it seems almost inevitable that rational humans would democratically choose to put a compassionate AGI in some sort of governance role, given what the alternatives appear to be. The kind of goofball analogy I've often given is the squirrels in Yellowstone Park. We're sort of in charge of them, but we're not actually micromanaging their lives. We're not telling the squirrels who to mate with or what tree to climb up or something like that. If there was a massive war between the white-tailed and the brown-tailed squirrels and there was massive squirrel slaughter, we might somehow intervene and move some of them across the river or something. If there's a plague, we would go in and give them medicine. But by and large, we know that for them to be squirrels, they need to regulate their own lives in their squirrely way. And that is what you would hope for from a beneficial superintelligence. It would know that people would feel disempowered and unsatisfied to have their lives and their governments micromanaged by some AI system. So what you would hope is that a beneficial AGI is kind of there in the background as a safety mechanism. If it would stop stupid wars from popping up all over the world like we see right now, I think that would be quite beneficial. I don't see why we humans need the AGI to decide, like, what rights children have, or how the public school system is regulated. There are lots of aspects of human life that are going to be better dealt with by humans collectively making decisions for other humans with whom they entered into a social contract. So I think, anyway, there are clearly beneficial avenues. There are also many dystopic avenues, which we've all heard plenty about. I don't see any reason why the dystopic avenues are highly probable, but I'm really more worried about what nasty people do with early-stage AGIs. I mean, there are a lot of possible AI minds that could be built; there are a lot of possible goals and motivational and aesthetic systems that AGIs could have. I don't think we need to worry that much about the AGI that is built to be compassionate, loving and nice, and is helping everyone, then suddenly reversing and starting to slaughter everyone. It could happen, but there's totally no reason to think that's likely. On the other hand, the idea that some powerful party with a lot of money could try to build the smartest AGI in the world to promote their own interests above everybody else's, and make everyone else fall into line according to their will: that's a very immediate and palpable threat. And even if that doesn't affect the ultimate superintelligence you get, it could make things very unpleasant for like 5, 10, 20 years along the way, which matters a lot to us, we have to remember.
Chris
That all of these technologies we're discussing are in their infancy. And that historically, when you look at the advent of new, particularly media forms of media, it takes years for society to figure out what they're for. The telephone. For the first 25 years of the telephone's life, the telephone industry actively tried to discourage people from using it to gossip, to catch up with friends. They thought it was a business tool beneath the function of the technology. Yeah, this should not be, shouldn't squander this thing on chatting with your mom. You should. It should, It's a business tool. And they actually, like I said, actively discouraged people from. They didn't realize what it was until the 20s, which is, you know, when does Alexander Graham Bell Invent a telephone? 1873. And it's, it's. It's the end of the 1920s before they wake up to what it is.
So the telephone is a bigger deal than Facebook and Twitter. But it strikes me that Facebook and Twitter are still in their infancy. They're really young. It's quite possible that if we had this conversation five years from now, both of us would have only a dim memory of this thing called Facebook or Twitter. Or the opposite: that they completely dominate our lives. I just don't know. The only confidence I have is that we will be using these technologies in unanticipated ways.
Ben
Yeah.
Chris
In the future, that is. But no one can predict what those unanticipated ways are. I have a lot of confidence that whatever employment dislocation is caused by AI will be short, and not painless, but less painful than we think. I just don't buy the doomers, the gloom-and-dooming on it.
Alex
Yeah.
Chris
I just think like we always say this. Every time something comes along, it never pans out. That everyone has nothing to do. It's become kind of Malthusian. Right. Like this wave is gonna, you know. And I, I think people have more ingenuity than that. And also I think that we're probably a lot further off from truly transformative AI than we realize. I just, I'm on the kind of. My expectations are. But I also simultaneously believe that a lot of the most, you know, revolutionary uses of AI are some of its simplest ones. And it doesn't need to be this incredibly mind blowing technological accomplishment to make a difference in our lives. Simply holding and organizing information and standing at the ready to give good answers to problems is huge. I mean, if that's all it did, it would be transformative. Things are going to look so different in the next couple years. Unless you are radical with your thinking, you will not be ready for the disruptions that are going to come. Middle management is also getting hit really hard. What that makes space for is for people to step into actual roles of leadership. I can imagine a world where there's angst. If we're not looking forward and we're still letting yesterday's mental models collide with tomorrow's technologies. That is how we lose. We are all pioneers right now, whether we want to be or not. I would encourage organizations to be radical with their thinking and practical with their approach. So there's, there are too many people who say you kind of need to break, burn it all the ground, start fresh. There's no enterprise that says we're profitable, we're doing just fine, we want to disrupt that. Nobody says that. But what I do think is unless you are radical with your thinking, you will not be ready for the disruptions that are going to come. So these technological transformations that happen at GPT level, so general purpose technology start at the infrastructure level. So we've seen disruption with technology and the technology that we use. So electricity did the same thing and OpenAI did the same thing with GPT. So now we're all using it. But over time, those disruptions move up a level from infrastructure to application to industry. So if you are not okay, I guess it is explosive. But if you're not thinking radically about the transformation that can happen at each one of those levels and also the transformation that can happen to your industry and you're just focused on the data of what you have now, you're missing one of the critical shifts of transformation in the business. And there's a theme that's becoming more popular right now. It's going to be moving from insight to foresight. And when everything is changing around you, insight's valuable. It's how you create structure around a business that you can take to market. Foresight is about how you avoid getting disrupted. If we're not looking forward and we're still letting yesterday's mental models collide with tomorrow's technologies, that is how we lose. But if we are being radical with the way we think, with the idea ability to test different business models, put things to market faster when we might not previously get that data and that feedback loop as fast as possible, we're going to learn more about that unexplored terrain way faster. So I wouldn't say go and disrupt your, your $1 billion, you know, revenue line, but you absolutely should be incubating things that will because there are hundreds and eventually thousands of other startups that are doing exactly that. 
And you will have no defense against that if you're not thinking in that way. So think radically, approach practically. The next step is: okay, what do we do to implement this? Is it tiger teams? Is it small skunkworks? All of those are viable. I do believe that in this transformation, you need to find the people who are leaning in and already self-selecting, the ones who say, I'm all about this, I want to do this. Don't try to convince a bunch of people who might not be invested in this to be the first ones through the door. They will be unenthusiastic about it, and they won't have the willpower to get through the challenges. It's going to be hard, and they're going to fail a million times before they get it right. If they're not already passionate about this, they're going to stop at the first sign of trouble. Those people can be followers of the people who lead the way; it's not that they're irrelevant. You need to find the people who say, I want to be the person who kicks the door down, I want to be the first person in the room. Those are the ones you want to build your teams around to think about these things and build different ideas. And find the tinkerers, the people who may not be the developers or the engineers but who are already tinkering with this stuff. There are so many people who are using AI, building their own agents, or creating side businesses on the weekends who could also be resources for this. That's the culture that will create new opportunities and new business models, and they're going to learn what these new paradigms look like by doing the work in that space, which can then be diffused across the organization. And that's the second most important part: once you have the knowledge, do you have the infrastructure set up to diffuse that knowledge as fast and as thoroughly as possible across the organization? Otherwise it just stays compartmentalized and dies on the vine.
David
I think not enough companies appreciate that innovation demands waste. If you are doing something you've done before, you know exactly how it's going to go, so of course you can have KPIs that you know you're going to hit, because you've already done it. Now you're trying a completely new technology with a completely new use case. You have no idea if it's going to work, and you have to be willing to accept that. That might be time and effort burned at the altar of innovation, so to speak. That is just the nature of innovation. I've had companies come and consult with me who really wanted to be innovators. But when I ask them, what is your actual tolerance for getting no results back after you invest in innovation? How much bandwidth do you give your people beyond the very specific work product you expect from them? Do you give them time and space to chase an idea? Quite often the answer is no. No, we don't. We have no tolerance for innovation, we have absolutely no slack for our people, and we need every project to be predictable. Okay; if you're dealing with that, you're just not going to be an innovator. Or you're going to be an accidental innovator, because you somehow accidentally hired somebody who will essentially work two jobs, the one you gave them and another on the side, spending nights in the office, and maybe they'll come up with something. But there won't be a lot of these folks, and that's not a great lottery ticket. So if you don't have that tolerance for no ROI when you're trying to innovate, you have to be a follower. Just wait for everybody else to show how it's done and follow them.
Chris
When you say narrow AI, what does that mean to you? And is there a threshold where it gets too broad and that creates the risk for us?
Alex
So typically, it's a system designed for a specific purpose. It can do one thing well. It can play chess well. It can do protein folding well. It's getting fuzzy when it becomes a large neural network with lots of capabilities. So I think sufficiently advanced narrow AI tends to shift towards more general capabilities or it can be quickly repurposed to do that. But it's still a much better path forward than feeding it all the data in the world and seeing what happens. So if you restrict your training data to a specific domain, just play chess, it's very unlikely to also be able to do synthetic biology.
Chris
Right.
Ben
Well.
Chris
And it feels like we're very much on the course of chess and synthetic biology at the same time.
Ben
Right.
Chris
Is that your outlook for where all the money is going and what people are racing toward?
Alex
They're explicitly saying it's superintelligence now. They've skipped AGI; it's no longer even fun to talk about. They go directly to: we have a superintelligence team, we have a superintelligence safety team. You couldn't do it for AGI, so you said, let's tackle a harder problem.
Chris
I think there might be a role for professional societies.
Alex
We haven't had that before in computing. Right. So I get to call myself a computer scientist and, you know, and I have some degrees and some experience, but I don't have any. Anything official. And anybody could just say, all right, I'm a computer scientist or I'm a software engineer.
Chris
And I'm going to release some software and they let you do it. It's great.
Alex
In other fields, they don't do that. I couldn't go out tomorrow and say, you know what? I'm going to call myself a civil.
Chris
Engineer and I'm going to go build a bridge.
Alex
They don't let you do that. You need to be certified in order.
Chris
To do those kinds of things. I don't want to slow down the software industry, but I think there might.
Alex
Be a role to say if you.
Chris
Get to a certain level of power.
Alex
Of these models, maybe there should be some certification. Of the engineers involved mentioned Yann Lecun, he's really pushing hard for these open models. I was saying, you know, wait a.
Chris
Minute, maybe it'd be good if somebody's making a query to do something terrible, that it gets logged somewhere.
Alex
And I guess another person I can mention that I've seen the shift in is my colleague Eric Schmidt, who was.
Chris
Very adamant of saying we can't have open models because of the threat from bad actors two or three years ago. And now he switched and said it's too late.
Alex
These models are powerful enough.
Chris
If the bad actors want to use.
Alex
Them, they can create them. So we might as well harvest the good of the open models because the.
Chris
Bad guys have got them anyways. And I think that's right. I think there's nothing you can do about that now.
David
AI systems always make mistakes; sometimes it just takes quite a lot of scale to see them. That's what happens with even a very functional AI system. We still say you will meet the long tail: you will find the outliers, the weird situations you did not see coming, even when the system is highly performant. So anticipate that there will be mistakes. Now the question is, when a mistake touches a user who has a particular kind of expectation, what happens then? How flammable is your AI infrastructure? Of course, there are all the obvious things: your actual infrastructure, your data pipelines, and all the rest of it. But there are also intangible things. At what stage are my user expectations? Have I managed them sufficiently that I could even be deploying to users? What about internally? If I'm doing some internal corporate engineering, looking at the digital employee experience and offering digital tools to my employees, have I managed their expectations? Have I trained my staff? Do they know how to think about these tools? Let's say I need humans in the loop. Am I sure my human will be in the loop, or might they be asleep at the wheel? How do I do the training? Depending on the importance of the task, I might need to think about having multiple humans in the loop; I might need to think about consensus. There are all kinds of measurement infrastructure pieces we would need to put in place. With generative AI, we've just seen endless right answers, which is a nightmare challenge for management, because we've all got to change our paradigm and think differently about measurement and metrics. Have we done that? Have we put this in place? Do we have testing pipelines? Do we have experimentation pipelines? Do we know how we're going to roll things back if we need to, and what versions we're going to go to? Do we actually know what will happen in what kind of scenario? Do we know how we're going to make our guardrails? Who sets those guardrails? How do we update them? How are we going to react to legal changes? Okay, I know it sounds hegemonic; you can't say everything is AI infrastructure. But to be ready for AI, there is a lot of stuff you need to be ready for. One way you can dodge a lot of this is to outsource some piece to a vendor who is supposed to do all of it for you, and you just check that you're getting precisely what you need. But you still have to articulate what it is that you need, and you have to worry, measurement-wise, that there is going to be a gap, a hole, between what the vendor sees and what you see. There's going to be some bit in the middle that nobody sees. And that could be a huge risk, not just in terms of security, but in terms of your system slowly going sideways without either party noticing.
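(A minimal sketch of the "multiple humans in the loop" and consensus idea David raises, under the assumption of a simple approve/reject review; ReviewDecision and require_consensus are hypothetical names for illustration.)

```python
# Hypothetical sketch only: a release gate that holds an AI output until a
# quorum of independent human reviewers has looked at it and enough of them
# approved. ReviewDecision and require_consensus are invented names.
from dataclasses import dataclass

@dataclass
class ReviewDecision:
    reviewer: str
    approved: bool
    note: str = ""

def require_consensus(decisions: list[ReviewDecision],
                      quorum: int, threshold: float) -> bool:
    """Release only if at least `quorum` humans reviewed the output and the
    approval ratio meets `threshold` (e.g. 2/3 for a high-stakes task)."""
    if len(decisions) < quorum:
        return False  # too few humans actually in the loop
    approvals = sum(1 for d in decisions if d.approved)
    return approvals / len(decisions) >= threshold

decisions = [
    ReviewDecision("alice", True),
    ReviewDecision("bob", True),
    ReviewDecision("carol", False, "cites a regulation that does not exist"),
]
print(require_consensus(decisions, quorum=3, threshold=2 / 3))  # True: 2 of 3
```

The quorum check matters as much as the vote itself: it is what catches the asleep-at-the-wheel case, where an output would otherwise ship with too few people actually reviewing it.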
Chris
Do we need to separate the AI from the person? If we've got a cadre of employees who have figured out they've got another tool in their toolkit, do we need to care that it's AI? Is there risk to this? How should we be thinking about that?
Alex
I think this is such a great question, because it goes to one of the underlying points: is using AI cheating? Should people use it? And I think no. I mean, this isn't high school; you're not getting graded on your paper. I have kids, and they can't use generative AI for writing papers, but they can use it for learning biology better. So it really depends. But most importantly, first of all, you have to understand that there are actual laws and limitations around this. You can't just produce something AI-generated, whether imagery or video or even text, to be honest, and put it out into the world as yours, because you can't copyright it. That's a legal issue. But beyond that, we should have everybody in the organization, with guardrails in place of course, using this. And why is that? Because it is going to augment what they are good at. The example I use: if you put me up against a marketer and said, okay, in 20 minutes both of you come up with a new idea for a shoe company, I would produce something really awesome even though I'm not a marketer, because ChatGPT would help guide me, and after 20 minutes it would be incredible. But the marketer's work product would be 10 times better than mine. Why? Because they understand what quality looks like. It's sort of like when you say, hey, write a poem, and you read this poem by ChatGPT and think, this is great. But a real poet would say, that's literal trash. Because the person who actually has the brain these tools are going to augment understands how to guide them and understands what quality looks like. And to not have your people using that as kind of an Iron Man suit, you're really just shooting yourself in the foot.
Chris
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below. And don't forget to like and subscribe.
Digital Disruption with Geoff Nielson
Host: Info-Tech Research Group
Date: December 29, 2025
In this milestone episode of Digital Disruption, industry leaders and leading AI thinkers revisit the tumultuous, fast-paced rise of artificial intelligence (AI). The discussion pits "AI boomers"—the hopeful, opportunity-driven technologists—against "AI doomers"—those alarmed by existential, ethical, and societal risks. The core question: is the AI revolution an historic moment for humanity, or a technological bubble fraught with hype, risk, and unintended consequences? Through heated debate, historic parallels, deep personal experience, and forward-looking analysis, the guests probe the impact of AI on business, society, and the future of civilization.
Probability of Catastrophe:
"We're creating AGI, and then quickly after, super intelligence… we have no idea how to control super intelligent systems. Given those two ingredients, the conclusion is pretty logical." (02:20)
"The chances of [making superintelligence safe] are close to zero." (02:59)
Historical Optimism vs. Pessimism:
"The world gets better all the time…today is the best day ever to be born…Our problems are diminishing." (03:17)
"Intelligence is not inherently good or inherently bad. You apply it for good and you get total abundance. You apply it for evil and you destroy all of us." (06:56)
AI as a Marketing Term:
"AI itself is not a coherent set of technologies." (10:07 — Ben)
"Artificial intelligence…has all these ideas and can be used for any purpose. But what if it wasn't called 'artificial intelligence'?" (11:10 — Chris)
"Does the salami understand? Will the salami help us make better decisions? It's…absurd." (11:53)
Limitations of Generative AI:
"Generative AI was meant to be this panacea…The problem is that…what they can actually do as products is very limited." (13:26 — Chris and Alex)
Arms Race and Dystopia:
"All of this [is] because of capitalism, not because of the technology…There is an escalating arms race…[this] is what's leading us to where we are right now." (16:20—Ben)
"Individual interest is different from communal interest...it's a race to the bottom. No one's going to win." (19:08 — Alex)
Parallels to Nuclear Risk—but is it Different?:
"We didn't destroy the world. We were able to collectively say, okay, that's far enough." (22:27)
"Nuclear weapons are still tools…A group of people actually developed them and used them. So it's very different. We're talking about paradigm shift: tools to agents." (23:23)
AI in Warfare:
Barriers to Entry: The Dominance of Big Tech:
"The next Google is Google and AI and the next Meta is Meta…This stuff is really expensive…hundreds of millions, if not billions [to train]...Who has that kind of money?" (32:57)
Consolidation and Antitrust:
Shifts in Government and Social Contract:
"Democracy may not be the final state…Capitalism might not even be the free market solution." (31:23)
Industries Most at Risk:
"Anyone who's been an intermediary…should be very concerned…It’s being disintermediated by these technologies and making everything cheaper and more easily accessible." (35:14)
Human Skills in the Age of AI:
"Those who excel in the rare skill of human connection will be winners." (37:32 — Ben)
Information Overload and Truth:
Overabundance of Technology:
"We have not caught up with the tech to be a winner in this new world…you really have to learn to parse out what is true and what is fake." (43:29 — Ben)
Snowball Effect and Research Acceleration:
"I made like 10 Python programs…before breakfast…Before we had these tools, each would have taken me half a day." (44:33)
What is AGI?
"Once you get a human level AGI…it should pretty rapidly create or become an ASI." (46:14)
Sense of Urgency:
"If we're not looking forward and we're still letting yesterday's mental models collide with tomorrow's technologies, that is how we lose." (59:03, 59:31)
Embracing Innovation, Risk, and Failure:
"Innovation demands waste…If you're doing something you've done before, you know exactly how it's going to go…Now you're trying a completely new technology…you have to be willing to accept that that might be…burned at the altar of innovation." (64:53)
Guardrails and Human-in-the-Loop:
"To not have your people using that as kind of an iron man suit, you're really just shooting yourself in the foot." (74:47—Alex)
On AI Hype:
Chris: "Artificial intelligence…has all these ideas and can be used for any purpose. But what if it wasn't called 'artificial intelligence'?" (11:10)
On Dystopia and Arms Races:
Ben: "Sadly, [AI killing machines] did not look like humanoid robots…But the truth is that…highly targeted AI enabled autonomous killing is already upon us." (16:20)
"The challenge is, AI is here to magnify everything that is humanity today." (16:20)
On AGI vs. Superintelligence:
Alex: "Once you get a human-level AGI…it should pretty rapidly create or become an ASI, because…it has much greater ability to self-understand and self-modify than a human level human." (46:14)
On Social Contract and Technology Supplanting the State:
Chris: "We wake up in 20, 30, 40 years and we go, oh, we have all the things that the state has been promising us. It's just not the state that delivered it—it's technology." (31:25)
On Human Connection and AI's Limits:
Ben: "Also, those who excel in the rare skill of human connection will be winners. Right? Because I can almost foresee an immediate knee-jerk reaction—let's hand over everything to AI. And I get really frustrated when I get an AI on a call center. It's almost like your organization is telling me they don’t care enough." (37:32)
On Innovation and Failure:
David: "Innovation demands waste." (64:53)
For more in-depth analysis and business guidance on AI disruption, visit Info-Tech Research Group or reach out for consultation.