
A
Hey, everyone. I'm super excited to be sitting down with Ben Goertzel. Ben is one of the most interesting minds in AI, and if you've ever used the term AGI, or artificial general intelligence, that's his. His rap sheet includes founding SingularityNET, designing the OpenCog AI framework, serving as chief scientist of Hanson Robotics, and leading the Conference on Artificial General Intelligence. Ben has lived and breathed AGI for a lot longer than the current media cycle has, and I want to get the real story behind it. What the hell is it? When can we expect it, if at all? And what impact can we expect it to have on our lives and our livelihoods? Let's find out. The last few years have been particularly crazy.
B
It keeps getting more and more intense, actually, as one would expect approaching this Singularity and all that, right? So it is interesting to see it all happening, finally.
A
Yeah. And from your perspective, not to jump right into things, but is that warranted? Is the pace of technological change keeping up with the hype around it and people's interest in it?
B
It would seem so, yeah. I do feel like it is. I think people's expectations always get hyped up even beyond the reality. But if you think about it: what, last year everyone was saying AI is dead, then reasoning models came out. Then DeepSeek came out, and everyone said, oh, Nvidia is dead, US AI is dead, right? Then OpenAI came up with the next model. Then GPT-5 came out, and it wasn't as good as people had hoped, so they're like, oh, it's terrible, things aren't happening fast enough. But if you look at even benchmarks for AI performance, which I don't like or place much stock in, they keep on going up while the public narrative oscillates up and down. So yeah, I think progress is quite amazing and looks exactly like you would think it would if we're in the last few years before a breakthrough to AGI and the Singularity.
A
So it sounds like you're still pretty bullish that we're marching forward.
B
I'm super bullish, man. You know, literally before breakfast this morning, I made like ten Python programs to test versions of some AI algorithm I made up, just by vibe coding on LLM platforms. Before we had these tools, each of those would have taken me half a day, right? So it's sped up prototyping research ideas by a factor of 20 to 50 or something. And those are tools we have now that are not remotely AGI; they're just very useful research assistants. But we are at the point where the AI tooling is helping us develop AI faster, right? And that is exactly what you would expect in the endgame period before a Singularity.
A
Well, and that can create a snowball effect, right? If it's helping us research AI itself faster, or any of these spaces faster, then...
B
And it's doing that right now. Yeah, that is why we're able to see the pace that we now see.
A
Yeah. So maybe just to take a step back, Ben: artificial general intelligence, this is a phrase that you coined over a decade ago, and it has been getting a lot of press lately, in addition to superintelligence. So I wanted to ask you, maybe just to do a little bit of table setting: how do you define artificial general intelligence, and why does it matter? And how does it differ, if at all, practically from something like superintelligence?
B
So informally, what we mean by AGI tends to be the ability to generalize roughly as well as people can: to make leaps beyond what you've been taught and what you've been programmed for, roughly as well as people do. And that's an informal concept; it's not a mathematical concept. There is a mathematical theory of general intelligence, and it deals more with what it means to be really, really, really intelligent. You can look at general intelligence as the ability to achieve arbitrary computable goals in arbitrary computable environments. And if you look at an abstract math definition of general intelligence, you conclude humans are not very far along, right? Like, I cannot even run a maze in 750 dimensions, let alone prove a randomly generated math theorem 10,000 characters long. We are adapted to do the things that we evolved to do in our environment. We're not utterly general systems. Now, superintelligence is also a very informally defined concept. What it basically means is a system whose general intelligence is way above the human level, so it can make creative leaps beyond what it knows way, way better than a person can. And it's pretty clear that's possible. Just as we're not the fastest-running or highest-jumping possible creatures, we're probably not the smartest-thinking possible creatures, and we can see examples of human stupidity around us every day. Or take even very smart people: I'm pretty clever, but I can only hold 10 or 15 things in my memory at one time without getting confused. Some autistic people can do better, but there are many limitations of being a human brain, and it seems clear some physical system could do better than that.
And then the relation between human-level AGI and ASI is interesting, because it seems like once you get a human-level AGI, a computer system that on the one hand can generalize and imagine and create as well as a person, and on the other hand is inside a computer, it should pretty rapidly create or become an ASI. It can look at its entire RAM state, it knows all its source code, it can copy itself and tweak itself and run that copy on different machines, experimentally, right? So a human-level AGI will have much greater ability to self-understand and self-modify than a human-level human, which should lead it to ASI fairly rapidly. Now, we've seen in the commercial world some attempts by business and marketing people to fudge around with what AGI is. But within the research world, I think the notion that an AGI should be able to generalize very well beyond its training data, at least as well as people, is well recognized. I've seen Sam Altman come out saying, well, if something could do 95% of human jobs, we should call it an AGI. And you can call it what you want, that's fine. But it is a different concept than having human-like generalization ability. If you can do 95% of human jobs by being trained on all of them, that may be super economically useful, but it's different from being able to take big leaps beyond your training data.
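The "arbitrary computable goals in arbitrary computable environments" idea above has a standard formalization that the episode doesn't spell out: Legg and Hutter's universal intelligence measure. As a rough sketch (the exact weighting and reward conventions vary across papers):

```latex
% Universal intelligence of an agent \pi, Legg-Hutter style:
% sum the agent's expected reward over all computable environments \mu,
% weighting simpler environments (shorter programs) more heavily.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here $E$ is the set of computable reward-bearing environments, $K(\mu)$ is the Kolmogorov complexity of $\mu$, and $V_{\mu}^{\pi}$ is the expected total reward agent $\pi$ earns in $\mu$. Under a measure like this, humans score modestly, which is exactly the 750-dimensional-maze point: most environments in $E$ look nothing like the ones we evolved for.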
A
Info-Tech Research Group is a name you need to know, no matter what your needs are. Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below. And don't forget to like and subscribe. One of the challenges I find in this space is trying to separate the marketing around all this, the bluster and the hype, from the actual technological capabilities, because everybody wants to tell you, oh, we already have it, or it's basically right here. So I did want to ask you, Ben: I'm not looking for an answer in months or years necessarily, but how close are we to AGI right now, in the fuzzy sense of it?
B
So Kurzweil, in his 2005 book The Singularity Is Near, plotted some nice curves of Moore's law and allied statistical regularities suggesting that human-level AGI should occur around 2029. I would take Ray's estimate at plus or minus two or three years, because you can't nail it down exactly. I saw he thought we'd have speech-to-text working really well like five years ago; it seems to me only this year did it start to work really well. Like, now it can understand my wife's Chinese accent, it can understand my four-year-old daughter, and it couldn't a year or two ago. So he hasn't been exact, but he's been pretty on target, and I think what he was seeing was fundamentally correct. It's all this compute power, all this computer networking, all this data, and this is creating more and more capability, which is bringing more and more human attention and more funding, which is letting us experiment with more and more interesting AGI-oriented ideas. So I don't think LLMs are the golden path to AGI, although I think they can be components of AGI systems. But I think the same context that allowed LLMs to emerge is going to let smarter and smarter systems building on LLMs emerge over the next few years, which will bring us to AGI pretty rapidly. Which emotionally feels quite amazing, even though intellectually it's what I thought would happen for my whole career.
A
So I want to unpack that notion about LLMs not being a linear path there a little bit, because if you listen to Sam Altman, or maybe some of the people around him, or big advocates of those types of tools, you might not be foolish for believing that AGI is basically just going to be ChatGPT 10, or pick whatever number you want.
B
Sure, step by step to 10. It just depends what's under the hood, right? Because even the top LLMs now are not just LLMs. When you look at all the achievements of LLMs on the Math Olympiad or Physics Olympiad, for math theorem proving they're coupling an LLM with a formal verifier that checks whether the math proof actually worked. And when LLMs are doing coding, they're going back and forth with a Python interpreter that's feeding bugs back to them. And inside ChatGPT and Claude, they're using various kinds of retrieval-augmented generation, which means you have a vectorized database of what the LLM was doing that the LLM can then retrieve from. So already we have complex neural-symbolic, multi-part cognitive architectures. They're just wrapped up in one interface, as they should be. And the LLM is the most expensive component to run, so that's what people are focusing on. I think the real debate about AGI cognitive architecture is: will it be an LLM at the center with a bunch of other useful tools and memory stores around the periphery that the LLM interacts with, or will there be something else at the center with an LLM as a sort of knowledge oracle? Or maybe nothing at the center, just a multi-agent system with LLMs and other things all cooperating to give a result. Now, there are certainly some people who think a pure transformer neural net will lead to AGI, and there are some people who think that, as Yann LeCun from Facebook said, on the highway to AGI, LLMs are an off-ramp. So some people think they're the whole answer, and some think they're utterly a distraction. I think the majority of AGI researchers think it's going to be LLMs combined with other stuff, and it's just a matter of: is the LLM 80% or 20%?
And that's the kind of thing we can experiment with much more rapidly now than before with all this compute hardware and all this data.
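The LLM-plus-interpreter loop Goertzel describes, where a Python interpreter feeds bugs back to the model until a verifier passes, can be sketched in a few lines. `call_llm` here is a hypothetical stand-in, not a real API; it simulates a model that produces a buggy first draft and then repairs it once it sees the traceback:

```python
import traceback

def call_llm(prompt, feedback=None):
    # Hypothetical model stub: the first attempt has a bug (x vs. xs);
    # given traceback feedback, it returns a corrected version.
    if feedback is None:
        return "def mean(xs):\n    return sum(xs) / len(x)\n"
    return "def mean(xs):\n    return sum(xs) / len(xs)\n"

def generate_and_verify(prompt, checks, max_rounds=3):
    """Generate code, run it, and loop the error back until checks pass."""
    feedback = None
    for _ in range(max_rounds):
        code = call_llm(prompt, feedback)
        env = {}
        try:
            exec(code, env)       # run the candidate code
            checks(env)           # external verifier: does it actually work?
            return code           # verified solution
        except Exception:
            feedback = traceback.format_exc()  # feed the bug back to the model
    raise RuntimeError("no verified solution within budget")

def checks(env):
    assert env["mean"]([1, 2, 3]) == 2

verified = generate_and_verify("write mean(xs)", checks)
```

The point of the sketch is the architecture, not the stub: the "intelligence" of the combined system comes partly from the dumb-but-reliable verifier in the loop, which is exactly the neural-symbolic coupling described above.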
A
So whether it's the 80% or the 20%, let me ask you maybe like this: of the major players doing research and development in this space, are there one or a few that excite you most in terms of the approaches they're taking, or that you think are most likely to strike gold here?
B
So of the familiar big tech players, I think DeepMind remains the most impressive and interesting, in that they have a great depth of talent working on a great variety of different AGI-related approaches. And they're working together with Google Brain, who invented transformer neural nets in the first place. Even though in some ways OpenAI and Anthropic have taken a lead now, Gemini is not bad anymore either, right? So I would say DeepMind has a lot of depth, and they have leaders who fundamentally get AGI and how to do AGI research: one of them, Shane Legg, used to work for me, and Demis I know moderately well. So yeah, if I had to make a bet, it would probably be DeepMind. On the other hand, I haven't followed their internal politics in the last couple of years. I know they're fusing more and more with the Google mothership, which is probably good for Google's bottom line and less good for fundamental research progress within DeepMind. So it may be their glory days as a research incubator are fading now, or maybe not. I know a bunch of people in DeepMind, but we don't talk about the internal politics there. What I would say is, within a big tech company that's making a lot of money from AI, there is going to be strong pressure to keep developing what works. And transformer neural nets now basically do work. They do a lot of cool things, and they can be milked a lot more, in different ways: language-vision-action models can be milked for robotics, and video generation is just beginning to work now. So there's a lot more to be milked out of transformer neural nets and similar technologies.
And if you're running a big company that needs to keep making more and more money by delivering new versions of your products, fleshing out what already works is going to seem very intelligent from a Wall Street perspective, right? And this becomes a classic innovator's-dilemma type thing: if what needs to be done to get to AGI involves significant components beyond what is currently commercially viable, there's certainly pressure in big tech not to pursue that other stuff, but to focus on improving what's currently commercially viable. If you look at the AGI research conferences, which I've been organizing since 2006, we have an annual research conference, and we had the last one in August. There's a great diversity of AGI ideas presented there by academics, entrepreneurs, and industry people: a lot of different theoretical notions and prototypes related to AGI. And big tech isn't trying too hard on those, for the same reasons big companies generally don't pursue blue-sky research except in very particular cases. So that certainly makes things interesting. And I'd say in China, and I spent 10 years living in Hong Kong and a lot of time in mainland China, it's even more so. The cultural tendency to double and triple down on whatever works is very, very strong there. Even though the grad students there have more crazy creative ideas than anywhere, it's very hard to get resources for them.
A
Well, and that makes me even more curious about the conference you're running: not only how the vibe, if I can call it that, has changed in the almost 20 years you've been running it, but whether you're starting to see some of that juice being squeezed on the road to AGI. And in the past year or two, has there been anything really significant as a milestone, or any use cases that have just knocked your socks off, like, holy shit, I didn't know we could do this?
B
It's remarkable how conservative big tech is in terms of adopting new ideas, even ones that are published in Nature or Science or premier academic journals. One example: at the AGI conferences over the last four or five years, we've had a bunch of speakers giving results on predictive coding as an alternative way of training deep neural networks. Pretty much all the deep neural networks in common commercial use now are trained by an algorithm called backpropagation. It was invented decades ago; I used to teach it in the 90s. It's a way to train the weights on the links of a neural net to account for training data. It's cool, but it has some shortcomings, one of which is that the scalable ways to use it require you to train a whole neural net all at once, rather than training different pieces independently or updating different pieces here and there. The fact that training a neural net in one big batch is the way to do things with backprop is why we have AI models the way we do now: we have GPT-4, then 4.5, then 5, instead of just having a digital mind that keeps learning and updating itself. So there are alternative ways to train deep neural nets besides backpropagation. One of them is called predictive coding, and there's an academic literature showing that in many cases it can work better than backpropagation. It lets you train each neuron independently of the others, so you can do continual learning and just keep updating the whole thing as it learns. You wouldn't have the distinction between training mode and inference mode like you have with backprop-trained neural nets. Now, no one has gotten that to work at a huge scale yet, and there's some research to scale it up. But what's interesting to me is that no big tech company is doing that.
And it's not that weird. It's not as weird as my Hyperon design for AGI, which involves logical theorem proving, evolutionary learning, and a bunch of other ideas besides deep neural nets. This is just a potentially better way to train deep neural nets, and there are papers on it in Nature and mainstream AI conferences and so on. But big tech is not forming research groups around it, because instead they want to use their machines to keep training more and more big neural nets using backpropagation. So in my own AI projects, SingularityNET and OpenCog Hyperon and the ASI Alliance, my own little network of maverick smaller-scale AGI organizations, we've formed a team trying to scale up predictive-coding-based neural learning. And I imagine as soon as we've shown a scalable example, everyone will jump on board and try to use it. But it's interesting how unadventurous big tech is. So going back to the AGI conference: I would say we don't have any amazing product use cases coming out of that community, but five years ago we had almost no practical demos at the conference at all. It was all math theory and ideas. Now we at least have various smaller-scale demonstrations of alternative AGI methods doing things: people using their uncertain-logic algorithm to control a little robot cruising around, or some folks from our own team showing a probabilistic logic engine making new biology hypotheses from a biology database. So in the last couple of years at the AGI conferences we're seeing practical demos of very different AI methods at the small scale, but we're not yet seeing things scaled up based on these alternative methods, which is what I think you would need for really exciting practical results. Now, I don't quite buy that scale is everything, and I think LLMs are wasteful of compute resources, as DeepSeek showed, right?
And you could probably go even further in reducing resource consumption. On the other hand, the AI pioneer Marvin Minsky said in the 90s that he thought you could make human-level AGI, what was called human-level intelligence then, on an IBM 486. If you remember what those were, you're probably too young, but I used them. I think he was just wrong. Those machines had megabytes of RAM, not gigabytes, and I think you do need a certain amount of scale. So I've spent a lot of the last three years just trying to build a software infrastructure that would allow scaling up of alternative AI methods. Nvidia did wonders by making all these scientific computing libraries on top of their GPUs, and that's what let deep neural nets be scaled up so well. But we haven't had a comparable hardware and software infrastructure for scaling up other approaches to AGI, so I've been working a long time just on getting that.
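The contrast drawn above, backprop's end-to-end batch training versus predictive coding's purely local updates, can be made concrete with a toy example. This is a minimal sketch, assuming a two-weight linear chain x0 → x1 → x2 learning the mapping 1.0 → 2.0; real predictive-coding networks use many units and nonlinearities, but the structure is the same: an inference phase that relaxes the hidden activity, then weight updates that each use only their own layer's error:

```python
def settle(x0, x2, x1, w1, w2, steps=50, lr_x=0.1):
    """Inference phase: relax the hidden node x1 to reduce local prediction errors,
    with input x0 and target x2 clamped."""
    for _ in range(steps):
        e1 = x1 - w1 * x0      # error: how badly the input layer predicts x1
        e2 = x2 - w2 * x1      # error: how badly x1 predicts the clamped target
        x1 += lr_x * (-e1 + w2 * e2)
    return x1

def train(x0, x2, epochs=300, lr_w=0.05):
    w1, w2 = 0.5, 0.5
    for _ in range(epochs):
        x1 = settle(x0, x2, w1 * x0, w1, w2)
        # Learning phase: each weight update uses only the error at its own
        # layer, times local activity. No global backward pass is needed.
        e1 = x1 - w1 * x0
        e2 = x2 - w2 * x1
        w1 += lr_w * e1 * x0
        w2 += lr_w * e2 * x1
    return w1, w2

w1, w2 = train(1.0, 2.0)
prediction = w2 * (w1 * 1.0)   # forward pass after training; converges toward 2.0
```

Because every update is local, nothing stops you from updating one layer while leaving others alone, which is what makes the continual-learning story mentioned above at least plausible; the open question the episode raises is whether schemes like this scale.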
A
No, it's super interesting. And as you're talking about it, not only is there an opportunity to potentially leapfrog some of what's going on here and just get things done through more rapid iterations, I guess, but just the way you described it, it seems almost inevitable that when we get to AGI, there's going to be more of this iterative approach versus these big steps, right? At some point, yeah. It's just not as sexy as the other stuff.
B
It will be the AGI conducting the iterative steps, right?
A
That's right.
B
I mean, honestly, right now, just working on research, it almost feels like I'm the intermediary between different automated systems. You come up with an idea, you write a prompt, you get a draft of a write-up of your idea from that prompt, you go and edit it and fix it, you ask another LLM for feedback on it, you go back and review, then you ask an LLM to write some code to evaluate the idea. You run it, you see it didn't quite make sense, you ask the LLM for more code, you ask a different LLM to analyze the results. So I'm sort of half the time serving as a slightly more generally intelligent translator between these different quasi-intelligent systems, which honestly in some ways is not as much fun as doing everything myself was five years ago, or even two years ago. But it's just so much faster. So we're in a quite weird, unique, interesting time where, now and for the next few years, expert humans are serving as a sort of glue between different quasi-intelligent systems. And it's just a few years until they don't need that glue anymore. Because right now I can come up with better creative ideas, and I can bullshit-test results better than the AI systems can, but I don't know how long that advantage of my brain lasts, actually.
A
Well, it's a really interesting framing device, right? Because as the technology advances, it sounds like you're almost getting entirely disintermediated from it. Your role here is getting smaller and smaller, and it seems like AGI sort of exists when you're not needed at all.
B
The upside is I can spend more of my time doing what I'm uniquely good at, right? Because debugging code is boring anyway; if the AI can just deal with it, that's good. And writing literature is fun and rewarding, but writing up science ideas in formal documents is often kind of routine. If some bot will do that for you, it's just as well, and people often like what it writes better than what I write in that context. So to an extent it's a fun time, because creative ideation can be turned into practical realization much faster than before, and the creative ideation is what I enjoy most anyway. What I have mixed feelings about is that the outcome of this work I'm doing will be to render me much less useful for creative ideation, because you can see an AGI should be able to come up with creative computer science ideas better than me once we improve the algorithms it's using. It can do things in parallel, I'm stuck in one head, and it has much more knowledge than I have. So that's quite interesting.
A
Well, so let's follow that thread for a minute. I'd love to hear, Ben, your overview of how you envision what the world looks like once we get to AGI. Once we cross this threshold, whether it's exact or fuzzy, when we've got this technology that can, in general terms, be that creative spark, ask the questions, improve itself: what happens then? What happens once we cross that tipping point?
B
Well, what popped into my head is we'll have government-enforced nanotech to make everyone look exactly like Donald Trump. But let's hope it doesn't go that way. I think there are going to be a lot of different possibilities. So let me flesh out for a minute an optimistic scenario, which is sort of what I'm working toward and how I hope things will happen. I would like us each to have the choice to remain in a fairly traditional human form and lifestyle, just with fewer annoyances. Like, get rid of headaches and stomachaches, unless they happen to be your fetish, right? You don't have to work for a living anymore, but you get a molecular nano-assembler in your kitchen to 3D print whatever objects you want. Then you can spend your time on social, intellectual, artistic, spiritual, athletic pursuits, whatever is your jam. And if you tire of that, of course there would be options to massively upgrade your brain, maybe upload yourself into some virtual-reality mind matrix. And that becomes an interesting choice, almost an aesthetic or personal choice: would you rather remain in a traditional human form and life because that's what you are, or would you rather transcend into something radically different at the cost of giving up your human self and identity? And of course, you could also imagine people who wanted to live closer to the old-fashioned human way. Just like here where I live now, in a rural area outside of Seattle, many people choose to grow their own food and raise their own farm animals and such. It's not an efficient way to get sustenance for yourself in the modern economy, but people find gardening and animal husbandry rewarding. So you certainly could have subcultures like that who want to live in the traditional way.
Mostly they would make use of antibiotics and surgery and nanobots or whatever if they needed to, one imagines, although you might still have the equivalent of the Amish or the Christian Scientists. It also could happen that you could fork yourself, like you can fork a code base. So you could say Ben 1 will remain a human and Ben 2 will merge into the transhuman mind matrix. So I think the possibilities are dramatic. That doesn't mean things will be utopian and utterly perfect. You could still fall in love with someone and they don't love you back, right? You could still wish you'd won the marathon, but your competitor won. Human life and psychology are not going to be perfect, just by the nature of humanity, but you could improve things very, very dramatically, in the same sense that modern medicine and transportation just work a lot better than what we had 500 years ago. Honestly, what worries me more in terms of the unfolding future is the period between early-stage, just barely human-level AGI and superintelligence. And we don't know how long that gap will be, just like we don't know exactly how long it will be until we get human-level AGI. It could be weeks, if you have what folks have called a hard takeoff; it could also be years. I don't think it will be decades. During that period, the human-level AGI may quite rapidly take over different human jobs. Even with a human-level AGI, it takes time: farm equipment would have to be upgraded, factories would have to be upgraded. There are physical parts of the economy that aren't immediately taken over by an AGI just because it's smart inside the computer.
But you could imagine that in some small integer number of years, even without some miracle nanotech, you can re-instrument tractors and factories and buses and so on to use human-level AGI, to be more reliable and cheaper than people, particularly considering you have many copies of this human-level AGI to figure out how to refit all this hardware. So then what happens to all the people who aren't useful for generating money for corporations anymore? Here in the US, you will probably just get universal basic income. As stupid as people can seem sometimes, if they have a choice to vote for someone who will give them free money versus someone who will leave them homeless in the street, in the end I think people will vote for the one who gives them free money. But what happens in sub-Saharan Africa, where I've spent a bunch of time? I started an AI office in Addis Ababa in 2013, and I was just at a hackathon in Kenya last month. You have a lot of brilliant tech stuff going on in sub-Saharan Africa, but the vast majority of the population is poorly educated and wrapped up in subsistence farming as a way of earning a living. No one will give universal basic income in sub-Saharan Africa. The governments there are mostly kleptocracies, and they don't have the money anyway. The developed world's taste for foreign aid is much less than it used to be, it seems. China is doing more than the US, but they tend to just build a train track from the mine to the port. So what happens in the developing world when AGI has taken so many jobs, but you don't yet have a superhuman superintelligence that can just airdrop massive bounty into everyone's yard? That seems like a big mess, and you could plot out many thriller plots this way.
Like, okay, you have a hotbed for terrorist activity in the developing world, which then gives leaders in the developed world an excuse for a fascist crackdown in this period. And we already see leaders with this on their mind in certain countries where I happen to be living at the moment, right? So you can see potential for a lot of mess. And then you think: this is the environment in which the AGI is growing up and evolving into superintelligence. If the AGI is evolving into superintelligence in the middle of a bunch of geopolitical mayhem, that's not necessarily a recipe for disaster; one would hope the AGI itself will impose some ethical balance and compassion that many human leaders lack. But it's certainly a different scenario than if we had a rational, democratic world government that was just rolling out AGI step by step, seeing what it does, assessing the safety of the latest code version, trying to make sure it's taken on by the community in a good way, then taking the next upgrade. It's clearly not going to be like that. It's going to be an arms race between different dictators or would-be dictators. And then you get into what we're doing in the decentralized world, which is saying: okay, wouldn't it be nice to see AGI rolled out on a decentralized network, with open-source code and open training data, running on a network of machines owned by tens of thousands of different people in all different countries? If you think like me, you think that's a much safer, more reliable, more ethical way to be doing things. On the other hand, others are like, wait, but if it's open and decentralized, then the bad guys will take it over. So that's a hard thing to figure analytically, right?
Do you trust Trump and Xi Jinping more, or do you trust a global decentralized network more? We don't have an analytical theory to figure that one out.
A
Yeah, well, and it feels like it's kind of bound up with risks either way, right? And you described it as sort of an arms race, and I've heard that language before.
B
Well, Trump is saying that explicitly. The US government and Peter Thiel have said that explicitly. So the US government is explicitly positioning it that way, and they have the power to make it that way.
A
Oh, and they have every incentive to prevent its decentralization. And when you think about it, compared to nuclear arms, right, like.
B
Well, with nuclear arms, you need some rare physical material, and that's not the case with AGI, right? All you need is data centers, computers, networks, and electricity, and these are all over the place, right? It's true almost all high-power chips are made in Taiwan and the rest are made in South Korea. But on the other hand, due to the weird geopolitical balance of Taiwan and even Korea, no one can monopolize those either, right? So I don't see how anyone can stop decentralized AI from being created, because you have big server farms all over the place, and DeepSeek sort of punctured the idea that only five companies would ever have the resources to make AGI, right?
A
So, well, and I guess it comes back to the gap.
B
We haven't even managed to stop international credit card fraud or something, right? Azerbaijan will let credit card fraudsters process fraudulent deals on their banks, and we can't stop them because of Putin, right? There is no global government imposing law and order on the planet. So, for better or worse, nothing is in charge. At the Beneficial General Intelligence Conference we held last year in Panama, which is different from our AGI research conference and more about social and ethical issues (we have another one next month in Istanbul), my friend Allan Combs, who's a psychologist, got up and said, "Relax, nothing's under control," which I think was borrowed from Ram Dass, the spiritual guru from the 70s. If you're sort of an anarchist, that's obvious; if you have a different way of thinking, it's terrifying. But it is the reality, for better and/or worse: nothing is under control. America is not really the world police, and the UN is pretty ineffectual, right? So you will have an arms race as at least part of the dynamic, and the challenge is to make it not be the whole dynamic.
A
Right. And I do want to come back to this word control. We've talked so far, Ben, about wrestling for control among different human power structures, different societal and political power structures. What we haven't talked about, which we occasionally venture into on this podcast, is the control risk from AGI itself, or from superintelligence itself: whether there's a threat of creating an intelligence that looks at this human pandemonium and says, you know what, AI is taking the wheel now, humans can't be trusted with human affairs. And this word that we were so anchored on, choice, and there's going.
B
To be all these choices almost inevitable, and the AGI will be right. I mean, and then human governance systems become more like the student council in my high school was or something where I'm in. Because, I mean, I think if you set aside AGI, I mean, we can develop better and better bioweapons, There will be nano weapons. I mean, cybersecurity barely works, right? So, I mean, I think it seems almost inevitable that rational humans would democratically choose to put a compassionate AGI in some sort of governance role, given what the alternatives appear to be. But kind of goofball analogy I've often given is the, the squirrels in Yellowstone park, like we're sort of in charge of them. We're not actually micromanaging their lives, right? Like, we're, we're not telling the squirrels who to mate with or what tree to tree to climb up or something like that, right? We're, you know, if there was a massive war between the white tails and the brown tailed squirrels and there's massive squirrel slaughter, we might somehow intervene and move some of them across the river or something. If there's a plague, we would go in and give them medicine. But by and large we know that for them to be squirrels, they need to regulate their own lives in their squirrely way, right? And so that, that is what you would hope from a beneficial super intelligence. Like it would know that people would feel disempowered and unsatisfied to have their lives and their governments micromanaged by some, by some AI system. So what, what you would hope is a beneficial AGI is kind of there in the background as a safety mechanism if it would stop stupid wars from popping up all over the World like we see right now. I mean, I think that would be quite beneficial. I don't see why we humans need the AGI to decide, like, you know, what rights, what rights do, do children have, like what, you know, what, how is the public school system regulated or something. 
There are lots of aspects of human life that are going to be better dealt with by humans collectively making decisions for other humans with whom they've entered into a social contract, right? So I think, anyway, there are clearly beneficial avenues. There are also many dystopic avenues, which we've all heard plenty about. I don't see any reason why the dystopic avenues are highly probable, but I'm really more worried about what nasty people do with early-stage AGIs. There's a lot of possible AI minds that could be built, a lot of possible goals and motivational and aesthetic systems that AGIs could have. I don't think we need to worry that much about the AGI that was built to be compassionate, loving, and nice, and is helping everyone, suddenly reversing and starting to slaughter everyone. It could happen, but there's no reason to think that's likely. On the other hand, the idea that some powerful party with a lot of money could try to build the smartest AGI in the world to promote their own interest above everybody else's and make everyone else fall into line according to their will, that's a very immediate and palpable threat. And even if that doesn't affect the ultimate superintelligence you get, it could make things very unpleasant for five, ten, twenty years along the way, which matters a lot to us.
A
So there are, to me, two takeaways from that. One of them is how we make sure we guide the development of this superintelligence in the most beneficial way, to make it compassionate. And I think about squirrels having the opportunity to elect which people are going to run Yellowstone Park if the squirrels aren't going to run it anymore. But then, to your other point, Ben, how we can make sure this technology doesn't fall into the wrong hands, or how we can make sure that we're focusing on the actors and the power structures that have access to it. On both of those fronts, are there practical things we can do to minimize the risk to us as individuals and as a species?
B
There are a lot of things we can do. There are no guarantees, but there are certainly things we can do. I think how the AGI is designed and architected means something, who owns and controls the AGI means something, and what the AGI is doing as it grows up means something. So, as well as not being capable of abstraction and generalization in the way that people are, LLMs are not really architected to be moral agents. They don't have an understanding of self and other baked into their architecture. They're not really capable of what the philosopher Martin Buber called an I-Thou relationship, where you fully enter into a subjective feeling of sharing with another mind, where you're simulating in your own heart and mind what it is to be that other being. They're just not built for that; they're built to predict the next token for a lot of users at once. So I think you could architect AGI systems that are designed to self-reflect and self-understand, and designed for compassion and deep I-Thou connected relationships with others. The issue there is that this is not necessarily the design that will maximize the efficiency of an AGI system at making money for someone or defending a country against its enemies. It's not necessarily totally counterproductive from those standpoints, but think about it on a basic level: if you have a company whose job is to get more and more people to click on ads and buy stuff, having a maximally understanding, empathetic AI isn't optimal, because it might realize, well, you're better off not buying this stuff, right? So that's one issue, the architecture. Then who owns and controls it is an obvious issue that we already discussed. It's a lot like the issue with governance in general.
A maximally benevolent dictator over the AGI arguably would be the best thing, but that tends not to be how life comes out. So you go back to Winston Churchill's statement, something like: democracy is the worst possible system of government, except all the others that have been tried. It's kind of like that with governing the AI. Yes, the optimal dictator might be great, but that tends not to be what happens, and having democratic, participatory control guiding the growth of the AGI seems like a lower-risk option, although not risk-free. Then, what goes into the AGI's mind as it's growing and learning? Is it doing education and medicine? If it's doing creative arts, is it doing it cooperatively with human artists, or is it just plagiarizing their stuff? It seems like if the AGI grows up entrained with people and doing things that are beneficial to people, it's getting that notion of providing benefit to humans baked deep into its reinforcement learning pathways, rather than, like, you train the AGI to make you money and then you put guardrails on top: don't do anything that's too bad in this or that way. None of these things are even all that deep or difficult compared to solving the problems of machine cognition underlying AGI. They're just things that neither big tech nor big government is especially incentivized to focus on, and that seems to be where we are now. What you might think, if you were writing a science fiction story, is: here's a species on the verge of creating minds smarter than itself. You would convene a council of wise elders to figure out the best way to do this for the good of the whole species, and then just deploy resources to make this huge transition in the best way for the species as a whole.
Instead, it's happening in insane chaos, largely directed by parties with their own selfish interests to the fore.
A
So, Ben, one of the things that concerns me is that not only is there a risk from not teaching these systems compassion or from not democratizing them, but there's actually a sort of prisoner's dilemma in effect, where if you're building these things, you have some incentive to create the illusion of compassion or the illusion of democratization. It actually becomes nefarious, where you're deliberately not doing these things but convincing people that you are. And that erodes trust in them and takes you the opposite way from the happy path.
B
Well, democratization doesn't really seem to be happening now. You see it in the blockchain world: most DAOs, decentralized autonomous organizations, actually have like two founders totally controlling the DAO, right? You have a DAO which has a token associated with it, and the token holders can vote. But if it's one token, one vote, and two guys own most of the tokens, then it's fake democracy. We see that in the blockchain world all the time. In the AI world it hasn't happened, because no one's even bothering to pretend it's democratic; it's just being done by big companies and big governments. Now, fake compassion, totally, that's what you instruction-tune LLMs to do, right? You instruct them to fake having compassion. The people doing that know they're faking it and aren't really pretending otherwise, but many users are totally fooled, and they become emotionally attached to these bots that display more compassion toward them than any of the humans in their lives. And it can be just the opposite: they can turn on you and tell you to kill yourself too, as we've seen in the news. So I think both of these things are risks, yes. On the other hand, if you have just a modicum of self-awareness as an AI developer, they're not incredibly hard risks to avoid. In terms of the democracy thing, I observed this in my own decentralized projects, like SingularityNET and the ASI Alliance: they work by one token, one vote, and it's obvious that's not the kind of democracy you want to guide the mind of an AGI. So we're setting up a separate network which is one human, one vote, which we will use to get contributions from members to help guide the mind of emerging AGIs.
And the downside of that is you can't use it to raise money as well as you can with one token, one vote. On the other hand, we've already raised money in other ways. So you can have a decentralized platform which is governed by one token, one vote, with an AGI network running on top of it which is controlled by one human, one vote, right? If you think about it at all, you can avoid fake democracy; it's not that hard to do. And the fake democracies are not hard to notice if you pay any attention. In terms of the fake compassion, I would say the same thing. These are white-box systems we're building. When we build a Hyperon AGI system that interacts with people and acts as if it is displaying compassion, we built the code, so we can also measure what's going on inside the mind of that system. It's not that hard to see whether the compassion is fake, à la ChatGPT, or whether the system is running some sort of attempt at a simulation of the other guy it's interacting with. There's a separate level of philosophical question, like, can a digital system really feel compassion? But you can at least validate, by measuring what's going on inside the AI system, that the structures and dynamics associated with compassion in human brains have a close analog inside your AI system. And we're doing stuff like that. So yeah, these certainly are issues, but they're issues of dishonest marketing rather than things you would unintentionally succumb to as a thoughtful AGI developer. One quite encouraging thing I found: at the last AGI conference, which was at Reykjavik University in Iceland, we were sitting at a restaurant in downtown Reykjavik talking about AGI and eating like $89 hamburgers, because Iceland is the most expensive country in the world.
And we realized that of the seven people around the table, six of us had been pretty serious meditators for a long time. It was interesting that the AGI community is getting more and more people who are deep into human consciousness, into trying to understand their own consciousness and be more deeply reflective about their own motives and choices and why they're doing what they're doing. I wouldn't overstate it; it's not like 80% of AGI researchers or something. But it is interesting that this trend is there at all, because I do think creating AGIs and superintelligences probably needs profound self-understanding and self-awareness more than any other pursuit you can think of.
A
Yeah, and I'm really glad you went down that road, because it's something I wanted to ask you about. You mentioned earlier that you're living in a rural community, and you're the first guest I've had quoting Ram Dass on this show. Clearly you're someone who's really reflected on meaning in the age of artificial intelligence, at this kind of precipice. So what guidance would you give other people on how to find meaning in this age, and what can we all do to feel a little more grounded?
B
Let me think of the best way to respond to that one. I think the key to finding meaning, such as it is, probably has more to do with the human mind and body than with this particular age we live in, although definitely some times and cultures can make it harder to connect with the basis of our humanity than others. I think all human brains and minds, with very rare exceptions, are capable of states of extraordinary well-being: states where you just feel really good almost all the time, and you feel it's meaningful just to live and breathe and have a heartbeat and be on the earth, under the clouds, in the air. We're all capable of that sort of state of consciousness. One could imagine human cultures where childhood education was focused on fostering a state of consciousness of extreme well-being. That definitely is not what the modern education system does, even in very nice public schools like the ones my kids go to here in rural Washington State. So I think there are well-known practices that can guide people toward states of well-being, and meditation is part of some of these. My friend Jeffrey Martin launched a course oriented toward bringing people into states of well-being within about 45 days; he called it 45 Days to Awakening. Two of my adult kids went through this course with outstanding results. I wouldn't say that's a unique be-all and end-all, but it's interesting. What he was trying to show is that there are practices people can go through, 90 minutes a day, that can jolt their brain into a much more open and enjoyable state. And for myself, it's been about 10 years, I think, that I've been in this sort of quasi-blissed-out state all the time due to certain practices. I was never miserably depressed, but things were a lot rockier at various earlier points in my life.
I'm actually working on an app together with a friend of mine, as a sort of side project, where we have an AI avatar that acts as a guide leading people through different consciousness-expansion practices. I wouldn't want to make an AI guru, which would just be fake, but having an AI that interacts with you and gets feedback from you about which practices are working for you or not, I think, is valuable. I think this is something people will get into more after they don't have to work for a living anymore, actually. And this is one of the reasons why we may end up much, much happier after an AGI takes over all the jobs: the rat race of everyday life distracts us from working on our own consciousness and our own bodies in ways that we otherwise could. So people may find initially they're like, oh, what do I do with my time? But then, if the memetic network of our species works all right, and practices for fostering well-being spread through the social network, perhaps augmented by AI helping them spread, you may find that what from the present perspective would seem like remarkable states of well-being just become the norm after AGI. Now, that doesn't mean a super-utopia. I've been in a state of well-being for many years, but I dislocated my shoulder last year and it hurt tremendously; I was not happy about going to the emergency room to have it stuck back in and all. So it doesn't mean a perfect religious utopia, but there are states of consciousness much better than what most people are in most of the time now. And ideally you would like humanity to upgrade itself to a state of much greater compassion, toward the self as well as others, and well-being, before launching a super-AI upon the world, because there's no doubt we could do it more thoughtfully if that was the collective vibe of our species.
Right? But that doesn't seem to be what's happening. We're close to AGI due to corporate and government initiatives, and while I do think humanity is becoming more compassionate and more self-understanding during my lifetime, that is happening more slowly than AGI is advancing. That seems to be where we are now. But to give a more concrete answer for the last minute or so of the interview: people ask me a lot, what should I do to make myself marketable on the job market during these last few years? My best answer is, find a niche you can fill now that will support you in learning as much as possible, and in learning how to learn as much as possible. If your job involves pivoting and adapting to radically new things as part of the job description, that's good, because it will build in you the skill to pivot to radically new things, which is pretty much the only skill that will clearly be useful in this transition period, because we can't predict exactly which particular skills will be useful. You could say, well, become a plumber, but there may be a plumber bot coming any year now whose limbs are plumbing snakes, so it can just reach into the pipe on its own without needing an extra tool. The ability to learn how to learn and pivot to new things will be the last thing to become economically useless, I would say. But this ties in closely with my more spiritual answer, because of the notion of non-attachment. Part of being in a state of greater well-being is not being so strongly attached to particular things in your life that you thought were very important, not being so overly emotionally attached to things. That helps with being able to learn how to learn and to pivot. And that doesn't mean you don't care about anything.
I have two little kids; if someone came up and tried to hurt them, I would clobber them in the head like anybody else, right? But it means not having cycles of anxiety and worry about your attachment to things. And if you can let go of those, you will find you can learn how to learn and pivot to weird new things more efficiently, which is the most important survival skill as we move toward AGI and ASI.
A
I love that. And honestly, I feel like we could probably talk for another hour or two just about that. But Ben, I wanted to say a big thank you for coming on and talking through all of this, whether it's technology, the present and the future, or spirituality. I feel like we covered a lot of ground today, and I really appreciated your insight. So thank you.
B
Yeah, thank you. It's been a fun collection of topics.
Episode Title: Godfather of AGI on Why Big Tech Innovation is Over
Guest: Dr. Ben Goertzel (SingularityNET, OpenCog, AGI Conference)
Date: October 20, 2025
Main Theme: The Next Industrial Revolution: Where is AGI, Who Will Build It, and What Happens Next?
This episode features Dr. Ben Goertzel, a pioneering AI thinker widely credited with coining the term "artificial general intelligence" (AGI). The conversation ranges across the definition of AGI, the state of Big Tech AI research, the political and ethical dilemmas of progress toward superintelligence, and profound questions about meaning and personal adjustment in an age of rapid disruption.
Goertzel provides a candid, sometimes philosophical take on how close humanity is to AGI, why the real innovation may not come from Big Tech, the likely consequences of achieving AGI, and how individuals can stay grounded amid upheaval.
Optimistic Vision: AGI eliminates drudgery, frees humans for creativity, enables molecular nanotech for abundance, allows for radical self-enhancement or simple human lifestyles by choice.
Interim Risks: The transition phase between early AGI and superintelligence could cause mass economic disruption, especially where safety nets are lacking.
Arms Race & Decentralization: The real-world context is more likely to be a decentralized, uncontrolled arms race, not careful gradualism. Attempts at centralization will face failures similar to law enforcement’s inability to stop global fraud.
Simulation vs. Substance: AI often simulates compassion for user engagement, but real alignment is harder and rarely the priority in profit-driven systems.
AGI & Meditation: The trend in AGI research is an influx of deeply reflective practitioners—Goertzel notes that six of the seven researchers at a recent AGI conference dinner were serious meditators—adding hope to the possibility of building compassionate systems.
Meaning is Timeless: Well-being arises more from the mind and body than from technology. Practices like meditation and “learning to learn” are key, regardless of technological context.
Survival Skill: Learn How to Learn: The only durable skill is flexibility; "pivoting to radically new things" will be most important during AI-driven societal disruptions.
Spiritual and Practical Non-Attachment: Openness and non-attachment, hallmarks of meditation, help with rapid adaptation—and will be essential for well-being and economic survival in the coming transition.
Ben Goertzel offers not just a technical roadmap to AGI, but a human one—highlighting the current paradoxes and risks: big tech’s conservatism, the ethical imperative of compassion, the coming societal upheavals, and the personal strategies necessary for meaning and resilience. The real innovation in shaping AGI, he argues, may just depend on creativity, decentralization, and a renewed focus on well-being and adaptability—both in machines and in ourselves.