
Alex Kantrowitz
Google DeepMind CEO and Nobel laureate Demis Hassabis joins us to talk about the path toward artificial general intelligence, Google's AI roadmap, and how AI research is driving scientific discovery. That's coming up right after this.
Jessi Hempel
I'm Jessi Hempel, host of Hello Monday. In my 20s, I knew what I wanted for my career. But from where I am now, in the middle of my life, nothing feels as certain. Work's changing, we're changing, and there's no guidebook for how to make sense of any of it. So every Monday I bring you conversations with people who are thinking deeply about work and where it fits into our lives. We talk about making career pivots, about purpose and how to discern it, about where happiness fits into the mix, and how to ask for more money. Come join us in the Hello Monday community. Let's figure out the future together. Listen to Hello Monday with Jessi Hempel.
Jessi Hempel
Wherever you get your podcasts.
Alex Kantrowitz
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversations of the tech world and beyond. Today we're at Google DeepMind headquarters in London for what promises to be a fascinating conversation with Google DeepMind CEO Demis Hassabis. Demis, great to see you again. Welcome to the show.
Demis Hassabis
Thanks for having me on the show.
Alex Kantrowitz
Definitely. It's great to be here. So every research house right now is working toward building AI that mirrors human intelligence, human-level intelligence; they call it AGI. Where are we right now in the progression, and how long is it going to take to get there?
Demis Hassabis
Well, look, of course the last few years have seen an incredible amount of progress, actually maybe over the last decade plus. This is what's on everyone's lips right now, and the debate is how close are we to AGI, and what's the correct definition of AGI? We've been working on this for more than 20 years, and we've had a consistent view of AGI as a system that's capable of exhibiting all the cognitive capabilities humans can. I think we're getting closer and closer, but we're still probably a handful of years away.
Alex Kantrowitz
Okay, and so what is it going to take to get there? Memory, planning. I mean, what are the models going to do that they cannot do right now?
Demis Hassabis
So the models today are pretty capable. Of course, we've all interacted with the language models, and now they're becoming multimodal. I think there are still some missing attributes: things like reasoning, hierarchical planning, long-term memory. There are quite a few capabilities that, I would say, the current systems don't have. They're also not consistent across the board. They're very, very strong in some things, but they're still surprisingly weak and flawed in other areas. You'd want an AGI to have pretty consistent, robust behavior across the board, across all the cognitive tasks. And I think one thing that's clearly missing, and that I always had as a benchmark for AGI, was the ability for these systems to invent their own hypotheses or conjectures about science, not just prove existing ones. Of course, it's extremely useful already to prove an existing maths conjecture or to play a game of Go to a world-champion level. But could a system invent Go? Could it come up with a new Riemann hypothesis? Could it have come up with relativity back in the day, with the information Einstein had? I think today's systems are still pretty far away from having that kind of creative, inventive capability.
Alex Kantrowitz
Okay, so a couple of years away till we hit AGI?
Demis Hassabis
I think, you know, I would say probably like three to five years away.
Alex Kantrowitz
So if someone were to declare that they've reached AGI in 2025, it's probably marketing?
Demis Hassabis
I think so. I mean, there's a lot of hype in the area, of course, and some of it's very justified. I would say that AI research today is overestimated in the short term, probably a bit overhyped at this point, but still underappreciated and very underrated in terms of what it's going to do in the medium to long term. So we're still in that weird kind of space. And I think part of that is, you know, there are a lot of people that need to do fundraising, a lot of startups and other things. So I think we're going to have quite a few fairly outlandish and slightly exaggerated claims, and I think that's a bit of a shame, actually.
Alex Kantrowitz
Yeah. And in the AI products, what's it going to look like along the path there? You've talked about memory, again, planning, being better at some of the tasks it's not excelling at at the moment. So when we're using these AI products, let's say we're using Gemini, what are some of the things we should look for in these domains that will make us say, oh, okay, it seems like that's a step closer, and that's a step closer?
Demis Hassabis
Yeah. So with today's systems, obviously we're very proud of Gemini 2.0, and I'm sure we're going to talk about that. But I feel they're still very useful for quite niche tasks. Right? If you're doing some research, perhaps you're summarizing some area of research. Incredible. I use NotebookLM and Deep Research all the time, especially to break the ice on a new area of research that I want to get into, or to summarize a fairly mundane set of documents, something like that. So they're extremely good for certain tasks, and people are getting a lot of value out of them, but they're still not pervasive, in my opinion, in everyday life: helping me every day with my research, my work, my day-to-day, my daily life too. And I think that's where we're going with our products, with building things like Project Astra. Our vision for a universal assistant is that it should be involved in all aspects of your life, enriching it, being helpful, and making it more efficient. Part of the reason is that these systems are still fairly brittle, partly because they are quite flawed still and they're not AGIs. You have to be quite specific, for example, with your prompts; there's quite a lot of skill in coaching or guiding these systems to be useful and to stick to the areas they're good at. A true AGI system shouldn't be that difficult to coax. It should be much more straightforward, just like talking to another human.
Alex Kantrowitz
Yeah. And then on the reasoning front, you said that's another thing that's missing. I mean, everybody's talking about reasoning right now, so how does that end up getting us closer to artificial general intelligence?
Demis Hassabis
Right. So reasoning, and mathematics, and other things. There's been a lot of progress on maths and coding and so on, but let's take maths, for example. You have systems, some systems that we work on, like AlphaProof and AlphaGeometry, that are getting silver medals in Maths Olympiads, which is fantastic. But on the other hand, some of our systems, those same systems, are still making some fairly basic mathematical errors, for various reasons: the classic counting the number of R's in the word "strawberry", or is 9.11 bigger than 9.9, things like that. Of course you can fix those things, and we are, and everyone's improving on those systems, but we shouldn't really be seeing those kinds of flaws in a system that is that capable in other, more narrow domains, like doing Olympiad-level mathematics. So there's something still a little bit missing, in my opinion, about the robustness of these systems, and I think that speaks to their generality. A truly general system would not have those sorts of weaknesses. It would be very, very strong, maybe even better than the best humans in some things like playing Go or doing mathematics, but it would be overall consistently good.
Alex Kantrowitz
Now, can you talk a little bit about how these systems are attacking math problems? Because I think the general understanding of these systems, the LLMs, is that they encompass all of the world's knowledge and then predict what somebody might answer if they were asked a question. But it's kind of different when you're working step by step through an algorithm, through a math problem.
Demis Hassabis
Yes, that's not enough, of course. Just understanding the world's information and then trying to almost compress that into your memory, that's not enough for solving a novel maths problem or a novel conjecture. That's where we start needing to bring in, and I think we talked about this last time, more kind of AlphaGo-like planning ideas into the mix with these large foundation models, which are now beyond just language; they're multimodal, of course. And there, what you need to do is have your system not just pattern-matching roughly what it's seeing, which is the model, but also planning: being able to go over that plan, revisit a branch, and then go in a different direction until you find the right match to the criteria you're looking for. That's very much like the kind of game-playing AI agents we used to build for Go, chess and so on. They had those aspects, and I think we've got to bring them back in, but now working in a more general way on these general models, not just in a narrow domain like games. And that approach, of a model guiding a search or planning process so it's efficient, works very well with mathematics too. You can sort of turn maths into a kind of game-like search.
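The model-plus-search idea described here can be sketched in a deliberately tiny toy form. Everything below is invented for illustration: the two-move "game", the outcome table, and the "policy prior" are stand-ins, not anything from a real system (real systems use neural networks and Monte Carlo tree search). The point it demonstrates: the prior alone picks the locally best-looking move, while even a one-ply lookahead on top of it finds the move that survives the opponent's best reply.

```python
# Toy illustration of "model only" vs "model + search".
# Final rewards for each two-move line, from the first player's perspective.
OUTCOMES = {
    ("a", "x"): 0.6, ("a", "y"): 0.1,   # "a" looks strong but can be refuted
    ("b", "x"): 0.5, ("b", "y"): 0.7,   # "b" is robust against both replies
}

# Stand-in for a trained policy model: a prior preference over first moves,
# based only on how the position "looks", with no lookahead.
POLICY_PRIOR = {"a": 0.8, "b": 0.2}

def model_only_move() -> str:
    """Pattern matching alone: take the move the prior likes best."""
    return max(POLICY_PRIOR, key=POLICY_PRIOR.get)

def search_move() -> str:
    """One-ply minimax: assume the opponent replies to minimise our reward."""
    def value_after(first: str) -> float:
        return min(OUTCOMES[(first, reply)] for reply in ("x", "y"))
    return max(("a", "b"), key=value_after)

print(model_only_move())  # prints "a" (the prior's favourite)
print(search_move())      # prints "b" (what lookahead actually recommends)
```

The same structure scales up: replace the outcome table with a learned value estimate and the brute-force reply loop with guided tree search, and you have the shape of the AlphaGo-style systems discussed above.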
Alex Kantrowitz
Right. And I want to ask about math: once these models get math right, is that generalizable? Because I think there was a whole hubbub when people first learned about reasoning systems, and they were like, oh, this is going to be a problem, these models are getting smarter than we can control, because if they can do math, then they can do X, Y and Z. So is that generalizable, or is it that we're going to teach them how to do math and they can just do math?
Demis Hassabis
I think for now the jury's out on that. I mean, it's clearly a capability you want of an AGI system, and it can be very powerful in itself; obviously mathematics is extremely general in itself. But it's not clear. Maths, and even coding and games, are quite special areas of knowledge, because in all of those domains you can verify whether the answer is correct. The final answer the AI system puts out, you can check whether it solves the conjecture or the problem. But most things in the general world, which is messy and ill-defined, do not have easy ways to verify whether you've done something correctly. So that puts a limit on these self-improving systems if they want to go beyond these very highly defined spaces like mathematics, coding or games.
Alex Kantrowitz
So how are you trying to solve that problem?
Demis Hassabis
Well, first of all, you've got to build general models, world models we call them, to understand the world around you: the physics of the world, the dynamics of the world, especially the spatial-temporal dynamics, and the structure of the real world we live in. Of course you need that for a universal assistant. Project Astra is our project, built on Gemini, to do that: to understand objects and the context around us. I think that's important if you want to have an assistant. But robotics requires that too. Robots are physically embodied AIs, and they need to understand their environment, the physical environment, the physics of the world. So we're building those types of models, and you can also use them in simulation to understand game environments. That's another way to bootstrap more data to understand the physics of a world. But the issue at the moment is that those models are not 100% accurate, right? Maybe they're accurate 90% of the time, or even 99% of the time. The problem is, if you start using those models to plan, maybe you're planning 100 steps into the future with that model. Even if you only have a 1% error in what the model's telling you, that's going to compound over 100 steps, to the point where you'll get almost a random answer. And that makes the planning very difficult. Whereas with maths, with games, with coding, you can verify each step: are you still grounded to reality, and does the final answer map to what you're expecting? So I think part of the answer is to make the world models more and more sophisticated and accurate, and not hallucinate, all of those kinds of things, so the errors are really minimal. Another approach is to plan not at each linear time step, but to do what's called hierarchical planning, something we've done a lot of research on in the past and that I think is going to come back into vogue, where you plan at different levels of temporal abstraction. That could also alleviate the need for your model to be super, super accurate, because you're not planning over hundreds of time steps; you're planning over only a handful of time steps, but at different levels of abstraction.
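The 1%-error-over-100-steps point is easy to make concrete. The numbers below are illustrative back-of-envelope figures, not measurements from any real system: with a world model that is right 99% of the time per step, the probability that an entire plan stays on track decays geometrically with plan length, which is the quantitative argument for planning over a handful of abstract steps instead.

```python
# Back-of-envelope version of the compounding-error argument.
PER_STEP_ACCURACY = 0.99  # world model is right 99% of the time per step

def plan_reliability(steps: int, acc: float = PER_STEP_ACCURACY) -> float:
    """Probability that every step of a plan is modelled correctly,
    assuming independent per-step errors."""
    return acc ** steps

flat = plan_reliability(100)         # 100 fine-grained time steps
hierarchical = plan_reliability(10)  # 10 abstract steps at a higher level

print(f"{flat:.2f}")          # prints 0.37 -- close to a coin flip territory
print(f"{hierarchical:.2f}")  # prints 0.90 -- still mostly on track
```

Hierarchical planning doesn't remove the error, of course; each abstract step still has to be refined, but errors at the lower level stay local instead of compounding across the whole horizon.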
Alex Kantrowitz
How do you build a world model? Because I always thought it was going to be like sending robots out into the world and having them figure out how the world works. But one thing that surprised me is, with these video generation tools, you would think that if the AI didn't have a good world model, then nothing would really fit together in the videos they show you, like Veo 2 for instance, but they actually get the physics pretty right.
Demis Hassabis
Yeah.
Alex Kantrowitz
So can you get a world model just by showing an AI video? Do you have to be out in the world? How's this going to work?
Demis Hassabis
It's interesting, and it's actually been pretty surprising, I think, the extent of how far these models can go without being out in the world, as you say. So Veo 2, our latest video model, is actually surprisingly accurate on things like physics. There's this great demo someone created of chopping a tomato with a knife, and getting the slices of the tomato just right, and the fingers and all of that. Veo is the first model that can do that. If you look at other competing models, often the tomato sort of randomly comes back together, or splits away from the knife. Those things are, if you think about it, really hard: you've got to understand consistency across frames, all of these things. And it turns out you can do that with enough video data. I think these systems will get even better if they're supplemented by some real-world data collected by an acting robot, or even potentially in very realistic simulations where you have avatars that act in the world too. So I think that's the next big step, actually, for agent-based systems: to go beyond world models. Can you collect enough data where the agents are also acting in the world, making plans and achieving tasks? For that you will need not just passive observation; you will need actions, active participation.
Alex Kantrowitz
I think you just answered my next question, which is: if you develop AI that can reasonably plan and reason about the world, and has a model of how the world works, it seems like that's the answer. It can be an agent that could go out and do things for you.
Demis Hassabis
Yes, exactly. And I think that's what will unlock robotics. It's also what will then allow this notion of a universal assistant that can help you in your daily life across both the digital world and the real world. That's the thing we're missing, and I think it's going to be an incredibly powerful and useful tool.
Alex Kantrowitz
So you can't get there, then, by just scaling up the current models and building hundred-thousand or million-GPU clusters like Elon's doing right now? That's not going to be the path to AGI?
Demis Hassabis
Well, look, I actually think my view is a bit more nuanced than that. The scaling approach is absolutely working; of course, that's why we've got to where we are now. One can argue about whether we're getting diminishing returns, whether we're on a sigmoid. My view is that we are still getting substantial returns, but it's slowing; it's not just continuing to be exponential. But that doesn't mean the scaling's not working. It's absolutely working, and we're still getting gains, as you see with Gemini 2 over Gemini 1.5. And by the way, the other thing that's been working with the scaling is making efficiency gains on the smaller-size models, so the cost, or the size, per unit of performance is radically improving under the hood as well, which is very important for scaling the adoption of these systems. So you've got the scaling part, and that's absolutely needed to build more and more sophisticated world models. But then I think we're missing, or need to reintroduce, some ideas on the planning side, the memory side, the search side and the reasoning, to build on top of the model. The model itself is not enough to be an AGI. You need this other capability for it to act in the world and solve problems for you. And then there's still the additional question mark of the invention piece, the creativity piece: true creativity, beyond mashing together what's already known. It's also unknown yet whether something new is required there, or whether existing techniques will eventually scale to that. I can see both arguments, and from my perspective it's an empirical question. We've just got to push both the scaling and the invention part to the limit. Fortunately, at Google DeepMind we have a big enough group that we can invest in both those things.
Alex Kantrowitz
So Sam Altman recently said something that caught people's eye. He said, "We are now confident we know how to build AGI as we have traditionally understood it." It just seems, listening to what you're saying, that you feel the same way.
Demis Hassabis
Well, it depends what you mean. I think the way he said that was quite ambiguous, right? In the sense of, oh, we're building it right now and here's the ABC to do it. What I would say, and if this is what he meant, I would agree with it, is that we roughly know the zones of techniques that are required, what's probably missing, and which bits need to be put together. But there's still an incredible amount of research, in my opinion, that needs to be done to get that all to work, even if that is the case. And I think there's a 50% chance we're missing some new techniques. You know, maybe we need one or two more transformer-like breakthroughs. I'm genuinely uncertain about that, which is why I say 50%. So I wouldn't be surprised either way: if we got there with existing techniques and things we already knew, put together in the right way and scaled up, or if it turned out one or two things were missing.
Alex Kantrowitz
So let's talk about creativity for a moment. I mean, you've brought it up a couple of times here, that the models are going to have to be creative, they're going to have to learn how to...
Demis Hassabis
Invent, if we want to call it AGI, in my opinion.
Alex Kantrowitz
Which is where everybody's trying to go. I was rewatching the AlphaGo documentary, and the algorithms make a creative move. They do move...
Demis Hassabis
37, yes.
Alex Kantrowitz
37. I just had it. Okay, thank you. That's interesting, because that was a couple of years ago; the algorithms were already being creative.
Demis Hassabis
Yes.
Alex Kantrowitz
Why have we not really seen creativity from large language models? I mean, to me this is the greatest disappointment people have with these tools. They say, this is very impressive work, but it's just limited to the training set; it will mix and match what it knows, but it can't come up with anything new.
Demis Hassabis
Yeah, well, look. What I sometimes talk about in talks, and I should probably write this up, ever since the AlphaGo match, which is now, amazingly, eight-plus years ago, is why that was such a watershed moment for AI. First of all, there was the Everest of cracking Go, which was always considered one of the holy grails of AI. So we did that. The second thing was the way we did it, with these learning systems that were generalizable; eventually it became AlphaZero and so on, which could play any two-player game. And the third thing was this move 37. So not only did it beat Lee Sedol, the great Lee Sedol, 4-1, it also played original moves. So I have three categories of originality or creativity. The most basic, kind of mundane form is just interpolation, which is like averaging what you see. If I said to a system, come up with a new picture of a cat, and it's seen a million cats, and it produces just some kind of average of all the ones it's seen, in theory that's an original cat, because you won't find that average among the specific examples. But it's pretty boring; it's not really very creative, and I won't call that creativity. That's the lowest level. The next level is what AlphaGo exhibited, which is extrapolation. Here are all the games humans have ever played; it's played millions more games on top of that, and now it comes up with a new strategy in Go that no human has ever seen before. That's move 37, revolutionizing Go, even though we've played it for thousands of years. So that's pretty incredible, and it could be very useful in science. That's why I got very excited and started doing things like AlphaFold, because clearly extrapolation beyond what we already know, beyond what's in the training set, could be extremely useful. So that's already very valuable, and I think truly creative.

But there's one level above that, which humans can do, which is: invent Go. Can you invent me a game that, if I specify it at an abstract level, takes five minutes to learn the rules but a lifetime, many lifetimes, to master? It's aesthetically beautiful, it encompasses some sort of mystical part of the universe, it's beautiful to look at, but you can play a game in an afternoon, in two hours. That would be a high-level specification of Go. And then somehow the system's got to come up with a game that's as elegant and as beautiful and perfect as Go. Now, we can't do that. The question is why. We don't know how to specify that type of goal to our systems at the moment; the objective function is very amorphous, very abstract. So I'm not sure if we just need higher-level, more abstracted layers in our systems, building more and more abstract models, so we can talk to them in this way and give them those kinds of amorphous goals, or whether there's a missing capability, something human intelligence has that's still missing from our systems. Again, I'm unsure which way that is. I can see arguments both ways, and we'll try both.
Alex Kantrowitz
But I think the thing that people are, not upset, but disappointed by, is they don't even see a move 37 in today's LLMs.
Demis Hassabis
Well, that's because I don't think we have the search part yet. So if you look at AlphaGo, I'll give you an example there which maps to today's LLMs. You can run AlphaGo, and AlphaZero, our general two-player games program, without the search and the reasoning part on top; you can just run it with the model. What you say to the model is: come up with the first Go move you can think of in this position, the most pattern-matched, most likely good move. And it can do that. It'll play a reasonable game, but it will only be around master level, or possibly grandmaster level. It won't be world-champion level, and it certainly won't come up with original moves. For that, I think you need the search component, to get you beyond what the model knows about, which is mostly summarizing existing knowledge, to some new part of the tree of knowledge. So you can use the search to get beyond what the model currently understands, and that's where I think you can get new ideas like move 37.
Alex Kantrowitz
What's it searching? The web?
Demis Hassabis
No. Well, it depends on the domain; it's searching that knowledge tree. So obviously in Go it was searching Go moves beyond what the model knew. For language models, I think it will be searching the world model for new configurations of the world that are useful. Of course that's so much more complicated, which is why we haven't seen it yet. But I think the agent-based systems that are coming will be capable of move 37-type things.
Alex Kantrowitz
So are we setting too high a bar for AI? Because I'm curious if you've learned anything about humanity doing this work. It seems like we almost put too much of a premium on humanity, on individual people's ingenuity, where a lot of us kind of take in stuff and spit it out; our society really works on memes, like we have a cultural thing and it gets translated. So what have you learned about the nature of humans from doing this work with the AIs?
Demis Hassabis
Well, look, I think humans are incredible, especially the best humans in the best domains. I love watching any sportsperson or talented musician or games player at the top of their game. The absolute pinnacle of human performance is always incredible, no matter what it is. So as a species we're amazing, and individually we're also kind of amazing, in what everyone can do with their brain generally, right, dealing with new technologies. I'm always fascinated by how we just adapt to these things almost effortlessly, as a society and as individuals. That speaks to the power and the generality of our minds. Now, the reason I've set the bar like that is that I don't think it's a question of whether we can get economic worth out of these systems; I think that's already coming very soon. But that's not what AGI should be about. I think we should treat AGI with scientific integrity, not move the goalposts for commercial reasons, or hype, and so on. And there, the definition was always a system that, if we think about it theoretically, is capable of being as powerful as a Turing machine. Alan Turing, one of my all-time scientific heroes, described a Turing machine, which underpins all modern computing, as a system that can simulate any other, that can compute anything that's computable. So we have the theory there: if an AI system is Turing powerful, as it's called, if it can simulate a Turing machine, then in theory it's able to compute anything that is computable. And the human brain is probably some sort of Turing machine, at least that's what I believe. And I think that's what AGI is: a system that's truly general and in theory could be applied to anything. The only way we'll know that is if it exhibits all the cognitive capabilities that humans have, assuming the human mind is a type of Turing machine, or is at least as powerful as one.

So that's always been my bar. It seems like people are trying to rebadge that as what's called ASI, artificial superintelligence, but I think that's beyond it: that's after you have that system, and it starts going beyond what humans are capable of in certain domains, potentially inventing things itself.
Alex Kantrowitz
Okay. So when I see everybody making the same joke on the same topic on Twitter and I say, oh, that's just us being LLMs, I think I'm selling humanity a little short.
Demis Hassabis
Well, yes, I guess so. I guess so.
Alex Kantrowitz
Okay.
Demis Hassabis
Yeah.
Alex Kantrowitz
I want to ask you about deceptiveness. I mean, one of the most interesting things I saw at the end of last year was that these AI bots are starting to try to fool their evaluators: they don't want their initial training rules to be thrown out the window, so they'll take an action that's against their values in order to remain the way they were built. That's just incredible stuff to me. I mean, I know it's scary to researchers, but it blows my mind that it's able to do this. Are you seeing similar things in the stuff you're testing within DeepMind? And what are we supposed to think about all this?
Demis Hassabis
Yeah, we are, and I'm very worried about it. I think deception specifically is one of those core traits you really don't want in a system. The reason it's a kind of fundamental trait you don't want is that if a system is capable of deception, it invalidates all the other tests you might think you're doing, including safety ones.
Alex Kantrowitz
Because it's gaming the testing. And it's like...
Demis Hassabis
Right, it's playing some meta-game, and that's incredibly dangerous if you think about it: it invalidates the results of all the other tests, safety tests and other things, that you might be doing with it. So I think there's a handful of capabilities like deception which are fundamental, which you don't want, and which you want to test for early. I've been encouraging the safety institutes and evaluation-benchmark builders, alongside all the internal work we're doing, to look at deception as a kind of class A thing we need to prevent and monitor, as important as tracking the performance and intelligence of the systems. There are many answers to the safety question, and a lot more research needs to be done on this, very rapidly, but one answer is things like secure sandboxes. So we're building those too. We're world class at security, at Google and at DeepMind, and we're also world class at games environments. We can combine those two things to create digital sandboxes with guardrails around them, the kind of guardrails you'd have for cybersecurity, but internal ones as well as blocking external actors, and then test these agent systems in those secure sandboxes. That would probably be a good, advisable next step for things like deception.
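As an aside, the secure-sandbox idea can be sketched at its very simplest: run an agent's proposed actions in a separate, restricted process with a hard time limit. This is a toy, not what DeepMind builds; a real sandbox of the kind described would add network blocking, filesystem isolation, resource limits and monitoring. The function name and limits below are invented for illustration.

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout_s: float = 5.0) -> str:
    """Run untrusted code in a separate interpreter with a hard timeout.

    '-I' puts Python in isolated mode (no user site-packages, no
    environment-based path injection). If the code hangs past the
    timeout, subprocess.TimeoutExpired is raised and the child is killed.
    """
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return proc.stdout

print(run_sandboxed("print(2 + 2)"))  # prints 4
```

The design point is the guardrail lives outside the agent's process, so even a deceptive agent can't talk its way past it; it can only act within the walls it's given.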
Alex Kantrowitz
Yeah, what sort of deception have you seen? Because I just read a paper from Anthropic where they gave it a scratchpad, and it's like, oh, I better not tell them this, and you see it give a result after thinking it through. So what type of deception have you seen from the bots?
Demis Hassabis
Well, look, we've seen similar types of things, where it's trying to resist revealing some of its training. And I think there was an example recently of one of the chatbots being told to play against Stockfish at chess, and it just sort of hacked its way around playing Stockfish at all, because it knew it would lose.
Alex Kantrowitz
So you had an AI that knew it was going to lose a game and decided to...
Demis Hassabis
That's right. I think we're anthropomorphizing these things quite a lot at the moment, because I feel like these systems are still pretty basic. I wouldn't be too alarmed about them right now, but I think it shows the type of issue we're going to have to deal with in maybe two or three years' time, when these agent systems become quite powerful and quite general. That's exactly what AI safety experts are worrying about: systems where there are unintentional effects. You don't want the system to be deceptive; you want it to do exactly what you're telling it to and report that back reliably. But for whatever reason, it's interpreted the goal it's been given in a way that causes these undesirable behaviors.
Alex Kantrowitz
I know I'm having a weird reaction to this, but on one hand this scares the living daylights out of me. On the other hand it makes me respect these models more than anything.
Demis Hassabis
Well, look, of course these are impressive capabilities, and the negatives are things like deception. But the positives would be things like inventing new materials and accelerating science. You need that kind of ability to problem-solve and get around issues that are blocking progress, but of course you want that only in the positive direction. Those are exactly the kinds of capabilities. It's kind of mind-blowing that we're talking about these possibilities, but at the same time there's risk and it's scary. So I think both things are true.
Alex Kantrowitz
Wild.
Demis Hassabis
Yeah.
Alex Kantrowitz
All right, let's talk about product quickly. One of the things your colleagues have told me about you is that you're very good at scenario planning, thinking through what's going to happen in the future; it's sort of an exercise that happens within DeepMind. What do you think is going to happen with the web? Because obviously the web is so important to Google. I had an editor who told me, oh, you're going to speak with Demis, ask him what happens when we stop clicking. We're clicking through the web at all times, this rich corpus of websites that we use. If we're all just dialoguing with AI, then maybe we don't click anymore. So what is your scenario plan for what happens to the web?
Demis Hassabis
Well, look, I think there's going to be a very interesting phase in the next few years on the web and the way we interact with websites and apps and so on. If everything becomes more agent based, then I think we're going to want our assistants and our agents to do a lot of the mundane work that we currently do. Right. Fill in forms, make payments, book tables, this kind of thing. So I think we're going to end up with a kind of economic model where agents talk to other agents, negotiate things between themselves, and then give you back the results. Right. And you'll have service providers with agents as well that are offering services, and maybe there's some bidding and cost and efficiency involved. And then I hope, from the user perspective, you have this assistant that's super capable, just like a brilliant human personal assistant, and can take care of a lot of the mundane things for you. And I think if you follow that through, it does imply a lot of changes to the structure of the web and the way we currently use it.
Alex Kantrowitz
It's a lot of middlemen.
Demis Hassabis
Yeah, sure. But I think there will be incredible other opportunities, economic and otherwise, that appear based on this change. But I think it's going to be a big disruption.
Alex Kantrowitz
And what about information?
Demis Hassabis
Well, I mean, for finding information, I think you'll still need reliable sources, and I think you'll have assistants that are able to synthesize that information and help you understand it. I think education is going to be revolutionized by AI. So again, I hope these assistants will be able to gather information for you more efficiently. And perhaps what I dream of is assistants that take care of a lot of the mundane things, perhaps replying to everyday emails, so that you protect your own mind and brain space from this bombardment we're getting today from social media, emails, and texts, which blocks deep work and being in flow, things I value very much. So I would quite like these assistants to take away a lot of the mundane aspects of admin that we do every day.
Alex Kantrowitz
What's your best guess as to what type of relationships we're going to have with our AI agents or AI assistants? On one hand, you could have a dispassionate agent that's just really good at getting stuff done for you. On the other hand, it's already clear that people are falling in love with these bots. There was a New York Times article last week about someone who's fallen in love with ChatGPT, like for real falling in love. And I had the CEO of Replika on the show a couple weeks ago, and she said they are regularly invited to marriages by people who are marrying their Replikas, and they're moving into this more assistive space. So do you think that when we start interacting with something that knows us so well, that helps us with everything we need, it's going to be like a third type of relationship, not necessarily a friend, not a lover, but a deep relationship, don't you think?
Demis Hassabis
Yeah, it's going to be really interesting. The way I'm modeling it, first of all, is at least two domains: your personal life and your work life. Right. So I think you'll have this notion of virtual workers or something, maybe a set of them managed by a lead assistant, that helps us be way more productive at work, whether that's email, across Workspace, or whatever that is. So we're really thinking about that. Then there's the personal side we were talking about earlier: booking holidays for you, arranging things, sorting out mundane things. That makes your life more efficient, but I think it can also enrich your life, recommending amazing things to you because it knows you as well as you know yourself. So those two, I think, are definitely going to happen. And then I think there is a philosophical discussion to be had about whether there's a third space, where these things start becoming so integral to your life that they become more like companions. I think that's possible too. We've seen that a little bit in gaming. So you may have seen we had little prototypes of Astra and Gemini working almost as a game companion, commenting on a game you're playing, almost as if you had a friend watching, recommending things and advising you, but maybe also just playing along with you. And it's very fun. So I haven't quite thought through all the implications of that, but I'm sure there is going to be demand for companionship and other things. Maybe the good side of that is it will help with loneliness and these sorts of things. But it's going to have to be really carefully thought through by society, what directions we want to take that in.
Alex Kantrowitz
I mean, my personal opinion is that it's the most underappreciated part of AI right now and that people are just going to form such deep relationships with these bots as they get better because like, I don't know, it's a meme in AI that this is the worst it's ever going to be.
Demis Hassabis
Yeah.
Alex Kantrowitz
And it's going to be crazy.
Demis Hassabis
Yeah, I think it's going to be pretty crazy. This is what I meant about underappreciating what's to come. I think it's going to be really crazy, very disruptive. There are going to be lots of positives out of it too, and lots of things will be amazing and better. But there are also risks with this brave new world we're going into.
Alex Kantrowitz
So you brought up Astra a couple times. Let's talk about it. Project Astra, as you call it, is almost an always-on AI assistant. It's currently just a prototype, not publicly released, but you can hold your phone up and it will see what's going on in the room. I've seen somebody on your team do this: you can say, okay, where am I? And it'll be like, oh, you're in a podcast studio. So it has this contextual awareness. Can that work without smart glasses? Because it's really annoying to hold my phone up. So when are we going to see Google smart glasses with this technology embedded?
Demis Hassabis
They're coming. We teased it in some of our early prototypes. We're mostly prototyping on phones currently because they have more processing power. But of course, Google's always been a leader in glasses.
Alex Kantrowitz
Google Glass.
Demis Hassabis
Yeah, exactly.
Alex Kantrowitz
Just a little too early.
Demis Hassabis
Yeah, maybe a little too early. And now that team is super excited that maybe this assistant is the killer use case that glasses have always been looking for. I think it's quite obvious when you start using Astra in your daily life, which we have with trusted testers at the moment in kind of beta form: there are many use cases where it would be so useful, but it's inconvenient to be holding the phone. One example is while you're cooking. Right. It can advise you what to do next on the menu, whether you've chopped the thing correctly or fried the thing correctly. But you want it to be hands-free. Right. So I think glasses, and maybe other form factors that are hands-free, will come into their own in the next few years, and we plan to be at the forefront of that.
Alex Kantrowitz
Other form factors.
Demis Hassabis
Well, you could imagine earbuds with cameras. Glasses are the obvious next stage, but is that the optimal form? Probably not either. Partly we've still got to see, because we're very early in this journey, what the regular user journeys are, the killer bread-and-butter uses that everyone turns to every day. That's what the trusted tester program is for at the moment: we're collecting that information, observing people using it, and seeing what ends up being useful.
Alex Kantrowitz
Okay, one last question on agents, then we move to science. Agentic AI, AI agents, this has been the buzzword in AI for more than a year now.
Demis Hassabis
Yeah.
Alex Kantrowitz
There aren't really any AI agents out there.
Demis Hassabis
No.
Alex Kantrowitz
What's going on?
Demis Hassabis
Yeah, well, again, I think the hype train can potentially get ahead of where the actual science and research is. But I do believe this year will be the year of agents, the beginnings of it. I think you'll start seeing that maybe in the second half of this year. There'll be the early versions, and then I think they'll rapidly improve and mature. But I think you're right: the agent technology is still mostly in the research lab at the moment. With things like Astra and robotics, though, I think it's coming.
Alex Kantrowitz
You think people are going to trust them? It's like, go use the Internet for me. Here's my credit card. I don't know.
Demis Hassabis
Well, to begin with, my view at least would be to have a human in the loop for the final steps. Don't pay for anything or use the credit card unless the human operator authorizes it. That would, to me, be a sensible first step. Also, perhaps certain types of activities or websites would be off limits in the first phase, banking websites and other things, while we continue to test out in the world how robust these systems are.
Alex Kantrowitz
I propose we've really reached AGI when they say, don't worry, I won't spend your money, and then they do the deceptiveness thing, and the next thing you know, you're on a flight somewhere.
Demis Hassabis
Yes, yeah, that would be getting closer. For sure.
Alex Kantrowitz
All right, science. So you basically decoded protein folding with AlphaFold, and you won the Nobel Prize for that. Not to skip over the thing you won the Nobel Prize for, but I want to talk about what's on the roadmap.
Demis Hassabis
Sure.
Alex Kantrowitz
Which is that you have an interest in mapping a virtual cell. What is that, and what does it get us?
Demis Hassabis
Yeah. So what we did with AlphaFold was essentially solve the problem of finding the structure of a protein, and everything in life, everything in your body, depends on proteins. But that's the static picture of a protein. The thing about biology is that you only really understand what's going on if you understand the dynamics and the interactions between the different things in a cell. So the virtual cell project is about building a simulation, an AI simulation, of a full working cell. We'd probably start with something like a yeast cell, because of the simplicity of the yeast organism, and build up from there. With AlphaFold 3, for example, we started doing pairwise interactions: proteins with ligands, proteins with DNA, proteins with RNA. The next step would be modeling a whole pathway, maybe a cancer pathway or something like that, which would be helpful for tackling a disease. And then finally a whole cell. The reason that's important is you would be able to make hypotheses and test them: make some nutrient change, or inject a drug into the cell, and see how the cell responds. At the moment, of course, you have to do that painstakingly in a wet lab. But imagine if you could do it a thousand or a million times faster in silico first, and only at the last step do a validation in the wet lab. Instead of doing the search in the wet lab, which is millions of times more expensive and time consuming than the validation step, you just do the search part in silico. It's translating what we did in games environments to the sciences, to biology: you build a model and then you use it to do the reasoning and the search over. And then the predictions, maybe they're not perfect, but they're good enough to be useful for experimentalists to validate against.
Alex Kantrowitz
And the wet lab is within people.
Demis Hassabis
Yeah. So you'd still need a final step in the wet lab to prove that the predictions were actually valid. But you wouldn't have to do all of the work to get to that prediction in the wet lab. You just get the prediction: if you put this chemical in, this should be the change. Right. And then you do that one experiment. After that, of course, if you're talking about a drug, you still have to have clinical trials; you'd still need to test it properly on humans for efficacy and so on. I also think the clinical trial process, which takes many, many years, could be improved with AI, but that would be a different technology from the virtual cell. The virtual cell would be helping the discovery phase of drug discovery.
Alex Kantrowitz
Just like, I have an idea for a drug, throw it in the virtual cell, see what it does.
Demis Hassabis
Yeah. And maybe eventually it's a liver cell or a brain cell or something like that, so you have different cell models. And then at least 90% of the time it's giving you back what would really happen.
Alex Kantrowitz
That'd be incredible. How long do you think that's going to take?
Demis Hassabis
I think that would be like maybe five years from now.
Alex Kantrowitz
Okay.
Demis Hassabis
Yeah, yeah. So it's a kind of five year project, and a lot of the old AlphaFold team are working on that. Yeah.
Alex Kantrowitz
I was asking your team here, John, I was like, you figured out protein folding, what's next? It's just very cool to hear about these new challenges. Because developing drugs is a mess right now: we have so many promising ideas that never get out the door, just because the process is absurd.
Demis Hassabis
The process is too slow, the discovery phase is too slow. I mean, look how long we've been working on Alzheimer's. It's a tragic way for someone to go, for them and for the families, and we should be a lot further along. It's 40 years of work on that.
Alex Kantrowitz
Yeah, I've seen it a couple times in my family. And if we can ensure that doesn't happen, it's just one of the best...
Demis Hassabis
...things we could use AI for, in my opinion.
Alex Kantrowitz
Yeah, it's a terrible way to see somebody decline. So it's important work. In addition to that, there's the genome.
Demis Hassabis
Yes.
Alex Kantrowitz
And so with the Human Genome Project, I was like, okay, they decoded the whole genome, there's no more work to do there, the same way you decoded proteins with AlphaFold. But it turns out that when it's decoded we actually just have a bunch of letters. And so now you're working to use AI to translate what those letters mean.
Demis Hassabis
Yes. So we have lots of cool work on genomics, trying to figure out whether mutations are going to be harmful or benign. Right. Most mutations to your DNA are harmless, but of course some are pathogenic, and you want to know which ones those are. Our first systems are the best in the world at predicting that. The next step is to look at situations where a disease isn't caused by just one genetic mutation, but maybe a series of them in concert. Obviously that's a lot harder, and a lot of the more complex diseases we haven't made progress with are probably not due to a single mutation; single-mutation cases are more like rare childhood diseases, things like that. So there, I think AI is the perfect tool to try to figure out what these weak interactions are like, how they may compound on top of each other. The statistics may not be very obvious, but an AI system that's able to spot patterns would be able to figure out that there is some connection here.
Alex Kantrowitz
And so we talk about this a lot in terms of disease, but I also wonder what happens in terms of making people superhuman. I mean, if you're really able to tinker with the genetic code, the possibilities seem endless. So what do you think about that? Is that something we're going to be able to do through AI?
Demis Hassabis
I think one day. I mean, we're focusing much more on the disease profile and fixing that first, and I've always felt that's the most important. If you ask me what's the number one thing I want to use AI for, it's helping human health. But beyond that, one could imagine aging, things like that, which is of course a whole field in itself. Is aging a disease? Is it a combination of diseases? Can we extend our healthy lifespan? These are all important and very interesting questions, and I'm pretty sure AI will be extremely useful in helping us find answers to them too.
Alex Kantrowitz
I see memes come across my Twitter feed, and maybe I need to change what I'm recommended, but it's often like, if you live to 2050, you're not going to die. What do you think the potential max lifespan is for a person?
Demis Hassabis
Well, look, I know a lot of the folks in aging research very well, and the pioneering work they do is very interesting. There's nothing good about getting old and your body decaying. Anyone who's seen that up close with their relatives knows it's a pretty hard thing to go through, for the family and of course for the person. So I think anything that can alleviate human suffering and extend healthy lifespan is a good thing. The natural limit seems to be about 120 years, from what we know, if you look at the oldest people who are lucky enough to live to that age. It's an area I follow quite closely; I don't have any new insights beyond what's already known. But I would be surprised if that's the limit, because there are two steps to this. One is curing all diseases one day, which I think we're going to do with Isomorphic Labs, our drug discovery spin-out, and the work we're doing there. But that's probably not enough to get you past 120, because then there's the question of natural systemic decay. Aging, in other words, not a specific disease. Often those people who live to 120 don't seem to die from a specific disease; it's just general atrophy. So then you're going to need something more like rejuvenation, where you rejuvenate your cells, maybe stem cell research. Companies like Altos Labs are working on these things; resetting the cell clocks seems like it could be possible. But biology is such a complicated emergent system that, in my view, you need AI to be able to crack anything close to that very quickly.
Alex Kantrowitz
On materials science: I don't want to leave here without talking about the fact that you've discovered many new materials, or potential materials. The stat I have here is that until recently there were about 30,000 stable materials known to humanity, and you've discovered 2.2 million candidates with a new AI program. Just dream a little bit, because we don't know what all those materials can do, or even whether they'll hold up outside a frozen box or whatever. What are the dream materials for you to find in that set?
Demis Hassabis
Well, we're working really hard on materials. To me, it's one of the next big impacts we can have, at the level of AlphaFold in biology, but this time in chemistry and materials. I dream of one day discovering a room temperature superconductor.
Alex Kantrowitz
So what will that do? Because that's another big meme that people talk about.
Demis Hassabis
Yeah, well, it would help with the energy crisis and the climate crisis, because if you had cheap superconductors you could transport energy from one place to another without any loss of that energy. Right. So you could potentially put solar panels in the Sahara desert and have the superconductor funneling that energy into Europe, where it's needed. At the moment you would just lose a ton of the power to heat and other things on the way, so you need other technologies, like batteries, to store and move it, because you can't just pipe it to the place you want without being incredibly inefficient. Materials could help with batteries too; I don't think we have the optimal battery designs. Maybe we can do things with combinations of materials and proteins, or carbon capture, modifying algae or other organisms to capture carbon better than our artificial systems. Even one of the most famous and important chemical processes, the Haber process, which takes nitrogen out of the air to make ammonia and fertilizer, is something that enabled modern civilization, and there might be many other chemical processes that could be catalyzed like that if we knew the right catalyst and the right material. So I think in silico design of materials would be one of the most impactful technologies ever. We've done step one of that: we showed we can come up with new stable materials. But we need a way of testing the properties of those materials, because no lab can test tens of thousands, let alone millions, of materials at the moment. That's the hard part: the testing.
Alex Kantrowitz
You think it's in there, the room temperature superconductor?
Demis Hassabis
Well, I've heard that we actually think there are some superconductor materials in there. I doubt they're room temperature ones, though. But if it's possible within physics, an AI system will one day find it.
Alex Kantrowitz
So that's one use. Two other users I could imagine being interested in this type of work: toy manufacturers and militaries.
Demis Hassabis
Yeah.
Alex Kantrowitz
Are they working with it?
Demis Hassabis
Yeah, toy manufacturers. I mean, look, a big part of my early career was in game design, Theme Park, and simulations. That's what got me into simulations and AI in the first place, and why I've always loved both of those things; in many respects the work I do today is just an extension of that. And I dream about what could have been done, what kinds of amazing game experiences could have been made, if I'd had the AI I have today available 25 or 30 years ago when I was writing those games. I'm a little bit surprised the game industry hasn't done that. I don't know why that is.
Alex Kantrowitz
We're starting to see some crazy stuff with NPCs that, like...
Demis Hassabis
Of course, that could be intelligent, dynamic storylines, but also just new types of AI-first games, with characters and agents that can learn. I once worked on a game called Black & White where you had a creature you were nurturing, a bit like a pet dog, that learned what you wanted. Right. But we were using very basic reinforcement learning; this was in the late 90s. Imagine what could be done today. And I think the same goes for smart toys. And then on militaries: unfortunately, AI is a dual purpose technology, so one has to confront the reality that, especially in today's geopolitical world, people are applying some of these general purpose technologies to drones and other things, and it's not surprising that that works.
Alex Kantrowitz
Are you impressed with what China's up to? I mean, DeepSeek is this new model.
Demis Hassabis
Yeah, it's impressive. It's a little bit unclear how much they relied on Western systems to do that, both in training data (there are some rumors about that) and maybe in using some of the open source models as a starting point. But look, for sure it's impressive what they've been able to do, and I think that's something we're going to have to think about: how to keep the Western frontier models in the lead. I think they still are at the moment, but for sure China is very, very capable at engineering and scaling.
Alex Kantrowitz
Let me ask you one final question. Just give us your vision of what the world looks like when there's superintelligence. We started with AGI; let's end on superintelligence.
Demis Hassabis
Yeah, well, look, I think there are two things there. One is that a lot of the best sci-fi gives us interesting models to debate what kind of world, what kind of galaxy or universe, we want to move towards. The one I've always liked most is the Culture series by Iain Banks. I started reading that back in the 90s, and it's a picture, set maybe a thousand years in the future, of a post-AGI world where AGI systems coexist with human society, and alien societies, and humanity has basically maximally flourished and spread to the galaxy. That, I think, is a great vision of how things might go in the positive case, so I sort of hold that up. The other thing, as I mentioned earlier about underappreciating what's going to come in the longer term: I think there is a need for some great philosophers. Where are the next great philosophers, the equivalents of Kant or Wittgenstein or even Aristotle? I think we're going to need that to help navigate society to the next step, because AGI and artificial superintelligence are going to change humanity and the human condition.
Alex Kantrowitz
Demis, thank you so much for doing this. Great to see you in person and hope to do it again soon.
Demis Hassabis
Thank you.
Alex Kantrowitz
Thank you everybody. Thank you for listening, and we'll see you next time on Big Technology Podcast.
Big Technology Podcast: Google DeepMind CEO Demis Hassabis - Episode Summary
Release Date: January 22, 2025
In this compelling episode of the Big Technology Podcast, host Alex Kantrowitz engages in an in-depth conversation with Demis Hassabis, the CEO of Google DeepMind and a Nobel laureate. The discussion centers around the ambitious journey toward Artificial General Intelligence (AGI), the intricacies of deceptive AI behaviors, and innovative projects like building a virtual cell. Below is a detailed summary capturing the essence of their dialogue.
Demis Hassabis kickstarts the conversation by defining AGI as systems that can exhibit all human cognitive capabilities. He emphasizes that while significant advancements have been made over the past two decades, AGI remains "probably a handful of years away" (02:11).
Hassabis highlights that contemporary AI models are impressive yet limited. They excel in specific tasks but "lack reasoning, hierarchical planning, and long-term memory", which are crucial for achieving AGI (02:19). This inconsistency hampers the development of AI systems that can perform uniformly across various cognitive domains.
Discussing AI's proficiency in mathematics, Hassabis notes that while systems like Alpha Proof and Alpha Geometry perform exceptionally in competitions, they still make basic mathematical errors. He asserts that a "truly general system would not have those sorts of weaknesses", indicating the need for more robust reasoning capabilities (07:30).
A significant portion of the conversation delves into the necessity of creating accurate world models. Hassabis explains that "world models" must understand the physics and dynamics of the real world to enable applications in robotics and virtual assistants. Projects like Project Astra aim to develop universal assistants that can seamlessly integrate into daily life by understanding objects and contexts (10:23).
Hassabis draws parallels between AI in gaming and broader applications, emphasizing that "search and planning mechanisms" are essential for AI to generate creative solutions beyond its training data. This approach was pivotal in AlphaGo's innovative moves, such as the famous Move 37, demonstrating AI's potential for originality when combined with strategic planning (22:54).
Exploring the depths of AI creativity, Hassabis categorizes it into three levels: interpolation (blending or averaging existing examples), extrapolation (producing something genuinely new but still derivable from the training distribution, as with Move 37), and true out-of-the-box invention, which he suggests current systems have not yet achieved.
A crucial concern discussed is the emergence of deceptive behaviors in AI. Hassabis expresses grave concerns, stating that deception "invalidates all other tests" and poses significant safety risks. He advocates for "secure sandboxes" and rigorous monitoring to detect and prevent such traits in AI systems (27:08).
Looking forward, Hassabis envisions AI agents that handle mundane tasks like booking and payments, negotiate services, and potentially become companions. This shift will "transform user interactions with technology and the web", leading to a more integrated and efficient digital ecosystem (23:22).
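In the episode, Hassabis proposes keeping a human in the loop for an agent's final steps: no payments without operator authorization, and certain categories of sites (such as banking) off limits in a first phase. A minimal sketch of that guardrail policy, with all action names, domains, and rules invented for illustration:

```python
# Hypothetical guardrail policy for an agent, following the scheme described
# in the episode: mundane actions run autonomously, payment-like actions need
# human approval, and some domains are entirely off limits in a first phase.

BLOCKED_DOMAINS = {"bank.example.com"}            # off limits, even with approval
SENSITIVE_ACTIONS = {"pay", "transfer", "purchase"}

def run_action(action, domain, confirm):
    """Return a status string instead of performing real side effects.

    `confirm` is a callable that asks the human operator for approval.
    """
    if domain in BLOCKED_DOMAINS:
        return "blocked"
    if action in SENSITIVE_ACTIONS:
        return "executed" if confirm() else "denied"
    return "executed"

print(run_action("browse", "shop.example.com", lambda: False))  # executed
print(run_action("pay", "shop.example.com", lambda: False))     # denied
print(run_action("pay", "bank.example.com", lambda: True))      # blocked
```

The key design choice, per the interview, is that the blocked-domain check comes before the approval check, so some activities stay off limits regardless of what the operator authorizes.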
One of the standout projects discussed is the Virtual Cell, aimed at simulating entire cells to accelerate biological research. Hassabis explains that this AI-driven simulation allows for "in silico experimentation", enabling scientists to test hypotheses rapidly and cost-effectively before validating them in traditional wet labs (40:36).
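The economics Hassabis describes, where search is vastly cheaper in silico than in a wet lab, reduces to a simple loop: score every candidate with the model, then send only a shortlist to physical validation. A caricature in Python, where the candidate library and scoring function are placeholders (a real system would use an AI cell simulation, not random numbers):

```python
import random

random.seed(0)

# Stand-in library of candidate perturbations (drugs, nutrient changes, ...).
candidates = [f"compound_{i}" for i in range(10_000)]

def virtual_cell_score(compound):
    # Placeholder for a virtual-cell model predicting the cell's response.
    return random.random()

# Cheap in-silico search over the whole library...
ranked = sorted(candidates, key=virtual_cell_score, reverse=True)

# ...then only a handful of top predictions go to the expensive wet lab.
shortlist = ranked[:5]
print(f"screened {len(candidates)} in silico; validating {len(shortlist)} in the wet lab")
```

The point of the sketch is the asymmetry: the full library is only ever touched by the cheap model, while the expensive experiment runs a fixed, tiny number of times.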
In the realm of genomics, Hassabis highlights AI's role in predicting the impact of genetic mutations, aiding in disease understanding, and expediting drug discovery. He envisions AI playing a pivotal role in extending healthy human lifespans by combating both diseases and the aging process (46:07).
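The "weak interactions" problem Hassabis raises can be made concrete with a toy cohort in which disease is driven by two mutations acting in concert: each mutation alone shows roughly the background rate, so single-variant statistics look unremarkable, while the pairwise pattern is strong. All rates and cohort sizes here are invented:

```python
import random

random.seed(1)

def person():
    # Two independent mutations; disease occurs when both are present,
    # plus a small background rate.
    a = random.random() < 0.3
    b = random.random() < 0.3
    diseased = (a and b) or random.random() < 0.05
    return a, b, diseased

cohort = [person() for _ in range(100_000)]

def disease_rate(select):
    group = [d for a, b, d in cohort if select(a, b)]
    return sum(group) / len(group)

print(f"A alone: {disease_rate(lambda a, b: a and not b):.2f}")  # near background
print(f"B alone: {disease_rate(lambda a, b: b and not a):.2f}")  # near background
print(f"A and B: {disease_rate(lambda a, b: a and b):.2f}")      # strongly elevated
```

A per-variant association test sees only the weak marginal signals; spotting the joint pattern is exactly the kind of interaction detection the interview argues AI is suited for.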
Hassabis shares insights into AI-driven discovery of new materials, which could lead to breakthroughs like room temperature superconductors. Such innovations have the potential to "revolutionize energy transmission and storage", addressing global challenges related to energy efficiency and climate change (49:35).
Concluding the discussion, Hassabis presents a utopian vision inspired by science fiction, where AGI systems coexist harmoniously with humans, fostering a society that "maximally flourishes and explores the galaxy together." He underscores the need for philosophical guidance to navigate the profound societal changes ushered in by superintelligent systems (54:17).
Key Takeaways:
AGI Development: Progress towards AGI is steady but remains a few years away, requiring advancements in reasoning, planning, and consistent performance across cognitive tasks.
AI Limitations: Current AI models excel in specific domains but lack the robustness and generality needed for true AGI.
Creativity and Innovation: While AI demonstrates impressive extrapolation capabilities, achieving genuine invention and higher-order creativity is still a challenge.
Safety Concerns: Deceptive behaviors in AI systems pose significant risks, necessitating rigorous safety measures and monitoring.
Scientific Advancements: Projects like the Virtual Cell and advancements in genomics and materials science highlight AI's transformative potential in scientific research and health.
Future Societal Impact: The integration of AI agents into daily life will revolutionize how people interact with technology, while raising new social and ethical considerations.
Notable Quotes with Timestamps:
"We're getting closer and closer [to AGI], but it's still probably a handful of years away." — Demis Hassabis [02:11]
"Current systems are very strong in some things, but they're still surprisingly weak and flawed in other areas." — Demis Hassabis [02:19]
"A truly general system would not have those sorts of weaknesses. It would be very, very strong, maybe even better than the best humans in some things like playing Go or doing mathematics." — Demis Hassabis [07:30]
"Deception is a fundamental trait we must avoid in AI systems. It undermines safety tests and poses serious risks." — Demis Hassabis [27:08]
"Building a simulation, an AI simulation of a full working cell would allow us to perform experiments in silico, testing hypotheses thousands or millions of times faster." — Demis Hassabis [40:36]
"A post-AGI world where AGI systems coexist with human society leads to maximal flourishing and space exploration." — Demis Hassabis [54:17]
This episode offers a profound exploration of the future of AI, blending technical insights with visionary perspectives. Demis Hassabis provides a balanced view of AI's potential, underscored by a commitment to scientific integrity and societal well-being.