You're listening to a podcast from Washington Post Live, bringing the newsroom to you live. Good morning, everyone. Welcome to the Washington Post's hub from D.C. to Davos. I'm Bina Venkataraman. I'm a columnist at the Washington Post, and it's my pleasure today to be joined by Nobel laureate Demis Hassabis, who is the founder of Google's DeepMind unit and now its CEO, of course. Demis, welcome. Thanks very much. Thanks for having me. Good to see you all.
So there's so much excitement and also hype about AI today. You've been around for a while and you've been doing this. You founded DeepMind 15 years ago. And I want to know, from your perspective, is the progress in AI going faster or slower than you expected, if you put yourself back in that frame of mind? Yeah. Well, it's funny. I mean, thinking back to 2010, yeah,
virtually no one was really working on AI, certainly not in industry, and even in academia there was very little work. Deep learning had sort of just been invented by Geoff Hinton and colleagues in 2006. So it seemed like a really far-fetched thing to try and do, to sort of build AI through learning and go for the grand challenge of AGI. But we had a 20-year mission, and we're on track, I would say.
So, you know, 15 years later, it's quite amazing to see. Okay. And I know you've gone about saying we're probably still five or ten years out from AGI, and I want to get to that a little bit later in the conversation. Yeah.
When you and I met, which is now more than a decade ago, I think you were right at the sort of pivot point of deciding whether to have DeepMind merge with Google and, you know, then Alphabet, now Google again. And I'm curious, from your perspective, how is that going? How has it been to be part of a big company? Yeah, it's been great so far.
You know, there were big decisions to be made back in 2014, whether to be part of Google or go it alone. But what we knew at the time, and I think it's transpired, is that it would require lots of compute resources to get all the way to AGI. Perhaps back then we weren't quite imagining as much compute as has actually been required,
but we knew we needed a lot, and more than it was possible to raise back in 2014. It was just before the mega funding rounds and things like that, so maybe we were one or two years too early. But the reason we chose Google, of course, was that they had the most compute in the world.
And also the mission of organizing the world's information, I thought was a natural fit with our mission, which was to solve intelligence and then use it to solve everything else. Obviously, including and most importantly, organizing information and using information. So that was all a natural fit.
And then at the time, the CEO was Larry Page, and he has always been interested in AI and thought of Google ultimately as an AI company. And in fact, he told me that even when they were back in their garage in the late 90s, he regarded it ultimately as an AI company.
So that convinced me it was a good fit for what we wanted to do, and that's how it's transpired, although obviously there have been lots of twists and turns in the road since then. And now, you know, I think Google DeepMind is really the engine room of the whole of Google. So as a founder, and I also think of you as, you know, a video game designer, a scientist, a neuroscientist by training.
You're part of a big corporation, you founded DeepMind. Are you in Founders Mode within Google or are you a manager? Are you in Manager Mode? Yes. Well, of course, there's this obsession with Founders Mode now, but I would say I'm very much acting in that way. So it's very important to me that, okay, it's a division of a bigger organization, but I still have the sort of latitude and the leeway to set the culture
set the kind of intensity and pace of the place and the overall research goals. That was the other thing that was agreed at the beginning of the acquisition: that we would run relatively independently, and certainly on the research side we would have sort of carte blanche on what we wanted to go after.
And it's more complicated now because, you know, there's the whole of Google and all the product divisions that rely on what we produce. So it's been quite interesting over the last couple of years, and I've been learning a lot of new things, and it's been very exciting. But basically we can operate however we see fit to meet the mission of building AI responsibly for the benefit of the world, which is
our sort of new, updated mission. And things like AlphaFold and the science work we do, as well as the work we do improving all the products, are all part of that. So I want to talk about AlphaFold. To me, what distinguishes your approach, and Google's approach, as one of the big companies working on AI now, is your focus on science. And of course, the Nobel Prize was for this predictive model that allows us to solve a really thorny problem in biology, which is how proteins are structured.
What's next, I guess, for AlphaFold? What's next on the horizon for solving these sort of thorny problems with science that you're after? Well, look, there's so many. We work on AlphaFold, the sort of poster child, if you like, of our approach of applying AI to science.
But we're actually applying it to many areas of science, not just biology, but also chemistry, some physics problems, fusion, weather prediction, mathematics. I mean, that's the whole point of AGI and general AI technologies: they can be applied to almost anything. Obviously, we're furthest advanced with our biology work. And AlphaFold, you know, cracked a 50-year grand challenge in biology of predicting the 3D structures of proteins, as you say. And that's a key part of drug discovery.
So that was the reason I wanted to focus on that, apart from the fact that it was a grand challenge for biology in itself: it's a critical component of understanding and speeding up drug discovery. And that's what we're focusing on now in a new spin-out, Isomorphic Labs, which we started a couple of years ago to build on the AlphaFold technologies.
You just telegraphed that you expect a drug to be in clinical trials by the end of this year. Can you say more about what the breakthrough is there that the AI is enabling in drug discovery? Yeah, so we actually work on over a dozen drug programs at the same time. And obviously our most advanced ones we think will be ready by the end of the year to be in the clinic.
And mostly what we're interested in is, at the moment, problems that have very hard chemistry. So, for example, there might be a pocket in the protein that doesn't open up until something binds to it. So, you know, it's sometimes called a cryptic pocket. And we're quite interested in targets that are considered by pharma, at least, to be undruggable. So for whatever reason, they're very difficult to find a compound that will bind to the right place.
on the protein. And those are the perfect things for our technologies to go in and find binding sites, and then, using another AI system that we've developed, to design a clean compound that will bind to that site but nothing else in the body. So you sort of think about doing a virtual screen of the compound against all the proteins it might bind to in the body.
How much of this is about being able to analyze vast amounts of data on the known compounds and known druggable targets in the body versus being able to do this kind of structural space analysis?
Yeah, so it's all related. The advance with our new AlphaFold 3, the latest version of AlphaFold, and effectively what you can think of as AlphaFold 3++ internally, is that it deals with pairwise interactions between things. So proteins and proteins, but also proteins and ligands, so compounds.
And so it can kind of predict where that compound is going to bind and what that's going to do to the shape of the protein. So with that technology, you can actually very quickly within a few seconds sort of scan through many thousands
of different proteins to see what kind of binding attributes, and actually other attributes which are important, like toxicity and solubility, these compounds that you're designing might have. And then we have what you can think of as a generative AI process to design compounds in a smart way and sort of iterate those to keep on optimizing whatever objectives you've given the system.
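A minimal sketch may help make that generate-score-iterate loop concrete. Everything below is a hypothetical stand-in: compounds and proteins are toy feature vectors, binding is a dot product, and none of the names reflect Isomorphic Labs' or AlphaFold's actual APIs. Only the shape of the loop, propose a compound, screen it against the target and the off-targets, keep the best, mirrors what's described above.

```python
import random

# Toy, purely illustrative sketch. "Compounds" and "proteins" here are just
# feature vectors and "binding" is a dot product; every name is a hypothetical
# stand-in, not Isomorphic Labs' or AlphaFold's actual pipeline.

def predict_binding(compound, protein):
    """Toy affinity score: similarity between feature vectors."""
    return sum(c * p for c, p in zip(compound, protein))

def virtual_screen(compound, target, off_targets):
    """Reward binding the target; penalize the worst off-target hit."""
    on_target = predict_binding(compound, target)
    off_target = max(predict_binding(compound, p) for p in off_targets)
    return on_target - off_target

def propose_variant(compound, step=0.1):
    """Stand-in 'generative' step: perturb one feature of the compound."""
    variant = list(compound)
    i = random.randrange(len(variant))
    variant[i] += random.uniform(-step, step)
    return variant

def design_loop(seed, target, off_targets, rounds=2000):
    """Generate, score, and keep the best candidate found so far."""
    best, best_score = seed, virtual_screen(seed, target, off_targets)
    for _ in range(rounds):
        candidate = propose_variant(best)
        score = virtual_screen(candidate, target, off_targets)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

target = [1.0, 0.5, -0.3]                           # the protein we want to hit
off_targets = [[0.9, 0.4, -0.2], [-0.5, 1.0, 0.8]]  # everything else in the "body"
best, score = design_loop([0.0, 0.0, 0.0], target, off_targets)
print(f"best selectivity score: {score:.2f}")
```

A real pipeline would swap the toy scorer for a structure-based predictor of the AlphaFold 3 kind he describes, and would optimize additional objectives such as toxicity and solubility, but the outer loop has the same shape.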
So what's the disease area where we can expect to see your first AI drug? Well, we're working on many disease areas. I can't talk in detail about the partnerships that we have, but we have great partnerships with Eli Lilly and Novartis, and we also have internal drug programs, and we're looking at some of the big major therapeutic areas, you know, cardiovascular, cancer and neurodegeneration.
Okay. And so we should just stay tuned. Yes. And you mentioned some of the other areas, fusion, for example, which has been sort of a holy grail and felt a little bit like a delusion for most of my lifetime. What is it that AI will do that we haven't been able to do so far?
Well, we've been using AI in a couple of ways, one much more mature than the other. We published a Nature paper recently, in collaboration with EPFL, using their test fusion reactor to control the plasma for longer in a stable state.
We've built a kind of AI control system that controls the magnetic field very adaptively around the plasma. And for fusion, the key really is how long you can hold the plasma in a stable state; that's the key for it to generate a self-sustaining reaction.
And so instead of using normal control methods, we use AI methods to kind of predict what the plasma is going to do next, ahead of time, and then change the magnetic field, you know, a few milliseconds before the plasma does that, so that we make sure it stays contained.
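As a rough, purely illustrative sketch of that predict-then-act idea: the toy loop below forecasts where a one-dimensional "plasma" will drift one step ahead and applies its correction based on that prediction rather than on the current reading. The dynamics, gains, and names are all invented; the actual DeepMind/EPFL controller is a learned reinforcement-learning policy driving a tokamak's magnetic coils, which this does not attempt to reproduce.

```python
import random

DRIFT = 0.02   # how far the toy "plasma" drifts each step if left alone
GAIN = 0.9     # how aggressively the "field" counteracts the prediction

def predict_next(position):
    """Model-based forecast of where the plasma will be one step ahead."""
    return position + DRIFT

def field_adjustment(position):
    """Choose the correction now, based on the *predicted* future state."""
    return -GAIN * predict_next(position)

position = 0.05  # initial displacement from the desired centerline
for step in range(200):
    correction = field_adjustment(position)         # proactive, not reactive
    position = predict_next(position) + correction  # drift plus correction
    position += random.gauss(0.0, 0.005)            # unmodeled disturbance

print(f"final displacement: {position:+.4f}")  # hovers near zero if control works
```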
So I have to ask you, because as a neuroscientist, and with this problem you articulated of solving intelligence early on when you were starting DeepMind, some of these problems are not problems that it seems human intelligence has actually been able to tackle, whether it's predicting the structure of proteins or this work with plasma.
Are you still in a mode of trying to solve the problem of creating machines that match human intelligence, or is this a different kind of intelligence altogether? No. So the idea from the beginning was for neuroscience to inspire some of the algorithmic ideas and architectures, drawing on what the brain uses. I mean, even neural networks are loosely inspired by the brain.
And things like reinforcement learning, one of the reasons we pushed so hard on that, which is in vogue now, is that the dopamine system in the brain uses a form of reinforcement learning to learn what we learn. And so we know that, in the limit, this should be able to work up to human-level intelligence.
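The dopamine result he's referring to is classically modeled as temporal-difference learning, where the learning signal is a reward prediction error (the account due to Schultz, Dayan, and Montague). Here is a minimal TD(0) sketch; the states, rewards, and parameters are toy values, illustrative only.

```python
# Minimal TD(0) sketch: the prediction error `delta` below is the quantity
# that dopamine neuron firing is widely modeled as encoding. All states,
# rewards, and parameters here are toy values.

def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the value table V."""
    delta = reward + gamma * V[next_state] - V[state]  # reward prediction error
    V[state] += alpha * delta                          # learn from the surprise
    return delta

# Tiny chain: s0 -> s1 -> s2, with reward only on the final transition.
V = {"s0": 0.0, "s1": 0.0, "s2": 0.0}
for episode in range(200):
    td_update(V, "s0", 0.0, "s1")
    td_update(V, "s1", 1.0, "s2")
print(V)  # value propagates backward from the rewarded transition
```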
And the reason that human-level intelligence is sort of an important benchmark is that it's the only example of general intelligence that we have, maybe in the universe, certainly on Earth. And so it's worthy of study. It's probably not the only way to build general intelligence, but we know it's a way. And so from the beginning, we wanted to make sure we built a system like that. That's our definition of AGI: a system that exhibits all the cognitive capabilities that humans have.
And if you have a system that does that, then you know you have a general intelligence. If it only does part of it, then you might not have a fully general intelligence. But that doesn't mean it won't surpass human intelligence in many domains, of course, like playing Go, all of our Alpha programs, folding proteins. Obviously these are things, folding proteins certainly, that humans can't do. But it still uses the same underlying techniques that are based off of the same things that human intelligence is based off of.
Do you still think it's worth us trying to understand artificial intelligence as kind of
coming up to the speed of human intelligence, or do you think we should start to understand it as a more strange intelligence? No, I think both, actually, are needed. So I think it's still important to understand and test it against the wide spectrum of things humans can do. We're after full generality because of, you know, the work that people like Alan Turing did, right? Turing machines. And a more theoretical version of this is that we've got to match and prove that these systems are Turing-powerful: are they capable of mimicking a Turing machine,
which Turing proved could compute anything that is computable? So that's what we're after. And the brain is probably a type of Turing machine, and we want our AI systems to be as general as that.
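To make the Turing-machine benchmark concrete, here is a minimal single-tape simulator; the transition-table format is a standard textbook convention, and the example machine, which increments a binary number, is purely illustrative rather than anything DeepMind-specific.

```python
# Minimal Turing machine simulator, for illustration only.

def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape Turing machine given a transition table."""
    tape, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Increment a binary number: walk to the rightmost bit, then carry leftward.
transitions = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}
print(run_tm(transitions, "1011"))  # 11 + 1 = 12 -> prints "1100"
```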
But it's also a bit of an alien intelligence as well, because, as we were saying earlier, it will go beyond the level of human intelligence in certain domains very quickly. And in fact, it already does in certain narrow domains, due to capabilities that humans can't have, like extensible memory and sort of expandable compute power and things like that, which is obviously limited in the biological brain. Your estimates of when we will achieve artificial general intelligence are more conservative than others' in the field. You've said about five to ten years, as we were talking about earlier.
What do you think are the major hurdles, and why do you think your estimate is a little bit further out than others'? Yeah, I mean, look, I think, first of all,
everyone's making assumptions about what's going to be needed. On my assumptions, and I wouldn't be surprised if it was sooner, it's a probability mass rather than a definite prediction, so there's a significant chance it could be sooner. But I think there are probably one or two breakthroughs still needed
before we have all the technologies required for AGI. - Say what those are, yeah. - Well, things like reasoning and memory and hierarchical planning. There's a bunch of things that today's systems can't do, and there's the question of whether they can do sort of true out-of-the-box invention. The test I would have for that would be: could it have invented general relativity back when Einstein did,
with the knowledge that he had? I don't think any of today's systems are close to that. Or maybe, if we take the example of AlphaGo, AlphaGo came up with amazing strategies like Move 37, very famously a super creative move that no one had ever seen before, in the big match we had with Lee Sedol.
But could a system invent Go? That would be the higher level of invention to me, and so far the answer would be clearly no. Now, I think we will be able to do that, but the question is still what's missing. And so if one or two breakthroughs are still required, in addition to scaling,
then, you know, on average these kinds of big breakthroughs seem to come around every two to three years, so that suggests a kind of five-year timescale. But it's an empirical question whether scaling on its own will be enough, and we're pushing that to the limit too, like other labs. So we'll see. Since you understand the brain a lot more than most other people in this space, I have to ask you this.
What's the difference? So there's the creative improvisation within Go, being able to make novel moves, but what's the difference between that and inventing the game, or the invention of the theory of relativity? Well, I mean, it's a higher leap of creativity, and it's not clear exactly how we do that as humans, right? Obviously, we still base some of that creativity on experience. No one invents things in a vacuum.
And it's hard to know. It might be a particular way that a certain person's brain, like Einstein's, probably one of the biggest geniuses of all time, connects information together that allowed him to see something like general relativity before others. But inventing Go, you know, it's not even clear how you'd specify that to a machine, right? Probably you would have to do it at a very abstract level, like: invent me a game that's elegant
to play, takes five minutes to learn but a lifetime to master, and is playable within the two hours of a human afternoon. Something like that is how you'd specify it. And then you'd hope it would come up with something as beautiful and elegant and deep as Go. But it's not even clear how one would specify that to today's systems.
But I don't think it's magic. I do think it's a computable process; at least, that's the evidence we have from the brain. There's nothing non-computable in the brain as far as we know. People like Roger Penrose argue there are quantum effects in the brain or something, but no one's ever found anything biological that's compelling evidence for that. So as far as we know, it is a type of Turing machine, and therefore, eventually, these types of systems that we're building should be able to have that kind of capability.
Something to stay tuned for. I'm curious, you've also issued some cautionary tales about AGI and sort of warned that there ought to be more collaboration in the industry around this. Say more about what you're actually worried about, because I think there's a lot. Yeah, there are two big worries that I have, and that I think are looming large now as the systems get more and more powerful. The first is bad actors repurposing general-purpose systems for harmful ends.
So how does one enable all of the beneficial uses of AI, like curing diseases and helping with energy and climate and so on? All these incredible things that we want are why I've spent my whole career working on AI; I think it's going to be the most transformative technology ever. But how does one also restrict access to those same technologies for people, or even rogue nations, that would do harm with them?
So that's one issue. And it's a big challenge, because you'd like to openly share all this information, publish it, open-source it, and so on, like we've done for the last 15 years. All these things are usually great for disseminating knowledge and speeding up progress. But then you have this problem of access by harmful actors.
And then the second thing, which is kind of unique for AI as a technology, is the AGI risk itself. So the risk of these systems as they become more autonomous, not today's systems really, but maybe the next systems that everyone's building, agent-based systems. I think this year is going to be the year of agents, so more autonomous systems that can accomplish tasks on their own. Of course,
We're building those kinds of systems because they'll be way more useful, for productivity, for consumers, for users, and also in science, but they also carry more risk. And at the limit, when we get to AGI, there are questions about the controllability of those systems, the values they have, the goals you set them, and making sure those goals aren't misinterpreted. And on all of that, a lot more research is needed very quickly, I would say, in order to ensure that those systems are controllable and safe.
Given that the companies behind the large language models have released them without sort of putting guardrails around their use and potential dual use by bad actors,
What's the role of government? What is the check? What is the way to control this? Well, I'm in favor of regulation here. This technology is too important not to regulate, but it's also too important not to regulate well. That's the problem with this: we need the right type of regulation, and it's very hard to do because it's such a fast-moving technology.
What we would have sat here and talked about five years ago as being at the forefront of AI is different from today's models, and probably in a couple of years they'll be very different again, as I was saying earlier about these agent-based systems and AlphaGo-like ideas combined with these large language models or multimodal models. So it's hard. I think governments need to be sort of nimble and fast and adaptable,
which is not necessarily what they're known for, to fast-follow, to quickly follow, the latest frontier things that are known about these systems. And what needs to be encouraged is increased collaboration between the leading actors, but also internationally. That's the other issue with these things:
There's no real point in having regulation in only one region of the world, because AI crosses borders and it's going to affect all countries. And you'll get this sort of race dynamic, where other countries will just race ahead if only a few places regulate it in a certain way. So I actually think that somehow we've got to get to a kind of international understanding about these models.
And that's why I was pleased to see these international summits being set up, the first one in the UK by Prime Minister Sunak at the time, and the next one in Paris in a few weeks' time, which I'll be going to, to quickly convene this international community and start that dialogue.
Well, thank you for what you're doing, and bringing this conversation to Davos as well seems to be a gesture in that direction. I'm told we're out of time. We are not out of questions for you, but thank you so much, Demis Hassabis. Thank you all for joining us. Thanks for listening, everyone. For more information on our upcoming programs, go to WashingtonPostLive.com.