0:03 What keeps you up at night?
0:06 For me, it's this question of
0:08 international standards and
0:10 cooperation, and not just
0:11 between countries but also between
0:13 companies and researchers as
0:17 we get towards the final steps of
0:19 AGI. And I think we're on the cusp
0:21 of that. You know, maybe we're 5 to 10
0:23 years out. Some people say shorter. I
0:24 wouldn't be surprised. It's, you know,
0:26 sort of like a probability distribution,
0:28 but it's coming. So either way, it's...
0:36 Demis Hassabis co-founded DeepMind in 2010
0:38 and is now the CEO of the company, which
0:41 was sold to Google in 2014. In 2024,
0:43 Hassabis shared the Nobel Prize in
0:45 Chemistry for the development of
0:47 AlphaFold, an AI system that predicts
0:50 the 3D structure of proteins. In March,
0:52 Time reporter Billy Perrigo interviewed
0:55 Hassabis in London for the 2025 Time 100
0:57 list. For the purposes of the video,
0:59 could you explain what AGI stands for
1:01 and what it kind of means in a sentence?
1:03 AGI stands for artificial general
1:06 intelligence, and we define that as a
1:08 system that is capable of exhibiting any
1:10 cognitive capability humans have. Could
1:14 you talk about AGI and when you first
1:16 realized that it might be the
1:18 key to unlocking not just individual
1:21 scientific discoveries, but a whole swathe
1:23 of them? So we've always been interested
1:26 in building general AI or AGI from the
1:27 beginning of DeepMind. That was always
1:29 the aim. In fact, that was the original
1:31 aim of AI as a field back in the 1950s.
1:34 So in some senses, we're
1:36 kind of realizing that grand dream. Along
1:38 the way, you can use those general
1:41 techniques for specialized solutions
1:43 to problems. So AlphaFold is a good
1:45 example of that, where the problem's
1:47 well specified. It's of enormous value to
1:50 society, and in this case biology and
1:52 medical research, in its own right for
1:54 what it can do. So it doesn't really
1:56 matter what methods you use, but you
1:57 start with the general methods and then
1:59 you add some specializations on top.
2:01 So it still uses neural networks and
2:02 all the techniques we built for
2:04 our games, and then it adds some new
2:07 things for proteins specifically. Of
2:08 course, simultaneously we've
2:11 been advancing our general AI techniques,
2:13 and now that's obviously in the world of
2:15 language, but also more recently
2:17 multimodal foundational models that can
2:20 understand not just language or play
2:23 a game, but actually the entire
2:24 spatial world context that you're in, and
2:26 start understanding things like
2:28 the physics of the world, and be able to
2:30 process things like images and video
2:33 and sound.
2:35 So obviously this technology, if it's
2:38 created, will be very impactful. Could
2:40 you paint the best case scenario for me?
2:42 What does this world look like if we
2:44 create AGI? So the reason I've worked on
2:48 AI and AGI my entire life, really, and
2:51 career is because I believe if it's
2:53 done properly and responsibly, it will be
2:55 the most beneficial technology ever
2:57 invented. So the kinds of things that
2:59 I think we could be able to use it for,
3:00 you know, winding forward 10-plus years
3:03 from now, is potentially curing maybe all
3:07 diseases with AI, and helping with
3:09 things like finding new energy sources
3:11 or helping develop new energy sources,
3:13 whether that's fusion or optimal
3:15 batteries or new materials like new
3:18 superconductors. I think some of
3:19 the biggest problems that face us today
3:21 as a society, whether that's climate or
3:24 disease, will be helped by AI
3:28 solutions. So I think if we wind
3:29 forward 10 years' time, I think
3:31 the optimistic view of it will be we'll
3:33 be in this sort of world of maximum
3:35 human flourishing, traveling the stars,
3:38 you know, with all the
3:40 technologies that AI will help bring
3:43 about. You've also been quite vocal
3:45 about the need to do this responsibly to
3:48 avoid the risks. Could you paint the
3:51 worst case scenario for me? Sure. Well,
3:53 look, worst case I think has been covered
3:55 a lot in science fiction. I think
3:58 the two issues I worry about most
4:01 are: AI is going to be this fantastic
4:03 technology if used in the right way, but
4:05 it's a dual-purpose technology, and
4:06 it's going to be
4:08 unbelievably powerful. So what that
4:11 means, though, is that bad actors or
4:13 would-be bad actors can repurpose that
4:15 technology for potentially harmful ends.
4:17 So one big challenge we have as a
4:20 field and as a society is how do we enable
4:21 access to these technologies to the good
4:23 actors to do amazing things like cure
4:25 terrible diseases, at the same time as
4:27 restricting access to those same
4:30 technologies to would-be bad actors,
4:32 whether that's individuals all the
4:34 way up to rogue nations. And that's a
4:36 really hard conundrum to solve.
4:40 The second thing is AGI risk itself.
4:42 So risk from the technology itself as it
4:43 becomes more autonomous, more
4:45 agent-like, which is what's going
4:46 to happen over the next few years
4:48 because they'll be more useful for all
4:51 the good users. But how do we ensure
4:53 that we can stay in charge of those
4:54 systems, control them, interpret what
4:56 they're doing, understand them, put
4:58 the right guardrails in place that are
5:01 not movable by very highly capable
5:03 systems that are self-improving? That is
5:05 also an extremely difficult challenge.
5:08 So those are the two main buckets of
5:10 risk. If we can get them right, then I
5:12 think we'll end up in this amazing
5:15 future. It's not a worst case scenario,
5:16 though. What does the worst case
5:18 scenario look like? Well, I think if you
5:20 get that wrong, then, you know, you've
5:23 got all these harmful use cases
5:26 being done with these systems,
5:28 and that can range from kind
5:31 of doing the opposite of what
5:32 we're trying to do: instead of finding
5:34 cures, you could end up finding,
5:36 you know, toxins, these kinds of things, with
5:40 those same systems. And so in a
5:41 lot of the cases, all the good use
5:44 cases, if you invert the goals of
5:46 the system, you would get the
5:48 sort of harmful cases. And as a
5:50 society, this is why I've
5:52 been sort of in favor of international
5:54 cooperation around this, because the
5:56 systems, wherever they're built or
5:58 however they're built, can be distributed
6:00 all around the world. They're going to
6:03 affect everyone, pretty much every
6:06 corner of the world. So we need sort
6:08 of international standards, I think,
6:10 around how these systems get
6:12 built, what sort of designs and goals we
6:14 give them, and how they're deployed and used.
6:17 There's a lot of talk in the AI safety
6:19 world about
6:22 the degree to which these systems are
6:24 likely to do things like power seeking,
6:28 to be deceptive, to kind of, you know,
6:32 seek to disempower humans and
6:34 escape their control. Do you have a
6:35 strong view on whether that's like
6:37 the default path, or is that a kind of
6:39 tail risk? Like, what's your perception?
6:41 My feeling on that is that the
6:44 risks are unknown currently. So,
6:46 you know, there are a lot of people, my
6:48 colleagues and famous Turing Award
6:50 winners, on both sides of that argument,
6:52 right? Some, you know, like Yann LeCun,
6:54 would say that there's no risk here,
6:57 it's all hype,
6:59 and then there are other
7:01 people who think, you know, it's
7:04 doomed by default, right? Geoffrey Hinton
7:06 and Yoshua Bengio, people like that. And
7:07 I know all these people very well. I
7:09 think the right answer is somewhere
7:10 in the middle, which is, if you look at
7:12 that debate, there are very smart people on
7:14 both sides of that debate. So what that
7:16 tells me is that we don't know enough
7:17 about it yet to actually quantify the
7:20 risk. It might turn out that as we
7:22 develop these systems further, it's way
7:25 easier to align these systems or keep
7:26 control of these systems than we thought
7:28 or we expected sort of hypothetically
7:30 from here. Quite a lot of things
7:32 have turned out like that so far.
7:34 They've been easier than people thought,
7:35 including making them useful to the
7:37 world, you know, with just some fairly
7:40 simple RLHF fine-tuning on top of these
7:41 models, and then suddenly they become
7:44 useful chatbots. So that's interesting.
7:45 So there's some evidence towards the
7:47 fact that things may be a little
7:50 bit easier than some of the
7:52 most pessimistic were thinking, but in my
7:55 view there's still significant risk, and
7:58 we've got to do research carefully to
8:00 kind of quantify what that risk is and
8:02 then deal with it ahead of time, with as
8:05 much foresight as possible, rather
8:07 than after the fact, which, you know,
8:09 with technologies this powerful and this
8:11 transformative, could be extremely risky.
8:13 [Music]
8:15 It seems like whatever the answer to
8:18 that question, the impact on society is
8:19 going to be transformative to a level
8:23 that we haven't seen in our, you know,
8:25 lives. Yeah. You're a dad. Yeah. How are
8:27 you thinking as a parent about how to
8:30 bring a child up in a world where so
8:32 much is likely going to radically
8:33 change? Well, I think we've seen a lot
8:36 of change even in our lifetimes. From,
8:38 you know, if I think back to my childhood,
8:39 where it was the dawn of the computer
8:41 age and I was working on, you know, my
8:43 first ZX Spectrum that I got when I was
8:46 a small kid and started programming, and
8:48 then to where we are today, even my early
8:50 games industry work, when I was doing AI
8:52 for games like Theme Park. And then today
8:55 we've got systems like Veo that create
8:57 entire realistic videos. It would
8:59 have been hard to dream about that, you
9:01 know, 20, 30 years ago, and yet we cope
9:04 with it, we seem to adapt. And I
9:06 think human beings are sort of
9:08 infinitely adaptable. I think that's a
9:09 good thing about us. We sort of
9:11 normalize to whatever is going on today
9:13 with our technology today, smartphones
9:15 and computers and the internet all around us,
9:17 and we treat it, you know, kids these
9:19 days, just as second nature. And I
9:20 suspect that's what's going to happen
9:23 with this. What I'd recommend, though, is,
9:25 just like we did in the computer age,
9:28 you've got to embrace, I think,
9:30 the coming change, learn about the
9:32 tools and learn how to work effectively
9:35 and make the best use of them. And I
9:36 think you'll end up sort of being
9:38 superpowered in some way, both
9:40 creatively and productivity-wise, if
9:42 you use them in the right way. And I
9:44 think that's probably the next stage
9:46 that we're going to go through.
9:48 Probably the kids these days that are
9:50 growing up with these tools, they'll
9:53 learn all sorts of new workflows that
9:55 probably will be a lot more efficient
9:58 than we can imagine today. Is there
9:59 anything that you do differently as a
10:02 parent that you might not have done if
10:04 AGI weren't on the horizon? No. I
10:05 get asked this question a
10:07 lot. For example, is it worth learning
10:09 programming and mathematics and even
10:10 things like chess to train your own
10:13 mind? I think it is, because, although,
10:15 for example, let's take programming, the
10:17 nature of programming is changing, and it
10:19 may well change very radically in the
10:21 next few years, and actually in some ways
10:23 democratize it, because we'll start
10:24 programming with natural language
10:25 instead of with programming languages.
10:28 So then the kind of
10:30 value part of that starts going towards
10:32 more the creatives and the designers. So
10:33 it's going to be a pretty interesting
10:35 time. But I think the people that
10:37 will get the most out of that will still
10:39 be the ones with a deep technical
10:40 understanding of what these tools are
10:42 doing, how they were made, and therefore
10:44 what their limitations are, and what
10:46 they're strong at that
10:49 you can use. What
10:50 keeps you up at
10:54 night? For me, it's this question of
10:56 international standards and
10:58 cooperation, and not just
11:00 between countries but also between
11:02 companies and researchers as
11:05 we get towards the final steps of
11:08 AGI. And I think we're on the
11:10 cusp of that. You know, maybe we're 5 to
11:12 10 years out. Some people say shorter. I
11:13 wouldn't be surprised. It's, you know,
11:15 sort of like a probability distribution,
11:17 but it's coming. So either way, it's
11:20 coming very soon. And I'm not sure
11:23 society's quite ready for that yet. And
11:26 we need to think that through, and
11:28 also think about these issues that I
11:30 talked about earlier to do with the
11:32 controllability of these systems, and
11:35 also the access to these systems, and
11:37 ensuring that that all goes well. So
11:38 there are a lot of challenges ahead, and a
11:41 lot of research and a lot of
11:43 discussions that need to be had. What
11:47 TV, movies, or books do you think get
11:51 AI right, and why? So, my favorite
11:54 movies to show how
11:57 useful AI could be: I really like the
12:00 robots from Interstellar. Extremely
12:02 helpful, extremely knowledgeable, and
12:05 in the end self-sacrificing, and they also have a
12:06 lot of humor. I
12:10 think that's a good example of, you know, how
12:12 robot assistants or helpers could be
12:15 very useful in the world. And then
12:17 maybe on the darker side, but which also
12:18 inspired me when I was young, was
12:20 Blade Runner and things like that, where
12:23 there are, you know, interesting questions about
12:26 autonomous systems: are they
12:28 conscious? And that was the whole
12:29 dilemma. I think it was a philosophical
12:31 piece in some sense, Blade Runner,
12:33 and it's very interesting about the
12:35 nature of being human. I mean that's a
12:36 question that's coming up a little bit
12:38 more and more now, right? Are these
12:40 systems on the verge of consciousness?
12:41 Yeah, my feeling is they're not at all,
12:43 currently. And my recommendation
12:45 would be, if we have the choice and if
12:47 we understood what consciousness was, we
12:49 should first build systems that are
12:51 definitely not conscious and are a kind of
12:53 tool, and then we can use those tools to
12:55 better understand our own minds and
12:57 maybe what this phenomenon is of
12:58 consciousness that we all feel. And then,
13:01 once we understood that (one of
13:02 the things I want to use AI for in the
13:05 sciences is to advance neuroscience),
13:07 then maybe we could, you know, think
13:10 about taking that next step. Nice.
13:12 final question. You have an
13:14 opportunity to have a dream dinner
13:16 party. You can invite anybody, alive or
13:19 dead. Say four guests. Who would
13:22 you choose? Oh, wow. That's really hard
13:25 to narrow it down. You can have
13:27 six if you want. Yeah. I mean, I think
13:28 I would probably invite many of my
13:30 scientific heroes. So, for sure Alan
13:35 Turing, Richard Feynman,
13:39 and maybe Newton and Aristotle.
13:40 What do you reckon the conversation
13:42 would be there? Well, I'm pretty sure with
13:44 that set of people, it'll be very
13:46 philosophical, around maybe these
13:47 questions about what the limitations
13:50 of these AI systems are, and what
13:51 it tells us about the nature of
13:53 reality. And then, you know, I think
13:56 that it does tell us a lot,
13:57 and will do, about
13:59 what's going on in the universe around
14:01 us. I think AI is going to be the
14:03 ultimate tool for science, and certainly
14:04 that's what has always been my
14:07 passion and what I plan to use it for.
14:08 I think actually one more question while
14:11 we have you on video. It's quite
14:14 clear you see yourself as a scientist
14:16 first and foremost. What would you say:
14:18 do you see yourself more as a
14:19 scientist, a technologist? You're far
14:21 away from Silicon Valley, in London. I
14:23 mean, how do you identify? Yeah, I
14:25 identify myself as a scientist first and
14:28 foremost. The whole reason I'm doing
14:29 everything I've done in my life is in
14:31 the pursuit of knowledge and
14:32 trying to understand the world around
14:34 us. I've kind of been obsessed
14:37 with that, I think, since I was a kid, with
14:39 all the big questions. And for me,
14:41 building AI is my expression of how to
14:43 address those questions: to first
14:45 build a tool that in itself is pretty
14:48 fascinating and is a statement about
14:49 intelligence and consciousness and these
14:51 things that are already some of the
14:53 biggest mysteries. And then
14:55 it's dual-purpose, because it can also be
14:57 used as a tool to investigate the
14:58 natural world around you as well, like
15:01 chemistry and physics and biology. So
15:03 what more exciting adventure and
15:06 pursuit could you have? So I see
15:08 myself as a scientist first, and then
15:10 maybe like an entrepreneur second, mostly
15:12 because that's the fastest way to do
15:14 things. And then finally maybe a
15:16 technologist or engineer, because in the end
15:18 you don't want to just theorize and
15:19 think about things in a lab. You
15:20 actually want to make a practical
15:21 difference in the world. I think that's
15:23 where the engineering part of me