0:02 It's a very intense time in the field.
0:03 We obviously want all of the brilliant
0:05 things these AI systems can do. Come up
0:07 with new cures for diseases, new energy
0:10 sources, incredible things for humanity.
0:11 That's the promise of AI. But also,
0:13 there are worries. If the first AI
0:14 systems are built with the wrong value
0:16 systems, or they're built unsafely,
0:19 that could be also very bad. We sat down
0:21 with Demis Hassabis, who's the CEO of Google
0:23 DeepMind, which is the engine of the
0:25 company's artificial intelligence. He's
0:27 a Nobel Prize winner and also a knight.
0:30 We discussed AGI, the future of work,
0:32 and how Google plans to compete in the AI race.
0:43 Well, welcome to the big interview,
0:44 Demis. Thank you. Thanks for having me.
0:48 So, let's start talking about AGI a
0:51 little here. Now, you founded DeepMind
0:53 with the idea that you would solve
0:56 intelligence and then use intelligence
0:57 to solve everything else. And I think it
0:59 was like a 20-year mission. We're like
1:02 15 years into it and you're on track. I
1:03 feel like yeah, we're pretty much dead
1:05 on track actually is what would be our
1:09 estimate. That means 5 years away from
1:11 you know what I guess people will call
1:13 AGI. Yeah. I think in the next 5 to 10
1:15 years that would be my you know maybe
1:17 50% chance that we'll have what we
1:19 defined as AGI. Yes. Well, some of your
1:23 peers are saying 2 years, 3 years and
1:24 others say a little more. But
1:27 that's really close. That's really soon.
1:30 How do we know that we're that close?
1:31 There's a bit of a debate going on at
1:32 the moment in the field about
1:35 definitions of AGI, and then of
1:36 course, dependent on that, there's
1:38 different predictions for when it will
1:40 happen. Uh we've been pretty consistent
1:41 from the very beginning and actually
1:43 Shane Legg, one of my co-founders and our
1:45 chief scientist, you know, he helped
1:47 define the term AGI back in I think
1:49 early, you know, 2001 type of time
1:51 frame. And we've always thought about it
1:53 as you know a system that has the
1:55 ability to exhibit sort of all the
1:58 cognitive capabilities we have as
1:59 humans. And the reason that's important
2:02 the reference to the human mind is the
2:04 human mind is the only existence proof
2:06 we have maybe in the universe that
2:08 general intelligence is possible. So if
2:09 you want to claim sort of general
2:11 intelligence AGI then you need to show
2:14 that it generalizes to all these
2:15 domains. Is it when everything's filled in,
2:18 all the check marks are
2:20 filled in, that we have it?
2:22 Yes. So I think there are missing
2:24 capabilities right now you know that all
2:25 of us who have used the latest sort of
2:28 LLMs and chatbots will know very
2:29 well like on reasoning on planning on
2:32 memory I don't think today's systems can
2:35 invent, you know, do true invention,
2:38 true creativity, hypothesize new
2:40 scientific theories. They're extremely
2:42 useful. They're impressive. Um, but they
2:44 have holes. And actually, one of the
2:46 main reasons I don't think we're
2:49 at AGI yet is because of the inconsistency
2:52 of responses. You know, in some domains,
2:54 we have systems that can do
2:56 International Math Olympiad
2:57 problems, you know, to gold-medal
2:59 standard with our AlphaProof system,
3:01 but on the other hand, these systems
3:03 sometimes still trip up on high school
3:04 maths or even counting the number of
3:07 letters in a word. So that to me is not
3:09 what you would expect. That level of
3:12 sort of difference in performance across
3:14 the board is, you know, not consistent
3:16 enough and therefore shows that these
3:17 systems are not fully generalizing yet.
3:20 But when we get it, is it then like a phase
3:22 shift, that, you know, all of a sudden
3:25 things are different, all the check marks
3:27 are checked? Yeah. You know, and we have
3:29 a thing that can do everything. Are we
3:31 then, pow, in a new world? I think, you
3:33 know, that again that is debated and
3:35 it's not clear to me whether it's going
3:38 to be more of a kind of incremental
3:41 transition versus a step function. My
3:43 guess is it looks like it's going to be
3:44 more of an incremental shift. Even if
3:46 you had a system like that, the
3:49 physical world still operates
3:50 with the physical laws, you know,
3:53 factories, robots, these other things.
3:55 So it'll take a while for the effects of
3:57 that you know this sort of digital
4:06 There are theories on that too where it could
4:08 come faster. Yeah. Eric Schmidt who I
4:10 think used to work at Google uh has said
4:12 that it's almost like a binary thing.
4:15 He says if China, for instance, gets
4:18 AGI, then we're cooked, because if someone
4:21 gets it like 10 minutes before, you know,
4:24 the next guy, then you can never catch up,
4:26 you know, because then it'll maintain
4:28 bigger and bigger leads. You don't buy
4:30 that. I guess I think it's an unknown.
4:32 It's one of the many unknowns which is
4:33 that you know that's sometimes called
4:35 the hard takeoff scenario where you know
4:38 the idea there is that these AGI systems
4:40 they're able to self-improve, maybe code
4:41 future versions of themselves,
4:44 and maybe be extremely fast at doing that.
4:46 So what would be a slight lead, let's
4:49 say, you know, a few days, could
4:51 suddenly become a chasm if that was
4:53 true. But there are many other ways it
4:54 could go too where it's more
4:55 incremental. If some of these
4:57 self-improvement things are not able to
5:00 kind of, um, accelerate in that way,
5:03 then, you know, being around the same time
5:05 would not make much difference. But
5:07 it's important I mean these issues and
5:09 the geopolitical issues I think the
5:11 systems that are being built they'll
5:13 have some imprint of the values and the
5:16 kind of norms of the designers and the
5:18 culture that they were uh embedded in.
5:20 So, you know, I think these kinds of
5:22 international questions are important.
5:26 So when you build AI at
5:28 Google you know do you have that in
5:30 mind? Do you feel a competitive
5:33 imperative, in case that's true, of oh my
5:35 god, we better be first? It's a very
5:37 intense time at the moment in in the
5:39 field as everyone knows. So many
5:40 resources going into it, lots of
5:42 pressures, lots of things that
5:44 need to be researched, and sort
5:46 of lots of different types of pressures
5:48 going on. We obviously want all of the
5:49 brilliant things that these AI systems
5:51 can do. You know, I think eventually
5:53 we'll be able to, you know, advance
5:55 medicine and science with it like we've
5:56 done with AlphaFold, come up with new
5:58 cures for diseases, new energy sources,
6:01 incredible things for humanity. That's
6:03 the promise of AI. Um but also there are
6:05 worries, both in terms of, you know, if the
6:07 first AI systems are built with the
6:09 wrong value systems or they're built
6:11 unsafely, that could be also very bad.
6:13 And, you know, there are at least two
6:15 risks that I worry a lot about. One is
6:17 bad actors, whether it's individuals
6:20 or rogue nations repurposing general
6:22 purpose AI technology for harmful ends
6:24 And then the second one is obviously the
6:27 technical risk of AI itself. As it gets
6:28 more and more powerful, more and more
6:30 agentic, can we make sure the guardrails
6:33 around it are safe and can't be
6:35 circumvented? And that interacts with
6:37 this idea of, you know, what are the first
6:39 systems that are built by humanity going
6:41 to be like. There's a commercial imperative,
6:44 there's a national imperative, and
6:46 there's a safety aspect to worry
6:48 about, you know, who's in the lead
6:50 and where those projects are. A few
6:53 years ago, the companies were saying,
6:54 "Please regulate us. We need
6:57 regulation." And now in the US, at
6:59 least, the current administration seems
7:02 less interested in putting regulations
7:06 on AI than accelerating it so we can
7:08 beat the Chinese. Are you still asking
7:10 for regulation? Do you think that's
7:12 a miss on our part? I think, you
7:14 know, and I've been consistent in this, I
7:17 think there are these, you know,
7:20 other geopolitical sort of overlays that
7:21 have to be taken into account, and the
7:23 world's a very different place to, you
7:24 know, how it was 5 years ago in many
7:27 dimensions. But there's also, you know, I
7:29 think the idea of smart regulation that
7:31 makes sense around these increasingly
7:33 powerful systems, I think, is going to be
7:35 important. I continue to believe that. I
7:36 think, though, and I've been saying
7:38 this as well, it sort of needs to be
7:40 international, which looks hard at the
7:42 moment in the way the world is working, because
7:43 these systems, you know, they're going
7:46 to affect everyone and they're
7:49 digital systems. So, you know, if you
7:51 sort of restrict it in one area, that
7:53 doesn't really help in terms of the
7:54 overall safety of these systems getting
7:57 built, you know, uh for the world um and
7:59 as a society. So the bigger
8:01 point, I think, is that some kind of
8:02 international cooperation or
8:05 collaboration is what's required,
8:07 and then smart regulation, nimble
8:09 regulation that moves as the knowledge
8:12 about the research becomes, you know,
8:14 better and better. Would it ever reach a
8:16 point for you where you would feel, man,
8:18 we're not putting the guardrails in, you
8:21 know, we're competing, that we really have
8:23 to stop, or you can't get involved in
8:26 that? I think a lot of the leaders
8:28 of the main labs, at least the
8:30 Western labs, you know, there's
8:32 a small number of them and we do all
8:34 know each other and talk to each other
8:35 regularly, and a lot of the lead
8:37 researchers do. The problem is that
8:40 it's not clear we have the right
8:42 definitions to agree when that point is.
8:44 Like, today's systems, although they're,
8:45 you know, impressive, as we
8:46 discussed earlier, they're also very
8:48 flawed, and I don't think today's
8:51 systems are posing any sort of
8:54 existential risk. But, um, so it's still
8:56 theoretical, but the problem is there are a
8:57 lot of unknowns. We don't know how fast
8:59 those will come and we don't know how
9:02 risky they will be. But in my view, when
9:04 there are so many unknowns, then, one, I'm
9:06 optimistic we'll overcome them, at
9:08 least technically, given enough time
9:09 and enough care and thoughtfulness, you
9:11 know, sort of using the scientific method
9:13 as we, you know, approach this AGI
9:15 point. I think the geopolitical questions
9:18 could actually end up being trickier.
9:20 That makes perfect sense. But on
9:22 the other hand, if that time frame is
9:25 there, we just don't have much time, you
9:27 know. No, we don't have much
9:28 time. I mean, we're increasingly
9:32 putting resources into security and um
9:35 things like cyber um and also research
9:38 into controllability and understanding
9:39 of these systems, sometimes called
9:41 mechanistic interpretability. You know,
9:42 there's a lot of different subbranches
9:44 of AI, that's why I want to get to
9:46 interpretability, that are being invested in, and
9:48 I think even more needs to happen. Um
9:51 and then at the same time we need to
9:54 also have uh societal debates more about
9:56 institutional building. How do we want
9:57 governance to work? How are we going to
9:59 get international agreement at least on
10:02 some basic principles around uh how
10:04 these systems are used and deployed
10:06 and also built? What about the
10:09 effect on work on the marketplace? You
10:12 know how much do you feel that AI is
10:15 going to change people's jobs? You know
10:17 the way jobs are distributed in the
10:19 workforce? I don't think we've seen it yet. My
10:20 view is, if you talk to economists,
10:22 they feel like not much has
10:24 changed yet. You know, people are finding
10:26 these tools useful, certainly in certain
10:28 domains. Like, things like AlphaFold, many,
10:29 many scientists are using it to
10:31 accelerate their work. So it seems to be
10:33 additive at the moment. We'll see what
10:35 happens over the next 5 to 10 years. I think
10:37 there's going to be a lot of change
10:40 in the jobs world, but I think, as in
10:42 the past, what generally tends to happen
10:44 is new jobs are created that are
10:46 actually better, that utilize these tools
10:48 or new technologies. That's what happened with
10:49 the internet, what happened with mobile.
10:51 We'll see if it's different this time.
10:52 Obviously, everyone always thinks this
10:54 new one will be different, and maybe
10:56 it will be. Um, but I think for the next
10:59 few years, it's most likely to be, you
11:00 know, we'll have these incredible tools
11:03 that supercharge our productivity, that are, you
11:06 know, really useful as
11:08 creative tools, and actually almost
11:10 make us a little bit superhuman in some
11:13 ways in what we're able to produce
11:15 individually. So I think there's going
11:17 to be a kind of golden era over
11:18 the next period in what we're able
11:21 to do. Well, if AGI can do everything
11:22 humans can do, then it would seem that
11:24 they could do the new jobs, too. That's
11:27 the next question about, like, what AGI
11:29 brings. But, you know, even if you
11:30 have those capabilities, there's a lot
11:32 of things I think we won't want to do,
11:34 you know, with a machine. You
11:36 know, I sometimes give this example
11:38 of doctors and nurses. You know,
11:40 maybe a doctor, and what the doctor does
11:41 and the diagnosis, you know, one could
11:44 imagine that being helped by an AI tool
11:46 or even having an AI kind of
11:49 doctor. On the other hand, like nursing,
11:50 you know, I don't think you'd want a
11:52 robot to do that. I think there's
11:54 something about the human empathy aspect
11:56 of that and the care and so on that's
11:59 particularly uh humanistic. I think
12:01 there's lots of examples like that.
12:03 But it's going to be, you know, a
12:05 different world for sure. If you
12:08 were talking to a graduate now, what
12:11 advice would you give to keep working
12:14 through the course of a lifetime, you
12:17 know, in the age of AGI? My view is
12:19 currently, and of course this is changing
12:21 all the time with the
12:24 technology developing, but right now, you
12:26 know, if you think of the next 5 to 10 years,
12:28 the most productive
12:30 people might be 10x more productive if
12:33 they are native with these tools. So I
12:36 think kids today, students today, my
12:39 encouragement would be immerse yourself
12:41 in these new systems, understand them.
12:43 So I think it's still important to
12:44 study STEM and programming and other
12:46 things so that you understand how
12:48 they're built. Maybe you can modify them
12:50 yourself on top of the models that are
12:51 available. There's lots of great open
12:53 source models and so on. And then
12:56 become, you know, um, incredible at
12:58 things like fine-tuning, system
13:00 prompting, you know, system
13:02 instructions, all of these additional
13:04 things that anyone can do and really
13:06 know how to get the most out of those
13:09 tools, and do it for your, you know,
13:10 your research work, programming, things
13:12 that you're doing on your course, and
13:14 then come out of that being incredible
13:17 at utilizing those new tools for
13:18 whatever it is you're going to do. Let's
13:21 look a little beyond the five and 10
13:24 year range. Tell me what you envision
13:27 when you look at our future in 20
13:30 years and in 30 years if this comes
13:32 about. What's the world like when AGI is
13:34 everywhere? Well, if everything goes
13:37 well, then we should be uh in an era of
13:39 what I like to call sort of uh radical
13:42 abundance. So, you know, AGI solves some
13:44 of these key, what I sometimes call root
13:46 node, problems in the world facing
13:49 society. So good examples would be
13:51 curing diseases, much healthier, longer
13:53 lifespans, finding new energy
13:55 sources, you know, whether that's
13:58 optimal batteries and better, you
14:00 know, room-temperature superconductors,
14:03 fusion. And then if that all happens,
14:05 then, you know, it should
14:07 be a kind of era of maximum human
14:10 flourishing where we travel to the stars
14:13 and colonize the galaxy. Um,
14:15 that's, you know, I think
14:17 the beginning of that will happen in the
14:19 next 20 to 30 years, if the next
14:21 period goes well. I'm a little skeptical
14:24 of that. I think we have an unbelievable
14:27 abundance now but we don't distribute it
14:29 you know fairly. I think that we kind of
14:31 know how to fix climate change right we
14:33 don't need an AGI to tell us how to do it
14:35 yet we're not doing it. I agree with
14:38 that. I think we've been, as
14:40 a species, a society, not good at
14:42 collaborating. And I think climate is a
14:44 good example. But I think we're still
14:46 operating, humans are still operating, in
14:48 a zero-sum game mentality, because
14:50 actually the earth is quite finite
14:53 relative to the amount of people there
14:55 are now and our cities. And I mean this
14:57 is why our natural habitats are
14:59 being destroyed and it's
15:01 affecting, you know, wildlife and
15:03 the climate and everything. And it's
15:04 also partly because people are not
15:07 willing to accept what we'd have to do now to
15:09 figure out climate, because it would require
15:11 people to make sacrifices and people
15:13 don't want to. But this radical
15:15 abundance would be different.
15:18 We would finally be in, it would
15:20 feel like, a non-zero-sum game. How would
15:22 we get into that? Like, you talk
15:23 about disease. I'll give you an example. We
15:25 have vaccines, and now
15:26 some people think we should... Let me give
15:29 you a very simple example. Water access.
15:31 This is going to be a huge issue in the
15:32 next 10 to 20 years. It's already an issue
15:34 in different countries, you know, poorer
15:35 parts of the world, drier parts of the
15:37 world, also obviously compounded by
15:39 climate change. We have a solution to
15:42 water access. It's desalination. It's
15:43 easy. There's plenty of sea
15:44 water. Almost all countries have a
15:46 coastline. But the problem is it's salty
15:49 water, and desalination, only very rich
15:51 countries, some countries, do that,
15:53 use desalination as a solution to their
15:55 freshwater problem, because it costs a lot
15:57 of energy. But if energy was essentially
16:00 zero, if there was renewable, free, clean
16:03 energy, right, like fusion, suddenly you
16:05 solve the water access problem. Water, as in
16:07 who controls a river or what you do with
16:09 that, becomes, you know, much
16:12 less important than it is today. I think
16:14 things like water access, you know, if
16:15 you roll forward 20 years and there
16:17 isn't a solution like that, could lead to
16:18 all sorts of conflicts. Probably that's
16:20 the way it's trending,
16:21 especially if you include further
16:23 climate change. And there's many, many
16:24 examples like that. You could create
16:26 rocket fuel easily, because you just
16:28 separate seawater into hydrogen
16:30 and oxygen. It's just energy again. So
16:34 you feel that once these problems get solved
16:40 by AGI, by AI, then our
16:43 outlook will change and we will be...
16:45 That's what I hope, yes, that's what I
16:46 hope. But that's still a
16:48 secondary part. So the AGI will give us
16:50 the radical abundance capability
16:52 technically, like the water access. I
16:54 then hope, and this is where I think we
16:56 need some great philosophers or
16:58 social scientists to be involved, that
17:01 should hopefully shift our mindset as
17:04 a society to non-zero sum. You know,
17:06 there's still the issue of, do you
17:08 divide even the radical abundance fairly,
17:09 right? Of course that's what should
17:10 happen, but I think that's much more
17:13 likely once people start feeling and
17:15 understanding that there is this almost
17:18 limitless supply of raw
17:19 materials and energy and things like
17:22 that. Do you think that, you know,
17:24 driving this innovation by for-profit
17:26 companies is the right way to go? We're
17:28 most likely to reach that optimistic
17:30 high point through that? I think
17:32 the current, you know, capitalism, or,
17:33 you know, the current Western
17:35 sort of democratic kind of, you
17:39 know, systems, have so far been proven
17:41 to be sort of the best drivers of
17:43 progress. So I think that's true. My
17:45 view is that once you get to that sort
17:47 of stage of radical abundance and post
17:51 AGI, I think economics starts changing
17:53 even the notion of value and money.
17:55 And so again, I think we need, I'm not
17:57 sure why economists are not working
17:58 harder on this, maybe they don't
18:00 believe it's that close, right? But
18:03 if they really did, like the
18:05 AGI scientists do, then I think there's
18:07 a lot of new economic theory
18:10 that's required. You know, one final
18:13 thing. I actually agree with you that
18:15 this is so significant and it's going to
18:17 have a huge impact, but when I write
18:20 about it, I always get a lot of response
18:24 from people who are really angry already
18:26 about artificial intelligence and
18:29 what's happening. Have you tasted
18:32 that? Have you gotten that pushback and
18:34 anger from a lot of people? It's
18:36 almost like the industrial revolution, people...
18:38 Yeah. I mean, I think that anytime
18:40 there's, I haven't personally seen a
18:42 lot of that, but obviously I've, you know,
18:43 read and heard a lot about it. It's very
18:45 understandable. That's happened
18:47 many times. You say industrial revolution,
18:49 when there's big change, a big revolution,
18:50 and I think this will be at least as big
18:52 as the industrial revolution, probably a
18:53 lot bigger, it's surprising, there's
18:56 unknowns, it's scary, things will change,
18:57 but on the other hand when I talk to
18:59 people about the passion of why I'm
19:01 building AI which is to advance science
19:03 and medicine and understanding of the
19:05 world around us, and then I explain to
19:07 people, you know, and I've demonstrated
19:09 it's not just talk here. Here's AlphaFold,
19:10 you know, a Nobel Prize-winning
19:12 breakthrough that can help with medicine and
19:13 drug discovery. Obviously, we're doing
19:15 this with Isomorphic now to extend it
19:17 into drug discovery, and we can cure
19:18 diseases, terrible diseases that might
19:20 be afflicting your family. Suddenly,
19:22 people are like, well, of course
19:24 we need that. It would be immoral not to
19:26 have that if that's within our
19:28 grasp. And the same with climate
19:31 and energy, you know, many of the big
19:33 societal problems. You know,
19:35 we know, we've talked about
19:37 there's many big-ish challenges facing
19:39 society today. And I often say I would
19:42 be very worried about our future if I
19:43 didn't know something as revolutionary
19:46 as AI was coming down the line to help
19:48 with those other challenges. Of course,
19:50 it's also a challenge itself, right? But
19:52 at least it's one of these challenges
19:53 that can actually help with the others
19:57 if we get it right. Well, I hope your
19:59 optimism holds out and is justified.
20:00 Thank you so much. I'll do my best.
20:03 Thank you. [Music]