0:09 So I think one of my favorite parts of the film is the part where the team has just told you, well, we could just find the structures for all the proteins and just release those. And then you release them to the world, and you see the map of the globe light up as people in real time are getting all of those structures. What was that like? Tell me, what was the
0:32 feeling that you had? I mean, look, there were so many amazing moments, and the team will remember this, but
0:37 that was one of the big highlights. It
0:39 was very satisfying to see that idea play out: that maybe if we cracked this really important problem, potentially millions of researchers around the world would make use of it. And to see that lighting up all across the globe is a really humbling and amazing
0:57 experience. I came here for the AI
1:01 for Science Forum, which you held, and I think the thing that shocked me is that over 50 years, the work of tens of thousands of scientists revealed the structures of 150,000 proteins. That was the grand sum of human effort. And then, in a few years, your small team of 15 or 20 people was able to find the structures of 200
1:28 million. Yeah. Well, look, the first thing to say is we
1:32 couldn't have done it without the first 150,000, right? So it's incredible. You know, we need to thank the structural biology community: thousands of researchers painstakingly putting together these structures using very exotic, pretty expensive, complicated equipment over 50 years, like you say. And the sum total is 150,000, but it was enough to kickstart us to create a system like AlphaFold, to learn from those 150,000 and then actually learn further from the best of its own predictions, feeding those back into the system, until eventually it was good enough to understand something fundamental about protein structure. So
2:16 then eventually we could do all 200 million. And I think, as John says in the film, it usually takes a PhD student their whole PhD, that's kind of the rule of thumb, to find the structure of one protein. So, you know, 200 million times five years is a billion years of PhD time, which is quite something to have done in a
2:35 year. See, I feel like I didn't get
2:37 it before I came here and heard those numbers, and I was like, oh, things have fundamentally changed, and I don't think the world gets it yet. So I think that's one of the exciting things about this film. And another thing that's really important to keep in mind is you figured out all 200 million and now they're out there, but the discoveries and the breakthroughs that are going to come from that will take time, and we are going to be reaping the rewards of that for decades, centuries, I think. So,
3:06 I mean, it's sort of opened things up, and this is why we put it out there into the world. We knew we could only think of a tiny fraction of what the entire scientific community might do with it. And it's really gratifying to see the whole range of things people are already doing: over two and a half million researchers from pretty much every country in the world, working on their really important biology and medical problems and making great progress with that. And right now I think it's super well known in the scientific community, but as you say, I don't think it's appreciated yet by the general public what this is going to do. I think that will come in the next five to ten years, as we start getting AI-designed drugs that were helped by things like AlphaFold, and many other amazing things for society that will come as a downstream consequence of us knowing what these structures are.
3:51 Now can you think of any
3:53 examples that have happened since the
3:55 film? Well, there are many; in fact, a few of them were mentioned in those headlines. These ideas of designing enzymes, which are types of proteins that catalyze certain reactions, and maybe we could modify some of these enzymes to help deal with some environmental issues we have, like the amount of plastics in the oceans, or perhaps even carbon capture, things like this. I think it's an incredible opportunity. And obviously the main reason I was interested in doing protein folding was to accelerate drug discovery.
4:27 And we spun out a sister company called Isomorphic Labs that is developing other technologies around AlphaFold, and the newer versions of AlphaFold, so that not only do you understand the structure of a protein, but then you can design a drug compound to bind to the right part of the protein surface, once you understand what its structure and function are. And that's the beginning of understanding disease and maybe trying to cure some of these terrible diseases. We're working on cancers and cardiovascular diseases, all sorts of things, more than a dozen drug programs. And one day, I hope, we'll be able to reduce drug discovery from taking ten years on average, to go from understanding a target to having a drug in the clinic, down to maybe a matter of months, perhaps even weeks, just like we did with the protein
5:14 structures. Yeah, that's extraordinary.
5:17 I wanted to ask you about your origin story. Something that occurred to me, well, here's my thinking: AI in a way is not new. It dates back to the 1940s and 50s, and it went through a series of booms and then busts, or AI winters as people refer to them. I think in the film you said there's no point in being born ahead of your time, 50 years ahead of your time. So my question for you is: when you were graduating from Cambridge, that was kind of an AI winter. Did you see something that other people didn't see that led you to know the time for AI was coming, or were you just obsessed with this idea of intelligence and just ridiculously lucky to be born in this moment?
6:08 Well, look, it's a bit of both, I would say. Actually, there are many people in the audience, many of my colleagues and friends, who've been with me for almost that entire journey. You saw some of them: David Silver, Ben Coppin, and Shane Legg, and they'll remember this very well, and Tim Stevens. Look, I have to be honest, I would have done it no matter what, because when I was growing up, and you saw that with the chess and other things, I just felt that intelligence, and therefore artificial intelligence, was the most fascinating thing one could work on. My passion was always to try and understand the universe around us, sometimes I call it the nature of reality, all the big questions. So physics was my favorite subject at school, and all the big physicists, Richard Feynman and Steven Weinberg, all the great physicists, Carl Sagan. But I sort of thought that we needed another helping hand, a tool that could help us as human scientists understand the world around us better.
7:02 And that was obvious to me from the beginning, when I was a teenager: that it would be AI, and that it would be not only maybe the most powerful tool to help us do science, but the most interesting thing to develop in itself, to interrogate what intelligence is and try to understand it while you're trying to build something that is intelligent. So I think I was always going to do that. But also, when you
7:26 look at these AI winters and the state of the technologies you find, you have to have a good reason why you think you might be able to try it in a new way. Those winters are, in a way, opportunities to learn why those methods didn't work. Those Deep Blue methods that we saw beat Garry Kasparov, amazing, they could win at chess, but they were really a bit of a dead end, because they were hardcoded to do only that one thing, play chess. So in some sense they were missing the essence of intelligence in many ways: this generality and this learning capability. And we knew we had these techniques. They were very nascent, you know, neural networks became deep learning, and then reinforcement learning, as you heard. We knew those techniques could potentially scale. Why did we know that? Because actually the human brain is a form of those. We're a neural network; obviously that's what inspired artificial neural networks in the first place, neurons in the brain.
8:24 And reinforcement learning is one of the main ways that animals, including humans, learn. The dopamine system in the brain implements a form of reinforcement learning. So, in the limit, this must be possible using these types of learning techniques. But of course you don't know at that point if you're 50 years ahead of your time or not. But I just want to be
8:44 clear on what you're saying. In essence, you're saying that the AI models you're currently working with are in some sense analogous to the human brain, or the human brain is analogous to them? Very loosely speaking, they're inspired by the same types of techniques and approaches that biological learning systems use. Right, that's the key: it's the learning and the generality.
9:04 Do you think, then, at some point AI will be conscious? Well, that's a huge question, and obviously there are no agreed-upon definitions of consciousness, though there are aspects of it, like self-awareness, that are agreed upon. I always felt that answering that question was one of the things that would come about from being on this journey with AI: trying to build artificial minds and then comparing them to what we know about the human brain, and seeing what the differences are, if any. Those differences will certainly help us understand our own minds better, things like dreaming, emotions, creativity, and things like consciousness, all the mysteries of the mind, and help us understand them, and then maybe understand how special they are to the substrate that we're in. You know, we're carbon-based, versus the silicon-based systems that we're building.
10:02 You started DeepMind here in London, and you had certain forces, investors maybe, trying to pull you to Silicon Valley, but you resisted. Tell me what it was about this place or the culture that made you want to stay here. Well, look, I was born in London. I've lived in London my whole life, and I think there are a lot of amazing things about the cultures that I was immersed in. You saw me going to Cambridge, and the sort of golden triangle of Oxford, Cambridge, and Imperial, which are nearby, and UCL, all these august institutions. I think the UK has always been very strong in science and innovation; we punch well above our weight. There's also obviously a rich history in computing, with Charles Babbage and Alan Turing, so I feel we're trying to carry on in that tradition.
10:46 But there were some practical reasons too. One is that at the time, when we started in 2010, there was a lot of talent trained by these top places that, unless they wanted to go and work for a hedge fund or something in the City, in finance, wanted to do something really intellectually challenging, and there weren't that many companies doing that kind of stuff in the UK, or actually in Europe, really. So I felt we could gather a lot of talent together very quickly that was probably being underutilized in Europe, and that's how it transpired. But the
11:18 second reason was that I think AI is so important, it's going to affect the whole world. Obviously you've heard me talk in the film about how I think it's going to be one of the most important things ever invented. I do think it needs an international approach, and cooperation around what we want to do with this technology: how we want it to be deployed, how we want it to affect our society. It's going to affect everyone in all countries. So I think it needs to be built with more voices and stakeholders than just 100 square miles of California, you know, Silicon Valley, and also beyond the technologists and scientists building it. I think it needs social scientists, economists, psychologists, governments, academia, all to be involved in defining how this enormously transformative technology should go.
12:09 Yeah. Well, it's clearly going to be very powerful, and one of the issues the film addresses is the morality and ethics around that, and I think particularly the safety of it. What keeps you up at night when you think
12:24 about AI? Well, many things, and I don't get much sleep these days, for many reasons. But I think, and Shane will remember this, when we started out in 2010, only 15 years ago, it's kind of amazing to see how the world's changed. In 2010, no one was talking about AI; nobody was doing it in industry. But we knew this had the kernel of something incredibly important, and we planned for success. We thought it was going to be a 20-year journey, and often when you say that in technology and startups and hard sciences, it always stays 20 years away, right? But for us it really has been a 20-year journey, and we're about 15 years in now. We planned for success, but we knew that success meant all these amazing things, curing diseases, solving the energy crisis, climate, using AI to help with all of these things. But it also came with risks: risks of harm, enormous risks of misuse. And so from the beginning we've been very cognizant of that responsibility, but also tried to push that debate and be role models for how to develop this technology
13:36 in a responsible way. Is this
13:39 potentially unstable, in that you could have a hundred companies who have the utmost ethics and morality and think about safety to an extreme level, and you have one actor who doesn't? Yeah. And then it ruins it for everyone. Yeah.
13:57 Well, that's one of the huge risks that I worry about today: so-called race dynamics, right? A race to the bottom. There are many examples of this in history, and even if all the actors in that environment are good, let alone if you have some bad actors, it can drag everyone into rushing too quickly, cutting corners, these kinds of things, because it's a sort of tragedy-of-the-commons situation: for any individual actor it sort of makes sense, but in aggregate it doesn't. I've been saying that for a long time, and Shane and many others, Helen and the people who work on responsibility at DeepMind, we've been talking a lot about this. That's why I was so pleased to see some of these international summits being set up: the first one in the UK, at Bletchley Park, and then just recently in Paris, hosted by President Macron. I think we need those kinds of international debates about where this is going. And one of the big problems is how we give access to these technologies. You've seen with AlphaFold, open to the world, open science, obviously that's better for progress: all the good researchers and good people around the world can build on top of that work and do amazing things with it. But at the same time, you want to restrict access to that same technology for would-be bad actors, whether that's individuals or even rogue nations. And it's a very hard balance to get right; no one has yet got a good answer for how you do both of those things. I think
15:29 initially I was encouraged by the amount of effort required to develop AI. There are many references in the film to the Manhattan Project, and I think one of the benefits of nuclear weapons is that in order to develop them you basically need state sponsorship, a huge undertaking. And initially AI looked the same way: it was going to take the huge tech companies, or states, to develop this. But lately there are these new developments, like DeepSeek, and there's an Alibaba model, and they look much more thrifty. Yeah. Which I think creates a fear that this really democratizes access to the technology, increasing the probability of a bad
16:10 actor. Yeah. So, look, you're exactly right, and I feel
16:12 like it's very good on the one hand, you know, more people accessing these technologies: hobbyists, kids like I was back when I was tinkering around with Theme Park, can now work on some really interesting AI systems and probably come up with amazing new applications. But yeah, it's available to everyone, and it is worrying. I feel like maybe we need some new approaches, where maybe the market environment or something else is set up so that it incentivizes the right behavior. I was talking to some economist friends of mine, and maybe they need to get involved now to set up the right incentive structures, so that the players and actors that have the right intentions, backed by government and society, are actually the ones that become successful, and their AI systems are more powerful and more productive. Maybe we have to start thinking about those kinds of approaches to deal with the practical situation we're in. I'd much rather there be a calm, CERN-like effort towards AGI, these final few steps, but given the geopolitical framework we're in, maybe that's not possible. So we have to be more pragmatic about it. For sure.
17:29 In the film, you talk about how the future will be radically different. So I want to ask, for myself and for everyone in this audience: given that you are one of the leaders at the forefront of this, what do you think the world will be like in 5 to 10 years? Do you have an outlook on that? And further to that, I have four kids. I'm like, what do I do? Do I send them to school? Is that even worthwhile anymore? You are the guy I want to ask this question to more than anyone in the world. Sure. Well, let's start with that question: for sure, send them to
18:00 school. I say that to my kids too. Look,
18:02 I think the next 5 to 10 years are going to be, well, what I would say to kids these days is: embrace the new technologies, and as parents, I think, let your kids play with them. They're coming, and they're going to increase productivity and creativity. I think it's going to be amazing. It's a bit like my era, my generation, with the advent of computers. There were a lot of fears about that too, and even about gaming. And then people work out that if you grow up with it, it feels natural to you, second nature, and those kids are often the ones who can extend it in new ways we couldn't even dream of today. So I think a lot of that's going to happen. I still think it's important to do maths and computer science, because you'll be best placed to take advantage of these frontier technologies and use them in new ways. So the recommendation, I think, is the same as it's always been. Maybe just be prepared that things are going to move even faster, and learn about adapting, learning to learn, actually, learning quickly to adapt to a new technology that's going to come out, it seems, almost every week. In terms of
19:03 society, what I see happening is, I mean, 5 to 10 years is a long time in AI, hard to predict that far ahead, but what I certainly imagine in the areas of science is a new renaissance, almost a new golden age, which I hope AlphaFold is just the beginning of: us understanding and making lots of breakthroughs in many areas of science, and helping with all the biggest questions, from curing diseases to new energy sources and climate. And I think we're going to start seeing all of that in the next 10 years. That's extraordinary.
19:34 Well, I look forward to it. I hope you do as well. We're going to leave it there. But congratulations on all your great work, and on winning the Nobel Prize. It's just tremendous. Seriously,