0:01 But really, what separates great scientists from good scientists is their creativity, and what you might call a taste for the right question, the right hypothesis. It's much harder to come up with the right question and the right hypothesis than it is to solve a conjecture. I would call that the highest level of creativity, and so far today's systems don't have that capability. One way we might test it: imagine training a foundation model with a knowledge cutoff of around 1911, and then seeing if it could come up with general relativity, as Einstein did in 1915. I think that would be a good test for AGI, and today's systems clearly would not be capable of it, but I think this will be solved in time.
>> 0:53 Okay, that's great. You talked about AI as a tool for scientific discovery, and we see a lot of headlines about AI making great breakthroughs in science. But how does this become the norm? Do you see AI becoming the standard tool for scientific discovery soon?
>> 1:18 Yeah. The reason I've spent my whole life and career working on AI is that I saw quite early on that if we could build these kinds of general models that were good at pattern recognition, they would be incredibly useful scientific tools, maybe the ultimate tool for science. A lot of science is about finding insights and structure in vast amounts of data, and that's perfect for AI. So I think in the next ten years or so we're going to enter a new golden era for scientific discovery, almost a new renaissance, using incredible tools like AlphaFold, which I hope will be the first of many, that can massively speed up our research and accelerate scientific discovery across almost any subject area. The next period will be about using these systems as tools. After that, as they become more autonomous, we'll see whether they can be co-scientists with you, like a PhD student. I think we're still quite a way from that, but maybe in ten-plus years it will be possible.
>> 2:30 So, just to reassure a good fraction of the audience: you still see a role for humans in that future, right?
>> 2:36 Yes. I think the next phase is going to be incredible for human experts and scientists in terms of the amount of work they'll be able to do. I'm actually really excited about cross-disciplinary science, which is quite hard because you have to understand more than one subject area, maybe two, three, four subject areas, and then find interesting connections between them. I think that's where a lot of the really valuable advances are going to happen in the next few years, in these combinations of subject areas, and having a tool like AI will really help scientists learn about, understand, and process all of that information from multiple different domains.
>> 3:23 In some sense science is a well-defined field, if you will: you can recognize success, and so on. So what do you think would be the role of AI in more abstract domains like policy and other kinds of public decision-making?
>> 3:40 I think things like science, but especially coding and maths, are more amenable to the current systems we have today, mostly because coding and mathematics, but also things like games such as chess, are verifiable. The answer the AI system outputs can be checked for correctness. That's very useful when you're training these systems: you can have databases of questions and check with certainty whether an answer is right or not. Of course, when you get into the arts and the humanities, or things like decision-making and policy, which I think is what you had in mind, they're much more subjective, and it's hard to run the same experiment twice. So it's difficult to get data about what a good decision is in those cases, and I think those areas will be a lot harder for AI to model.
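[The verifiability distinction he draws can be sketched in code: in maths or coding, a candidate answer can be checked programmatically, yielding a clean reward signal, while subjective domains have no such checker. All names and the `solution` convention below are illustrative, not any lab's actual training setup.]

```python
# Toy sketch of the "verifiable reward" idea: maths and coding answers can
# be checked mechanically, producing an unambiguous 0/1 training signal.

def verify_math(candidate: str, expected: int) -> bool:
    """Check a maths answer by direct comparison."""
    try:
        return int(candidate.strip()) == expected
    except ValueError:
        return False

def verify_code(source: str, tests: list[tuple[tuple, object]]) -> bool:
    """Check generated code by executing it against unit tests."""
    namespace: dict = {}
    exec(source, namespace)         # run the candidate solution
    fn = namespace["solution"]      # convention: candidate defines `solution`
    return all(fn(*args) == want for args, want in tests)

# A subjective domain (e.g. policy advice) has no analogous checker:
# two runs of the same "experiment" need not even agree.
reward = 1.0 if verify_math("42", 42) else 0.0
```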
>> 4:41 As a neuroscientist, what do you think we have learned about human intelligence itself from all these advances in AI?
4:48 Well, in the early days of this modern AI phase, which you could say is the last 20 years, with the advent of things like deep learning from Geoff Hinton, and when we started at DeepMind too, we took a lot of inspiration from the brain. High-level, systems-level inspiration, not the direct mechanics of the brain: things like episodic memory and what the hippocampus does, which is what I studied; and obviously neural networks and reinforcement learning, your area, which we know the dopamine system in the brain implements. So we took inspiration from the brain, which is the only example of a general intelligence that we know of, maybe in the universe, as a starting point, and so we knew it was possible in the limit. I think that's why neural networks and reinforcement learning have been so successful: learning is the key to these modern AI systems working, not being programmed with the answer as expert systems were in the '90s, like Deep Blue, but actually allowing the systems to learn directly from data. Looking back at neuroscience now, what strikes me is how efficient the brain is, how sample-efficient: it doesn't need to ingest the whole of the internet to understand things. So what we've built today uses some of the same principles, but it's been manifested in a very different type of system from the way the brain probably works.
>> 6:29 Okay, thank you. One of the things people always talk about is the safety and security risks from AI. What is Google DeepMind's stand on this, and how do you approach making sure that AI is safe?
6:46 Well, when we started DeepMind, we were planning for success. Our mission statement was: step one, solve intelligence; step two, use it to solve everything else. At the time that sounded like science fiction, but I think it's now becoming clearer how it might be possible, applying AI to almost every subject area. But we planned for success. Even though we were just starting out and the field was mostly still building up a head of steam, we understood the implications if it did turn out to be the case. We thought about it as a 20-year mission, and I think we're basically on track for that, around 2030. It would come with these attendant risks, as well as the enormous benefits to science and medicine and all the things I think we need as a society to help with the many other challenges we have around the world. But as these systems become more powerful, there are at least two risks we've always worried about. One is bad actors, human actors, individuals but also potentially nation states, repurposing these systems for harmful ends, because they're dual-use. The other is that as we get closer to AGI, and I think we're entering a kind of agentic era, if we want to call it that, where systems are more autonomous, and we'll see a lot of that this year and next year, we have to make sure the guardrails are in place: that these systems do what we expect them to do and don't veer off into areas we hadn't planned for, which could also be problematic. So those are the two challenges. There's a societal one, which I think will require international dialogue and ideally a minimum set of standards agreed internationally; and then the second is more of a technical risk: how can we make sure these systems are robust and reliable?
>> 8:52 Setting aside all the existential-risk questions, what do you think are the top two risks we should address with AI systems today?
>> 9:03 Well, I think I mentioned the two classes of problems. We need to worry about things like bio and cyber risk very soon. The current systems are getting pretty good at cyber, and we need to make sure cyber defenses are more powerful than the attack vectors. It's something we work on quite a lot at Google and at DeepMind, using AI for cybersecurity: it's a very useful tool for cyber defense too, but you need to make sure the defenses are stronger than the offenses. So those are near-term risks. But there are many others we need to think about and do a lot more research on, and some of them come down to agreeing a set of standards, which I know many people, including yourself, are working on.
>> 9:58 Great, thank you. Again on international collaborations: you said we really have to work on that. At a gathering like this, where we are trying to involve the Global South very much in the dialogue, what do you think the impact of such gatherings will be on the overall direction of AI? Are they going to make a big change going forward?
>> 10:28 I think so, and I think that's why it's important we convene these summits around the world, because this technology is going to affect everyone. It's a digital technology, so it can't really be contained by borders. There are things like open source, which is generally very good, but one also has to think about what happens if you find a vulnerability or some issue with an open-source piece of software: how do you recall it? How do you patch it? There's no recall, so we need to think about that. These are new issues with something like AI, where it's hard to fully understand ahead of time whether there are any vulnerabilities. For the Global South and countries like India, I think there's huge opportunity for the youth of today. You all have access to pretty much the most cutting-edge tools in the world, maybe only three to six months after they've been invented in the frontier labs; I don't think that's ever happened before. And I can say, as someone working at the coalface of this, that we barely have time to understand the amazing capabilities these models could support in products and applied research. So there's so much potential there to be explored, and I think we'll see a lot of that. Hopefully many of you in the audience, the entrepreneurs here, can do incredible things, maybe 10x what you could do before, because these tools are so capable and they're available almost instantly around the world.
>> 12:01 I have a more specific question, because you are right now in India. One of the things a lot of people have been remarking on is that the crowd at this summit is extraordinarily young; there's this youthful energy. What role do you think India can play in the future, given its resource constraints but also the availability of this talent pool?
>> 12:31 Well, look, I've been incredibly impressed already by the energy here, and we heard from the minister that the youth of today, in India especially, and you see this in the polls, are very positive about AI, which I think is great. What I'd recommend to the students of today is to really lean into becoming incredibly proficient with these new AI tools. Over the next ten years, that will make them almost superpowered in terms of what they're able to do, whether in business or science. It's a little like the dawn of the computer age, or mobile, or the internet: the generation that grows up native with that technology ends up doing incredible things we can only dream of right now. I think that's going to happen with AI, and I think India and the youth here can be at the vanguard of it.
13:31 Since this is a research symposium, can I ask you to get a little technical next? Sure. Great. We saw the evolution of AlphaFold: it started off building on top of existing work from the Baker lab, and then moved on to evolving all on its own. What do you think is the next technical stage in the evolution of these kinds of co-scientist models?
>> 14:00 Yeah. With AlphaFold we actually built a completely new system, but it required the PDB, the Protein Data Bank. It needed the roughly 150,000 structures that humanity had painstakingly found over the last 50 years through experimental work, and that turned out to be only just enough data to solve the problem and build a system like AlphaFold. The interesting debate we have at DeepMind and other places is: what's the difference between the general system, which you can think of like the brain, and the tools that it uses? For us as humans there's no debate about what's our mind and what are our tools, because they're physically separate. But if both things are digital, and in some cases both are AI, the tool and the orchestrating system, then what do you put in the main system and what do you leave as a specialized tool? In my opinion, in our case, foundation models like Gemini will use things like AlphaFold as tools. If Gemini wanted or needed to fold a protein, to understand its structure, I think it would be better for it to call AlphaFold as a tool than to put all of that protein information into the main system. Technically, the choice comes down to this: if you put that data into the main system, does it help with other tasks, does it transfer, or does it actually degrade performance on those other tasks? It's an empirical question. That's why, for example, we put coding and maths into the general foundation models: it turns out that if you get good at coding or maths, you're actually better at planning and reasoning in general. So it's a useful skill that also generalizes. But something like folding proteins is probably a very specialized skill that wouldn't necessarily transfer to other domains, so I would be of the opinion that we should leave that as a specialized tool.
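[The general-model-plus-specialist-tool pattern he describes can be sketched as a simple tool registry with a routing step. The registry, `fold_protein`, and the dispatch logic below are hypothetical stand-ins, not the actual Gemini/AlphaFold interface.]

```python
# Minimal sketch of a general model calling a specialized tool: the
# orchestrator routes requests it recognizes to a registered specialist
# instead of absorbing that domain into its own weights.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def register_tool(name: str):
    """Decorator that records a specialist under a task name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("fold_protein")
def fold_protein(sequence: str) -> str:
    # Placeholder for a specialized structure-prediction model.
    return f"structure({sequence})"

def orchestrate(task: str, payload: str) -> str:
    """Route to a specialist tool if one exists; otherwise answer generally."""
    if task in TOOLS:
        return TOOLS[task](payload)
    return f"general-model-answer({payload})"
```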
16:09 That's interesting, because a lot of learnings from robot path planning were used to try to solve protein folding. So you think knowing how to fold proteins will not transfer back to other domains?
>> 16:21 It might, but we would test it. In fact, we do these experiments all the time on smaller-scale models, where we ablate different datasets, try mixing certain datasets in, and see whether they help or regress some benchmarks. It would be no use putting all the protein data in if the model then got worse at language, for example, which is probably what would currently happen. So maybe over time, with an AGI system, you just have everything in the one system, but for the foreseeable future I think it'll be more efficient to still have separate tools. Also, those tools might be hybrid systems, in that they might not be purely learning systems. They might also have built-in structure, as AlphaFold actually did, about physics and chemistry and chemical bonds: things you could learn, but that are more efficient to just tell the system or program in directly.
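[The data-mix ablations he mentions follow a simple recipe: train with and without a candidate dataset and compare held-out benchmark scores. The sketch below is purely illustrative; `train` and `evaluate` are hypothetical stand-ins, with the "protein data helps folding but regresses language" outcome hard-coded to mirror his example.]

```python
# Sketch of a dataset ablation: measure per-benchmark score deltas from
# mixing in one candidate dataset. Everything here is a toy stand-in.

def train(datasets: set[str]) -> str:
    return "+".join(sorted(datasets))   # pretend the "model" is just its data mix

def evaluate(model: str, benchmark: str) -> float:
    # Illustrative scores: adding `protein` helps folding but hurts language.
    score = 0.5
    if benchmark == "folding" and "protein" in model:
        score += 0.3
    if benchmark == "language" and "protein" in model:
        score -= 0.2
    return score

def ablate(base: set[str], candidate: str, benchmarks: list[str]) -> dict[str, float]:
    """Return per-benchmark score deltas from mixing in `candidate`."""
    with_c = train(base | {candidate})
    without = train(base)
    return {b: evaluate(with_c, b) - evaluate(without, b) for b in benchmarks}

deltas = ablate({"web", "code"}, "protein", ["folding", "language"])
```

A positive delta argues for folding the data into the general model (as with coding and maths); a negative delta on unrelated benchmarks argues for leaving it as a separate tool.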
>> 17:21 Now that I've mentioned the word robots, I'd like to ask: what's next in physical AI?
>> 17:27 Well, look, I'm getting increasingly excited by robotics. I probably wasn't so interested in it ten years ago, because I felt the issue was the algorithms, not the physical construction of the bodies; I thought the algorithms were the part that was behind. Now, algorithmically, we're very excited about Gemini Robotics, because we built our foundation model to be really good multimodally. It can understand vision, images, the world around us, so it has a very good understanding of the physical world, and that's exactly what you'd want for robotics: a general system that understands the physical context the robot is in. So in the next two or three years I think we're going to see some very interesting new breakout moments for robotics. There's still quite a bit more research to be done; I don't think we're there yet, as some robotics companies are claiming. We'll have humanoid robots and also non-humanoid ones, and I think both will be useful. But in the next couple of years there will be some real breakthrough moments, so it's a very exciting space to watch and a good area to get into right now.
18:38 A slightly tricky question. There's a lot of fear-mongering around AI, and we know most of it is unnecessary. But if you start getting humanoids running off foundation models, do you think the fear factor will go up?
>> 18:57 Potentially. It depends how we design those humanoids, but some risks go up too, right? Again, it depends what you deploy them for. Increasingly, especially if the humanoids are pretty capable and they're heavy, there are dangers and risks with that. So I think we need to have the guardrails we were discussing earlier in place by the time there are a lot of robots roaming around.
>> 19:27 Thank you. We've talked enough about risk, so let's talk a little more about the positive side of things. A lot of the benefits of all this cutting-edge AI still seem to accrue to the countries with more resources, the ones that have the GPUs to run their models, and so on. What do you think it would take for AI to reach the Global South and benefit a much larger fraction of the population? What kinds of initiatives should we be looking at?
>> 19:58 Well, look, I think we touched on it earlier. The leading foundation models, maybe there are three or four of them, perhaps five or six if we include the Chinese models, are pretty much available very cost-effectively, only a few months behind the frontier. There's also open source: we work on our own open-source models, Gemma, which we'll be releasing a new version of soon, and which are very powerful for edge devices. So I think that's a very interesting area: really efficient models for computing on the edge, whether that's your phone, a single laptop, or eventually robotics. There are huge opportunities there for optimizing what those kinds of models do and the types of products or applications you can build on top of them. So I think there's a lot of potential for that kind of work.
>> 20:59 The entire auditorium went dark for a bit. No, that's not signaling anything ominous, so don't worry about that.
>> 21:11 It's amazing. I was actually there when you did the first game-playing demo at one of the NeurIPS side events, and even then that tiny room was packed. And now this large auditorium is packed. So what's the largest hall you think you could fill if you were speaking nowadays? Madison Square Garden?
>> 21:33 I don't know; this is a pretty big one, and I hear it's streaming to many, many people online. But yes, I remember that NeurIPS event very well. It was a hall maybe a third of this size, but it was standing room only, packed outside the door. That was really the first success we had with the deep reinforcement learning systems we pioneered that could play Atari games: very simple games by now, but learned directly from the pixels, without any other information, just "maximize the score; here are the pixels on the screen." It was maybe the first demonstration of the modern AI era of an agentic system doing something challenging and interesting, in this case a task designed for humans to find interesting, enjoyable, and somewhat challenging. I think it was a watershed moment, for us of course, but maybe also for the industry. That was back in 2013, I think, and it showed that this thesis of learning systems and learning algorithms, this idea of generality, where you don't special-case the information or give privileged prior information to the system, the way traditional, good old-fashioned AI and expert systems had been built until then, could actually scale to something interesting: in this case an Atari screen with 20,000 pixels. Trivially small by today's standards, but a very large action space and data space for the types of systems we had then.
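[The "observations in, score out" recipe he describes can be illustrated at toy scale. This is not DQN, which used a convolutional network over Atari frames; it is a minimal tabular Q-learning loop on a five-state chain, with every detail (sizes, learning rate, rewards) invented purely for illustration.]

```python
# Toy reward-driven learning loop: the agent is told nothing about the
# task except a scalar "score", yet learns to walk right to the goal.
import random

N = 5                                 # states 0..4; reward only at the right end
Q = [[0.0, 0.0] for _ in range(N)]    # Q[state][action]; action 0=left, 1=right

def step(s: int, a: int) -> tuple[int, float]:
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0)   # the "score" signal

random.seed(0)
for _ in range(500):                  # episodes
    s = 0
    for _ in range(20):               # steps per episode
        if random.random() < 0.1:     # epsilon-greedy exploration
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r = step(s, a)
        # Temporal-difference update toward reward plus discounted lookahead.
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
        if r > 0:
            break

policy = [0 if q[0] > q[1] else 1 for q in Q]   # learned: go right everywhere
```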
>> 23:15 Yeah, they've become like the "hello world" of reinforcement learning now.
>> 23:18 Yes. And then of course that encouraged us to go on to AlphaGo, which I think was really the big watershed moment that made the field and the industry sit up and take notice in 2016, and which started a lot of the commercial interest in these technologies: that we could scale this kind of deep reinforcement learning, these learning systems, to actually beat the world champion, the legendary Lee Sedol, in our South Korea match.
>> 23:47 One thing I really have to say is that the Atari game player, and later AlphaGo, allowed the rest of the reinforcement learning community to catch up, so thank you for that. And my first success was that I was actually in the room, not outside clamoring to get in to watch the demo. So let's forget AI for a minute and ask about reinforcement learning. Rich Sutton has been talking a lot about this, and David Silver and Rich wrote an amazing article on how reinforcement learning is going to drive AI forward. What's your take on that?
24:27 Well, yeah, obviously we've had many debates over the years. To take the question more generally, what do I think about today's foundation models and reinforcement learning? Of course, reinforcement learning is an integral part of the post-training of these models, and I think the inference-time compute, the thinking part of the models, could actually benefit a lot more from the ideas we pioneered in AlphaGo, the Monte Carlo tree search and other things. In many respects we need to combine the ideas we had in AlphaGo with today's foundation models. Of course it's harder, because you don't have a perfect model of the world; you need a better world model. In games it's trivial: you know the transition matrix. So that's an issue. But if I were to guess today, I think foundation models like Gemini are going to be a critical part of the ultimate AGI solution, and then we'll have lots of interesting reinforcement learning on top. Eventually, maybe 20 years from now, we might have something more like an AlphaZero-type system, where reinforcement learning can learn everything from scratch. But I don't think that's going to be the fastest way to AGI. I think it makes sense to use the foundation models and all the information that's already out there, learn that almost as a model of how the world works, and then do your reinforcement learning and planning on top of it. That will be more efficient in the first instance.
>> 26:03 Is it still going to be the cherry on top?
>> 26:06 No. Well, you'd have to ask Yann about his cherry comments; I'm sure he can talk at length on that, but I've never really agreed with it. Obviously, if you measure it in terms of bits, one can ask how many bits of information you're getting from the reinforcement learning. But in my view, and presumably yours, not all bits are equal in terms of information. A bit about whether you won the game or not is much more important than some random pixel on the screen, so to equate the informational value of those bits in a trivial way is clearly incorrect, in my opinion. As for the foundation models, the question is whether they're going to be all that's needed, or just a critical part of what's needed. I think there's no question they're going to be at least a critical component of the first AGI systems.
>> 27:10 We're almost out of time, so I just want to ask: what's your message for the attendees of this summit?
>> 27:19 Well, look, my message is one of, I would say, cautious optimism. I think we're on the cusp of an absolutely incredible transformation that's going to bring incredible benefits in science and medicine, which is specifically what I'm passionate about, and I can see it revolutionizing the way we deal with human health. There are many amazing companies and tools and products to build on top of these systems, and everyone in the world can build on these AI systems to do that. But I would also add a note of caution. I think we will solve the technical issues, given enough time and enough brainpower; I believe in human ingenuity, and if the best minds work on it, we'll solve the technical risks. But we also need to do this internationally, so the societal challenges may actually end up being a harder problem than the technical ones.
28:35 Thank you, sir, and Ravi, for such a