But really, what separates great scientists from good scientists is their creativity and, you might call it, a taste for the right question, the right hypothesis. It's much harder to come up with the right question and the right hypothesis than it is to solve a conjecture. I would call that the highest level of creativity, and so far today's systems don't have that capability. One way we might test it: imagine training a foundation model with a knowledge cutoff of something like 1911, and then see if it could come up with general relativity, as Einstein did in 1915. That would be a good test for AGI. Today's systems clearly would not be capable of it, but I think this will be solved in time.
>> Okay, that's great. So you talked about AI as a tool for scientific discovery, and we see a lot of headlines about AI making great breakthroughs in science. But how would this become the norm? Do you see AI becoming the scientific tool for discovery soon?
>> Yeah. The reason I spent my whole life and career working on AI is that I saw quite early on that if we could build these kinds of general models that were good at pattern recognition, they would be incredibly useful scientific tools, maybe the ultimate tool for science. A lot of science is about finding insights and structure in vast amounts of data, and that's perfect for AI. So I think in the next ten years or so we're going to enter a new golden era for scientific discovery, almost a new renaissance, using these incredible tools. AlphaFold, I hope, will be the first of many that can massively speed up our research and accelerate scientific discovery across almost any subject area. The next period is about using these systems as tools, and then after that we'll see, as they become more autonomous, whether they can be co-scientists with you, like a PhD student. I think we're still quite a way from that, but maybe in ten-plus years it will be possible.
>> So, just to reassure a good fraction of the audience, you still see a role for humans in that future, right?
>> Yes. I think the next phase is going to be incredible for human experts and scientists in terms of the amount of work they'll be able to do. I'm actually really excited about cross-disciplinary science, which is quite hard because you have to understand more than one subject area, maybe two, three, four subject areas, and then find interesting connections between them. I think that's where a lot of the really valuable advances are going to happen in the next few years, in these combinations of subject areas, and having a tool like AI will really help scientists learn about, understand, and process all of that information from multiple different domains.
>> So in some sense science is a well-defined field, if you will: you can recognize success, and so on. What do you think would be the role of AI in more abstract domains like policy and other kinds of public decision-making?
>> I think things like science, but especially coding and maths, are more amenable to the current systems we have today, mostly because coding and mathematics, and also games like chess, are verifiable. The answer the AI system outputs can be checked for correctness. That's very useful when you're training these systems: you can have databases of questions and check with 100% certainty whether an answer is right or not. When you get into the arts and the humanities, things like decision-making and policy, which I'm not sure you necessarily had in mind, they're much more subjective. It's hard to run the same experiment twice, so it's difficult to get data about what a good decision is in those cases. I think those areas will be a lot harder for AI to model.
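The "verifiable answer" idea above can be sketched in a few lines: when a correct answer is known, a mechanical check turns model outputs into a training signal. This is a minimal illustration, not any real system's pipeline; the toy dataset, grading rule, and stand-in model are all invented.

```python
# Verifiable rewards: grade an answer mechanically against a known result.
def verify(answer: str, expected: str) -> float:
    """Return 1.0 if the answer matches the known-correct result, else 0.0."""
    return 1.0 if answer.strip() == expected.strip() else 0.0

# Toy database of questions with checkable answers (illustrative only).
dataset = [
    {"q": "2 + 2", "expected": "4"},
    {"q": "derivative of x^2", "expected": "2x"},
]

def score_model(model, dataset) -> float:
    """Fraction of verifiably correct answers, usable as a training signal."""
    rewards = [verify(model(item["q"]), item["expected"]) for item in dataset]
    return sum(rewards) / len(rewards)

# A stand-in "model" that happens to get one of the two questions right.
def toy_model(q: str) -> str:
    return {"2 + 2": "4"}.get(q, "unknown")

print(score_model(toy_model, dataset))  # → 0.5
```

Subjective domains lack the `expected` column, which is exactly why this style of check, and the training loop built on it, does not carry over.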
>> So, as a neuroscientist, what do you think we have learned about human intelligence itself from all these advances in AI?
>> Well, we're in the early days of this modern AI phase, which you could say is the last 20 years, with the advent of things like deep learning from Geoffrey Hinton. When we started at DeepMind, we took a lot of inspiration from the brain: high-level systems inspiration, not the direct mechanics of the brain, but things like episodic memory and what the hippocampus does, which is what I studied; obviously neural networks; and reinforcement learning, your area, which we know the dopamine system in the brain implements. We took inspiration from the brain, which is the only example of general intelligence that we know of, maybe in the universe, as a starting point, so we knew it was possible in the limit. I think that's why neural networks and reinforcement learning have been so successful: learning is the key to these modern AI systems working. They're not programmed with the answer like expert systems were in the '90s, like Deep Blue, but actually allowed to learn directly from data. Looking back at neuroscience now, what strikes me is how efficient the brain is, how sample-efficient: it doesn't need to ingest the whole of the internet to understand things. What we've built today uses some of the same principles, but it has been manifested in a very different type of system than the way the brain probably works.
>> Okay, thank you. One of the things people always talk about is the safety and security risk from AI. What is Google DeepMind's stance on this, and how do you approach making sure that AI is safe?
>> Well, when we started DeepMind, we were actually planning for success. Our mission statement was: step one, solve intelligence, and then step two, use it to solve everything else, which at the time sounded like science fiction, but I think it's now becoming clearer how that might be possible, applying AI to almost every subject area. Even though we were just starting out and the field was mostly still building up a head of steam, we understood the implications if that did turn out to be the case. We thought of it as a 20-year mission, and I think we're basically on track for that, around 2030. It would come with attendant risks as well as the enormous benefits to science and medicine and all the things I think we need as a society to help with the many other challenges we have around the world. As these systems become more powerful, there are at least two risks we've always worried about. One is bad actors, human actors, individuals but also possibly nation states, repurposing these systems for harmful ends, because they're dual-purpose. The other is that as we get closer to AGI, and we're maybe entering a kind of agentic era, where systems are more autonomous (I think we'll see a lot of that happening this year and next year), we have to make sure the guardrails are in place, so that these systems do what we expect them to do and don't veer off into areas we hadn't planned for, which could also be problematic. So those are the two challenges. There's a societal one, which I think is going to require international dialogue and ideally a minimum set of standards agreed internationally. And then the second is more of a technical risk: how can we make sure these systems are robust and reliable?
>> Setting aside all existential risk questions, what do you think are the top two risks we should address with AI systems today?
>> Well, I think I mentioned the two classes of problems. We need to worry about things like bio and cyber risk very soon. The current systems are getting pretty good at cyber, and we need to make sure cyber defenses are more powerful than the attack vectors. It's something we work on quite a lot at Google and at DeepMind, using AI for cybersecurity: it's a very useful tool for cyber defense too, but you need to make sure the defenses are stronger than the offenses. Those are the near-term risks. There are many more that we need to think about and do a lot more research on, and some of them have to do with agreeing on a set of standards, which I know many people, including yourself, are trying to work on.
>> Great, thank you. Talking again about international collaboration: you said we really have to work on that. At a gathering like this, where we are trying to involve the Global South very much in the dialogue, what do you think the impact of such gatherings will be on the overall direction of AI? Are they going to make a big change going forward?
>> I think so, and that's why it's important we convene these summits around the world, because this technology is going to affect everyone. It's a digital technology, so it can't really be contained by borders. There are things like open source, which is generally very good, but one also has to think about what happens if you find a vulnerability or some issue with an open-source piece of software: how do you recall it? How do you patch it? There's no recall, so we need to think about that. These are new issues with something like AI, where it's hard to fully understand ahead of time whether there are any vulnerabilities. For the Global South and countries like India, I think there's a huge opportunity for the youth of today. You all have access to pretty much the most cutting-edge tools in the world, maybe only three to six months after they've been invented in the frontier labs. I don't think that's ever happened before. I can say, as someone working at the coalface of this, that we barely have time to understand the amazing capabilities that could be supported by these models in products and applied research. So there's so much potential there to be explored, and I think we'll see a lot of that. Hopefully many of you in the audience, the entrepreneurs here, can do incredible things, maybe 10x what you could do before, because these tools are so capable and they're available almost instantly around the world.
>> I have a more specific question, because you are right now in India. One of the things a lot of people have been remarking on is that the crowd at this summit is extraordinarily young; there's this youthful energy. What role do you think India can play in the future, given its resource constraints but also the availability of this talent pool?
>> Well, look, I've been incredibly impressed already by the energy here, and we heard from the minister that the youth of today, in India especially, when you see the polls on this, are very positive about AI, which I think is great. What I'd recommend to the students of today is to really lean into becoming incredibly proficient with these new AI tools. Over the next ten years that will almost make them superpowered in terms of what they're able to do, whether that's business or science. It's a little bit like the dawn of the computer age, or mobile, or the internet: the generation that grows up native with that technology ends up doing incredible things that we can only dream of right now. I think that's going to happen with AI, and I think India and the youth here can be at the vanguard of that.
>> Since this is a research symposium, can I ask you to get a little technical next?
>> Sure.
>> Great. We saw the evolution of AlphaFold, which started by building on top of existing work from the Baker lab and then moved on to evolving all on its own. What do you think is the next stage, technically, in the evolution of these kinds of co-scientist models?
>> Yeah. With AlphaFold we actually built a completely new system, but it required the PDB, the Protein Data Bank. It needed the 150,000 structures that humanity had painstakingly found over the last 50 years through experimental work, and that turned out to be only just enough data to solve the problem and build a system like AlphaFold. The interesting debate we have at DeepMind and other places is: what's the difference between the general system, which you can think of as like the brain, and the tools that it uses? For us as humans there's no debate about what is our mind and what are our tools, because they're physically separated. But if both things are digital, and in some cases both things are AI, the tool and the orchestrating system, then what do you put in the main system and what do you leave as a specialized tool? In my opinion, in our case, foundation models like Gemini will use things like AlphaFold as tools. If Gemini wanted or needed to fold a protein, to understand the structure of a protein, I think it would be better for it to call AlphaFold as a tool than to put all of that protein information into the main system. Technically, I think the choice comes down to this: if you put that data into the main system, does it help with other tasks, does it transfer, or does it actually degrade performance on those other tasks? It's an empirical question. That's why, for example, we put coding and maths into the general foundation models: it turns out that if you get good at coding or maths, you're actually better at planning and reasoning in general. It's a useful skill in itself, but it also generalizes. Something like folding proteins is probably a very specialized skill that wouldn't necessarily transfer to other domains, so I would be of the opinion that we should leave it as a specialized tool.
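The "orchestrator plus specialized tool" split described above is essentially a routing decision. The sketch below is hypothetical, not how Gemini or AlphaFold actually interoperate: a stand-in general model handles requests in-model except for a narrow skill, which it delegates to a dedicated tool. All function names and the `fold:` prefix convention are invented for illustration.

```python
def fold_protein_tool(sequence: str) -> dict:
    """Stand-in for a specialized structure predictor (an AlphaFold-like tool)."""
    return {"sequence": sequence,
            "structure": f"<predicted structure for {len(sequence)} residues>"}

def general_model(request: str) -> str:
    """Stand-in general model: keeps broadly transferring skills in-model,
    delegates the narrow, non-transferring skill to a tool."""
    if request.startswith("fold:"):
        result = fold_protein_tool(request.removeprefix("fold:"))
        return result["structure"]
    return f"answered in-model: {request}"

print(general_model("fold:MKTAYIAK"))    # delegated to the tool
print(general_model("what is 17 * 3?"))  # handled by the general model
```

The design question in the interview is exactly which branch a given capability belongs in, and the answer, as Hassabis says, is empirical: it depends on whether absorbing the skill into the main model helps or hurts other tasks.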
>> Oh, that's interesting, because a lot of learnings from robot path planning were used to try to solve protein folding. So you think knowing how to fold proteins will not transfer back to other domains?
>> It might, but in fact we do these experiments all the time on smaller-scale models, where we ablate different datasets, mix in certain datasets, and see if they help or if they regress some benchmarks. It would be no use putting all the protein data in if the model then got worse at language, for example, which is probably what would currently happen. Maybe over time, with an AGI system, you just have everything in the one system, but for the foreseeable future I think it will be more efficient to still have separate tools. Those tools, by the way, might be hybrid systems, in that they might not be pure learning systems: they might also have built-in structure, as AlphaFold actually did, about physics and chemistry and chemical bonds, things you could learn but that are more efficient to just tell the system or program in directly.
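The ablation loop described above has a simple shape: evaluate a model trained with and without a candidate dataset, and keep the dataset only if no benchmark regresses. In this sketch the train-and-evaluate step is stubbed out with invented numbers; a real experiment would train small models on each data mix.

```python
def evaluate(data_mix):
    """Stub: benchmark scores as a function of the training mix.
    The numbers are illustrative, not measurements."""
    scores = {"language": 0.80, "reasoning": 0.70}
    if "protein" in data_mix:
        scores["protein_folding"] = 0.90
        scores["language"] = 0.72  # hypothetical regression: protein data crowds out text
    return scores

baseline = evaluate({"text", "code"})
candidate = evaluate({"text", "code", "protein"})

# Benchmarks shared by both runs where the candidate mix scores worse.
regressed = [k for k in baseline if candidate[k] < baseline[k]]
print(regressed)  # → ['language']
```

A non-empty `regressed` list is the signal Hassabis describes: the new capability came at the cost of an existing one, so it belongs in a tool rather than in the main model.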
>> Now that I've mentioned the word robots, I'd like to ask you: what's next in physical AI?
>> Well, look, I'm getting increasingly excited by robotics. I probably wasn't so interested in it ten years ago, because I felt the issue was the algorithms, not the physical construction of the bodies; I thought the algorithms were the thing that was behind. Now, algorithmically, we're very excited about Gemini Robotics, because we built our foundation model to be really good multimodally. It can understand vision, images, the world around us, so it has a very good understanding of the physical world, and that's exactly what you'd want for robotics: a general system that understands the physical context the robot is in. So in the next two or three years I think we're going to see some very interesting new breakout moments for robotics. There's still quite a bit more research to be done; I don't think we're there yet, as some robotics companies are claiming. We'll have humanoid robots and also non-humanoid ones, and I think both will be useful. But in the next couple of years there will be some real breakthrough moments, so it's a very exciting space to watch and a good area to get into right now.
>> A slightly tricky question. There is a lot of fear-mongering around AI, and we know most of it is unnecessary. But if you start getting humanoids that are running off foundation models, do you think the fear factor would go up?
>> Potentially. It depends how we design those humanoids, but some risks go up too, right? It depends on what you deploy them for, and increasingly, especially if the humanoids are pretty capable and they're heavy, there are dangers and risks with that. So I think we need to have the guardrails we were discussing earlier in place by the time there are a lot of robots roaming around.
>> Thank you. We've talked enough about risk, so let's talk a little more about the positive side of things. A lot of the benefits of all this cutting-edge AI still seem to accrue to the countries that have more resources, that have the GPUs to run these models, and so on. What would it take for AI to reach the Global South and benefit a much larger fraction of the population? What kinds of initiatives should we be looking at?
>> Well, look, I think we touched on it earlier. The leading foundation models (maybe there are three or four of them, perhaps five or six if we include the Chinese models) are pretty much available very cost-effectively, only a few months behind the frontier. There's also open source: we work on our own open models, Gemma, which we'll be releasing a new version of soon, and which are very powerful for edge devices. I think that's a very interesting area: really efficient models for computing on the edge, whether that's your phone, a single laptop, or eventually robotics. There are huge opportunities there for optimizing what those kinds of models do and the types of products or applications you can build on top of them.
>> The entire auditorium went dark for a bit. No, that's not signaling anything ominous, so don't worry about it. It's amazing: I was actually there when you did the first game-playing demo at one of the NeurIPS side events, and even then that tiny room was packed. And now this large auditorium is packed. What's the largest hall you think you can fill if you're talking nowadays? Madison Square Garden?
>> I don't know, this is a pretty big one, and I hear it's streaming to many, many people online. But yes, I remember that NeurIPS event very well. It was a hall maybe a third of this size, but it was standing room only, packed outside the door. That was really the first success we had with the deep reinforcement learning systems we pioneered that could play Atari games, very simple games by today's standards, directly from the pixels, without being given any other information: just maximize the score, here are the pixels on the screen. It was maybe the first demonstration of the modern AI era of an agentic system doing something challenging and interesting, in this case a task designed for humans to find interesting, enjoyable, and somewhat challenging. I think it was a watershed moment, for us of course, but maybe also for the industry. That was back in 2013. It showed that this thesis of learning systems, learning algorithms, this idea of generality, where you don't special-case the information or give privileged prior information to the system, which was the way traditional, good old-fashioned AI and expert systems had been done until then, could actually scale to something interesting, in this case an Atari screen with 20,000 pixels. Trivially small by today's standards, but a very large action space and data space for the types of systems we had then.
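The recipe Hassabis describes, "here are the pixels, just maximize the score", can be illustrated at toy scale. The sketch below is a deliberately tiny stand-in, not the deep Q-network behind the Atari results: the "screen" is a one-dimensional strip of 5 pixels, the only feedback is a score of 1 at the rightmost pixel, and values are computed by synchronous Q-value iteration over a known transition function rather than learned from experience with a neural network.

```python
# Toy "pixels in, score out" setup: agent on a 5-pixel strip, reward only
# at the goal pixel; no other information about the task is provided.
N = 5
GOAL = N - 1
Q = [[0.0, 0.0] for _ in range(N)]  # per-pixel action values [left, right]

def step(pos, action):
    """Deterministic environment: move, then score 1 only at the goal pixel."""
    nxt = max(0, min(N - 1, pos + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Synchronous Q-value iteration: back up discounted score until convergence.
for _ in range(50):
    for s in range(N):
        for a in (0, 1):
            nxt, r, done = step(s, a)
            Q[s][a] = r + (0.0 if done else 0.9 * max(Q[nxt]))

# Greedy policy recovered purely from the score signal: move right everywhere.
policy = [int(Q[s][1] > Q[s][0]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

The point of the Atari work was that the same reward-only interface scales from a 5-pixel strip like this to a 20,000-pixel screen once the value function is a learned network instead of a table.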
>> Yeah. They've become like the hello world of reinforcement learning now.
>> Yes. And that encouraged us to go on to AlphaGo, which I think was really the big watershed moment that made the field and the industry sit up and take notice in 2016, and which started a lot of the commercial interest in these technologies: we could scale this kind of deep reinforcement learning, these learning systems, to actually beat the world champion, the legendary Lee Sedol, in our match in South Korea.
>> One thing I really have to say: you brought up the Atari game player and later AlphaGo, which allowed the rest of the reinforcement learning community to catch up, so thank you for that. And of course, that first time I was actually in the room to watch the demo, not outside clamoring to get in. So let's forget AI for a minute and let me ask about reinforcement learning. You know Rich Sutton has been talking a lot about this, and David Silver and Rich wrote this amazing article on how reinforcement learning is going to drive AI forward. What's your take on that?
>> Well, yeah, obviously we've had many debates over the years. Maybe we can take this question more generally: what do I think about today's foundation models and reinforcement learning? Of course, reinforcement learning is an integral part of the post-training of these models, and I think the inference-time compute, the thinking part of the models, could actually benefit a lot more from the ideas we pioneered in AlphaGo, the Monte Carlo tree search and other things. In many respects we need to combine the ideas we had with AlphaGo with today's foundation models. Of course it's harder, because you don't have a perfect model of the world; you need a better world model. In games it's trivial, right? The transition matrix. So that's an issue. But if I were to guess today, I think foundation models like Gemini are going to be a critical part of the ultimate AGI solution, and then we'll have lots of interesting reinforcement learning on top. Eventually, maybe 20 years from now, we'll have a kind of AlphaZero-type system where reinforcement learning can learn everything from scratch, but I don't think that's going to be the fastest way to AGI. I think it makes sense to use the foundation models and all the information that's already out there, learn that as almost a model of how the world works, and then do your reinforcement learning and planning on top of that. I think that will be more efficient in the first instance.
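The "planning on top of a world model" idea, trivial in games because the transition function is exact, can be sketched with the simplest possible search: simulate random rollouts under the model and pick the action with the best average return. This is the core of AlphaGo-style search stripped of its learned policy and value networks; the counting game below is invented purely for illustration.

```python
import random

random.seed(1)

def transition(state, action):
    """Perfect world model: exact next state, reward, and termination.
    A toy game: reach state 3 by incrementing; reward 1 on arrival."""
    nxt = state + (1 if action == "inc" else -1)
    return nxt, (1.0 if nxt == 3 else 0.0), nxt == 3

def rollout_value(state, action, depth=5, n=200):
    """Monte Carlo estimate: average return of `action` under random play."""
    total = 0.0
    for _ in range(n):
        s, r, done = transition(state, action)
        total += r
        for _ in range(depth - 1):
            if done:
                break
            s, r, done = transition(s, random.choice(["inc", "dec"]))
            total += r
    return total / n

# Plan one move from state 0 by comparing simulated returns per action.
best = max(["inc", "dec"], key=lambda a: rollout_value(0, a))
print(best)  # → inc
```

The hard part Hassabis points to is that outside games `transition` is not given: it must itself be learned, which is what makes foundation models a plausible world model for this kind of search.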
>> Is it still going to be the cherry on top?
>> No. I mean, you'd have to ask Yann LeCun about his cherry comments; I'm sure he can talk at length on that, but I've never really agreed with it. Obviously, if you measured it in terms of bits, one could ask how many bits of information you're getting from the reinforcement learning. But in my view, and presumably your view, not all bits are equal in terms of information. A bit about whether you won the game or not is much more important than some random pixel on the screen, so to equate the informational value of those bits in such a trivial way is clearly incorrect, in my opinion. I do think the foundation models are going to matter; the question is whether they're going to be all that's needed, or just a critical part of what's needed. I think there's no question they're going to be at least a critical component of the first AGI systems.
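The "not all bits are equal" point can be made quantitative with mutual information: a single bit that encodes the game outcome tells you everything about success, while a single bit sampled from an arbitrary pixel tells you almost nothing, even though both are one bit of raw data. The simulated games below are synthetic, constructed only to illustrate the contrast.

```python
import math
import random
from collections import Counter

random.seed(0)

# Synthetic games: a fair win/loss outcome plus one unrelated "pixel" bit.
games = [{"won": random.random() < 0.5} for _ in range(10_000)]
for g in games:
    g["outcome_bit"] = int(g["won"])      # the reward bit
    g["pixel_bit"] = random.randrange(2)  # an arbitrary screen pixel

def mutual_info(xs, ys):
    """I(X;Y) in bits, estimated from paired samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

wins = [int(g["won"]) for g in games]
print(mutual_info(wins, [g["outcome_bit"] for g in games]))  # ≈ 1.0 bit
print(mutual_info(wins, [g["pixel_bit"] for g in games]))    # ≈ 0.0 bits
```

Counting bits by volume would score both signals identically; weighting them by information about the quantity you care about is exactly the distinction being argued here.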
>> We're almost out of time, so I just want to ask: what's your message for the attendees of this summit?
>> Well, look, my message is one of cautious optimism. I think we're on the cusp of an absolutely incredible transformation that's going to bring incredible benefits in science and medicine, which is specifically what I'm passionate about, and I can see it revolutionizing the way we deal with human health. There are many amazing companies and tools and products to build on top of these systems, and everyone in the world can build on these AI systems to do that. But I would also add a note of caution. I think we will solve the technical issues given enough time and enough brainpower; I believe in human ingenuity, and if the best minds work on it, we'll solve the technical risks. But we also need to do this internationally, so the societal challenges may actually end up being a harder problem than the technical ones.
>> Thank you, sir, and Ravi, for such a